Notes on Filesystem Layout
--------------------------

These notes describe what mkcramfs generates.  The kernel's
requirements are a bit looser: it doesn't care if the <file_data>
items are swapped around (though it does care that directory entries
(inodes) in a given directory are contiguous, as readdir relies on
this).

All data is currently in host-endian format; neither mkcramfs nor the
kernel ever does any swabbing.  (See section `Block Size' below.)

<filesystem>:
	<superblock>
	<directory_structure>
	<data>

<superblock>: struct cramfs_super (see cramfs_fs.h).
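
For orientation, the superblock layout is roughly the following (a
sketch based on cramfs_fs.h, which is authoritative; u32/u8 are the
kernel's fixed-width typedefs, and struct cramfs_inode is sketched
under `Inode Size' at the end of this file):

	struct cramfs_info {
		u32 crc;		/* CRC32 checksum */
		u32 edition;
		u32 blocks;
		u32 files;
	};

	struct cramfs_super {
		u32 magic;		/* 0x28cd3d45 */
		u32 size;		/* filesystem length in bytes */
		u32 flags;		/* feature flags */
		u32 future;		/* reserved */
		u8 signature[16];	/* "Compressed ROMFS" */
		struct cramfs_info fsid;
		u8 name[16];		/* user-defined name */
		struct cramfs_inode root;	/* root inode data */
	};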

<directory_structure>:
	For each file:
		struct cramfs_inode (see cramfs_fs.h).
		Filename.  Not generally null-terminated, but it is
		 null-padded to a multiple of 4 bytes.
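
For example, the on-disk size of a name can be computed like this
(illustrative user-space C; the helper name is made up):

	#include <stdint.h>

	/* Round a name length up to the 4-byte boundary used on disk;
	 * a 5-byte name such as "hello" occupies 8 bytes, the last
	 * three of which are NULs. */
	static uint32_t name_bytes_on_disk(uint32_t namelen)
	{
		return (namelen + 3) & ~(uint32_t)3;
	}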

The order of inode traversal is described as "width-first" (not to be
confused with breadth-first); i.e. like depth-first but listing all of
a directory's entries before recursing down its subdirectories: the
same order as `ls -AUR' (but without the /^\..*:$/ directory header
lines); put another way, the same order as `find -type d -exec
ls -AU1 {} \;'.
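
To make that concrete (a hypothetical example): if the filesystem
contains /a, /b, /b/x, /b/y, /b/y/z and /c, where b and y are the
only directories, the inodes appear in the order

	a, b, c, x, y, z

i.e. all of the root's entries, then all of b's, then all of y's.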

Beginning in 2.4.7, directory entries are sorted.  This optimization
allows cramfs_lookup to return more quickly when a filename does not
exist, speeds up user-space directory sorts, etc.
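
The early return is possible because a scan of sorted entries can
stop as soon as it has passed the point where the name would have
been.  A simplified illustration in user-space C (not the kernel's
actual cramfs_lookup, which walks the packed on-disk entries):

	#include <string.h>

	/* Illustrative only: linear scan of sorted names, exiting
	 * early once the target can no longer appear. */
	static int lookup_sorted(const char *const names[], int n,
				 const char *target)
	{
		int i, cmp;

		for (i = 0; i < n; i++) {
			cmp = strcmp(names[i], target);
			if (cmp == 0)
				return i;	/* found */
			if (cmp > 0)
				break;		/* passed it: cannot exist */
		}
		return -1;			/* not present */
	}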

<data>:
	One <file_data> for each file that's either a symlink or a
	 regular file of non-zero st_size.

<file_data>:
	nblocks * <block_pointer>
	 (where nblocks = (st_size - 1) / blksize + 1)
	nblocks * <block>
	padding to multiple of 4 bytes

The i'th <block_pointer> for a file stores the byte offset of the
*end* of the i'th <block> (i.e. one past the last byte, which is the
same as the start of the (i+1)'th <block> if there is one).  The first
<block> immediately follows the last <block_pointer> for the file.
<block_pointer>s are each 32 bits long.
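
In code, recovering the extent of the i'th <block> from the pointer
array might look like this (hypothetical helper, user-space C):

	#include <stdint.h>

	/* Given the array of end offsets and the offset of the byte
	 * just past the last <block_pointer> (where block 0 starts),
	 * recover the start and compressed length of block i. */
	static void block_extent(const uint32_t block_ptr[],
				 uint32_t first_block_offset,
				 unsigned int i,
				 uint32_t *start, uint32_t *len)
	{
		*start = i ? block_ptr[i - 1] : first_block_offset;
		*len = block_ptr[i] - *start;
	}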

The order of <file_data>'s is a depth-first descent of the directory
tree, i.e. the same order as `find -size +0 \( -type f -o -type l \)
-print'.


<block>: The i'th <block> is the output of zlib's compress function
applied to the i'th blksize-sized chunk of the input data.
(For the last <block> of the file, the input may of course be smaller.)
Each <block> may be a different size.  (See <block_pointer> above.)
<block>s are merely byte-aligned, not generally u32-aligned.
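
In user space one such <block> can be decompressed with zlib's
one-shot uncompress() as sketched below (link with -lz; the kernel
itself drives zlib's lower-level inflate interface instead):

	#include <zlib.h>

	/* Decompress one <block> into a blksize-sized buffer.
	 * Returns the number of uncompressed bytes, or -1 on error. */
	static long uncompress_block(unsigned char *dst,
				     unsigned long blksize,
				     const unsigned char *src,
				     unsigned long srclen)
	{
		uLongf dstlen = blksize;

		if (uncompress(dst, &dstlen, src, srclen) != Z_OK)
			return -1;
		return (long)dstlen;
	}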


Holes
-----

This kernel supports cramfs holes (i.e. [efficient representation of]
blocks in uncompressed data consisting entirely of NUL bytes), but by
default mkcramfs doesn't test for & create holes, since cramfs in
kernels up to at least 2.3.39 didn't support holes.  Run mkcramfs
with -z if you want it to create files that can have holes in them.
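
As I read the kernel sources (this README doesn't spell it out), a
hole is encoded as a zero-length <block>: its end pointer equals the
previous block's, and readpage expands it to a block of NULs.  A
sketch, reusing block_extent() and uncompress_block() from the
sketches above:

	#include <string.h>

	/* Expand block i of a file into dst (blksize bytes), assuming
	 * the zero-length-block encoding of holes described above. */
	static long read_block(unsigned char *dst, unsigned long blksize,
			       const unsigned char *fs,
			       const uint32_t block_ptr[],
			       uint32_t first_block_offset,
			       unsigned int i)
	{
		uint32_t start, len;

		block_extent(block_ptr, first_block_offset, i,
			     &start, &len);
		if (len == 0) {			/* hole */
			memset(dst, 0, blksize);
			return (long)blksize;
		}
		return uncompress_block(dst, blksize, fs + start, len);
	}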


Tools
-----

The cramfs user-space tools, including mkcramfs and cramfsck, are
located at <http://sourceforge.net/projects/cramfs/>.


Future Development
==================

Block Size
----------

(Block size in cramfs refers to the size of input data that is
compressed at a time.  It's intended to be somewhere around
PAGE_CACHE_SIZE for cramfs_readpage's convenience.)

The superblock ought to indicate the block size that the fs was
written for, since comments in <linux/pagemap.h> indicate that
PAGE_CACHE_SIZE may grow in future (if I interpret the comment
correctly).

Currently, mkcramfs #define's PAGE_CACHE_SIZE as 4096 and uses that
for blksize, whereas Linux-2.3.39 uses its PAGE_CACHE_SIZE, which in
turn is defined as PAGE_SIZE (which can be as large as 32KB on arm).
This discrepancy is a bug, though it's not clear which should be
changed.

One option is to change mkcramfs to take its PAGE_CACHE_SIZE from
<asm/page.h>.  Personally I don't like this option, but it does
require the least amount of change: just change `#define
PAGE_CACHE_SIZE (4096)' to `#include <asm/page.h>'.  The disadvantage
is that the generated cramfs cannot always be shared between different
kernels, not even necessarily kernels of the same architecture if
PAGE_CACHE_SIZE is subject to change between kernel versions
(currently possible with arm and ia64).

The remaining options try to make cramfs more sharable.

One part of that is addressing endianness.  The two options here are
`always use little-endian' (like ext2fs) or `writer chooses
endianness; kernel adapts at runtime'.  Little-endian wins because of
code simplicity and little CPU overhead even on big-endian machines.

The cost of swabbing is changing the code to use the le32_to_cpu
etc. macros as used by ext2fs.  We don't need to swab the compressed
data, only the superblock, inodes and block pointers.
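
A block pointer read then becomes le32_to_cpu(*ptr) in the kernel; in
portable user-space C the equivalent is a byte-wise little-endian
load (a no-op rearrangement on little-endian hosts):

	#include <stdint.h>

	/* Portable little-endian 32-bit load, e.g. of a block pointer. */
	static uint32_t le32_load(const unsigned char *p)
	{
		return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
		       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
	}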


The other part of making cramfs more sharable is choosing a block
size.  The options are:

  1. Always 4096 bytes.

  2. Writer chooses blocksize; kernel adapts but rejects blocksize >
     PAGE_CACHE_SIZE.

  3. Writer chooses blocksize; kernel adapts even to blocksize >
     PAGE_CACHE_SIZE.

It's easy enough to change the kernel to use a smaller value than
PAGE_CACHE_SIZE: just make cramfs_readpage read multiple blocks.
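
For instance, when blksize divides PAGE_CACHE_SIZE evenly, the loop
might look like this (hypothetical sketch, reusing read_block() from
the Holes section):

	/* Fill one page from page_size/blksize consecutive blocks. */
	static int fill_page(unsigned char *page, unsigned long page_size,
			     unsigned long blksize,
			     const unsigned char *fs,
			     const uint32_t block_ptr[],
			     uint32_t first_block_offset,
			     unsigned int first_block)
	{
		unsigned int i, per_page = page_size / blksize;

		for (i = 0; i < per_page; i++)
			if (read_block(page + i * blksize, blksize, fs,
				       block_ptr, first_block_offset,
				       first_block + i) < 0)
				return -1;
		return 0;
	}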

The cost of option 1 is that kernels with a larger PAGE_CACHE_SIZE
value don't get as good compression as they otherwise could.

The cost of option 2 relative to option 1 is that the code uses
variables instead of #define'd constants.  The gain is that people
with kernels having larger PAGE_CACHE_SIZE can make use of that if
they don't mind their cramfs being inaccessible to kernels with
smaller PAGE_CACHE_SIZE values.

Option 3 is easy to implement if we don't mind being CPU-inefficient:
e.g. get readpage to decompress to a buffer of size MAX_BLKSIZE (which
must be no larger than 32KB) and discard what it doesn't need.
Getting readpage to read into all the covered pages is harder.
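
The CPU-inefficient version amounts to a bounce buffer (hypothetical
sketch; MAX_BLKSIZE and all names are made up, and uncompress_block()
is the zlib helper sketched earlier):

	#include <string.h>

	#define MAX_BLKSIZE (32 * 1024)

	/* Decompress an entire oversized block, then keep only the
	 * page-sized piece readpage actually asked for.  The static
	 * buffer is not reentrant; fine for a sketch. */
	static int read_subpage(unsigned char *page, unsigned long page_size,
				const unsigned char *src, unsigned long srclen,
				unsigned long offset_in_block)
	{
		static unsigned char bounce[MAX_BLKSIZE];
		long n = uncompress_block(bounce, sizeof(bounce),
					  src, srclen);

		if (n < 0 || offset_in_block + page_size > (unsigned long)n)
			return -1;
		memcpy(page, bounce + offset_in_block, page_size);
		return 0;
	}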

The main advantage of option 3 over options 1 and 2 is better
compression.  The cost is greater complexity.  Probably not worth it,
but I hope someone
will disagree.  (If it is implemented, then I'll re-use that code in
e2compr.)


Another cost of options 2 and 3 over option 1 is that mkcramfs must
support a configurable block size, but that just means adding and
parsing a -b option.


Inode Size
----------

Given that cramfs will probably be used for CDs etc. as well as just
silicon ROMs, it might make sense to expand the inode a little from
its current 12 bytes.  Inodes other than the root inode are followed
by filename, so the expansion doesn't even have to be a multiple of 4
bytes.
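
For reference, the current 12-byte inode packs its fields into three
32-bit words of bitfields, roughly as follows (a sketch from memory
of cramfs_fs.h; u32 is the kernel's 32-bit typedef and the header is
authoritative):

	struct cramfs_inode {
		u32 mode:16, uid:16;
		u32 size:24, gid:8;	/* 24-bit size: 16MB file limit */
		u32 namelen:6, offset:26;	/* both stored divided by 4 */
	};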