wok-current diff linux-libre/stuff/001-squashfs-decompressors-add-xz-decompressor-module.patch @ rev 13150

get-flash-plugin: be busybox/sed compatible
author Pascal Bellard <pascal.bellard@slitaz.org>
date Wed Jul 11 11:13:01 2012 +0200 (2012-07-11)
     1.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     1.2 +++ b/linux-libre/stuff/001-squashfs-decompressors-add-xz-decompressor-module.patch	Wed Jul 11 11:13:01 2012 +0200
     1.3 @@ -0,0 +1,3934 @@
     1.4 +From: Lasse Collin <lasse.collin@tukaani.org>
     1.5 +Date: Thu, 2 Dec 2010 19:14:19 +0000 (+0200)
     1.6 +Subject: Decompressors: Add XZ decompressor module
     1.7 +X-Git-Url: http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Fpkl%2Fsquashfs-xz.git;a=commitdiff_plain;h=3dbc3fe7878e53b43064a12d4ab31ca4c18ce85f
     1.8 +
     1.9 +Decompressors: Add XZ decompressor module
    1.10 +
    1.11 +In userspace, the .lzma format has become mostly a legacy
    1.12 +file format that got superseded by the .xz format. Similarly,
    1.13 +LZMA Utils was superseded by XZ Utils.
    1.14 +
    1.15 +These patches add support for XZ decompression into
    1.16 +the kernel. Most of the code is as is from XZ Embedded
    1.17 +<http://tukaani.org/xz/embedded.html>. It was written for
    1.18 +the Linux kernel but is usable in other projects too.
    1.19 +
    1.20 +Advantages of XZ over the current LZMA code in the kernel:
    1.21 +  - Nice API that can be used by other kernel modules; it's
    1.22 +    not limited to kernel, initramfs, and initrd decompression.
    1.23 +  - Integrity check support (CRC32)
    1.24 +  - BCJ filters improve compression of executable code on
    1.25 +    certain architectures. These together with LZMA2 can
    1.26 +    produce a few percent smaller kernel or Squashfs images
    1.27 +    than plain LZMA without making the decompression slower.
    1.28 +
    1.29 +This patch: Add the main decompression code (xz_dec), testing
    1.30 +module (xz_dec_test), wrapper script (xz_wrap.sh) for the xz
    1.31 +command line tool, and documentation. The xz_dec module is
    1.32 +enough to have a usable XZ decompressor e.g. for Squashfs.
    1.33 +
    1.34 +Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
    1.35 +---
    1.36 +
    1.37 +diff --git a/Documentation/xz.txt b/Documentation/xz.txt
    1.38 +new file mode 100644
    1.39 +index 0000000..68329ac
    1.40 +--- /dev/null
    1.41 ++++ b/Documentation/xz.txt
    1.42 +@@ -0,0 +1,122 @@
    1.43 ++
    1.44 ++XZ data compression in Linux
    1.45 ++============================
    1.46 ++
    1.47 ++Introduction
    1.48 ++
    1.49 ++    XZ is a general purpose data compression format with high compression
    1.50 ++    ratio and relatively fast decompression. The primary compression
    1.51 ++    algorithm (filter) is LZMA2. Additional filters can be used to improve
    1.52 ++    compression ratio even further. E.g. Branch/Call/Jump (BCJ) filters
    1.53 ++    improve compression ratio of executable data.
    1.54 ++
    1.55 ++    The XZ decompressor in Linux is called XZ Embedded. It supports
    1.56 ++    the LZMA2 filter and optionally also BCJ filters. CRC32 is supported
    1.57 ++    for integrity checking. The home page of XZ Embedded is at
    1.58 ++    <http://tukaani.org/xz/embedded.html>, where you can find the
    1.59 ++    latest version and also information about using the code outside
    1.60 ++    the Linux kernel.
    1.61 ++
    1.62 ++    For userspace, XZ Utils provide a zlib-like compression library
    1.63 ++    and a gzip-like command line tool. XZ Utils can be downloaded from
    1.64 ++    <http://tukaani.org/xz/>.
    1.65 ++
    1.66 ++XZ related components in the kernel
    1.67 ++
    1.68 ++    The xz_dec module provides XZ decompressor with single-call (buffer
    1.69 ++    to buffer) and multi-call (stateful) APIs. The usage of the xz_dec
    1.70 ++    module is documented in include/linux/xz.h.
    1.71 ++
    1.72 ++    The xz_dec_test module is for testing xz_dec. xz_dec_test is not
    1.73 ++    useful unless you are hacking the XZ decompressor. xz_dec_test
    1.74 ++    allocates a char device major dynamically to which one can write
    1.75 ++    .xz files from userspace. The decompressed output is thrown away.
    1.76 ++    Keep an eye on dmesg to see diagnostics printed by xz_dec_test.
    1.77 ++    See the xz_dec_test source code for the details.
    1.78 ++
    1.79 ++    For decompressing the kernel image, initramfs, and initrd, there
    1.80 ++    is a wrapper function in lib/decompress_unxz.c. Its API is the
    1.81 ++    same as in other decompress_*.c files, which is defined in
    1.82 ++    include/linux/decompress/generic.h.
    1.83 ++
    1.84 ++    scripts/xz_wrap.sh is a wrapper for the xz command line tool found
     1.85 ++    in XZ Utils. The wrapper sets compression options to values suitable
    1.86 ++    for compressing the kernel image.
    1.87 ++
    1.88 ++    For kernel makefiles, two commands are provided for use with
    1.89 ++    $(call if_needed). The kernel image should be compressed with
    1.90 ++    $(call if_needed,xzkern) which will use a BCJ filter and a big LZMA2
    1.91 ++    dictionary. It will also append a four-byte trailer containing the
    1.92 ++    uncompressed size of the file, which is needed by the boot code.
    1.93 ++    Other things should be compressed with $(call if_needed,xzmisc)
    1.94 ++    which will use no BCJ filter and 1 MiB LZMA2 dictionary.
    1.95 ++
    1.96 ++Notes on compression options
    1.97 ++
     1.98 ++    Since XZ Embedded supports only streams with no integrity check or
    1.99 ++    CRC32, make sure that you don't use some other integrity check type
   1.100 ++    when encoding files that are supposed to be decoded by the kernel. With
   1.101 ++    liblzma, you need to use either LZMA_CHECK_NONE or LZMA_CHECK_CRC32
   1.102 ++    when encoding. With the xz command line tool, use --check=none or
   1.103 ++    --check=crc32.
   1.104 ++
   1.105 ++    Using CRC32 is strongly recommended unless there is some other layer
   1.106 ++    which will verify the integrity of the uncompressed data anyway.
    1.107 ++    Double checking the integrity would probably be a waste of CPU cycles.
   1.108 ++    Note that the headers will always have a CRC32 which will be validated
   1.109 ++    by the decoder; you can only change the integrity check type (or
   1.110 ++    disable it) for the actual uncompressed data.
   1.111 ++
   1.112 ++    In userspace, LZMA2 is typically used with dictionary sizes of several
   1.113 ++    megabytes. The decoder needs to have the dictionary in RAM, thus big
   1.114 ++    dictionaries cannot be used for files that are intended to be decoded
   1.115 ++    by the kernel. 1 MiB is probably the maximum reasonable dictionary
   1.116 ++    size for in-kernel use (maybe more is OK for initramfs). The presets
   1.117 ++    in XZ Utils may not be optimal when creating files for the kernel,
   1.118 ++    so don't hesitate to use custom settings. Example:
   1.119 ++
   1.120 ++        xz --check=crc32 --lzma2=dict=512KiB inputfile
   1.121 ++
    1.122 ++    An exception to the above dictionary size limitation is when the decoder
   1.123 ++    is used in single-call mode. Decompressing the kernel itself is an
   1.124 ++    example of this situation. In single-call mode, the memory usage
   1.125 ++    doesn't depend on the dictionary size, and it is perfectly fine to
   1.126 ++    use a big dictionary: for maximum compression, the dictionary should
   1.127 ++    be at least as big as the uncompressed data itself.
   1.128 ++
   1.129 ++Future plans
   1.130 ++
   1.131 ++    Creating a limited XZ encoder may be considered if people think it is
   1.132 ++    useful. LZMA2 is slower to compress than e.g. Deflate or LZO even at
    1.133 ++    the fastest settings, so it isn't clear if an LZMA2 encoder is wanted
    1.134 ++    in the kernel.
   1.135 ++
   1.136 ++    Support for limited random-access reading is planned for the
   1.137 ++    decompression code. I don't know if it could have any use in the
   1.138 ++    kernel, but I know that it would be useful in some embedded projects
   1.139 ++    outside the Linux kernel.
   1.140 ++
   1.141 ++Conformance to the .xz file format specification
   1.142 ++
   1.143 ++    There are a couple of corner cases where things have been simplified
    1.144 ++    at the expense of detecting errors as early as possible. These should not
    1.145 ++    matter in practice at all, since they don't cause security issues. But
   1.146 ++    it is good to know this if testing the code e.g. with the test files
   1.147 ++    from XZ Utils.
   1.148 ++
   1.149 ++Reporting bugs
   1.150 ++
   1.151 ++    Before reporting a bug, please check that it's not fixed already
   1.152 ++    at upstream. See <http://tukaani.org/xz/embedded.html> to get the
   1.153 ++    latest code.
   1.154 ++
   1.155 ++    Report bugs to <lasse.collin@tukaani.org> or visit #tukaani on
   1.156 ++    Freenode and talk to Larhzu. I don't actively read LKML or other
   1.157 ++    kernel-related mailing lists, so if there's something I should know,
   1.158 ++    you should email to me personally or use IRC.
   1.159 ++
   1.160 ++    Don't bother Igor Pavlov with questions about the XZ implementation
   1.161 ++    in the kernel or about XZ Utils. While these two implementations
   1.162 ++    include essential code that is directly based on Igor Pavlov's code,
   1.163 ++    these implementations aren't maintained nor supported by him.
   1.164 ++
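For orientation, here is a minimal single-call (XZ_SINGLE) usage sketch matching the API documented above and declared in include/linux/xz.h below. It is a userspace-style illustration built against the XZ Embedded sources, not part of the patch; the buffer sizes and the stdin/stdout plumbing are arbitrary.

    /* Illustration only: decompress one .xz stream in a single call. */
    #include <stdint.h>
    #include <stdio.h>
    #include "xz.h"

    int main(void)
    {
            uint8_t in[64 * 1024];           /* arbitrary sizes for the sketch */
            static uint8_t out[1024 * 1024]; /* must fit the uncompressed data */
            struct xz_buf b;
            struct xz_dec *s;
            enum xz_ret ret;
            size_t in_size = fread(in, 1, sizeof(in), stdin);

            xz_crc32_init();                 /* standalone builds need this first */

            s = xz_dec_init(XZ_SINGLE, 0);   /* dict_max is ignored in XZ_SINGLE */
            if (s == NULL)
                    return 1;

            b.in = in;   b.in_pos = 0;  b.in_size = in_size;
            b.out = out; b.out_pos = 0; b.out_size = sizeof(out);

            ret = xz_dec_run(s, &b);         /* decodes the whole stream at once */
            xz_dec_end(s);

            if (ret != XZ_STREAM_END)
                    return 1;

            fwrite(out, 1, b.out_pos, stdout);
            return 0;
    }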
   1.165 +diff --git a/include/linux/xz.h b/include/linux/xz.h
   1.166 +new file mode 100644
   1.167 +index 0000000..64cffa6
   1.168 +--- /dev/null
   1.169 ++++ b/include/linux/xz.h
   1.170 +@@ -0,0 +1,264 @@
   1.171 ++/*
   1.172 ++ * XZ decompressor
   1.173 ++ *
   1.174 ++ * Authors: Lasse Collin <lasse.collin@tukaani.org>
   1.175 ++ *          Igor Pavlov <http://7-zip.org/>
   1.176 ++ *
   1.177 ++ * This file has been put into the public domain.
   1.178 ++ * You can do whatever you want with this file.
   1.179 ++ */
   1.180 ++
   1.181 ++#ifndef XZ_H
   1.182 ++#define XZ_H
   1.183 ++
   1.184 ++#ifdef __KERNEL__
   1.185 ++#	include <linux/stddef.h>
   1.186 ++#	include <linux/types.h>
   1.187 ++#else
   1.188 ++#	include <stddef.h>
   1.189 ++#	include <stdint.h>
   1.190 ++#endif
   1.191 ++
   1.192 ++/* In Linux, this is used to make extern functions static when needed. */
   1.193 ++#ifndef XZ_EXTERN
   1.194 ++#	define XZ_EXTERN extern
   1.195 ++#endif
   1.196 ++
   1.197 ++/**
   1.198 ++ * enum xz_mode - Operation mode
   1.199 ++ *
   1.200 ++ * @XZ_SINGLE:              Single-call mode. This uses less RAM than
    1.201 ++ *                          the multi-call modes, because the LZMA2
   1.202 ++ *                          dictionary doesn't need to be allocated as
   1.203 ++ *                          part of the decoder state. All required data
   1.204 ++ *                          structures are allocated at initialization,
   1.205 ++ *                          so xz_dec_run() cannot return XZ_MEM_ERROR.
   1.206 ++ * @XZ_PREALLOC:            Multi-call mode with preallocated LZMA2
   1.207 ++ *                          dictionary buffer. All data structures are
   1.208 ++ *                          allocated at initialization, so xz_dec_run()
   1.209 ++ *                          cannot return XZ_MEM_ERROR.
   1.210 ++ * @XZ_DYNALLOC:            Multi-call mode. The LZMA2 dictionary is
   1.211 ++ *                          allocated once the required size has been
   1.212 ++ *                          parsed from the stream headers. If the
   1.213 ++ *                          allocation fails, xz_dec_run() will return
   1.214 ++ *                          XZ_MEM_ERROR.
   1.215 ++ *
   1.216 ++ * It is possible to enable support only for a subset of the above
   1.217 ++ * modes at compile time by defining XZ_DEC_SINGLE, XZ_DEC_PREALLOC,
   1.218 ++ * or XZ_DEC_DYNALLOC. The xz_dec kernel module is always compiled
   1.219 ++ * with support for all operation modes, but the preboot code may
   1.220 ++ * be built with fewer features to minimize code size.
   1.221 ++ */
   1.222 ++enum xz_mode {
   1.223 ++	XZ_SINGLE,
   1.224 ++	XZ_PREALLOC,
   1.225 ++	XZ_DYNALLOC
   1.226 ++};
   1.227 ++
   1.228 ++/**
   1.229 ++ * enum xz_ret - Return codes
   1.230 ++ * @XZ_OK:                  Everything is OK so far. More input or more
   1.231 ++ *                          output space is required to continue. This
   1.232 ++ *                          return code is possible only in multi-call mode
   1.233 ++ *                          (XZ_PREALLOC or XZ_DYNALLOC).
   1.234 ++ * @XZ_STREAM_END:          Operation finished successfully.
   1.235 ++ * @XZ_UNSUPPORTED_CHECK:   Integrity check type is not supported. Decoding
   1.236 ++ *                          is still possible in multi-call mode by simply
   1.237 ++ *                          calling xz_dec_run() again.
   1.238 ++ *                          Note that this return value is used only if
   1.239 ++ *                          XZ_DEC_ANY_CHECK was defined at build time,
   1.240 ++ *                          which is not used in the kernel. Unsupported
   1.241 ++ *                          check types return XZ_OPTIONS_ERROR if
   1.242 ++ *                          XZ_DEC_ANY_CHECK was not defined at build time.
   1.243 ++ * @XZ_MEM_ERROR:           Allocating memory failed. This return code is
   1.244 ++ *                          possible only if the decoder was initialized
   1.245 ++ *                          with XZ_DYNALLOC. The amount of memory that was
   1.246 ++ *                          tried to be allocated was no more than the
   1.247 ++ *                          dict_max argument given to xz_dec_init().
   1.248 ++ * @XZ_MEMLIMIT_ERROR:      A bigger LZMA2 dictionary would be needed than
   1.249 ++ *                          allowed by the dict_max argument given to
   1.250 ++ *                          xz_dec_init(). This return value is possible
   1.251 ++ *                          only in multi-call mode (XZ_PREALLOC or
   1.252 ++ *                          XZ_DYNALLOC); the single-call mode (XZ_SINGLE)
   1.253 ++ *                          ignores the dict_max argument.
   1.254 ++ * @XZ_FORMAT_ERROR:        File format was not recognized (wrong magic
   1.255 ++ *                          bytes).
   1.256 ++ * @XZ_OPTIONS_ERROR:       This implementation doesn't support the requested
   1.257 ++ *                          compression options. In the decoder this means
   1.258 ++ *                          that the header CRC32 matches, but the header
   1.259 ++ *                          itself specifies something that we don't support.
   1.260 ++ * @XZ_DATA_ERROR:          Compressed data is corrupt.
   1.261 ++ * @XZ_BUF_ERROR:           Cannot make any progress. Details are slightly
   1.262 ++ *                          different between multi-call and single-call
   1.263 ++ *                          mode; more information below.
   1.264 ++ *
   1.265 ++ * In multi-call mode, XZ_BUF_ERROR is returned when two consecutive calls
   1.266 ++ * to XZ code cannot consume any input and cannot produce any new output.
   1.267 ++ * This happens when there is no new input available, or the output buffer
   1.268 ++ * is full while at least one output byte is still pending. Assuming your
   1.269 ++ * code is not buggy, you can get this error only when decoding a compressed
   1.270 ++ * stream that is truncated or otherwise corrupt.
   1.271 ++ *
   1.272 ++ * In single-call mode, XZ_BUF_ERROR is returned only when the output buffer
   1.273 ++ * is too small or the compressed input is corrupt in a way that makes the
   1.274 ++ * decoder produce more output than the caller expected. When it is
   1.275 ++ * (relatively) clear that the compressed input is truncated, XZ_DATA_ERROR
   1.276 ++ * is used instead of XZ_BUF_ERROR.
   1.277 ++ */
   1.278 ++enum xz_ret {
   1.279 ++	XZ_OK,
   1.280 ++	XZ_STREAM_END,
   1.281 ++	XZ_UNSUPPORTED_CHECK,
   1.282 ++	XZ_MEM_ERROR,
   1.283 ++	XZ_MEMLIMIT_ERROR,
   1.284 ++	XZ_FORMAT_ERROR,
   1.285 ++	XZ_OPTIONS_ERROR,
   1.286 ++	XZ_DATA_ERROR,
   1.287 ++	XZ_BUF_ERROR
   1.288 ++};
   1.289 ++
   1.290 ++/**
   1.291 ++ * struct xz_buf - Passing input and output buffers to XZ code
   1.292 ++ * @in:         Beginning of the input buffer. This may be NULL if and only
   1.293 ++ *              if in_pos is equal to in_size.
   1.294 ++ * @in_pos:     Current position in the input buffer. This must not exceed
   1.295 ++ *              in_size.
   1.296 ++ * @in_size:    Size of the input buffer
   1.297 ++ * @out:        Beginning of the output buffer. This may be NULL if and only
   1.298 ++ *              if out_pos is equal to out_size.
   1.299 ++ * @out_pos:    Current position in the output buffer. This must not exceed
   1.300 ++ *              out_size.
   1.301 ++ * @out_size:   Size of the output buffer
   1.302 ++ *
   1.303 ++ * Only the contents of the output buffer from out[out_pos] onward, and
   1.304 ++ * the variables in_pos and out_pos are modified by the XZ code.
   1.305 ++ */
   1.306 ++struct xz_buf {
   1.307 ++	const uint8_t *in;
   1.308 ++	size_t in_pos;
   1.309 ++	size_t in_size;
   1.310 ++
   1.311 ++	uint8_t *out;
   1.312 ++	size_t out_pos;
   1.313 ++	size_t out_size;
   1.314 ++};
   1.315 ++
   1.316 ++/**
   1.317 ++ * struct xz_dec - Opaque type to hold the XZ decoder state
   1.318 ++ */
   1.319 ++struct xz_dec;
   1.320 ++
   1.321 ++/**
    1.322 ++ * xz_dec_init() - Allocate and initialize an XZ decoder state
   1.323 ++ * @mode:       Operation mode
   1.324 ++ * @dict_max:   Maximum size of the LZMA2 dictionary (history buffer) for
   1.325 ++ *              multi-call decoding. This is ignored in single-call mode
   1.326 ++ *              (mode == XZ_SINGLE). LZMA2 dictionary is always 2^n bytes
   1.327 ++ *              or 2^n + 2^(n-1) bytes (the latter sizes are less common
   1.328 ++ *              in practice), so other values for dict_max don't make sense.
   1.329 ++ *              In the kernel, dictionary sizes of 64 KiB, 128 KiB, 256 KiB,
   1.330 ++ *              512 KiB, and 1 MiB are probably the only reasonable values,
   1.331 ++ *              except for kernel and initramfs images where a bigger
   1.332 ++ *              dictionary can be fine and useful.
   1.333 ++ *
   1.334 ++ * Single-call mode (XZ_SINGLE): xz_dec_run() decodes the whole stream at
   1.335 ++ * once. The caller must provide enough output space or the decoding will
   1.336 ++ * fail. The output space is used as the dictionary buffer, which is why
   1.337 ++ * there is no need to allocate the dictionary as part of the decoder's
   1.338 ++ * internal state.
   1.339 ++ *
   1.340 ++ * Because the output buffer is used as the workspace, streams encoded using
   1.341 ++ * a big dictionary are not a problem in single-call mode. It is enough that
   1.342 ++ * the output buffer is big enough to hold the actual uncompressed data; it
   1.343 ++ * can be smaller than the dictionary size stored in the stream headers.
   1.344 ++ *
   1.345 ++ * Multi-call mode with preallocated dictionary (XZ_PREALLOC): dict_max bytes
   1.346 ++ * of memory is preallocated for the LZMA2 dictionary. This way there is no
   1.347 ++ * risk that xz_dec_run() could run out of memory, since xz_dec_run() will
   1.348 ++ * never allocate any memory. Instead, if the preallocated dictionary is too
   1.349 ++ * small for decoding the given input stream, xz_dec_run() will return
   1.350 ++ * XZ_MEMLIMIT_ERROR. Thus, it is important to know what kind of data will be
    1.351 ++ * decoded to avoid allocating an excessive amount of memory for the dictionary.
   1.352 ++ *
   1.353 ++ * Multi-call mode with dynamically allocated dictionary (XZ_DYNALLOC):
   1.354 ++ * dict_max specifies the maximum allowed dictionary size that xz_dec_run()
   1.355 ++ * may allocate once it has parsed the dictionary size from the stream
   1.356 ++ * headers. This way excessive allocations can be avoided while still
   1.357 ++ * limiting the maximum memory usage to a sane value to prevent running the
   1.358 ++ * system out of memory when decompressing streams from untrusted sources.
   1.359 ++ *
   1.360 ++ * On success, xz_dec_init() returns a pointer to struct xz_dec, which is
   1.361 ++ * ready to be used with xz_dec_run(). If memory allocation fails,
   1.362 ++ * xz_dec_init() returns NULL.
   1.363 ++ */
   1.364 ++XZ_EXTERN struct xz_dec *xz_dec_init(enum xz_mode mode, uint32_t dict_max);
   1.365 ++
   1.366 ++/**
   1.367 ++ * xz_dec_run() - Run the XZ decoder
   1.368 ++ * @s:          Decoder state allocated using xz_dec_init()
   1.369 ++ * @b:          Input and output buffers
   1.370 ++ *
   1.371 ++ * The possible return values depend on build options and operation mode.
   1.372 ++ * See enum xz_ret for details.
   1.373 ++ *
   1.374 ++ * Note that if an error occurs in single-call mode (return value is not
   1.375 ++ * XZ_STREAM_END), b->in_pos and b->out_pos are not modified and the
   1.376 ++ * contents of the output buffer from b->out[b->out_pos] onward are
   1.377 ++ * undefined. This is true even after XZ_BUF_ERROR, because with some filter
   1.378 ++ * chains, there may be a second pass over the output buffer, and this pass
   1.379 ++ * cannot be properly done if the output buffer is truncated. Thus, you
   1.380 ++ * cannot give the single-call decoder a too small buffer and then expect to
    1.381 ++ * get that amount of valid data from the beginning of the stream. You must use
   1.382 ++ * the multi-call decoder if you don't want to uncompress the whole stream.
   1.383 ++ */
   1.384 ++XZ_EXTERN enum xz_ret xz_dec_run(struct xz_dec *s, struct xz_buf *b);
   1.385 ++
   1.386 ++/**
   1.387 ++ * xz_dec_reset() - Reset an already allocated decoder state
   1.388 ++ * @s:          Decoder state allocated using xz_dec_init()
   1.389 ++ *
   1.390 ++ * This function can be used to reset the multi-call decoder state without
   1.391 ++ * freeing and reallocating memory with xz_dec_end() and xz_dec_init().
   1.392 ++ *
   1.393 ++ * In single-call mode, xz_dec_reset() is always called in the beginning of
   1.394 ++ * xz_dec_run(). Thus, explicit call to xz_dec_reset() is useful only in
   1.395 ++ * multi-call mode.
   1.396 ++ */
   1.397 ++XZ_EXTERN void xz_dec_reset(struct xz_dec *s);
   1.398 ++
   1.399 ++/**
   1.400 ++ * xz_dec_end() - Free the memory allocated for the decoder state
   1.401 ++ * @s:          Decoder state allocated using xz_dec_init(). If s is NULL,
   1.402 ++ *              this function does nothing.
   1.403 ++ */
   1.404 ++XZ_EXTERN void xz_dec_end(struct xz_dec *s);
   1.405 ++
   1.406 ++/*
   1.407 ++ * Standalone build (userspace build or in-kernel build for boot time use)
   1.408 ++ * needs a CRC32 implementation. For normal in-kernel use, kernel's own
   1.409 ++ * CRC32 module is used instead, and users of this module don't need to
   1.410 ++ * care about the functions below.
   1.411 ++ */
   1.412 ++#ifndef XZ_INTERNAL_CRC32
   1.413 ++#	ifdef __KERNEL__
   1.414 ++#		define XZ_INTERNAL_CRC32 0
   1.415 ++#	else
   1.416 ++#		define XZ_INTERNAL_CRC32 1
   1.417 ++#	endif
   1.418 ++#endif
   1.419 ++
   1.420 ++#if XZ_INTERNAL_CRC32
   1.421 ++/*
   1.422 ++ * This must be called before any other xz_* function to initialize
   1.423 ++ * the CRC32 lookup table.
   1.424 ++ */
   1.425 ++XZ_EXTERN void xz_crc32_init(void);
   1.426 ++
   1.427 ++/*
   1.428 ++ * Update CRC32 value using the polynomial from IEEE-802.3. To start a new
   1.429 ++ * calculation, the third argument must be zero. To continue the calculation,
   1.430 ++ * the previously returned value is passed as the third argument.
   1.431 ++ */
   1.432 ++XZ_EXTERN uint32_t xz_crc32(const uint8_t *buf, size_t size, uint32_t crc);
   1.433 ++#endif
   1.434 ++#endif
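To complement the single-call sketch earlier, here is a hedged multi-call (XZ_DYNALLOC) decoding loop along the lines the header above describes. fill(), flush(), and the buffer sizes are illustrative placeholders, not part of the patch; XZ_MEMLIMIT_ERROR would mean the stream wants a dictionary larger than the 1 MiB passed to xz_dec_init().

    /* Illustration only: stream decoding with a 1 MiB dict_max limit. */
    #include <stddef.h>
    #include <stdint.h>
    #include "xz.h"

    #define IN_BUF_SIZE  4096
    #define OUT_BUF_SIZE 4096

    static int decode_stream(size_t (*fill)(uint8_t *buf, size_t size),
                              void (*flush)(const uint8_t *buf, size_t size))
    {
            static uint8_t in[IN_BUF_SIZE], out[OUT_BUF_SIZE];
            struct xz_buf b = {
                    .in = in,   .in_pos = 0,  .in_size = 0,
                    .out = out, .out_pos = 0, .out_size = OUT_BUF_SIZE,
            };
            struct xz_dec *s = xz_dec_init(XZ_DYNALLOC, 1024 * 1024);
            enum xz_ret ret = XZ_OK;

            if (s == NULL)
                    return -1;

            while (ret == XZ_OK) {
                    if (b.in_pos == b.in_size) {
                            b.in_size = fill(in, IN_BUF_SIZE);  /* refill input */
                            b.in_pos = 0;
                    }

                    ret = xz_dec_run(s, &b);

                    if (b.out_pos == b.out_size) {
                            flush(out, b.out_pos);              /* drain output */
                            b.out_pos = 0;
                    }
            }

            flush(out, b.out_pos);  /* flush the last, partially filled buffer */
            xz_dec_end(s);
            return ret == XZ_STREAM_END ? 0 : -1;
    }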
   1.435 +diff --git a/lib/Kconfig b/lib/Kconfig
   1.436 +index fa9bf2c..6090314 100644
   1.437 +--- a/lib/Kconfig
   1.438 ++++ b/lib/Kconfig
   1.439 +@@ -106,6 +106,8 @@ config LZO_COMPRESS
   1.440 + config LZO_DECOMPRESS
   1.441 + 	tristate
   1.442 + 
   1.443 ++source "lib/xz/Kconfig"
   1.444 ++
   1.445 + #
   1.446 + # These all provide a common interface (hence the apparent duplication with
   1.447 + # ZLIB_INFLATE; DECOMPRESS_GZIP is just a wrapper.)
   1.448 +diff --git a/lib/Makefile b/lib/Makefile
   1.449 +index e6a3763..f2f98dd 100644
   1.450 +--- a/lib/Makefile
   1.451 ++++ b/lib/Makefile
   1.452 +@@ -69,6 +69,7 @@ obj-$(CONFIG_ZLIB_DEFLATE) += zlib_deflate/
   1.453 + obj-$(CONFIG_REED_SOLOMON) += reed_solomon/
   1.454 + obj-$(CONFIG_LZO_COMPRESS) += lzo/
   1.455 + obj-$(CONFIG_LZO_DECOMPRESS) += lzo/
   1.456 ++obj-$(CONFIG_XZ_DEC) += xz/
   1.457 + obj-$(CONFIG_RAID6_PQ) += raid6/
   1.458 + 
   1.459 + lib-$(CONFIG_DECOMPRESS_GZIP) += decompress_inflate.o
   1.460 +diff --git a/lib/xz/Kconfig b/lib/xz/Kconfig
   1.461 +new file mode 100644
   1.462 +index 0000000..e3b6e18
   1.463 +--- /dev/null
   1.464 ++++ b/lib/xz/Kconfig
   1.465 +@@ -0,0 +1,59 @@
   1.466 ++config XZ_DEC
   1.467 ++	tristate "XZ decompression support"
   1.468 ++	select CRC32
   1.469 ++	help
   1.470 ++	  LZMA2 compression algorithm and BCJ filters are supported using
   1.471 ++	  the .xz file format as the container. For integrity checking,
   1.472 ++	  CRC32 is supported. See Documentation/xz.txt for more information.
   1.473 ++
   1.474 ++config XZ_DEC_X86
   1.475 ++	bool "x86 BCJ filter decoder" if EMBEDDED
   1.476 ++	default y
   1.477 ++	depends on XZ_DEC
   1.478 ++	select XZ_DEC_BCJ
   1.479 ++
   1.480 ++config XZ_DEC_POWERPC
   1.481 ++	bool "PowerPC BCJ filter decoder" if EMBEDDED
   1.482 ++	default y
   1.483 ++	depends on XZ_DEC
   1.484 ++	select XZ_DEC_BCJ
   1.485 ++
   1.486 ++config XZ_DEC_IA64
   1.487 ++	bool "IA-64 BCJ filter decoder" if EMBEDDED
   1.488 ++	default y
   1.489 ++	depends on XZ_DEC
   1.490 ++	select XZ_DEC_BCJ
   1.491 ++
   1.492 ++config XZ_DEC_ARM
   1.493 ++	bool "ARM BCJ filter decoder" if EMBEDDED
   1.494 ++	default y
   1.495 ++	depends on XZ_DEC
   1.496 ++	select XZ_DEC_BCJ
   1.497 ++
   1.498 ++config XZ_DEC_ARMTHUMB
   1.499 ++	bool "ARM-Thumb BCJ filter decoder" if EMBEDDED
   1.500 ++	default y
   1.501 ++	depends on XZ_DEC
   1.502 ++	select XZ_DEC_BCJ
   1.503 ++
   1.504 ++config XZ_DEC_SPARC
   1.505 ++	bool "SPARC BCJ filter decoder" if EMBEDDED
   1.506 ++	default y
   1.507 ++	depends on XZ_DEC
   1.508 ++	select XZ_DEC_BCJ
   1.509 ++
   1.510 ++config XZ_DEC_BCJ
   1.511 ++	bool
   1.512 ++	default n
   1.513 ++
   1.514 ++config XZ_DEC_TEST
   1.515 ++	tristate "XZ decompressor tester"
   1.516 ++	default n
   1.517 ++	depends on XZ_DEC
   1.518 ++	help
   1.519 ++	  This allows passing .xz files to the in-kernel XZ decoder via
   1.520 ++	  a character special file. It calculates CRC32 of the decompressed
   1.521 ++	  data and writes diagnostics to the system log.
   1.522 ++
   1.523 ++	  Unless you are developing the XZ decoder, you don't need this
   1.524 ++	  and should say N.
   1.525 +diff --git a/lib/xz/Makefile b/lib/xz/Makefile
   1.526 +new file mode 100644
   1.527 +index 0000000..a7fa769
   1.528 +--- /dev/null
   1.529 ++++ b/lib/xz/Makefile
   1.530 +@@ -0,0 +1,5 @@
   1.531 ++obj-$(CONFIG_XZ_DEC) += xz_dec.o
   1.532 ++xz_dec-y := xz_dec_syms.o xz_dec_stream.o xz_dec_lzma2.o
   1.533 ++xz_dec-$(CONFIG_XZ_DEC_BCJ) += xz_dec_bcj.o
   1.534 ++
   1.535 ++obj-$(CONFIG_XZ_DEC_TEST) += xz_dec_test.o
   1.536 +diff --git a/lib/xz/xz_crc32.c b/lib/xz/xz_crc32.c
   1.537 +new file mode 100644
   1.538 +index 0000000..34532d1
   1.539 +--- /dev/null
   1.540 ++++ b/lib/xz/xz_crc32.c
   1.541 +@@ -0,0 +1,59 @@
   1.542 ++/*
   1.543 ++ * CRC32 using the polynomial from IEEE-802.3
   1.544 ++ *
   1.545 ++ * Authors: Lasse Collin <lasse.collin@tukaani.org>
   1.546 ++ *          Igor Pavlov <http://7-zip.org/>
   1.547 ++ *
   1.548 ++ * This file has been put into the public domain.
   1.549 ++ * You can do whatever you want with this file.
   1.550 ++ */
   1.551 ++
   1.552 ++/*
   1.553 ++ * This is not the fastest implementation, but it is pretty compact.
   1.554 ++ * The fastest versions of xz_crc32() on modern CPUs without hardware
   1.555 ++ * accelerated CRC instruction are 3-5 times as fast as this version,
   1.556 ++ * but they are bigger and use more memory for the lookup table.
   1.557 ++ */
   1.558 ++
   1.559 ++#include "xz_private.h"
   1.560 ++
   1.561 ++/*
   1.562 ++ * STATIC_RW_DATA is used in the pre-boot environment on some architectures.
   1.563 ++ * See <linux/decompress/mm.h> for details.
   1.564 ++ */
   1.565 ++#ifndef STATIC_RW_DATA
   1.566 ++#	define STATIC_RW_DATA static
   1.567 ++#endif
   1.568 ++
   1.569 ++STATIC_RW_DATA uint32_t xz_crc32_table[256];
   1.570 ++
   1.571 ++XZ_EXTERN void xz_crc32_init(void)
   1.572 ++{
   1.573 ++	const uint32_t poly = 0xEDB88320;
   1.574 ++
   1.575 ++	uint32_t i;
   1.576 ++	uint32_t j;
   1.577 ++	uint32_t r;
   1.578 ++
   1.579 ++	for (i = 0; i < 256; ++i) {
   1.580 ++		r = i;
   1.581 ++		for (j = 0; j < 8; ++j)
   1.582 ++			r = (r >> 1) ^ (poly & ~((r & 1) - 1));
   1.583 ++
   1.584 ++		xz_crc32_table[i] = r;
   1.585 ++	}
   1.586 ++
   1.587 ++	return;
   1.588 ++}
   1.589 ++
   1.590 ++XZ_EXTERN uint32_t xz_crc32(const uint8_t *buf, size_t size, uint32_t crc)
   1.591 ++{
   1.592 ++	crc = ~crc;
   1.593 ++
   1.594 ++	while (size != 0) {
   1.595 ++		crc = xz_crc32_table[*buf++ ^ (crc & 0xFF)] ^ (crc >> 8);
   1.596 ++		--size;
   1.597 ++	}
   1.598 ++
   1.599 ++	return ~crc;
   1.600 ++}
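A short usage note on the two functions above, as a sketch with made-up data: the third argument chains calls, so feeding the previous return value back in yields the same result as a single call over the concatenated buffer.

    #include <stdint.h>
    #include "xz.h"

    static uint32_t crc_of_two_parts(void)
    {
            static const uint8_t part1[] = "hello, ";   /* 7 bytes of data */
            static const uint8_t part2[] = "xz";        /* 2 bytes of data */
            uint32_t crc;

            xz_crc32_init();                /* build the lookup table once */

            crc = xz_crc32(part1, 7, 0);    /* third argument 0 starts a new CRC */
            crc = xz_crc32(part2, 2, crc);  /* pass the previous value to continue */

            return crc;                     /* same as xz_crc32("hello, xz", 9, 0) */
    }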
   1.601 +diff --git a/lib/xz/xz_dec_bcj.c b/lib/xz/xz_dec_bcj.c
   1.602 +new file mode 100644
   1.603 +index 0000000..e51e255
   1.604 +--- /dev/null
   1.605 ++++ b/lib/xz/xz_dec_bcj.c
   1.606 +@@ -0,0 +1,561 @@
   1.607 ++/*
   1.608 ++ * Branch/Call/Jump (BCJ) filter decoders
   1.609 ++ *
   1.610 ++ * Authors: Lasse Collin <lasse.collin@tukaani.org>
   1.611 ++ *          Igor Pavlov <http://7-zip.org/>
   1.612 ++ *
   1.613 ++ * This file has been put into the public domain.
   1.614 ++ * You can do whatever you want with this file.
   1.615 ++ */
   1.616 ++
   1.617 ++#include "xz_private.h"
   1.618 ++
   1.619 ++/*
   1.620 ++ * The rest of the file is inside this ifdef. It makes things a little more
   1.621 ++ * convenient when building without support for any BCJ filters.
   1.622 ++ */
   1.623 ++#ifdef XZ_DEC_BCJ
   1.624 ++
   1.625 ++struct xz_dec_bcj {
   1.626 ++	/* Type of the BCJ filter being used */
   1.627 ++	enum {
   1.628 ++		BCJ_X86 = 4,        /* x86 or x86-64 */
   1.629 ++		BCJ_POWERPC = 5,    /* Big endian only */
   1.630 ++		BCJ_IA64 = 6,       /* Big or little endian */
   1.631 ++		BCJ_ARM = 7,        /* Little endian only */
   1.632 ++		BCJ_ARMTHUMB = 8,   /* Little endian only */
   1.633 ++		BCJ_SPARC = 9       /* Big or little endian */
   1.634 ++	} type;
   1.635 ++
   1.636 ++	/*
   1.637 ++	 * Return value of the next filter in the chain. We need to preserve
   1.638 ++	 * this information across calls, because we must not call the next
   1.639 ++	 * filter anymore once it has returned XZ_STREAM_END.
   1.640 ++	 */
   1.641 ++	enum xz_ret ret;
   1.642 ++
   1.643 ++	/* True if we are operating in single-call mode. */
   1.644 ++	bool single_call;
   1.645 ++
   1.646 ++	/*
   1.647 ++	 * Absolute position relative to the beginning of the uncompressed
   1.648 ++	 * data (in a single .xz Block). We care only about the lowest 32
   1.649 ++	 * bits so this doesn't need to be uint64_t even with big files.
   1.650 ++	 */
   1.651 ++	uint32_t pos;
   1.652 ++
   1.653 ++	/* x86 filter state */
   1.654 ++	uint32_t x86_prev_mask;
   1.655 ++
   1.656 ++	/* Temporary space to hold the variables from struct xz_buf */
   1.657 ++	uint8_t *out;
   1.658 ++	size_t out_pos;
   1.659 ++	size_t out_size;
   1.660 ++
   1.661 ++	struct {
   1.662 ++		/* Amount of already filtered data in the beginning of buf */
   1.663 ++		size_t filtered;
   1.664 ++
   1.665 ++		/* Total amount of data currently stored in buf  */
   1.666 ++		size_t size;
   1.667 ++
   1.668 ++		/*
   1.669 ++		 * Buffer to hold a mix of filtered and unfiltered data. This
   1.670 ++		 * needs to be big enough to hold Alignment + 2 * Look-ahead:
   1.671 ++		 *
   1.672 ++		 * Type         Alignment   Look-ahead
   1.673 ++		 * x86              1           4
   1.674 ++		 * PowerPC          4           0
   1.675 ++		 * IA-64           16           0
   1.676 ++		 * ARM              4           0
   1.677 ++		 * ARM-Thumb        2           2
   1.678 ++		 * SPARC            4           0
   1.679 ++		 */
   1.680 ++		uint8_t buf[16];
   1.681 ++	} temp;
   1.682 ++};
   1.683 ++
   1.684 ++#ifdef XZ_DEC_X86
   1.685 ++/*
   1.686 ++ * This is used to test the most significant byte of a memory address
   1.687 ++ * in an x86 instruction.
   1.688 ++ */
   1.689 ++static inline int bcj_x86_test_msbyte(uint8_t b)
   1.690 ++{
   1.691 ++	return b == 0x00 || b == 0xFF;
   1.692 ++}
   1.693 ++
   1.694 ++static size_t bcj_x86(struct xz_dec_bcj *s, uint8_t *buf, size_t size)
   1.695 ++{
   1.696 ++	static const bool mask_to_allowed_status[8]
   1.697 ++		= { true, true, true, false, true, false, false, false };
   1.698 ++
   1.699 ++	static const uint8_t mask_to_bit_num[8] = { 0, 1, 2, 2, 3, 3, 3, 3 };
   1.700 ++
   1.701 ++	size_t i;
   1.702 ++	size_t prev_pos = (size_t)-1;
   1.703 ++	uint32_t prev_mask = s->x86_prev_mask;
   1.704 ++	uint32_t src;
   1.705 ++	uint32_t dest;
   1.706 ++	uint32_t j;
   1.707 ++	uint8_t b;
   1.708 ++
   1.709 ++	if (size <= 4)
   1.710 ++		return 0;
   1.711 ++
   1.712 ++	size -= 4;
   1.713 ++	for (i = 0; i < size; ++i) {
   1.714 ++		if ((buf[i] & 0xFE) != 0xE8)
   1.715 ++			continue;
   1.716 ++
   1.717 ++		prev_pos = i - prev_pos;
   1.718 ++		if (prev_pos > 3) {
   1.719 ++			prev_mask = 0;
   1.720 ++		} else {
   1.721 ++			prev_mask = (prev_mask << (prev_pos - 1)) & 7;
   1.722 ++			if (prev_mask != 0) {
   1.723 ++				b = buf[i + 4 - mask_to_bit_num[prev_mask]];
   1.724 ++				if (!mask_to_allowed_status[prev_mask]
   1.725 ++						|| bcj_x86_test_msbyte(b)) {
   1.726 ++					prev_pos = i;
   1.727 ++					prev_mask = (prev_mask << 1) | 1;
   1.728 ++					continue;
   1.729 ++				}
   1.730 ++			}
   1.731 ++		}
   1.732 ++
   1.733 ++		prev_pos = i;
   1.734 ++
   1.735 ++		if (bcj_x86_test_msbyte(buf[i + 4])) {
   1.736 ++			src = get_unaligned_le32(buf + i + 1);
   1.737 ++			while (true) {
   1.738 ++				dest = src - (s->pos + (uint32_t)i + 5);
   1.739 ++				if (prev_mask == 0)
   1.740 ++					break;
   1.741 ++
   1.742 ++				j = mask_to_bit_num[prev_mask] * 8;
   1.743 ++				b = (uint8_t)(dest >> (24 - j));
   1.744 ++				if (!bcj_x86_test_msbyte(b))
   1.745 ++					break;
   1.746 ++
   1.747 ++				src = dest ^ (((uint32_t)1 << (32 - j)) - 1);
   1.748 ++			}
   1.749 ++
   1.750 ++			dest &= 0x01FFFFFF;
   1.751 ++			dest |= (uint32_t)0 - (dest & 0x01000000);
   1.752 ++			put_unaligned_le32(dest, buf + i + 1);
   1.753 ++			i += 4;
   1.754 ++		} else {
   1.755 ++			prev_mask = (prev_mask << 1) | 1;
   1.756 ++		}
   1.757 ++	}
   1.758 ++
   1.759 ++	prev_pos = i - prev_pos;
   1.760 ++	s->x86_prev_mask = prev_pos > 3 ? 0 : prev_mask << (prev_pos - 1);
   1.761 ++	return i;
   1.762 ++}
   1.763 ++#endif
   1.764 ++
   1.765 ++#ifdef XZ_DEC_POWERPC
   1.766 ++static size_t bcj_powerpc(struct xz_dec_bcj *s, uint8_t *buf, size_t size)
   1.767 ++{
   1.768 ++	size_t i;
   1.769 ++	uint32_t instr;
   1.770 ++
   1.771 ++	for (i = 0; i + 4 <= size; i += 4) {
   1.772 ++		instr = get_unaligned_be32(buf + i);
   1.773 ++		if ((instr & 0xFC000003) == 0x48000001) {
   1.774 ++			instr &= 0x03FFFFFC;
   1.775 ++			instr -= s->pos + (uint32_t)i;
   1.776 ++			instr &= 0x03FFFFFC;
   1.777 ++			instr |= 0x48000001;
   1.778 ++			put_unaligned_be32(instr, buf + i);
   1.779 ++		}
   1.780 ++	}
   1.781 ++
   1.782 ++	return i;
   1.783 ++}
   1.784 ++#endif
   1.785 ++
   1.786 ++#ifdef XZ_DEC_IA64
   1.787 ++static size_t bcj_ia64(struct xz_dec_bcj *s, uint8_t *buf, size_t size)
   1.788 ++{
   1.789 ++	static const uint8_t branch_table[32] = {
   1.790 ++		0, 0, 0, 0, 0, 0, 0, 0,
   1.791 ++		0, 0, 0, 0, 0, 0, 0, 0,
   1.792 ++		4, 4, 6, 6, 0, 0, 7, 7,
   1.793 ++		4, 4, 0, 0, 4, 4, 0, 0
   1.794 ++	};
   1.795 ++
   1.796 ++	/*
   1.797 ++	 * The local variables take a little bit stack space, but it's less
   1.798 ++	 * than what LZMA2 decoder takes, so it doesn't make sense to reduce
   1.799 ++	 * stack usage here without doing that for the LZMA2 decoder too.
   1.800 ++	 */
   1.801 ++
   1.802 ++	/* Loop counters */
   1.803 ++	size_t i;
   1.804 ++	size_t j;
   1.805 ++
   1.806 ++	/* Instruction slot (0, 1, or 2) in the 128-bit instruction word */
   1.807 ++	uint32_t slot;
   1.808 ++
   1.809 ++	/* Bitwise offset of the instruction indicated by slot */
   1.810 ++	uint32_t bit_pos;
   1.811 ++
   1.812 ++	/* bit_pos split into byte and bit parts */
   1.813 ++	uint32_t byte_pos;
   1.814 ++	uint32_t bit_res;
   1.815 ++
   1.816 ++	/* Address part of an instruction */
   1.817 ++	uint32_t addr;
   1.818 ++
   1.819 ++	/* Mask used to detect which instructions to convert */
   1.820 ++	uint32_t mask;
   1.821 ++
   1.822 ++	/* 41-bit instruction stored somewhere in the lowest 48 bits */
   1.823 ++	uint64_t instr;
   1.824 ++
   1.825 ++	/* Instruction normalized with bit_res for easier manipulation */
   1.826 ++	uint64_t norm;
   1.827 ++
   1.828 ++	for (i = 0; i + 16 <= size; i += 16) {
   1.829 ++		mask = branch_table[buf[i] & 0x1F];
   1.830 ++		for (slot = 0, bit_pos = 5; slot < 3; ++slot, bit_pos += 41) {
   1.831 ++			if (((mask >> slot) & 1) == 0)
   1.832 ++				continue;
   1.833 ++
   1.834 ++			byte_pos = bit_pos >> 3;
   1.835 ++			bit_res = bit_pos & 7;
   1.836 ++			instr = 0;
   1.837 ++			for (j = 0; j < 6; ++j)
   1.838 ++				instr |= (uint64_t)(buf[i + j + byte_pos])
   1.839 ++						<< (8 * j);
   1.840 ++
   1.841 ++			norm = instr >> bit_res;
   1.842 ++
   1.843 ++			if (((norm >> 37) & 0x0F) == 0x05
   1.844 ++					&& ((norm >> 9) & 0x07) == 0) {
   1.845 ++				addr = (norm >> 13) & 0x0FFFFF;
   1.846 ++				addr |= ((uint32_t)(norm >> 36) & 1) << 20;
   1.847 ++				addr <<= 4;
   1.848 ++				addr -= s->pos + (uint32_t)i;
   1.849 ++				addr >>= 4;
   1.850 ++
   1.851 ++				norm &= ~((uint64_t)0x8FFFFF << 13);
   1.852 ++				norm |= (uint64_t)(addr & 0x0FFFFF) << 13;
   1.853 ++				norm |= (uint64_t)(addr & 0x100000)
   1.854 ++						<< (36 - 20);
   1.855 ++
   1.856 ++				instr &= (1 << bit_res) - 1;
   1.857 ++				instr |= norm << bit_res;
   1.858 ++
   1.859 ++				for (j = 0; j < 6; j++)
   1.860 ++					buf[i + j + byte_pos]
   1.861 ++						= (uint8_t)(instr >> (8 * j));
   1.862 ++			}
   1.863 ++		}
   1.864 ++	}
   1.865 ++
   1.866 ++	return i;
   1.867 ++}
   1.868 ++#endif
   1.869 ++
   1.870 ++#ifdef XZ_DEC_ARM
   1.871 ++static size_t bcj_arm(struct xz_dec_bcj *s, uint8_t *buf, size_t size)
   1.872 ++{
   1.873 ++	size_t i;
   1.874 ++	uint32_t addr;
   1.875 ++
   1.876 ++	for (i = 0; i + 4 <= size; i += 4) {
   1.877 ++		if (buf[i + 3] == 0xEB) {
   1.878 ++			addr = (uint32_t)buf[i] | ((uint32_t)buf[i + 1] << 8)
   1.879 ++					| ((uint32_t)buf[i + 2] << 16);
   1.880 ++			addr <<= 2;
   1.881 ++			addr -= s->pos + (uint32_t)i + 8;
   1.882 ++			addr >>= 2;
   1.883 ++			buf[i] = (uint8_t)addr;
   1.884 ++			buf[i + 1] = (uint8_t)(addr >> 8);
   1.885 ++			buf[i + 2] = (uint8_t)(addr >> 16);
   1.886 ++		}
   1.887 ++	}
   1.888 ++
   1.889 ++	return i;
   1.890 ++}
   1.891 ++#endif
   1.892 ++
   1.893 ++#ifdef XZ_DEC_ARMTHUMB
   1.894 ++static size_t bcj_armthumb(struct xz_dec_bcj *s, uint8_t *buf, size_t size)
   1.895 ++{
   1.896 ++	size_t i;
   1.897 ++	uint32_t addr;
   1.898 ++
   1.899 ++	for (i = 0; i + 4 <= size; i += 2) {
   1.900 ++		if ((buf[i + 1] & 0xF8) == 0xF0
   1.901 ++				&& (buf[i + 3] & 0xF8) == 0xF8) {
   1.902 ++			addr = (((uint32_t)buf[i + 1] & 0x07) << 19)
   1.903 ++					| ((uint32_t)buf[i] << 11)
   1.904 ++					| (((uint32_t)buf[i + 3] & 0x07) << 8)
   1.905 ++					| (uint32_t)buf[i + 2];
   1.906 ++			addr <<= 1;
   1.907 ++			addr -= s->pos + (uint32_t)i + 4;
   1.908 ++			addr >>= 1;
   1.909 ++			buf[i + 1] = (uint8_t)(0xF0 | ((addr >> 19) & 0x07));
   1.910 ++			buf[i] = (uint8_t)(addr >> 11);
   1.911 ++			buf[i + 3] = (uint8_t)(0xF8 | ((addr >> 8) & 0x07));
   1.912 ++			buf[i + 2] = (uint8_t)addr;
   1.913 ++			i += 2;
   1.914 ++		}
   1.915 ++	}
   1.916 ++
   1.917 ++	return i;
   1.918 ++}
   1.919 ++#endif
   1.920 ++
   1.921 ++#ifdef XZ_DEC_SPARC
   1.922 ++static size_t bcj_sparc(struct xz_dec_bcj *s, uint8_t *buf, size_t size)
   1.923 ++{
   1.924 ++	size_t i;
   1.925 ++	uint32_t instr;
   1.926 ++
   1.927 ++	for (i = 0; i + 4 <= size; i += 4) {
   1.928 ++		instr = get_unaligned_be32(buf + i);
   1.929 ++		if ((instr >> 22) == 0x100 || (instr >> 22) == 0x1FF) {
   1.930 ++			instr <<= 2;
   1.931 ++			instr -= s->pos + (uint32_t)i;
   1.932 ++			instr >>= 2;
   1.933 ++			instr = ((uint32_t)0x40000000 - (instr & 0x400000))
   1.934 ++					| 0x40000000 | (instr & 0x3FFFFF);
   1.935 ++			put_unaligned_be32(instr, buf + i);
   1.936 ++		}
   1.937 ++	}
   1.938 ++
   1.939 ++	return i;
   1.940 ++}
   1.941 ++#endif
   1.942 ++
   1.943 ++/*
   1.944 ++ * Apply the selected BCJ filter. Update *pos and s->pos to match the amount
   1.945 ++ * of data that got filtered.
   1.946 ++ *
   1.947 ++ * NOTE: This is implemented as a switch statement to avoid using function
   1.948 ++ * pointers, which could be problematic in the kernel boot code, which must
   1.949 ++ * avoid pointers to static data (at least on x86).
   1.950 ++ */
   1.951 ++static void bcj_apply(struct xz_dec_bcj *s,
   1.952 ++		      uint8_t *buf, size_t *pos, size_t size)
   1.953 ++{
   1.954 ++	size_t filtered;
   1.955 ++
   1.956 ++	buf += *pos;
   1.957 ++	size -= *pos;
   1.958 ++
   1.959 ++	switch (s->type) {
   1.960 ++#ifdef XZ_DEC_X86
   1.961 ++	case BCJ_X86:
   1.962 ++		filtered = bcj_x86(s, buf, size);
   1.963 ++		break;
   1.964 ++#endif
   1.965 ++#ifdef XZ_DEC_POWERPC
   1.966 ++	case BCJ_POWERPC:
   1.967 ++		filtered = bcj_powerpc(s, buf, size);
   1.968 ++		break;
   1.969 ++#endif
   1.970 ++#ifdef XZ_DEC_IA64
   1.971 ++	case BCJ_IA64:
   1.972 ++		filtered = bcj_ia64(s, buf, size);
   1.973 ++		break;
   1.974 ++#endif
   1.975 ++#ifdef XZ_DEC_ARM
   1.976 ++	case BCJ_ARM:
   1.977 ++		filtered = bcj_arm(s, buf, size);
   1.978 ++		break;
   1.979 ++#endif
   1.980 ++#ifdef XZ_DEC_ARMTHUMB
   1.981 ++	case BCJ_ARMTHUMB:
   1.982 ++		filtered = bcj_armthumb(s, buf, size);
   1.983 ++		break;
   1.984 ++#endif
   1.985 ++#ifdef XZ_DEC_SPARC
   1.986 ++	case BCJ_SPARC:
   1.987 ++		filtered = bcj_sparc(s, buf, size);
   1.988 ++		break;
   1.989 ++#endif
   1.990 ++	default:
   1.991 ++		/* Never reached but silence compiler warnings. */
   1.992 ++		filtered = 0;
   1.993 ++		break;
   1.994 ++	}
   1.995 ++
   1.996 ++	*pos += filtered;
   1.997 ++	s->pos += filtered;
   1.998 ++}
   1.999 ++
  1.1000 ++/*
  1.1001 ++ * Flush pending filtered data from temp to the output buffer.
  1.1002 ++ * Move the remaining mixture of possibly filtered and unfiltered
  1.1003 ++ * data to the beginning of temp.
  1.1004 ++ */
  1.1005 ++static void bcj_flush(struct xz_dec_bcj *s, struct xz_buf *b)
  1.1006 ++{
  1.1007 ++	size_t copy_size;
  1.1008 ++
  1.1009 ++	copy_size = min_t(size_t, s->temp.filtered, b->out_size - b->out_pos);
  1.1010 ++	memcpy(b->out + b->out_pos, s->temp.buf, copy_size);
  1.1011 ++	b->out_pos += copy_size;
  1.1012 ++
  1.1013 ++	s->temp.filtered -= copy_size;
  1.1014 ++	s->temp.size -= copy_size;
  1.1015 ++	memmove(s->temp.buf, s->temp.buf + copy_size, s->temp.size);
  1.1016 ++}
  1.1017 ++
  1.1018 ++/*
   1.1019 ++ * The BCJ filter functions are primitive in the sense that they process the
  1.1020 ++ * data in chunks of 1-16 bytes. To hide this issue, this function does
  1.1021 ++ * some buffering.
  1.1022 ++ */
  1.1023 ++XZ_EXTERN enum xz_ret xz_dec_bcj_run(struct xz_dec_bcj *s,
  1.1024 ++				     struct xz_dec_lzma2 *lzma2,
  1.1025 ++				     struct xz_buf *b)
  1.1026 ++{
  1.1027 ++	size_t out_start;
  1.1028 ++
  1.1029 ++	/*
  1.1030 ++	 * Flush pending already filtered data to the output buffer. Return
   1.1031 ++	 * immediately if we couldn't flush everything, or if the next
  1.1032 ++	 * filter in the chain had already returned XZ_STREAM_END.
  1.1033 ++	 */
  1.1034 ++	if (s->temp.filtered > 0) {
  1.1035 ++		bcj_flush(s, b);
  1.1036 ++		if (s->temp.filtered > 0)
  1.1037 ++			return XZ_OK;
  1.1038 ++
  1.1039 ++		if (s->ret == XZ_STREAM_END)
  1.1040 ++			return XZ_STREAM_END;
  1.1041 ++	}
  1.1042 ++
  1.1043 ++	/*
  1.1044 ++	 * If we have more output space than what is currently pending in
  1.1045 ++	 * temp, copy the unfiltered data from temp to the output buffer
  1.1046 ++	 * and try to fill the output buffer by decoding more data from the
  1.1047 ++	 * next filter in the chain. Apply the BCJ filter on the new data
  1.1048 ++	 * in the output buffer. If everything cannot be filtered, copy it
  1.1049 ++	 * to temp and rewind the output buffer position accordingly.
  1.1050 ++	 */
  1.1051 ++	if (s->temp.size < b->out_size - b->out_pos) {
  1.1052 ++		out_start = b->out_pos;
  1.1053 ++		memcpy(b->out + b->out_pos, s->temp.buf, s->temp.size);
  1.1054 ++		b->out_pos += s->temp.size;
  1.1055 ++
  1.1056 ++		s->ret = xz_dec_lzma2_run(lzma2, b);
  1.1057 ++		if (s->ret != XZ_STREAM_END
  1.1058 ++				&& (s->ret != XZ_OK || s->single_call))
  1.1059 ++			return s->ret;
  1.1060 ++
  1.1061 ++		bcj_apply(s, b->out, &out_start, b->out_pos);
  1.1062 ++
  1.1063 ++		/*
  1.1064 ++		 * As an exception, if the next filter returned XZ_STREAM_END,
  1.1065 ++		 * we can do that too, since the last few bytes that remain
  1.1066 ++		 * unfiltered are meant to remain unfiltered.
  1.1067 ++		 */
  1.1068 ++		if (s->ret == XZ_STREAM_END)
  1.1069 ++			return XZ_STREAM_END;
  1.1070 ++
  1.1071 ++		s->temp.size = b->out_pos - out_start;
  1.1072 ++		b->out_pos -= s->temp.size;
  1.1073 ++		memcpy(s->temp.buf, b->out + b->out_pos, s->temp.size);
  1.1074 ++	}
  1.1075 ++
  1.1076 ++	/*
  1.1077 ++	 * If we have unfiltered data in temp, try to fill by decoding more
  1.1078 ++	 * data from the next filter. Apply the BCJ filter on temp. Then we
  1.1079 ++	 * hopefully can fill the actual output buffer by copying filtered
  1.1080 ++	 * data from temp. A mix of filtered and unfiltered data may be left
   1.1081 ++	 * in temp; it will be taken care of on the next call to this function.
  1.1082 ++	 */
  1.1083 ++	if (s->temp.size > 0) {
  1.1084 ++		/* Make b->out{,_pos,_size} temporarily point to s->temp. */
  1.1085 ++		s->out = b->out;
  1.1086 ++		s->out_pos = b->out_pos;
  1.1087 ++		s->out_size = b->out_size;
  1.1088 ++		b->out = s->temp.buf;
  1.1089 ++		b->out_pos = s->temp.size;
  1.1090 ++		b->out_size = sizeof(s->temp.buf);
  1.1091 ++
  1.1092 ++		s->ret = xz_dec_lzma2_run(lzma2, b);
  1.1093 ++
  1.1094 ++		s->temp.size = b->out_pos;
  1.1095 ++		b->out = s->out;
  1.1096 ++		b->out_pos = s->out_pos;
  1.1097 ++		b->out_size = s->out_size;
  1.1098 ++
  1.1099 ++		if (s->ret != XZ_OK && s->ret != XZ_STREAM_END)
  1.1100 ++			return s->ret;
  1.1101 ++
  1.1102 ++		bcj_apply(s, s->temp.buf, &s->temp.filtered, s->temp.size);
  1.1103 ++
  1.1104 ++		/*
  1.1105 ++		 * If the next filter returned XZ_STREAM_END, we mark that
  1.1106 ++		 * everything is filtered, since the last unfiltered bytes
  1.1107 ++		 * of the stream are meant to be left as is.
  1.1108 ++		 */
  1.1109 ++		if (s->ret == XZ_STREAM_END)
  1.1110 ++			s->temp.filtered = s->temp.size;
  1.1111 ++
  1.1112 ++		bcj_flush(s, b);
  1.1113 ++		if (s->temp.filtered > 0)
  1.1114 ++			return XZ_OK;
  1.1115 ++	}
  1.1116 ++
  1.1117 ++	return s->ret;
  1.1118 ++}
  1.1119 ++
  1.1120 ++XZ_EXTERN struct xz_dec_bcj *xz_dec_bcj_create(bool single_call)
  1.1121 ++{
  1.1122 ++	struct xz_dec_bcj *s = kmalloc(sizeof(*s), GFP_KERNEL);
  1.1123 ++	if (s != NULL)
  1.1124 ++		s->single_call = single_call;
  1.1125 ++
  1.1126 ++	return s;
  1.1127 ++}
  1.1128 ++
  1.1129 ++XZ_EXTERN enum xz_ret xz_dec_bcj_reset(struct xz_dec_bcj *s, uint8_t id)
  1.1130 ++{
  1.1131 ++	switch (id) {
  1.1132 ++#ifdef XZ_DEC_X86
  1.1133 ++	case BCJ_X86:
  1.1134 ++#endif
  1.1135 ++#ifdef XZ_DEC_POWERPC
  1.1136 ++	case BCJ_POWERPC:
  1.1137 ++#endif
  1.1138 ++#ifdef XZ_DEC_IA64
  1.1139 ++	case BCJ_IA64:
  1.1140 ++#endif
  1.1141 ++#ifdef XZ_DEC_ARM
  1.1142 ++	case BCJ_ARM:
  1.1143 ++#endif
  1.1144 ++#ifdef XZ_DEC_ARMTHUMB
  1.1145 ++	case BCJ_ARMTHUMB:
  1.1146 ++#endif
  1.1147 ++#ifdef XZ_DEC_SPARC
  1.1148 ++	case BCJ_SPARC:
  1.1149 ++#endif
  1.1150 ++		break;
  1.1151 ++
  1.1152 ++	default:
  1.1153 ++		/* Unsupported Filter ID */
  1.1154 ++		return XZ_OPTIONS_ERROR;
  1.1155 ++	}
  1.1156 ++
  1.1157 ++	s->type = id;
  1.1158 ++	s->ret = XZ_OK;
  1.1159 ++	s->pos = 0;
  1.1160 ++	s->x86_prev_mask = 0;
  1.1161 ++	s->temp.filtered = 0;
  1.1162 ++	s->temp.size = 0;
  1.1163 ++
  1.1164 ++	return XZ_OK;
  1.1165 ++}
  1.1166 ++
  1.1167 ++#endif
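To make the x86 case above concrete, here is a toy round trip of the arithmetic used in bcj_x86(): the encoder rewrites the rel32 operand of a 5-byte E8 call into an absolute address (equal targets then produce equal bytes, which compresses better), and the decoder subtracts the end address of the instruction to recover the displacement, as in dest = src - (s->pos + i + 5). The helper names and numbers are hypothetical.

    #include <assert.h>
    #include <stdint.h>

    /* pos: position of buf within the uncompressed block,
     * i:   offset of the 0xE8 opcode byte within buf */
    static uint32_t x86_rel_to_abs(uint32_t rel, uint32_t pos, uint32_t i)
    {
            return rel + (pos + i + 5);      /* what the encoder stored */
    }

    static uint32_t x86_abs_to_rel(uint32_t abs_addr, uint32_t pos, uint32_t i)
    {
            return abs_addr - (pos + i + 5); /* what bcj_x86() computes as dest */
    }

    int main(void)
    {
            uint32_t rel = 0x1234;           /* hypothetical call displacement */
            uint32_t abs_addr = x86_rel_to_abs(rel, 0, 16);

            assert(x86_abs_to_rel(abs_addr, 0, 16) == rel);
            return 0;
    }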
  1.1168 +diff --git a/lib/xz/xz_dec_lzma2.c b/lib/xz/xz_dec_lzma2.c
  1.1169 +new file mode 100644
  1.1170 +index 0000000..ea5fa4f
  1.1171 +--- /dev/null
  1.1172 ++++ b/lib/xz/xz_dec_lzma2.c
  1.1173 +@@ -0,0 +1,1171 @@
  1.1174 ++/*
  1.1175 ++ * LZMA2 decoder
  1.1176 ++ *
  1.1177 ++ * Authors: Lasse Collin <lasse.collin@tukaani.org>
  1.1178 ++ *          Igor Pavlov <http://7-zip.org/>
  1.1179 ++ *
  1.1180 ++ * This file has been put into the public domain.
  1.1181 ++ * You can do whatever you want with this file.
  1.1182 ++ */
  1.1183 ++
  1.1184 ++#include "xz_private.h"
  1.1185 ++#include "xz_lzma2.h"
  1.1186 ++
  1.1187 ++/*
  1.1188 ++ * Range decoder initialization eats the first five bytes of each LZMA chunk.
  1.1189 ++ */
  1.1190 ++#define RC_INIT_BYTES 5
  1.1191 ++
  1.1192 ++/*
   1.1193 ++ * Minimum number of usable input bytes to safely decode one LZMA symbol.
   1.1194 ++ * The worst case is that we decode 22 bits using probabilities and 26
   1.1195 ++ * direct bits. This may consume at most 20 bytes of input. However,
  1.1196 ++ * lzma_main() does an extra normalization before returning, thus we
  1.1197 ++ * need to put 21 here.
  1.1198 ++ */
  1.1199 ++#define LZMA_IN_REQUIRED 21
  1.1200 ++
  1.1201 ++/*
  1.1202 ++ * Dictionary (history buffer)
  1.1203 ++ *
  1.1204 ++ * These are always true:
  1.1205 ++ *    start <= pos <= full <= end
  1.1206 ++ *    pos <= limit <= end
  1.1207 ++ *
  1.1208 ++ * In multi-call mode, also these are true:
  1.1209 ++ *    end == size
  1.1210 ++ *    size <= size_max
  1.1211 ++ *    allocated <= size
  1.1212 ++ *
  1.1213 ++ * Most of these variables are size_t to support single-call mode,
  1.1214 ++ * in which the dictionary variables address the actual output
  1.1215 ++ * buffer directly.
  1.1216 ++ */
  1.1217 ++struct dictionary {
  1.1218 ++	/* Beginning of the history buffer */
  1.1219 ++	uint8_t *buf;
  1.1220 ++
  1.1221 ++	/* Old position in buf (before decoding more data) */
  1.1222 ++	size_t start;
  1.1223 ++
  1.1224 ++	/* Position in buf */
  1.1225 ++	size_t pos;
  1.1226 ++
  1.1227 ++	/*
   1.1228 ++	 * How full the dictionary is. This is used to detect corrupt input that
  1.1229 ++	 * would read beyond the beginning of the uncompressed stream.
  1.1230 ++	 */
  1.1231 ++	size_t full;
  1.1232 ++
  1.1233 ++	/* Write limit; we don't write to buf[limit] or later bytes. */
  1.1234 ++	size_t limit;
  1.1235 ++
  1.1236 ++	/*
  1.1237 ++	 * End of the dictionary buffer. In multi-call mode, this is
  1.1238 ++	 * the same as the dictionary size. In single-call mode, this
  1.1239 ++	 * indicates the size of the output buffer.
  1.1240 ++	 */
  1.1241 ++	size_t end;
  1.1242 ++
  1.1243 ++	/*
  1.1244 ++	 * Size of the dictionary as specified in Block Header. This is used
  1.1245 ++	 * together with "full" to detect corrupt input that would make us
  1.1246 ++	 * read beyond the beginning of the uncompressed stream.
  1.1247 ++	 */
  1.1248 ++	uint32_t size;
  1.1249 ++
  1.1250 ++	/*
  1.1251 ++	 * Maximum allowed dictionary size in multi-call mode.
  1.1252 ++	 * This is ignored in single-call mode.
  1.1253 ++	 */
  1.1254 ++	uint32_t size_max;
  1.1255 ++
  1.1256 ++	/*
  1.1257 ++	 * Amount of memory currently allocated for the dictionary.
  1.1258 ++	 * This is used only with XZ_DYNALLOC. (With XZ_PREALLOC,
  1.1259 ++	 * size_max is always the same as the allocated size.)
  1.1260 ++	 */
  1.1261 ++	uint32_t allocated;
  1.1262 ++
  1.1263 ++	/* Operation mode */
  1.1264 ++	enum xz_mode mode;
  1.1265 ++};
  1.1266 ++
  1.1267 ++/* Range decoder */
  1.1268 ++struct rc_dec {
  1.1269 ++	uint32_t range;
  1.1270 ++	uint32_t code;
  1.1271 ++
  1.1272 ++	/*
  1.1273 ++	 * Number of initializing bytes remaining to be read
  1.1274 ++	 * by rc_read_init().
  1.1275 ++	 */
  1.1276 ++	uint32_t init_bytes_left;
  1.1277 ++
  1.1278 ++	/*
  1.1279 ++	 * Buffer from which we read our input. It can be either
  1.1280 ++	 * temp.buf or the caller-provided input buffer.
  1.1281 ++	 */
  1.1282 ++	const uint8_t *in;
  1.1283 ++	size_t in_pos;
  1.1284 ++	size_t in_limit;
  1.1285 ++};
  1.1286 ++
  1.1287 ++/* Probabilities for a length decoder. */
  1.1288 ++struct lzma_len_dec {
  1.1289 ++	/* Probability of match length being at least 10 */
  1.1290 ++	uint16_t choice;
  1.1291 ++
  1.1292 ++	/* Probability of match length being at least 18 */
  1.1293 ++	uint16_t choice2;
  1.1294 ++
  1.1295 ++	/* Probabilities for match lengths 2-9 */
  1.1296 ++	uint16_t low[POS_STATES_MAX][LEN_LOW_SYMBOLS];
  1.1297 ++
  1.1298 ++	/* Probabilities for match lengths 10-17 */
  1.1299 ++	uint16_t mid[POS_STATES_MAX][LEN_MID_SYMBOLS];
  1.1300 ++
  1.1301 ++	/* Probabilities for match lengths 18-273 */
  1.1302 ++	uint16_t high[LEN_HIGH_SYMBOLS];
  1.1303 ++};
  1.1304 ++
  1.1305 ++struct lzma_dec {
  1.1306 ++	/* Distances of latest four matches */
  1.1307 ++	uint32_t rep0;
  1.1308 ++	uint32_t rep1;
  1.1309 ++	uint32_t rep2;
  1.1310 ++	uint32_t rep3;
  1.1311 ++
  1.1312 ++	/* Types of the most recently seen LZMA symbols */
  1.1313 ++	enum lzma_state state;
  1.1314 ++
  1.1315 ++	/*
  1.1316 ++	 * Length of a match. This is updated so that dict_repeat can
  1.1317 ++	 * be called again to finish repeating the whole match.
  1.1318 ++	 */
  1.1319 ++	uint32_t len;
  1.1320 ++
  1.1321 ++	/*
  1.1322 ++	 * LZMA properties or related bit masks (number of literal
   1.1323 ++	 * context bits, a mask derived from the number of literal
   1.1324 ++	 * position bits, and a mask derived from the number of
   1.1325 ++	 * position bits)
  1.1326 ++	 */
  1.1327 ++	uint32_t lc;
  1.1328 ++	uint32_t literal_pos_mask; /* (1 << lp) - 1 */
  1.1329 ++	uint32_t pos_mask;         /* (1 << pb) - 1 */
  1.1330 ++
  1.1331 ++	/* If 1, it's a match. Otherwise it's a single 8-bit literal. */
  1.1332 ++	uint16_t is_match[STATES][POS_STATES_MAX];
  1.1333 ++
  1.1334 ++	/* If 1, it's a repeated match. The distance is one of rep0 .. rep3. */
  1.1335 ++	uint16_t is_rep[STATES];
  1.1336 ++
  1.1337 ++	/*
  1.1338 ++	 * If 0, distance of a repeated match is rep0.
  1.1339 ++	 * Otherwise check is_rep1.
  1.1340 ++	 */
  1.1341 ++	uint16_t is_rep0[STATES];
  1.1342 ++
  1.1343 ++	/*
  1.1344 ++	 * If 0, distance of a repeated match is rep1.
  1.1345 ++	 * Otherwise check is_rep2.
  1.1346 ++	 */
  1.1347 ++	uint16_t is_rep1[STATES];
  1.1348 ++
  1.1349 ++	/* If 0, distance of a repeated match is rep2. Otherwise it is rep3. */
  1.1350 ++	uint16_t is_rep2[STATES];
  1.1351 ++
  1.1352 ++	/*
  1.1353 ++	 * If 1, the repeated match has length of one byte. Otherwise
  1.1354 ++	 * the length is decoded from rep_len_decoder.
  1.1355 ++	 */
  1.1356 ++	uint16_t is_rep0_long[STATES][POS_STATES_MAX];
  1.1357 ++
  1.1358 ++	/*
  1.1359 ++	 * Probability tree for the highest two bits of the match
  1.1360 ++	 * distance. There is a separate probability tree for match
  1.1361 ++	 * lengths of 2 (i.e. MATCH_LEN_MIN), 3, 4, and [5, 273].
  1.1362 ++	 */
  1.1363 ++	uint16_t dist_slot[DIST_STATES][DIST_SLOTS];
  1.1364 ++
  1.1365 ++	/*
   1.1366 ++	 * Probability trees for additional bits for match distance
  1.1367 ++	 * when the distance is in the range [4, 127].
  1.1368 ++	 */
  1.1369 ++	uint16_t dist_special[FULL_DISTANCES - DIST_MODEL_END];
  1.1370 ++
  1.1371 ++	/*
  1.1372 ++	 * Probability tree for the lowest four bits of a match
  1.1373 ++	 * distance that is equal to or greater than 128.
  1.1374 ++	 */
  1.1375 ++	uint16_t dist_align[ALIGN_SIZE];
  1.1376 ++
  1.1377 ++	/* Length of a normal match */
  1.1378 ++	struct lzma_len_dec match_len_dec;
  1.1379 ++
  1.1380 ++	/* Length of a repeated match */
  1.1381 ++	struct lzma_len_dec rep_len_dec;
  1.1382 ++
  1.1383 ++	/* Probabilities of literals */
  1.1384 ++	uint16_t literal[LITERAL_CODERS_MAX][LITERAL_CODER_SIZE];
  1.1385 ++};
  1.1386 ++
  1.1387 ++struct lzma2_dec {
  1.1388 ++	/* Position in xz_dec_lzma2_run(). */
  1.1389 ++	enum lzma2_seq {
  1.1390 ++		SEQ_CONTROL,
  1.1391 ++		SEQ_UNCOMPRESSED_1,
  1.1392 ++		SEQ_UNCOMPRESSED_2,
  1.1393 ++		SEQ_COMPRESSED_0,
  1.1394 ++		SEQ_COMPRESSED_1,
  1.1395 ++		SEQ_PROPERTIES,
  1.1396 ++		SEQ_LZMA_PREPARE,
  1.1397 ++		SEQ_LZMA_RUN,
  1.1398 ++		SEQ_COPY
  1.1399 ++	} sequence;
  1.1400 ++
  1.1401 ++	/* Next position after decoding the compressed size of the chunk. */
  1.1402 ++	enum lzma2_seq next_sequence;
  1.1403 ++
  1.1404 ++	/* Uncompressed size of LZMA chunk (2 MiB at maximum) */
  1.1405 ++	uint32_t uncompressed;
  1.1406 ++
  1.1407 ++	/*
  1.1408 ++	 * Compressed size of LZMA chunk or compressed/uncompressed
  1.1409 ++	 * size of uncompressed chunk (64 KiB at maximum)
  1.1410 ++	 */
  1.1411 ++	uint32_t compressed;
  1.1412 ++
  1.1413 ++	/*
   1.1414 ++	 * True if dictionary reset is needed. This is true before
   1.1415 ++	 * the first chunk (LZMA or uncompressed).
  1.1416 ++	 */
  1.1417 ++	bool need_dict_reset;
  1.1418 ++
  1.1419 ++	/*
  1.1420 ++	 * True if new LZMA properties are needed. This is false
  1.1421 ++	 * before the first LZMA chunk.
  1.1422 ++	 */
  1.1423 ++	bool need_props;
  1.1424 ++};
  1.1425 ++
  1.1426 ++struct xz_dec_lzma2 {
  1.1427 ++	/*
  1.1428 ++	 * The order below is important on x86 to reduce code size and
  1.1429 ++	 * it shouldn't hurt on other platforms. Everything up to and
  1.1430 ++	 * including lzma.pos_mask are in the first 128 bytes on x86-32,
  1.1431 ++	 * which allows using smaller instructions to access those
  1.1432 ++	 * variables. On x86-64, fewer variables fit into the first 128
  1.1433 ++	 * bytes, but this is still the best order without sacrificing
  1.1434 ++	 * the readability by splitting the structures.
  1.1435 ++	 */
  1.1436 ++	struct rc_dec rc;
  1.1437 ++	struct dictionary dict;
  1.1438 ++	struct lzma2_dec lzma2;
  1.1439 ++	struct lzma_dec lzma;
  1.1440 ++
  1.1441 ++	/*
   1.1442 ++	 * Temporary buffer which holds a small number of input bytes between
  1.1443 ++	 * decoder calls. See lzma2_lzma() for details.
  1.1444 ++	 */
  1.1445 ++	struct {
  1.1446 ++		uint32_t size;
  1.1447 ++		uint8_t buf[3 * LZMA_IN_REQUIRED];
  1.1448 ++	} temp;
  1.1449 ++};
  1.1450 ++
  1.1451 ++/**************
  1.1452 ++ * Dictionary *
  1.1453 ++ **************/
  1.1454 ++
  1.1455 ++/*
  1.1456 ++ * Reset the dictionary state. When in single-call mode, set up the beginning
  1.1457 ++ * of the dictionary to point to the actual output buffer.
  1.1458 ++ */
  1.1459 ++static void dict_reset(struct dictionary *dict, struct xz_buf *b)
  1.1460 ++{
  1.1461 ++	if (DEC_IS_SINGLE(dict->mode)) {
  1.1462 ++		dict->buf = b->out + b->out_pos;
  1.1463 ++		dict->end = b->out_size - b->out_pos;
  1.1464 ++	}
  1.1465 ++
  1.1466 ++	dict->start = 0;
  1.1467 ++	dict->pos = 0;
  1.1468 ++	dict->limit = 0;
  1.1469 ++	dict->full = 0;
  1.1470 ++}
  1.1471 ++
  1.1472 ++/* Set dictionary write limit */
  1.1473 ++static void dict_limit(struct dictionary *dict, size_t out_max)
  1.1474 ++{
  1.1475 ++	if (dict->end - dict->pos <= out_max)
  1.1476 ++		dict->limit = dict->end;
  1.1477 ++	else
  1.1478 ++		dict->limit = dict->pos + out_max;
  1.1479 ++}
  1.1480 ++
  1.1481 ++/* Return true if at least one byte can be written into the dictionary. */
  1.1482 ++static inline bool dict_has_space(const struct dictionary *dict)
  1.1483 ++{
  1.1484 ++	return dict->pos < dict->limit;
  1.1485 ++}
  1.1486 ++
  1.1487 ++/*
  1.1488 ++ * Get a byte from the dictionary at the given distance. The distance is
   1.1489 ++ * assumed to be valid, or as a special case, zero when the dictionary is
  1.1490 ++ * still empty. This special case is needed for single-call decoding to
  1.1491 ++ * avoid writing a '\0' to the end of the destination buffer.
  1.1492 ++ */
  1.1493 ++static inline uint32_t dict_get(const struct dictionary *dict, uint32_t dist)
  1.1494 ++{
  1.1495 ++	size_t offset = dict->pos - dist - 1;
  1.1496 ++
  1.1497 ++	if (dist >= dict->pos)
  1.1498 ++		offset += dict->end;
  1.1499 ++
  1.1500 ++	return dict->full > 0 ? dict->buf[offset] : 0;
  1.1501 ++}
  1.1502 ++
  1.1503 ++/*
  1.1504 ++ * Put one byte into the dictionary. It is assumed that there is space for it.
  1.1505 ++ */
  1.1506 ++static inline void dict_put(struct dictionary *dict, uint8_t byte)
  1.1507 ++{
  1.1508 ++	dict->buf[dict->pos++] = byte;
  1.1509 ++
  1.1510 ++	if (dict->full < dict->pos)
  1.1511 ++		dict->full = dict->pos;
  1.1512 ++}
  1.1513 ++
  1.1514 ++/*
  1.1515 ++ * Repeat given number of bytes from the given distance. If the distance is
  1.1516 ++ * invalid, false is returned. On success, true is returned and *len is
  1.1517 ++ * updated to indicate how many bytes were left to be repeated.
  1.1518 ++ */
  1.1519 ++static bool dict_repeat(struct dictionary *dict, uint32_t *len, uint32_t dist)
  1.1520 ++{
  1.1521 ++	size_t back;
  1.1522 ++	uint32_t left;
  1.1523 ++
  1.1524 ++	if (dist >= dict->full || dist >= dict->size)
  1.1525 ++		return false;
  1.1526 ++
  1.1527 ++	left = min_t(size_t, dict->limit - dict->pos, *len);
  1.1528 ++	*len -= left;
  1.1529 ++
  1.1530 ++	back = dict->pos - dist - 1;
  1.1531 ++	if (dist >= dict->pos)
  1.1532 ++		back += dict->end;
  1.1533 ++
  1.1534 ++	do {
  1.1535 ++		dict->buf[dict->pos++] = dict->buf[back++];
  1.1536 ++		if (back == dict->end)
  1.1537 ++			back = 0;
  1.1538 ++	} while (--left > 0);
  1.1539 ++
  1.1540 ++	if (dict->full < dict->pos)
  1.1541 ++		dict->full = dict->pos;
  1.1542 ++
  1.1543 ++	return true;
  1.1544 ++}
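
A note on the byte-by-byte loop above: dict_repeat() deliberately avoids memcpy(),
because when the match distance is smaller than the match length the source and
destination overlap and the bytes just written become the next bytes to copy.
A standalone sketch of that behaviour (wrap-around and limit handling stripped
out, buffer contents invented purely for illustration):

	#include <stdint.h>
	#include <stdio.h>

	/* Simplified model of the copy loop in dict_repeat(): dist == 0 is a
	 * distance of one byte, so the most recent byte is replicated len
	 * times -- the overlapping case a plain memcpy() would get wrong. */
	static void repeat_demo(uint8_t *buf, size_t *pos,
				uint32_t dist, uint32_t len)
	{
		size_t back = *pos - dist - 1;

		while (len-- > 0)
			buf[(*pos)++] = buf[back++];
	}

	int main(void)
	{
		uint8_t buf[16] = "ab";
		size_t pos = 2;

		repeat_demo(buf, &pos, 0, 4);
		printf("%.*s\n", (int)pos, (const char *)buf); /* "abbbbb" */
		return 0;
	}
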
  1.1545 ++
  1.1546 ++/* Copy uncompressed data as is from input to dictionary and output buffers. */
  1.1547 ++static void dict_uncompressed(struct dictionary *dict, struct xz_buf *b,
  1.1548 ++			      uint32_t *left)
  1.1549 ++{
  1.1550 ++	size_t copy_size;
  1.1551 ++
  1.1552 ++	while (*left > 0 && b->in_pos < b->in_size
  1.1553 ++			&& b->out_pos < b->out_size) {
  1.1554 ++		copy_size = min(b->in_size - b->in_pos,
  1.1555 ++				b->out_size - b->out_pos);
  1.1556 ++		if (copy_size > dict->end - dict->pos)
  1.1557 ++			copy_size = dict->end - dict->pos;
  1.1558 ++		if (copy_size > *left)
  1.1559 ++			copy_size = *left;
  1.1560 ++
  1.1561 ++		*left -= copy_size;
  1.1562 ++
  1.1563 ++		memcpy(dict->buf + dict->pos, b->in + b->in_pos, copy_size);
  1.1564 ++		dict->pos += copy_size;
  1.1565 ++
  1.1566 ++		if (dict->full < dict->pos)
  1.1567 ++			dict->full = dict->pos;
  1.1568 ++
  1.1569 ++		if (DEC_IS_MULTI(dict->mode)) {
  1.1570 ++			if (dict->pos == dict->end)
  1.1571 ++				dict->pos = 0;
  1.1572 ++
  1.1573 ++			memcpy(b->out + b->out_pos, b->in + b->in_pos,
  1.1574 ++					copy_size);
  1.1575 ++		}
  1.1576 ++
  1.1577 ++		dict->start = dict->pos;
  1.1578 ++
  1.1579 ++		b->out_pos += copy_size;
  1.1580 ++		b->in_pos += copy_size;
  1.1581 ++	}
  1.1582 ++}
  1.1583 ++
  1.1584 ++/*
  1.1585 ++ * Flush pending data from dictionary to b->out. It is assumed that there is
   1.1586 ++ * enough space in b->out. This is guaranteed because the caller uses
  1.1587 ++ * before decoding data into the dictionary.
  1.1588 ++ */
  1.1589 ++static uint32_t dict_flush(struct dictionary *dict, struct xz_buf *b)
  1.1590 ++{
  1.1591 ++	size_t copy_size = dict->pos - dict->start;
  1.1592 ++
  1.1593 ++	if (DEC_IS_MULTI(dict->mode)) {
  1.1594 ++		if (dict->pos == dict->end)
  1.1595 ++			dict->pos = 0;
  1.1596 ++
  1.1597 ++		memcpy(b->out + b->out_pos, dict->buf + dict->start,
  1.1598 ++				copy_size);
  1.1599 ++	}
  1.1600 ++
  1.1601 ++	dict->start = dict->pos;
  1.1602 ++	b->out_pos += copy_size;
  1.1603 ++	return copy_size;
  1.1604 ++}
  1.1605 ++
  1.1606 ++/*****************
  1.1607 ++ * Range decoder *
  1.1608 ++ *****************/
  1.1609 ++
  1.1610 ++/* Reset the range decoder. */
  1.1611 ++static void rc_reset(struct rc_dec *rc)
  1.1612 ++{
  1.1613 ++	rc->range = (uint32_t)-1;
  1.1614 ++	rc->code = 0;
  1.1615 ++	rc->init_bytes_left = RC_INIT_BYTES;
  1.1616 ++}
  1.1617 ++
  1.1618 ++/*
  1.1619 ++ * Read the first five initial bytes into rc->code if they haven't been
  1.1620 ++ * read already. (Yes, the first byte gets completely ignored.)
  1.1621 ++ */
  1.1622 ++static bool rc_read_init(struct rc_dec *rc, struct xz_buf *b)
  1.1623 ++{
  1.1624 ++	while (rc->init_bytes_left > 0) {
  1.1625 ++		if (b->in_pos == b->in_size)
  1.1626 ++			return false;
  1.1627 ++
  1.1628 ++		rc->code = (rc->code << 8) + b->in[b->in_pos++];
  1.1629 ++		--rc->init_bytes_left;
  1.1630 ++	}
  1.1631 ++
  1.1632 ++	return true;
  1.1633 ++}
  1.1634 ++
  1.1635 ++/* Return true if there may not be enough input for the next decoding loop. */
  1.1636 ++static inline bool rc_limit_exceeded(const struct rc_dec *rc)
  1.1637 ++{
  1.1638 ++	return rc->in_pos > rc->in_limit;
  1.1639 ++}
  1.1640 ++
  1.1641 ++/*
   1.1642 ++ * Return true if it is possible (from the point of view of the range decoder)
  1.1643 ++ * we have reached the end of the LZMA chunk.
  1.1644 ++ */
  1.1645 ++static inline bool rc_is_finished(const struct rc_dec *rc)
  1.1646 ++{
  1.1647 ++	return rc->code == 0;
  1.1648 ++}
  1.1649 ++
  1.1650 ++/* Read the next input byte if needed. */
  1.1651 ++static __always_inline void rc_normalize(struct rc_dec *rc)
  1.1652 ++{
  1.1653 ++	if (rc->range < RC_TOP_VALUE) {
  1.1654 ++		rc->range <<= RC_SHIFT_BITS;
  1.1655 ++		rc->code = (rc->code << RC_SHIFT_BITS) + rc->in[rc->in_pos++];
  1.1656 ++	}
  1.1657 ++}
  1.1658 ++
  1.1659 ++/*
   1.1660 ++ * Decode one bit. In some versions, this function has been split into three
  1.1661 ++ * functions so that the compiler is supposed to be able to more easily avoid
  1.1662 ++ * an extra branch. In this particular version of the LZMA decoder, this
  1.1663 ++ * doesn't seem to be a good idea (tested with GCC 3.3.6, 3.4.6, and 4.3.3
   1.1664 ++ * on x86). Using a non-split version results in nicer looking code too.
  1.1665 ++ *
  1.1666 ++ * NOTE: This must return an int. Do not make it return a bool or the speed
  1.1667 ++ * of the code generated by GCC 3.x decreases 10-15 %. (GCC 4.3 doesn't care,
  1.1668 ++ * and it generates 10-20 % faster code than GCC 3.x from this file anyway.)
  1.1669 ++ */
  1.1670 ++static __always_inline int rc_bit(struct rc_dec *rc, uint16_t *prob)
  1.1671 ++{
  1.1672 ++	uint32_t bound;
  1.1673 ++	int bit;
  1.1674 ++
  1.1675 ++	rc_normalize(rc);
  1.1676 ++	bound = (rc->range >> RC_BIT_MODEL_TOTAL_BITS) * *prob;
  1.1677 ++	if (rc->code < bound) {
  1.1678 ++		rc->range = bound;
  1.1679 ++		*prob += (RC_BIT_MODEL_TOTAL - *prob) >> RC_MOVE_BITS;
  1.1680 ++		bit = 0;
  1.1681 ++	} else {
  1.1682 ++		rc->range -= bound;
  1.1683 ++		rc->code -= bound;
  1.1684 ++		*prob -= *prob >> RC_MOVE_BITS;
  1.1685 ++		bit = 1;
  1.1686 ++	}
  1.1687 ++
  1.1688 ++	return bit;
  1.1689 ++}
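
The bound computed above splits the current range in proportion to *prob, and
every decoded bit then nudges the estimate toward what was actually observed.
A standalone sketch of just the adaptation step, assuming the usual LZMA
constants (RC_BIT_MODEL_TOTAL = 1 << 11 and RC_MOVE_BITS = 5, as defined
earlier in this patch):

	#include <stdint.h>
	#include <stdio.h>

	#define BIT_MODEL_TOTAL (1U << 11)	/* assumed RC_BIT_MODEL_TOTAL */
	#define MOVE_BITS 5			/* assumed RC_MOVE_BITS */

	/* Mirror of the probability update in rc_bit(): a decoded 0 moves the
	 * estimate toward BIT_MODEL_TOTAL, a decoded 1 moves it toward 0. */
	static void update(uint16_t *prob, int bit)
	{
		if (bit == 0)
			*prob += (BIT_MODEL_TOTAL - *prob) >> MOVE_BITS;
		else
			*prob -= *prob >> MOVE_BITS;
	}

	int main(void)
	{
		uint16_t prob = BIT_MODEL_TOTAL / 2;	/* 1024, the reset value */
		int i;

		for (i = 0; i < 8; ++i)
			update(&prob, 0);

		printf("%u\n", prob);	/* 1252: biased toward zero bits */
		return 0;
	}
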
  1.1690 ++
  1.1691 ++/* Decode a bittree starting from the most significant bit. */
  1.1692 ++static __always_inline uint32_t rc_bittree(struct rc_dec *rc,
  1.1693 ++					   uint16_t *probs, uint32_t limit)
  1.1694 ++{
  1.1695 ++	uint32_t symbol = 1;
  1.1696 ++
  1.1697 ++	do {
  1.1698 ++		if (rc_bit(rc, &probs[symbol]))
  1.1699 ++			symbol = (symbol << 1) + 1;
  1.1700 ++		else
  1.1701 ++			symbol <<= 1;
  1.1702 ++	} while (symbol < limit);
  1.1703 ++
  1.1704 ++	return symbol;
  1.1705 ++}
  1.1706 ++
  1.1707 ++/* Decode a bittree starting from the least significant bit. */
  1.1708 ++static __always_inline void rc_bittree_reverse(struct rc_dec *rc,
  1.1709 ++					       uint16_t *probs,
  1.1710 ++					       uint32_t *dest, uint32_t limit)
  1.1711 ++{
  1.1712 ++	uint32_t symbol = 1;
  1.1713 ++	uint32_t i = 0;
  1.1714 ++
  1.1715 ++	do {
  1.1716 ++		if (rc_bit(rc, &probs[symbol])) {
  1.1717 ++			symbol = (symbol << 1) + 1;
  1.1718 ++			*dest += 1 << i;
  1.1719 ++		} else {
  1.1720 ++			symbol <<= 1;
  1.1721 ++		}
  1.1722 ++	} while (++i < limit);
  1.1723 ++}
  1.1724 ++
  1.1725 ++/* Decode direct bits (fixed fifty-fifty probability) */
  1.1726 ++static inline void rc_direct(struct rc_dec *rc, uint32_t *dest, uint32_t limit)
  1.1727 ++{
  1.1728 ++	uint32_t mask;
  1.1729 ++
  1.1730 ++	do {
  1.1731 ++		rc_normalize(rc);
  1.1732 ++		rc->range >>= 1;
  1.1733 ++		rc->code -= rc->range;
  1.1734 ++		mask = (uint32_t)0 - (rc->code >> 31);
  1.1735 ++		rc->code += rc->range & mask;
  1.1736 ++		*dest = (*dest << 1) + (mask + 1);
  1.1737 ++	} while (--limit > 0);
  1.1738 ++}
  1.1739 ++
  1.1740 ++/********
  1.1741 ++ * LZMA *
  1.1742 ++ ********/
  1.1743 ++
  1.1744 ++/* Get pointer to literal coder probability array. */
  1.1745 ++static uint16_t *lzma_literal_probs(struct xz_dec_lzma2 *s)
  1.1746 ++{
  1.1747 ++	uint32_t prev_byte = dict_get(&s->dict, 0);
  1.1748 ++	uint32_t low = prev_byte >> (8 - s->lzma.lc);
  1.1749 ++	uint32_t high = (s->dict.pos & s->lzma.literal_pos_mask) << s->lzma.lc;
  1.1750 ++	return s->lzma.literal[low + high];
  1.1751 ++}
  1.1752 ++
  1.1753 ++/* Decode a literal (one 8-bit byte) */
  1.1754 ++static void lzma_literal(struct xz_dec_lzma2 *s)
  1.1755 ++{
  1.1756 ++	uint16_t *probs;
  1.1757 ++	uint32_t symbol;
  1.1758 ++	uint32_t match_byte;
  1.1759 ++	uint32_t match_bit;
  1.1760 ++	uint32_t offset;
  1.1761 ++	uint32_t i;
  1.1762 ++
  1.1763 ++	probs = lzma_literal_probs(s);
  1.1764 ++
  1.1765 ++	if (lzma_state_is_literal(s->lzma.state)) {
  1.1766 ++		symbol = rc_bittree(&s->rc, probs, 0x100);
  1.1767 ++	} else {
  1.1768 ++		symbol = 1;
  1.1769 ++		match_byte = dict_get(&s->dict, s->lzma.rep0) << 1;
  1.1770 ++		offset = 0x100;
  1.1771 ++
  1.1772 ++		do {
  1.1773 ++			match_bit = match_byte & offset;
  1.1774 ++			match_byte <<= 1;
  1.1775 ++			i = offset + match_bit + symbol;
  1.1776 ++
  1.1777 ++			if (rc_bit(&s->rc, &probs[i])) {
  1.1778 ++				symbol = (symbol << 1) + 1;
  1.1779 ++				offset &= match_bit;
  1.1780 ++			} else {
  1.1781 ++				symbol <<= 1;
  1.1782 ++				offset &= ~match_bit;
  1.1783 ++			}
  1.1784 ++		} while (symbol < 0x100);
  1.1785 ++	}
  1.1786 ++
  1.1787 ++	dict_put(&s->dict, (uint8_t)symbol);
  1.1788 ++	lzma_state_literal(&s->lzma.state);
  1.1789 ++}
  1.1790 ++
  1.1791 ++/* Decode the length of the match into s->lzma.len. */
  1.1792 ++static void lzma_len(struct xz_dec_lzma2 *s, struct lzma_len_dec *l,
  1.1793 ++		     uint32_t pos_state)
  1.1794 ++{
  1.1795 ++	uint16_t *probs;
  1.1796 ++	uint32_t limit;
  1.1797 ++
  1.1798 ++	if (!rc_bit(&s->rc, &l->choice)) {
  1.1799 ++		probs = l->low[pos_state];
  1.1800 ++		limit = LEN_LOW_SYMBOLS;
  1.1801 ++		s->lzma.len = MATCH_LEN_MIN;
  1.1802 ++	} else {
  1.1803 ++		if (!rc_bit(&s->rc, &l->choice2)) {
  1.1804 ++			probs = l->mid[pos_state];
  1.1805 ++			limit = LEN_MID_SYMBOLS;
  1.1806 ++			s->lzma.len = MATCH_LEN_MIN + LEN_LOW_SYMBOLS;
  1.1807 ++		} else {
  1.1808 ++			probs = l->high;
  1.1809 ++			limit = LEN_HIGH_SYMBOLS;
  1.1810 ++			s->lzma.len = MATCH_LEN_MIN + LEN_LOW_SYMBOLS
  1.1811 ++					+ LEN_MID_SYMBOLS;
  1.1812 ++		}
  1.1813 ++	}
  1.1814 ++
  1.1815 ++	s->lzma.len += rc_bittree(&s->rc, probs, limit) - limit;
  1.1816 ++}
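
The three branches above correspond to the length ranges documented in
struct lzma_len_dec. Assuming MATCH_LEN_MIN = 2 and low/mid/high symbol counts
of 8, 8 and 256 (the values defined earlier in this patch), the reachable
lengths can be tabulated with a trivial standalone program:

	#include <stdio.h>

	int main(void)
	{
		const unsigned min = 2, low = 8, mid = 8, high = 256;

		printf("low:  %u..%u\n", min, min + low - 1);		   /* 2..9 */
		printf("mid:  %u..%u\n", min + low, min + low + mid - 1); /* 10..17 */
		printf("high: %u..%u\n", min + low + mid,
		       min + low + mid + high - 1);			   /* 18..273 */
		return 0;
	}
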
  1.1817 ++
  1.1818 ++/* Decode a match. The distance will be stored in s->lzma.rep0. */
  1.1819 ++static void lzma_match(struct xz_dec_lzma2 *s, uint32_t pos_state)
  1.1820 ++{
  1.1821 ++	uint16_t *probs;
  1.1822 ++	uint32_t dist_slot;
  1.1823 ++	uint32_t limit;
  1.1824 ++
  1.1825 ++	lzma_state_match(&s->lzma.state);
  1.1826 ++
  1.1827 ++	s->lzma.rep3 = s->lzma.rep2;
  1.1828 ++	s->lzma.rep2 = s->lzma.rep1;
  1.1829 ++	s->lzma.rep1 = s->lzma.rep0;
  1.1830 ++
  1.1831 ++	lzma_len(s, &s->lzma.match_len_dec, pos_state);
  1.1832 ++
  1.1833 ++	probs = s->lzma.dist_slot[lzma_get_dist_state(s->lzma.len)];
  1.1834 ++	dist_slot = rc_bittree(&s->rc, probs, DIST_SLOTS) - DIST_SLOTS;
  1.1835 ++
  1.1836 ++	if (dist_slot < DIST_MODEL_START) {
  1.1837 ++		s->lzma.rep0 = dist_slot;
  1.1838 ++	} else {
  1.1839 ++		limit = (dist_slot >> 1) - 1;
  1.1840 ++		s->lzma.rep0 = 2 + (dist_slot & 1);
  1.1841 ++
  1.1842 ++		if (dist_slot < DIST_MODEL_END) {
  1.1843 ++			s->lzma.rep0 <<= limit;
  1.1844 ++			probs = s->lzma.dist_special + s->lzma.rep0
  1.1845 ++					- dist_slot - 1;
  1.1846 ++			rc_bittree_reverse(&s->rc, probs,
  1.1847 ++					&s->lzma.rep0, limit);
  1.1848 ++		} else {
  1.1849 ++			rc_direct(&s->rc, &s->lzma.rep0, limit - ALIGN_BITS);
  1.1850 ++			s->lzma.rep0 <<= ALIGN_BITS;
  1.1851 ++			rc_bittree_reverse(&s->rc, s->lzma.dist_align,
  1.1852 ++					&s->lzma.rep0, ALIGN_BITS);
  1.1853 ++		}
  1.1854 ++	}
  1.1855 ++}
  1.1856 ++
  1.1857 ++/*
  1.1858 ++ * Decode a repeated match. The distance is one of the four most recently
  1.1859 ++ * seen matches. The distance will be stored in s->lzma.rep0.
  1.1860 ++ */
  1.1861 ++static void lzma_rep_match(struct xz_dec_lzma2 *s, uint32_t pos_state)
  1.1862 ++{
  1.1863 ++	uint32_t tmp;
  1.1864 ++
  1.1865 ++	if (!rc_bit(&s->rc, &s->lzma.is_rep0[s->lzma.state])) {
  1.1866 ++		if (!rc_bit(&s->rc, &s->lzma.is_rep0_long[
  1.1867 ++				s->lzma.state][pos_state])) {
  1.1868 ++			lzma_state_short_rep(&s->lzma.state);
  1.1869 ++			s->lzma.len = 1;
  1.1870 ++			return;
  1.1871 ++		}
  1.1872 ++	} else {
  1.1873 ++		if (!rc_bit(&s->rc, &s->lzma.is_rep1[s->lzma.state])) {
  1.1874 ++			tmp = s->lzma.rep1;
  1.1875 ++		} else {
  1.1876 ++			if (!rc_bit(&s->rc, &s->lzma.is_rep2[s->lzma.state])) {
  1.1877 ++				tmp = s->lzma.rep2;
  1.1878 ++			} else {
  1.1879 ++				tmp = s->lzma.rep3;
  1.1880 ++				s->lzma.rep3 = s->lzma.rep2;
  1.1881 ++			}
  1.1882 ++
  1.1883 ++			s->lzma.rep2 = s->lzma.rep1;
  1.1884 ++		}
  1.1885 ++
  1.1886 ++		s->lzma.rep1 = s->lzma.rep0;
  1.1887 ++		s->lzma.rep0 = tmp;
  1.1888 ++	}
  1.1889 ++
  1.1890 ++	lzma_state_long_rep(&s->lzma.state);
  1.1891 ++	lzma_len(s, &s->lzma.rep_len_dec, pos_state);
  1.1892 ++}
  1.1893 ++
  1.1894 ++/* LZMA decoder core */
  1.1895 ++static bool lzma_main(struct xz_dec_lzma2 *s)
  1.1896 ++{
  1.1897 ++	uint32_t pos_state;
  1.1898 ++
  1.1899 ++	/*
   1.1900 ++	 * If the dictionary limit was reached during the previous call, try to
  1.1901 ++	 * finish the possibly pending repeat in the dictionary.
  1.1902 ++	 */
  1.1903 ++	if (dict_has_space(&s->dict) && s->lzma.len > 0)
  1.1904 ++		dict_repeat(&s->dict, &s->lzma.len, s->lzma.rep0);
  1.1905 ++
  1.1906 ++	/*
  1.1907 ++	 * Decode more LZMA symbols. One iteration may consume up to
  1.1908 ++	 * LZMA_IN_REQUIRED - 1 bytes.
  1.1909 ++	 */
  1.1910 ++	while (dict_has_space(&s->dict) && !rc_limit_exceeded(&s->rc)) {
  1.1911 ++		pos_state = s->dict.pos & s->lzma.pos_mask;
  1.1912 ++
  1.1913 ++		if (!rc_bit(&s->rc, &s->lzma.is_match[
  1.1914 ++				s->lzma.state][pos_state])) {
  1.1915 ++			lzma_literal(s);
  1.1916 ++		} else {
  1.1917 ++			if (rc_bit(&s->rc, &s->lzma.is_rep[s->lzma.state]))
  1.1918 ++				lzma_rep_match(s, pos_state);
  1.1919 ++			else
  1.1920 ++				lzma_match(s, pos_state);
  1.1921 ++
  1.1922 ++			if (!dict_repeat(&s->dict, &s->lzma.len, s->lzma.rep0))
  1.1923 ++				return false;
  1.1924 ++		}
  1.1925 ++	}
  1.1926 ++
  1.1927 ++	/*
  1.1928 ++	 * Having the range decoder always normalized when we are outside
  1.1929 ++	 * this function makes it easier to correctly handle end of the chunk.
  1.1930 ++	 */
  1.1931 ++	rc_normalize(&s->rc);
  1.1932 ++
  1.1933 ++	return true;
  1.1934 ++}
  1.1935 ++
  1.1936 ++/*
   1.1937 ++ * Reset the LZMA decoder and range decoder state. Dictionary is not reset
  1.1938 ++ * here, because LZMA state may be reset without resetting the dictionary.
  1.1939 ++ */
  1.1940 ++static void lzma_reset(struct xz_dec_lzma2 *s)
  1.1941 ++{
  1.1942 ++	uint16_t *probs;
  1.1943 ++	size_t i;
  1.1944 ++
  1.1945 ++	s->lzma.state = STATE_LIT_LIT;
  1.1946 ++	s->lzma.rep0 = 0;
  1.1947 ++	s->lzma.rep1 = 0;
  1.1948 ++	s->lzma.rep2 = 0;
  1.1949 ++	s->lzma.rep3 = 0;
  1.1950 ++
  1.1951 ++	/*
  1.1952 ++	 * All probabilities are initialized to the same value. This hack
  1.1953 ++	 * makes the code smaller by avoiding a separate loop for each
  1.1954 ++	 * probability array.
  1.1955 ++	 *
   1.1956 ++	 * This could be optimized so that only the part of the literal
   1.1957 ++	 * probabilities that is actually required gets initialized. In
   1.1958 ++	 * the common case we would write 12 KiB less.
  1.1959 ++	 */
  1.1960 ++	probs = s->lzma.is_match[0];
  1.1961 ++	for (i = 0; i < PROBS_TOTAL; ++i)
  1.1962 ++		probs[i] = RC_BIT_MODEL_TOTAL / 2;
  1.1963 ++
  1.1964 ++	rc_reset(&s->rc);
  1.1965 ++}
  1.1966 ++
  1.1967 ++/*
  1.1968 ++ * Decode and validate LZMA properties (lc/lp/pb) and calculate the bit masks
  1.1969 ++ * from the decoded lp and pb values. On success, the LZMA decoder state is
  1.1970 ++ * reset and true is returned.
  1.1971 ++ */
  1.1972 ++static bool lzma_props(struct xz_dec_lzma2 *s, uint8_t props)
  1.1973 ++{
  1.1974 ++	if (props > (4 * 5 + 4) * 9 + 8)
  1.1975 ++		return false;
  1.1976 ++
  1.1977 ++	s->lzma.pos_mask = 0;
  1.1978 ++	while (props >= 9 * 5) {
  1.1979 ++		props -= 9 * 5;
  1.1980 ++		++s->lzma.pos_mask;
  1.1981 ++	}
  1.1982 ++
  1.1983 ++	s->lzma.pos_mask = (1 << s->lzma.pos_mask) - 1;
  1.1984 ++
  1.1985 ++	s->lzma.literal_pos_mask = 0;
  1.1986 ++	while (props >= 9) {
  1.1987 ++		props -= 9;
  1.1988 ++		++s->lzma.literal_pos_mask;
  1.1989 ++	}
  1.1990 ++
  1.1991 ++	s->lzma.lc = props;
  1.1992 ++
  1.1993 ++	if (s->lzma.lc + s->lzma.literal_pos_mask > 4)
  1.1994 ++		return false;
  1.1995 ++
  1.1996 ++	s->lzma.literal_pos_mask = (1 << s->lzma.literal_pos_mask) - 1;
  1.1997 ++
  1.1998 ++	lzma_reset(s);
  1.1999 ++
  1.2000 ++	return true;
  1.2001 ++}
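
To make the arithmetic above concrete: the properties byte packs the three
values as pb * 45 + lp * 9 + lc, which is exactly what the two subtraction
loops undo. For example 0x5D (93), the conventional default, decodes to
lc = 3, lp = 0, pb = 2; a standalone check:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint8_t props = 0x5D;	/* 93 = 2 * 45 + 0 * 9 + 3 */
		unsigned pb = props / (9 * 5);
		unsigned lp = (props % (9 * 5)) / 9;
		unsigned lc = props % 9;

		printf("lc=%u lp=%u pb=%u\n", lc, lp, pb); /* lc=3 lp=0 pb=2 */
		return 0;
	}
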
  1.2002 ++
  1.2003 ++/*********
  1.2004 ++ * LZMA2 *
  1.2005 ++ *********/
  1.2006 ++
  1.2007 ++/*
  1.2008 ++ * The LZMA decoder assumes that if the input limit (s->rc.in_limit) hasn't
  1.2009 ++ * been exceeded, it is safe to read up to LZMA_IN_REQUIRED bytes. This
  1.2010 ++ * wrapper function takes care of making the LZMA decoder's assumption safe.
  1.2011 ++ *
  1.2012 ++ * As long as there is plenty of input left to be decoded in the current LZMA
  1.2013 ++ * chunk, we decode directly from the caller-supplied input buffer until
   1.2014 ++ * there are LZMA_IN_REQUIRED bytes left. Those remaining bytes are copied into
  1.2015 ++ * s->temp.buf, which (hopefully) gets filled on the next call to this
  1.2016 ++ * function. We decode a few bytes from the temporary buffer so that we can
  1.2017 ++ * continue decoding from the caller-supplied input buffer again.
  1.2018 ++ */
  1.2019 ++static bool lzma2_lzma(struct xz_dec_lzma2 *s, struct xz_buf *b)
  1.2020 ++{
  1.2021 ++	size_t in_avail;
  1.2022 ++	uint32_t tmp;
  1.2023 ++
  1.2024 ++	in_avail = b->in_size - b->in_pos;
  1.2025 ++	if (s->temp.size > 0 || s->lzma2.compressed == 0) {
  1.2026 ++		tmp = 2 * LZMA_IN_REQUIRED - s->temp.size;
  1.2027 ++		if (tmp > s->lzma2.compressed - s->temp.size)
  1.2028 ++			tmp = s->lzma2.compressed - s->temp.size;
  1.2029 ++		if (tmp > in_avail)
  1.2030 ++			tmp = in_avail;
  1.2031 ++
  1.2032 ++		memcpy(s->temp.buf + s->temp.size, b->in + b->in_pos, tmp);
  1.2033 ++
  1.2034 ++		if (s->temp.size + tmp == s->lzma2.compressed) {
  1.2035 ++			memzero(s->temp.buf + s->temp.size + tmp,
  1.2036 ++					sizeof(s->temp.buf)
  1.2037 ++						- s->temp.size - tmp);
  1.2038 ++			s->rc.in_limit = s->temp.size + tmp;
  1.2039 ++		} else if (s->temp.size + tmp < LZMA_IN_REQUIRED) {
  1.2040 ++			s->temp.size += tmp;
  1.2041 ++			b->in_pos += tmp;
  1.2042 ++			return true;
  1.2043 ++		} else {
  1.2044 ++			s->rc.in_limit = s->temp.size + tmp - LZMA_IN_REQUIRED;
  1.2045 ++		}
  1.2046 ++
  1.2047 ++		s->rc.in = s->temp.buf;
  1.2048 ++		s->rc.in_pos = 0;
  1.2049 ++
  1.2050 ++		if (!lzma_main(s) || s->rc.in_pos > s->temp.size + tmp)
  1.2051 ++			return false;
  1.2052 ++
  1.2053 ++		s->lzma2.compressed -= s->rc.in_pos;
  1.2054 ++
  1.2055 ++		if (s->rc.in_pos < s->temp.size) {
  1.2056 ++			s->temp.size -= s->rc.in_pos;
  1.2057 ++			memmove(s->temp.buf, s->temp.buf + s->rc.in_pos,
  1.2058 ++					s->temp.size);
  1.2059 ++			return true;
  1.2060 ++		}
  1.2061 ++
  1.2062 ++		b->in_pos += s->rc.in_pos - s->temp.size;
  1.2063 ++		s->temp.size = 0;
  1.2064 ++	}
  1.2065 ++
  1.2066 ++	in_avail = b->in_size - b->in_pos;
  1.2067 ++	if (in_avail >= LZMA_IN_REQUIRED) {
  1.2068 ++		s->rc.in = b->in;
  1.2069 ++		s->rc.in_pos = b->in_pos;
  1.2070 ++
  1.2071 ++		if (in_avail >= s->lzma2.compressed + LZMA_IN_REQUIRED)
  1.2072 ++			s->rc.in_limit = b->in_pos + s->lzma2.compressed;
  1.2073 ++		else
  1.2074 ++			s->rc.in_limit = b->in_size - LZMA_IN_REQUIRED;
  1.2075 ++
  1.2076 ++		if (!lzma_main(s))
  1.2077 ++			return false;
  1.2078 ++
  1.2079 ++		in_avail = s->rc.in_pos - b->in_pos;
  1.2080 ++		if (in_avail > s->lzma2.compressed)
  1.2081 ++			return false;
  1.2082 ++
  1.2083 ++		s->lzma2.compressed -= in_avail;
  1.2084 ++		b->in_pos = s->rc.in_pos;
  1.2085 ++	}
  1.2086 ++
  1.2087 ++	in_avail = b->in_size - b->in_pos;
  1.2088 ++	if (in_avail < LZMA_IN_REQUIRED) {
  1.2089 ++		if (in_avail > s->lzma2.compressed)
  1.2090 ++			in_avail = s->lzma2.compressed;
  1.2091 ++
  1.2092 ++		memcpy(s->temp.buf, b->in + b->in_pos, in_avail);
  1.2093 ++		s->temp.size = in_avail;
  1.2094 ++		b->in_pos += in_avail;
  1.2095 ++	}
  1.2096 ++
  1.2097 ++	return true;
  1.2098 ++}
  1.2099 ++
  1.2100 ++/*
  1.2101 ++ * Take care of the LZMA2 control layer, and forward the job of actual LZMA
  1.2102 ++ * decoding or copying of uncompressed chunks to other functions.
  1.2103 ++ */
  1.2104 ++XZ_EXTERN enum xz_ret xz_dec_lzma2_run(struct xz_dec_lzma2 *s,
  1.2105 ++				       struct xz_buf *b)
  1.2106 ++{
  1.2107 ++	uint32_t tmp;
  1.2108 ++
  1.2109 ++	while (b->in_pos < b->in_size || s->lzma2.sequence == SEQ_LZMA_RUN) {
  1.2110 ++		switch (s->lzma2.sequence) {
  1.2111 ++		case SEQ_CONTROL:
  1.2112 ++			/*
  1.2113 ++			 * LZMA2 control byte
  1.2114 ++			 *
  1.2115 ++			 * Exact values:
  1.2116 ++			 *   0x00   End marker
  1.2117 ++			 *   0x01   Dictionary reset followed by
  1.2118 ++			 *          an uncompressed chunk
  1.2119 ++			 *   0x02   Uncompressed chunk (no dictionary reset)
  1.2120 ++			 *
   1.2121 ++			 * Highest three bits (tmp & 0xE0):
  1.2122 ++			 *   0xE0   Dictionary reset, new properties and state
  1.2123 ++			 *          reset, followed by LZMA compressed chunk
  1.2124 ++			 *   0xC0   New properties and state reset, followed
  1.2125 ++			 *          by LZMA compressed chunk (no dictionary
  1.2126 ++			 *          reset)
  1.2127 ++			 *   0xA0   State reset using old properties,
  1.2128 ++			 *          followed by LZMA compressed chunk (no
  1.2129 ++			 *          dictionary reset)
  1.2130 ++			 *   0x80   LZMA chunk (no dictionary or state reset)
  1.2131 ++			 *
  1.2132 ++			 * For LZMA compressed chunks, the lowest five bits
   1.2133 ++			 * (tmp & 0x1F) are the highest bits of the
  1.2134 ++			 * uncompressed size (bits 16-20).
  1.2135 ++			 *
  1.2136 ++			 * A new LZMA2 stream must begin with a dictionary
  1.2137 ++			 * reset. The first LZMA chunk must set new
  1.2138 ++			 * properties and reset the LZMA state.
  1.2139 ++			 *
  1.2140 ++			 * Values that don't match anything described above
  1.2141 ++			 * are invalid and we return XZ_DATA_ERROR.
  1.2142 ++			 */
  1.2143 ++			tmp = b->in[b->in_pos++];
  1.2144 ++
  1.2145 ++			if (tmp >= 0xE0 || tmp == 0x01) {
  1.2146 ++				s->lzma2.need_props = true;
  1.2147 ++				s->lzma2.need_dict_reset = false;
  1.2148 ++				dict_reset(&s->dict, b);
  1.2149 ++			} else if (s->lzma2.need_dict_reset) {
  1.2150 ++				return XZ_DATA_ERROR;
  1.2151 ++			}
  1.2152 ++
  1.2153 ++			if (tmp >= 0x80) {
  1.2154 ++				s->lzma2.uncompressed = (tmp & 0x1F) << 16;
  1.2155 ++				s->lzma2.sequence = SEQ_UNCOMPRESSED_1;
  1.2156 ++
  1.2157 ++				if (tmp >= 0xC0) {
  1.2158 ++					/*
  1.2159 ++					 * When there are new properties,
  1.2160 ++					 * state reset is done at
  1.2161 ++					 * SEQ_PROPERTIES.
  1.2162 ++					 */
  1.2163 ++					s->lzma2.need_props = false;
  1.2164 ++					s->lzma2.next_sequence
  1.2165 ++							= SEQ_PROPERTIES;
  1.2166 ++
  1.2167 ++				} else if (s->lzma2.need_props) {
  1.2168 ++					return XZ_DATA_ERROR;
  1.2169 ++
  1.2170 ++				} else {
  1.2171 ++					s->lzma2.next_sequence
  1.2172 ++							= SEQ_LZMA_PREPARE;
  1.2173 ++					if (tmp >= 0xA0)
  1.2174 ++						lzma_reset(s);
  1.2175 ++				}
  1.2176 ++			} else {
  1.2177 ++				if (tmp == 0x00)
  1.2178 ++					return XZ_STREAM_END;
  1.2179 ++
  1.2180 ++				if (tmp > 0x02)
  1.2181 ++					return XZ_DATA_ERROR;
  1.2182 ++
  1.2183 ++				s->lzma2.sequence = SEQ_COMPRESSED_0;
  1.2184 ++				s->lzma2.next_sequence = SEQ_COPY;
  1.2185 ++			}
  1.2186 ++
  1.2187 ++			break;
  1.2188 ++
  1.2189 ++		case SEQ_UNCOMPRESSED_1:
  1.2190 ++			s->lzma2.uncompressed
  1.2191 ++					+= (uint32_t)b->in[b->in_pos++] << 8;
  1.2192 ++			s->lzma2.sequence = SEQ_UNCOMPRESSED_2;
  1.2193 ++			break;
  1.2194 ++
  1.2195 ++		case SEQ_UNCOMPRESSED_2:
  1.2196 ++			s->lzma2.uncompressed
  1.2197 ++					+= (uint32_t)b->in[b->in_pos++] + 1;
  1.2198 ++			s->lzma2.sequence = SEQ_COMPRESSED_0;
  1.2199 ++			break;
  1.2200 ++
  1.2201 ++		case SEQ_COMPRESSED_0:
  1.2202 ++			s->lzma2.compressed
  1.2203 ++					= (uint32_t)b->in[b->in_pos++] << 8;
  1.2204 ++			s->lzma2.sequence = SEQ_COMPRESSED_1;
  1.2205 ++			break;
  1.2206 ++
  1.2207 ++		case SEQ_COMPRESSED_1:
  1.2208 ++			s->lzma2.compressed
  1.2209 ++					+= (uint32_t)b->in[b->in_pos++] + 1;
  1.2210 ++			s->lzma2.sequence = s->lzma2.next_sequence;
  1.2211 ++			break;
  1.2212 ++
  1.2213 ++		case SEQ_PROPERTIES:
  1.2214 ++			if (!lzma_props(s, b->in[b->in_pos++]))
  1.2215 ++				return XZ_DATA_ERROR;
  1.2216 ++
  1.2217 ++			s->lzma2.sequence = SEQ_LZMA_PREPARE;
  1.2218 ++
  1.2219 ++		case SEQ_LZMA_PREPARE:
  1.2220 ++			if (s->lzma2.compressed < RC_INIT_BYTES)
  1.2221 ++				return XZ_DATA_ERROR;
  1.2222 ++
  1.2223 ++			if (!rc_read_init(&s->rc, b))
  1.2224 ++				return XZ_OK;
  1.2225 ++
  1.2226 ++			s->lzma2.compressed -= RC_INIT_BYTES;
  1.2227 ++			s->lzma2.sequence = SEQ_LZMA_RUN;
  1.2228 ++
  1.2229 ++		case SEQ_LZMA_RUN:
  1.2230 ++			/*
  1.2231 ++			 * Set dictionary limit to indicate how much we want
   1.2232 ++			 * to be decoded at maximum. Decode new data into the
  1.2233 ++			 * dictionary. Flush the new data from dictionary to
  1.2234 ++			 * b->out. Check if we finished decoding this chunk.
  1.2235 ++			 * In case the dictionary got full but we didn't fill
  1.2236 ++			 * the output buffer yet, we may run this loop
  1.2237 ++			 * multiple times without changing s->lzma2.sequence.
  1.2238 ++			 */
  1.2239 ++			dict_limit(&s->dict, min_t(size_t,
  1.2240 ++					b->out_size - b->out_pos,
  1.2241 ++					s->lzma2.uncompressed));
  1.2242 ++			if (!lzma2_lzma(s, b))
  1.2243 ++				return XZ_DATA_ERROR;
  1.2244 ++
  1.2245 ++			s->lzma2.uncompressed -= dict_flush(&s->dict, b);
  1.2246 ++
  1.2247 ++			if (s->lzma2.uncompressed == 0) {
  1.2248 ++				if (s->lzma2.compressed > 0 || s->lzma.len > 0
  1.2249 ++						|| !rc_is_finished(&s->rc))
  1.2250 ++					return XZ_DATA_ERROR;
  1.2251 ++
  1.2252 ++				rc_reset(&s->rc);
  1.2253 ++				s->lzma2.sequence = SEQ_CONTROL;
  1.2254 ++
  1.2255 ++			} else if (b->out_pos == b->out_size
  1.2256 ++					|| (b->in_pos == b->in_size
  1.2257 ++						&& s->temp.size
  1.2258 ++						< s->lzma2.compressed)) {
  1.2259 ++				return XZ_OK;
  1.2260 ++			}
  1.2261 ++
  1.2262 ++			break;
  1.2263 ++
  1.2264 ++		case SEQ_COPY:
  1.2265 ++			dict_uncompressed(&s->dict, b, &s->lzma2.compressed);
  1.2266 ++			if (s->lzma2.compressed > 0)
  1.2267 ++				return XZ_OK;
  1.2268 ++
  1.2269 ++			s->lzma2.sequence = SEQ_CONTROL;
  1.2270 ++			break;
  1.2271 ++		}
  1.2272 ++	}
  1.2273 ++
  1.2274 ++	return XZ_OK;
  1.2275 ++}
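
To see the SEQ_* steps above end to end, here is a hypothetical six-byte LZMA
chunk header walked through in the same order as the state machine. The byte
values are invented purely for illustration; only the field layout follows
the code:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* Hypothetical header: 0xE0 = dictionary reset + new
		 * properties + state reset, followed by an LZMA chunk. */
		const uint8_t hdr[6] = { 0xE0, 0x00, 0xFF, 0x00, 0x0F, 0x5D };
		uint32_t uncompressed, compressed;

		uncompressed = (uint32_t)(hdr[0] & 0x1F) << 16;	/* SEQ_CONTROL */
		uncompressed += (uint32_t)hdr[1] << 8;	/* SEQ_UNCOMPRESSED_1 */
		uncompressed += (uint32_t)hdr[2] + 1;	/* SEQ_UNCOMPRESSED_2 */
		compressed = (uint32_t)hdr[3] << 8;	/* SEQ_COMPRESSED_0 */
		compressed += (uint32_t)hdr[4] + 1;	/* SEQ_COMPRESSED_1 */

		/* prints: uncompressed=256 compressed=16 props=0x5d */
		printf("uncompressed=%u compressed=%u props=0x%02x\n",
		       (unsigned)uncompressed, (unsigned)compressed, hdr[5]);
		return 0;
	}
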
  1.2276 ++
  1.2277 ++XZ_EXTERN struct xz_dec_lzma2 *xz_dec_lzma2_create(enum xz_mode mode,
  1.2278 ++						   uint32_t dict_max)
  1.2279 ++{
  1.2280 ++	struct xz_dec_lzma2 *s = kmalloc(sizeof(*s), GFP_KERNEL);
  1.2281 ++	if (s == NULL)
  1.2282 ++		return NULL;
  1.2283 ++
  1.2284 ++	s->dict.mode = mode;
  1.2285 ++	s->dict.size_max = dict_max;
  1.2286 ++
  1.2287 ++	if (DEC_IS_PREALLOC(mode)) {
  1.2288 ++		s->dict.buf = vmalloc(dict_max);
  1.2289 ++		if (s->dict.buf == NULL) {
  1.2290 ++			kfree(s);
  1.2291 ++			return NULL;
  1.2292 ++		}
  1.2293 ++	} else if (DEC_IS_DYNALLOC(mode)) {
  1.2294 ++		s->dict.buf = NULL;
  1.2295 ++		s->dict.allocated = 0;
  1.2296 ++	}
  1.2297 ++
  1.2298 ++	return s;
  1.2299 ++}
  1.2300 ++
  1.2301 ++XZ_EXTERN enum xz_ret xz_dec_lzma2_reset(struct xz_dec_lzma2 *s, uint8_t props)
  1.2302 ++{
  1.2303 ++	/* This limits dictionary size to 3 GiB to keep parsing simpler. */
  1.2304 ++	if (props > 39)
  1.2305 ++		return XZ_OPTIONS_ERROR;
  1.2306 ++
  1.2307 ++	s->dict.size = 2 + (props & 1);
  1.2308 ++	s->dict.size <<= (props >> 1) + 11;
  1.2309 ++
  1.2310 ++	if (DEC_IS_MULTI(s->dict.mode)) {
  1.2311 ++		if (s->dict.size > s->dict.size_max)
  1.2312 ++			return XZ_MEMLIMIT_ERROR;
  1.2313 ++
  1.2314 ++		s->dict.end = s->dict.size;
  1.2315 ++
  1.2316 ++		if (DEC_IS_DYNALLOC(s->dict.mode)) {
  1.2317 ++			if (s->dict.allocated < s->dict.size) {
  1.2318 ++				vfree(s->dict.buf);
  1.2319 ++				s->dict.buf = vmalloc(s->dict.size);
  1.2320 ++				if (s->dict.buf == NULL) {
  1.2321 ++					s->dict.allocated = 0;
  1.2322 ++					return XZ_MEM_ERROR;
  1.2323 ++				}
  1.2324 ++			}
  1.2325 ++		}
  1.2326 ++	}
  1.2327 ++
  1.2328 ++	s->lzma.len = 0;
  1.2329 ++
  1.2330 ++	s->lzma2.sequence = SEQ_CONTROL;
  1.2331 ++	s->lzma2.need_dict_reset = true;
  1.2332 ++
  1.2333 ++	s->temp.size = 0;
  1.2334 ++
  1.2335 ++	return XZ_OK;
  1.2336 ++}
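
The two lines computing s->dict.size above amount to
size = (2 + (props & 1)) << ((props >> 1) + 11). A standalone check of a few
sample property bytes, including the largest accepted value 39 that gives the
3 GiB cap mentioned in the comment:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		const uint8_t samples[] = { 0, 1, 22, 39 };
		unsigned i;

		for (i = 0; i < sizeof(samples); ++i) {
			/* Same formula as in xz_dec_lzma2_reset() above. */
			uint32_t size = 2 + (samples[i] & 1);
			size <<= (samples[i] >> 1) + 11;

			/* 4 KiB, 6 KiB, 8 MiB, 3 GiB respectively */
			printf("props=%u dict=%u\n", samples[i], (unsigned)size);
		}
		return 0;
	}
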
  1.2337 ++
  1.2338 ++XZ_EXTERN void xz_dec_lzma2_end(struct xz_dec_lzma2 *s)
  1.2339 ++{
  1.2340 ++	if (DEC_IS_MULTI(s->dict.mode))
  1.2341 ++		vfree(s->dict.buf);
  1.2342 ++
  1.2343 ++	kfree(s);
  1.2344 ++}
  1.2345 +diff --git a/lib/xz/xz_dec_stream.c b/lib/xz/xz_dec_stream.c
  1.2346 +new file mode 100644
  1.2347 +index 0000000..ac809b1
  1.2348 +--- /dev/null
  1.2349 ++++ b/lib/xz/xz_dec_stream.c
  1.2350 +@@ -0,0 +1,821 @@
  1.2351 ++/*
  1.2352 ++ * .xz Stream decoder
  1.2353 ++ *
  1.2354 ++ * Author: Lasse Collin <lasse.collin@tukaani.org>
  1.2355 ++ *
  1.2356 ++ * This file has been put into the public domain.
  1.2357 ++ * You can do whatever you want with this file.
  1.2358 ++ */
  1.2359 ++
  1.2360 ++#include "xz_private.h"
  1.2361 ++#include "xz_stream.h"
  1.2362 ++
  1.2363 ++/* Hash used to validate the Index field */
  1.2364 ++struct xz_dec_hash {
  1.2365 ++	vli_type unpadded;
  1.2366 ++	vli_type uncompressed;
  1.2367 ++	uint32_t crc32;
  1.2368 ++};
  1.2369 ++
  1.2370 ++struct xz_dec {
  1.2371 ++	/* Position in dec_main() */
  1.2372 ++	enum {
  1.2373 ++		SEQ_STREAM_HEADER,
  1.2374 ++		SEQ_BLOCK_START,
  1.2375 ++		SEQ_BLOCK_HEADER,
  1.2376 ++		SEQ_BLOCK_UNCOMPRESS,
  1.2377 ++		SEQ_BLOCK_PADDING,
  1.2378 ++		SEQ_BLOCK_CHECK,
  1.2379 ++		SEQ_INDEX,
  1.2380 ++		SEQ_INDEX_PADDING,
  1.2381 ++		SEQ_INDEX_CRC32,
  1.2382 ++		SEQ_STREAM_FOOTER
  1.2383 ++	} sequence;
  1.2384 ++
  1.2385 ++	/* Position in variable-length integers and Check fields */
  1.2386 ++	uint32_t pos;
  1.2387 ++
  1.2388 ++	/* Variable-length integer decoded by dec_vli() */
  1.2389 ++	vli_type vli;
  1.2390 ++
  1.2391 ++	/* Saved in_pos and out_pos */
  1.2392 ++	size_t in_start;
  1.2393 ++	size_t out_start;
  1.2394 ++
  1.2395 ++	/* CRC32 value in Block or Index */
  1.2396 ++	uint32_t crc32;
  1.2397 ++
  1.2398 ++	/* Type of the integrity check calculated from uncompressed data */
  1.2399 ++	enum xz_check check_type;
  1.2400 ++
  1.2401 ++	/* Operation mode */
  1.2402 ++	enum xz_mode mode;
  1.2403 ++
  1.2404 ++	/*
  1.2405 ++	 * True if the next call to xz_dec_run() is allowed to return
  1.2406 ++	 * XZ_BUF_ERROR.
  1.2407 ++	 */
  1.2408 ++	bool allow_buf_error;
  1.2409 ++
  1.2410 ++	/* Information stored in Block Header */
  1.2411 ++	struct {
  1.2412 ++		/*
  1.2413 ++		 * Value stored in the Compressed Size field, or
  1.2414 ++		 * VLI_UNKNOWN if Compressed Size is not present.
  1.2415 ++		 */
  1.2416 ++		vli_type compressed;
  1.2417 ++
  1.2418 ++		/*
  1.2419 ++		 * Value stored in the Uncompressed Size field, or
  1.2420 ++		 * VLI_UNKNOWN if Uncompressed Size is not present.
  1.2421 ++		 */
  1.2422 ++		vli_type uncompressed;
  1.2423 ++
  1.2424 ++		/* Size of the Block Header field */
  1.2425 ++		uint32_t size;
  1.2426 ++	} block_header;
  1.2427 ++
  1.2428 ++	/* Information collected when decoding Blocks */
  1.2429 ++	struct {
  1.2430 ++		/* Observed compressed size of the current Block */
  1.2431 ++		vli_type compressed;
  1.2432 ++
  1.2433 ++		/* Observed uncompressed size of the current Block */
  1.2434 ++		vli_type uncompressed;
  1.2435 ++
  1.2436 ++		/* Number of Blocks decoded so far */
  1.2437 ++		vli_type count;
  1.2438 ++
  1.2439 ++		/*
  1.2440 ++		 * Hash calculated from the Block sizes. This is used to
  1.2441 ++		 * validate the Index field.
  1.2442 ++		 */
  1.2443 ++		struct xz_dec_hash hash;
  1.2444 ++	} block;
  1.2445 ++
  1.2446 ++	/* Variables needed when verifying the Index field */
  1.2447 ++	struct {
  1.2448 ++		/* Position in dec_index() */
  1.2449 ++		enum {
  1.2450 ++			SEQ_INDEX_COUNT,
  1.2451 ++			SEQ_INDEX_UNPADDED,
  1.2452 ++			SEQ_INDEX_UNCOMPRESSED
  1.2453 ++		} sequence;
  1.2454 ++
  1.2455 ++		/* Size of the Index in bytes */
  1.2456 ++		vli_type size;
  1.2457 ++
  1.2458 ++		/* Number of Records (matches block.count in valid files) */
  1.2459 ++		vli_type count;
  1.2460 ++
  1.2461 ++		/*
  1.2462 ++		 * Hash calculated from the Records (matches block.hash in
  1.2463 ++		 * valid files).
  1.2464 ++		 */
  1.2465 ++		struct xz_dec_hash hash;
  1.2466 ++	} index;
  1.2467 ++
  1.2468 ++	/*
  1.2469 ++	 * Temporary buffer needed to hold Stream Header, Block Header,
  1.2470 ++	 * and Stream Footer. The Block Header is the biggest (1 KiB)
  1.2471 ++	 * so we reserve space according to that. buf[] has to be aligned
  1.2472 ++	 * to a multiple of four bytes; the size_t variables before it
  1.2473 ++	 * should guarantee this.
  1.2474 ++	 */
  1.2475 ++	struct {
  1.2476 ++		size_t pos;
  1.2477 ++		size_t size;
  1.2478 ++		uint8_t buf[1024];
  1.2479 ++	} temp;
  1.2480 ++
  1.2481 ++	struct xz_dec_lzma2 *lzma2;
  1.2482 ++
  1.2483 ++#ifdef XZ_DEC_BCJ
  1.2484 ++	struct xz_dec_bcj *bcj;
  1.2485 ++	bool bcj_active;
  1.2486 ++#endif
  1.2487 ++};
  1.2488 ++
  1.2489 ++#ifdef XZ_DEC_ANY_CHECK
  1.2490 ++/* Sizes of the Check field with different Check IDs */
  1.2491 ++static const uint8_t check_sizes[16] = {
  1.2492 ++	0,
  1.2493 ++	4, 4, 4,
  1.2494 ++	8, 8, 8,
  1.2495 ++	16, 16, 16,
  1.2496 ++	32, 32, 32,
  1.2497 ++	64, 64, 64
  1.2498 ++};
  1.2499 ++#endif
  1.2500 ++
  1.2501 ++/*
  1.2502 ++ * Fill s->temp by copying data starting from b->in[b->in_pos]. Caller
   1.2503 ++ * must have set s->temp.size to indicate how much data we are supposed
  1.2504 ++ * to copy into s->temp.buf. Return true once s->temp.pos has reached
  1.2505 ++ * s->temp.size.
  1.2506 ++ */
  1.2507 ++static bool fill_temp(struct xz_dec *s, struct xz_buf *b)
  1.2508 ++{
  1.2509 ++	size_t copy_size = min_t(size_t,
  1.2510 ++			b->in_size - b->in_pos, s->temp.size - s->temp.pos);
  1.2511 ++
  1.2512 ++	memcpy(s->temp.buf + s->temp.pos, b->in + b->in_pos, copy_size);
  1.2513 ++	b->in_pos += copy_size;
  1.2514 ++	s->temp.pos += copy_size;
  1.2515 ++
  1.2516 ++	if (s->temp.pos == s->temp.size) {
  1.2517 ++		s->temp.pos = 0;
  1.2518 ++		return true;
  1.2519 ++	}
  1.2520 ++
  1.2521 ++	return false;
  1.2522 ++}
  1.2523 ++
  1.2524 ++/* Decode a variable-length integer (little-endian base-128 encoding) */
  1.2525 ++static enum xz_ret dec_vli(struct xz_dec *s, const uint8_t *in,
  1.2526 ++			   size_t *in_pos, size_t in_size)
  1.2527 ++{
  1.2528 ++	uint8_t byte;
  1.2529 ++
  1.2530 ++	if (s->pos == 0)
  1.2531 ++		s->vli = 0;
  1.2532 ++
  1.2533 ++	while (*in_pos < in_size) {
  1.2534 ++		byte = in[*in_pos];
  1.2535 ++		++*in_pos;
  1.2536 ++
  1.2537 ++		s->vli |= (vli_type)(byte & 0x7F) << s->pos;
  1.2538 ++
  1.2539 ++		if ((byte & 0x80) == 0) {
  1.2540 ++			/* Don't allow non-minimal encodings. */
  1.2541 ++			if (byte == 0 && s->pos != 0)
  1.2542 ++				return XZ_DATA_ERROR;
  1.2543 ++
  1.2544 ++			s->pos = 0;
  1.2545 ++			return XZ_STREAM_END;
  1.2546 ++		}
  1.2547 ++
  1.2548 ++		s->pos += 7;
  1.2549 ++		if (s->pos == 7 * VLI_BYTES_MAX)
  1.2550 ++			return XZ_DATA_ERROR;
  1.2551 ++	}
  1.2552 ++
  1.2553 ++	return XZ_OK;
  1.2554 ++}
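
As a worked example of this encoding: 300 is binary 1 0010 1100, so its low
seven bits (0x2C) go into the first byte with the 0x80 continuation bit set
(0xAC) and the remaining bits (0x02) into the second byte. A single-shot
version of the same loop, without the multi-call bookkeeping:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		const uint8_t in[] = { 0xAC, 0x02 };	/* encodes 300 */
		uint64_t vli = 0;
		unsigned shift = 0, i = 0;
		uint8_t byte;

		/* Little-endian base-128: 7 payload bits per byte,
		 * the 0x80 bit means "more bytes follow". */
		do {
			byte = in[i++];
			vli |= (uint64_t)(byte & 0x7F) << shift;
			shift += 7;
		} while (byte & 0x80);

		printf("%llu\n", (unsigned long long)vli);	/* prints 300 */
		return 0;
	}
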
  1.2555 ++
  1.2556 ++/*
  1.2557 ++ * Decode the Compressed Data field from a Block. Update and validate
  1.2558 ++ * the observed compressed and uncompressed sizes of the Block so that
  1.2559 ++ * they don't exceed the values possibly stored in the Block Header
  1.2560 ++ * (validation assumes that no integer overflow occurs, since vli_type
  1.2561 ++ * is normally uint64_t). Update the CRC32 if presence of the CRC32
  1.2562 ++ * field was indicated in Stream Header.
  1.2563 ++ *
  1.2564 ++ * Once the decoding is finished, validate that the observed sizes match
  1.2565 ++ * the sizes possibly stored in the Block Header. Update the hash and
  1.2566 ++ * Block count, which are later used to validate the Index field.
  1.2567 ++ */
  1.2568 ++static enum xz_ret dec_block(struct xz_dec *s, struct xz_buf *b)
  1.2569 ++{
  1.2570 ++	enum xz_ret ret;
  1.2571 ++
  1.2572 ++	s->in_start = b->in_pos;
  1.2573 ++	s->out_start = b->out_pos;
  1.2574 ++
  1.2575 ++#ifdef XZ_DEC_BCJ
  1.2576 ++	if (s->bcj_active)
  1.2577 ++		ret = xz_dec_bcj_run(s->bcj, s->lzma2, b);
  1.2578 ++	else
  1.2579 ++#endif
  1.2580 ++		ret = xz_dec_lzma2_run(s->lzma2, b);
  1.2581 ++
  1.2582 ++	s->block.compressed += b->in_pos - s->in_start;
  1.2583 ++	s->block.uncompressed += b->out_pos - s->out_start;
  1.2584 ++
  1.2585 ++	/*
  1.2586 ++	 * There is no need to separately check for VLI_UNKNOWN, since
  1.2587 ++	 * the observed sizes are always smaller than VLI_UNKNOWN.
  1.2588 ++	 */
  1.2589 ++	if (s->block.compressed > s->block_header.compressed
  1.2590 ++			|| s->block.uncompressed
  1.2591 ++				> s->block_header.uncompressed)
  1.2592 ++		return XZ_DATA_ERROR;
  1.2593 ++
  1.2594 ++	if (s->check_type == XZ_CHECK_CRC32)
  1.2595 ++		s->crc32 = xz_crc32(b->out + s->out_start,
  1.2596 ++				b->out_pos - s->out_start, s->crc32);
  1.2597 ++
  1.2598 ++	if (ret == XZ_STREAM_END) {
  1.2599 ++		if (s->block_header.compressed != VLI_UNKNOWN
  1.2600 ++				&& s->block_header.compressed
  1.2601 ++					!= s->block.compressed)
  1.2602 ++			return XZ_DATA_ERROR;
  1.2603 ++
  1.2604 ++		if (s->block_header.uncompressed != VLI_UNKNOWN
  1.2605 ++				&& s->block_header.uncompressed
  1.2606 ++					!= s->block.uncompressed)
  1.2607 ++			return XZ_DATA_ERROR;
  1.2608 ++
  1.2609 ++		s->block.hash.unpadded += s->block_header.size
  1.2610 ++				+ s->block.compressed;
  1.2611 ++
  1.2612 ++#ifdef XZ_DEC_ANY_CHECK
  1.2613 ++		s->block.hash.unpadded += check_sizes[s->check_type];
  1.2614 ++#else
  1.2615 ++		if (s->check_type == XZ_CHECK_CRC32)
  1.2616 ++			s->block.hash.unpadded += 4;
  1.2617 ++#endif
  1.2618 ++
  1.2619 ++		s->block.hash.uncompressed += s->block.uncompressed;
  1.2620 ++		s->block.hash.crc32 = xz_crc32(
  1.2621 ++				(const uint8_t *)&s->block.hash,
  1.2622 ++				sizeof(s->block.hash), s->block.hash.crc32);
  1.2623 ++
  1.2624 ++		++s->block.count;
  1.2625 ++	}
  1.2626 ++
  1.2627 ++	return ret;
  1.2628 ++}
  1.2629 ++
  1.2630 ++/* Update the Index size and the CRC32 value. */
  1.2631 ++static void index_update(struct xz_dec *s, const struct xz_buf *b)
  1.2632 ++{
  1.2633 ++	size_t in_used = b->in_pos - s->in_start;
  1.2634 ++	s->index.size += in_used;
  1.2635 ++	s->crc32 = xz_crc32(b->in + s->in_start, in_used, s->crc32);
  1.2636 ++}
  1.2637 ++
  1.2638 ++/*
  1.2639 ++ * Decode the Number of Records, Unpadded Size, and Uncompressed Size
  1.2640 ++ * fields from the Index field. That is, Index Padding and CRC32 are not
  1.2641 ++ * decoded by this function.
  1.2642 ++ *
  1.2643 ++ * This can return XZ_OK (more input needed), XZ_STREAM_END (everything
  1.2644 ++ * successfully decoded), or XZ_DATA_ERROR (input is corrupt).
  1.2645 ++ */
  1.2646 ++static enum xz_ret dec_index(struct xz_dec *s, struct xz_buf *b)
  1.2647 ++{
  1.2648 ++	enum xz_ret ret;
  1.2649 ++
  1.2650 ++	do {
  1.2651 ++		ret = dec_vli(s, b->in, &b->in_pos, b->in_size);
  1.2652 ++		if (ret != XZ_STREAM_END) {
  1.2653 ++			index_update(s, b);
  1.2654 ++			return ret;
  1.2655 ++		}
  1.2656 ++
  1.2657 ++		switch (s->index.sequence) {
  1.2658 ++		case SEQ_INDEX_COUNT:
  1.2659 ++			s->index.count = s->vli;
  1.2660 ++
  1.2661 ++			/*
  1.2662 ++			 * Validate that the Number of Records field
  1.2663 ++			 * indicates the same number of Records as
  1.2664 ++			 * there were Blocks in the Stream.
  1.2665 ++			 */
  1.2666 ++			if (s->index.count != s->block.count)
  1.2667 ++				return XZ_DATA_ERROR;
  1.2668 ++
  1.2669 ++			s->index.sequence = SEQ_INDEX_UNPADDED;
  1.2670 ++			break;
  1.2671 ++
  1.2672 ++		case SEQ_INDEX_UNPADDED:
  1.2673 ++			s->index.hash.unpadded += s->vli;
  1.2674 ++			s->index.sequence = SEQ_INDEX_UNCOMPRESSED;
  1.2675 ++			break;
  1.2676 ++
  1.2677 ++		case SEQ_INDEX_UNCOMPRESSED:
  1.2678 ++			s->index.hash.uncompressed += s->vli;
  1.2679 ++			s->index.hash.crc32 = xz_crc32(
  1.2680 ++					(const uint8_t *)&s->index.hash,
  1.2681 ++					sizeof(s->index.hash),
  1.2682 ++					s->index.hash.crc32);
  1.2683 ++			--s->index.count;
  1.2684 ++			s->index.sequence = SEQ_INDEX_UNPADDED;
  1.2685 ++			break;
  1.2686 ++		}
  1.2687 ++	} while (s->index.count > 0);
  1.2688 ++
  1.2689 ++	return XZ_STREAM_END;
  1.2690 ++}
  1.2691 ++
  1.2692 ++/*
  1.2693 ++ * Validate that the next four input bytes match the value of s->crc32.
  1.2694 ++ * s->pos must be zero when starting to validate the first byte.
  1.2695 ++ */
  1.2696 ++static enum xz_ret crc32_validate(struct xz_dec *s, struct xz_buf *b)
  1.2697 ++{
  1.2698 ++	do {
  1.2699 ++		if (b->in_pos == b->in_size)
  1.2700 ++			return XZ_OK;
  1.2701 ++
  1.2702 ++		if (((s->crc32 >> s->pos) & 0xFF) != b->in[b->in_pos++])
  1.2703 ++			return XZ_DATA_ERROR;
  1.2704 ++
  1.2705 ++		s->pos += 8;
  1.2706 ++
  1.2707 ++	} while (s->pos < 32);
  1.2708 ++
  1.2709 ++	s->crc32 = 0;
  1.2710 ++	s->pos = 0;
  1.2711 ++
  1.2712 ++	return XZ_STREAM_END;
  1.2713 ++}
  1.2714 ++
  1.2715 ++#ifdef XZ_DEC_ANY_CHECK
  1.2716 ++/*
  1.2717 ++ * Skip over the Check field when the Check ID is not supported.
  1.2718 ++ * Returns true once the whole Check field has been skipped over.
  1.2719 ++ */
  1.2720 ++static bool check_skip(struct xz_dec *s, struct xz_buf *b)
  1.2721 ++{
  1.2722 ++	while (s->pos < check_sizes[s->check_type]) {
  1.2723 ++		if (b->in_pos == b->in_size)
  1.2724 ++			return false;
  1.2725 ++
  1.2726 ++		++b->in_pos;
  1.2727 ++		++s->pos;
  1.2728 ++	}
  1.2729 ++
  1.2730 ++	s->pos = 0;
  1.2731 ++
  1.2732 ++	return true;
  1.2733 ++}
  1.2734 ++#endif
  1.2735 ++
  1.2736 ++/* Decode the Stream Header field (the first 12 bytes of the .xz Stream). */
  1.2737 ++static enum xz_ret dec_stream_header(struct xz_dec *s)
  1.2738 ++{
  1.2739 ++	if (!memeq(s->temp.buf, HEADER_MAGIC, HEADER_MAGIC_SIZE))
  1.2740 ++		return XZ_FORMAT_ERROR;
  1.2741 ++
  1.2742 ++	if (xz_crc32(s->temp.buf + HEADER_MAGIC_SIZE, 2, 0)
  1.2743 ++			!= get_le32(s->temp.buf + HEADER_MAGIC_SIZE + 2))
  1.2744 ++		return XZ_DATA_ERROR;
  1.2745 ++
  1.2746 ++	if (s->temp.buf[HEADER_MAGIC_SIZE] != 0)
  1.2747 ++		return XZ_OPTIONS_ERROR;
  1.2748 ++
  1.2749 ++	/*
  1.2750 ++	 * Of integrity checks, we support only none (Check ID = 0) and
  1.2751 ++	 * CRC32 (Check ID = 1). However, if XZ_DEC_ANY_CHECK is defined,
  1.2752 ++	 * we will accept other check types too, but then the check won't
  1.2753 ++	 * be verified and a warning (XZ_UNSUPPORTED_CHECK) will be given.
  1.2754 ++	 */
  1.2755 ++	s->check_type = s->temp.buf[HEADER_MAGIC_SIZE + 1];
  1.2756 ++
  1.2757 ++#ifdef XZ_DEC_ANY_CHECK
  1.2758 ++	if (s->check_type > XZ_CHECK_MAX)
  1.2759 ++		return XZ_OPTIONS_ERROR;
  1.2760 ++
  1.2761 ++	if (s->check_type > XZ_CHECK_CRC32)
  1.2762 ++		return XZ_UNSUPPORTED_CHECK;
  1.2763 ++#else
  1.2764 ++	if (s->check_type > XZ_CHECK_CRC32)
  1.2765 ++		return XZ_OPTIONS_ERROR;
  1.2766 ++#endif
  1.2767 ++
  1.2768 ++	return XZ_OK;
  1.2769 ++}
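
For reference, the twelve bytes being validated here are laid out as follows
in the .xz format (offsets into s->temp.buf):

	0..5   Header Magic: FD 37 7A 58 5A 00 ("\xFD" "7zXZ" "\0")
	6..7   Stream Flags: byte 6 must be 0x00, byte 7 is the Check ID
	       (0x00 = none and 0x01 = CRC32 are the two this decoder verifies)
	8..11  CRC32 of the two Stream Flags bytes, stored little endian
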
  1.2770 ++
  1.2771 ++/* Decode the Stream Footer field (the last 12 bytes of the .xz Stream) */
  1.2772 ++static enum xz_ret dec_stream_footer(struct xz_dec *s)
  1.2773 ++{
  1.2774 ++	if (!memeq(s->temp.buf + 10, FOOTER_MAGIC, FOOTER_MAGIC_SIZE))
  1.2775 ++		return XZ_DATA_ERROR;
  1.2776 ++
  1.2777 ++	if (xz_crc32(s->temp.buf + 4, 6, 0) != get_le32(s->temp.buf))
  1.2778 ++		return XZ_DATA_ERROR;
  1.2779 ++
  1.2780 ++	/*
  1.2781 ++	 * Validate Backward Size. Note that we never added the size of the
  1.2782 ++	 * Index CRC32 field to s->index.size, thus we use s->index.size / 4
  1.2783 ++	 * instead of s->index.size / 4 - 1.
  1.2784 ++	 */
  1.2785 ++	if ((s->index.size >> 2) != get_le32(s->temp.buf + 4))
  1.2786 ++		return XZ_DATA_ERROR;
  1.2787 ++
  1.2788 ++	if (s->temp.buf[8] != 0 || s->temp.buf[9] != s->check_type)
  1.2789 ++		return XZ_DATA_ERROR;
  1.2790 ++
  1.2791 ++	/*
  1.2792 ++	 * Use XZ_STREAM_END instead of XZ_OK to be more convenient
  1.2793 ++	 * for the caller.
  1.2794 ++	 */
  1.2795 ++	return XZ_STREAM_END;
  1.2796 ++}
  1.2797 ++
  1.2798 ++/* Decode the Block Header and initialize the filter chain. */
  1.2799 ++static enum xz_ret dec_block_header(struct xz_dec *s)
  1.2800 ++{
  1.2801 ++	enum xz_ret ret;
  1.2802 ++
  1.2803 ++	/*
  1.2804 ++	 * Validate the CRC32. We know that the temp buffer is at least
  1.2805 ++	 * eight bytes so this is safe.
  1.2806 ++	 */
  1.2807 ++	s->temp.size -= 4;
  1.2808 ++	if (xz_crc32(s->temp.buf, s->temp.size, 0)
  1.2809 ++			!= get_le32(s->temp.buf + s->temp.size))
  1.2810 ++		return XZ_DATA_ERROR;
  1.2811 ++
  1.2812 ++	s->temp.pos = 2;
  1.2813 ++
  1.2814 ++	/*
  1.2815 ++	 * Catch unsupported Block Flags. We support only one or two filters
  1.2816 ++	 * in the chain, so we catch that with the same test.
  1.2817 ++	 */
  1.2818 ++#ifdef XZ_DEC_BCJ
  1.2819 ++	if (s->temp.buf[1] & 0x3E)
  1.2820 ++#else
  1.2821 ++	if (s->temp.buf[1] & 0x3F)
  1.2822 ++#endif
  1.2823 ++		return XZ_OPTIONS_ERROR;
  1.2824 ++
  1.2825 ++	/* Compressed Size */
  1.2826 ++	if (s->temp.buf[1] & 0x40) {
  1.2827 ++		if (dec_vli(s, s->temp.buf, &s->temp.pos, s->temp.size)
  1.2828 ++					!= XZ_STREAM_END)
  1.2829 ++			return XZ_DATA_ERROR;
  1.2830 ++
  1.2831 ++		s->block_header.compressed = s->vli;
  1.2832 ++	} else {
  1.2833 ++		s->block_header.compressed = VLI_UNKNOWN;
  1.2834 ++	}
  1.2835 ++
  1.2836 ++	/* Uncompressed Size */
  1.2837 ++	if (s->temp.buf[1] & 0x80) {
  1.2838 ++		if (dec_vli(s, s->temp.buf, &s->temp.pos, s->temp.size)
  1.2839 ++				!= XZ_STREAM_END)
  1.2840 ++			return XZ_DATA_ERROR;
  1.2841 ++
  1.2842 ++		s->block_header.uncompressed = s->vli;
  1.2843 ++	} else {
  1.2844 ++		s->block_header.uncompressed = VLI_UNKNOWN;
  1.2845 ++	}
  1.2846 ++
  1.2847 ++#ifdef XZ_DEC_BCJ
  1.2848 ++	/* If there are two filters, the first one must be a BCJ filter. */
  1.2849 ++	s->bcj_active = s->temp.buf[1] & 0x01;
  1.2850 ++	if (s->bcj_active) {
  1.2851 ++		if (s->temp.size - s->temp.pos < 2)
  1.2852 ++			return XZ_OPTIONS_ERROR;
  1.2853 ++
  1.2854 ++		ret = xz_dec_bcj_reset(s->bcj, s->temp.buf[s->temp.pos++]);
  1.2855 ++		if (ret != XZ_OK)
  1.2856 ++			return ret;
  1.2857 ++
  1.2858 ++		/*
  1.2859 ++		 * We don't support custom start offset,
  1.2860 ++		 * so Size of Properties must be zero.
  1.2861 ++		 */
  1.2862 ++		if (s->temp.buf[s->temp.pos++] != 0x00)
  1.2863 ++			return XZ_OPTIONS_ERROR;
  1.2864 ++	}
  1.2865 ++#endif
  1.2866 ++
  1.2867 ++	/* Valid Filter Flags always take at least two bytes. */
  1.2868 ++	if (s->temp.size - s->temp.pos < 2)
  1.2869 ++		return XZ_DATA_ERROR;
  1.2870 ++
  1.2871 ++	/* Filter ID = LZMA2 */
  1.2872 ++	if (s->temp.buf[s->temp.pos++] != 0x21)
  1.2873 ++		return XZ_OPTIONS_ERROR;
  1.2874 ++
  1.2875 ++	/* Size of Properties = 1-byte Filter Properties */
  1.2876 ++	if (s->temp.buf[s->temp.pos++] != 0x01)
  1.2877 ++		return XZ_OPTIONS_ERROR;
  1.2878 ++
  1.2879 ++	/* Filter Properties contains LZMA2 dictionary size. */
  1.2880 ++	if (s->temp.size - s->temp.pos < 1)
  1.2881 ++		return XZ_DATA_ERROR;
  1.2882 ++
  1.2883 ++	ret = xz_dec_lzma2_reset(s->lzma2, s->temp.buf[s->temp.pos++]);
  1.2884 ++	if (ret != XZ_OK)
  1.2885 ++		return ret;
  1.2886 ++
  1.2887 ++	/* The rest must be Header Padding. */
  1.2888 ++	while (s->temp.pos < s->temp.size)
  1.2889 ++		if (s->temp.buf[s->temp.pos++] != 0x00)
  1.2890 ++			return XZ_OPTIONS_ERROR;
  1.2891 ++
  1.2892 ++	s->temp.pos = 0;
  1.2893 ++	s->block.compressed = 0;
  1.2894 ++	s->block.uncompressed = 0;
  1.2895 ++
  1.2896 ++	return XZ_OK;
  1.2897 ++}
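
Putting the parsing above together, the Block Header handled here consists of:
the size byte (consumed in dec_main(), encoded as header size / 4 - 1), a
flags byte, an optional Compressed Size VLI (flag 0x40), an optional
Uncompressed Size VLI (flag 0x80), the optional BCJ Filter Flags (flag 0x01,
only when XZ_DEC_BCJ is enabled), the LZMA2 Filter Flags (Filter ID 0x21,
Size of Properties 0x01, one dictionary-size byte), zero padding up to the
declared size, and finally the CRC32 of everything before it.
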
  1.2898 ++
  1.2899 ++static enum xz_ret dec_main(struct xz_dec *s, struct xz_buf *b)
  1.2900 ++{
  1.2901 ++	enum xz_ret ret;
  1.2902 ++
  1.2903 ++	/*
  1.2904 ++	 * Store the start position for the case when we are in the middle
  1.2905 ++	 * of the Index field.
  1.2906 ++	 */
  1.2907 ++	s->in_start = b->in_pos;
  1.2908 ++
  1.2909 ++	while (true) {
  1.2910 ++		switch (s->sequence) {
  1.2911 ++		case SEQ_STREAM_HEADER:
  1.2912 ++			/*
  1.2913 ++			 * Stream Header is copied to s->temp, and then
  1.2914 ++			 * decoded from there. This way if the caller
   1.2915 ++			 * gives us only a little input at a time, we can
   1.2916 ++			 * still keep the Stream Header decoding code
   1.2917 ++			 * simple. A similar approach is used in many places
  1.2918 ++			 * in this file.
  1.2919 ++			 */
  1.2920 ++			if (!fill_temp(s, b))
  1.2921 ++				return XZ_OK;
  1.2922 ++
  1.2923 ++			/*
  1.2924 ++			 * If dec_stream_header() returns
  1.2925 ++			 * XZ_UNSUPPORTED_CHECK, it is still possible
  1.2926 ++			 * to continue decoding if working in multi-call
  1.2927 ++			 * mode. Thus, update s->sequence before calling
  1.2928 ++			 * dec_stream_header().
  1.2929 ++			 */
  1.2930 ++			s->sequence = SEQ_BLOCK_START;
  1.2931 ++
  1.2932 ++			ret = dec_stream_header(s);
  1.2933 ++			if (ret != XZ_OK)
  1.2934 ++				return ret;
  1.2935 ++
  1.2936 ++		case SEQ_BLOCK_START:
  1.2937 ++			/* We need one byte of input to continue. */
  1.2938 ++			if (b->in_pos == b->in_size)
  1.2939 ++				return XZ_OK;
  1.2940 ++
  1.2941 ++			/* See if this is the beginning of the Index field. */
  1.2942 ++			if (b->in[b->in_pos] == 0) {
  1.2943 ++				s->in_start = b->in_pos++;
  1.2944 ++				s->sequence = SEQ_INDEX;
  1.2945 ++				break;
  1.2946 ++			}
  1.2947 ++
  1.2948 ++			/*
  1.2949 ++			 * Calculate the size of the Block Header and
  1.2950 ++			 * prepare to decode it.
  1.2951 ++			 */
  1.2952 ++			s->block_header.size
  1.2953 ++				= ((uint32_t)b->in[b->in_pos] + 1) * 4;
  1.2954 ++
  1.2955 ++			s->temp.size = s->block_header.size;
  1.2956 ++			s->temp.pos = 0;
  1.2957 ++			s->sequence = SEQ_BLOCK_HEADER;
  1.2958 ++
  1.2959 ++		case SEQ_BLOCK_HEADER:
  1.2960 ++			if (!fill_temp(s, b))
  1.2961 ++				return XZ_OK;
  1.2962 ++
  1.2963 ++			ret = dec_block_header(s);
  1.2964 ++			if (ret != XZ_OK)
  1.2965 ++				return ret;
  1.2966 ++
  1.2967 ++			s->sequence = SEQ_BLOCK_UNCOMPRESS;
  1.2968 ++
  1.2969 ++		case SEQ_BLOCK_UNCOMPRESS:
  1.2970 ++			ret = dec_block(s, b);
  1.2971 ++			if (ret != XZ_STREAM_END)
  1.2972 ++				return ret;
  1.2973 ++
  1.2974 ++			s->sequence = SEQ_BLOCK_PADDING;
  1.2975 ++
  1.2976 ++		case SEQ_BLOCK_PADDING:
  1.2977 ++			/*
  1.2978 ++			 * Size of Compressed Data + Block Padding
  1.2979 ++			 * must be a multiple of four. We don't need
  1.2980 ++			 * s->block.compressed for anything else
  1.2981 ++			 * anymore, so we use it here to test the size
  1.2982 ++			 * of the Block Padding field.
  1.2983 ++			 */
  1.2984 ++			while (s->block.compressed & 3) {
  1.2985 ++				if (b->in_pos == b->in_size)
  1.2986 ++					return XZ_OK;
  1.2987 ++
  1.2988 ++				if (b->in[b->in_pos++] != 0)
  1.2989 ++					return XZ_DATA_ERROR;
  1.2990 ++
  1.2991 ++				++s->block.compressed;
  1.2992 ++			}
  1.2993 ++
  1.2994 ++			s->sequence = SEQ_BLOCK_CHECK;
  1.2995 ++
  1.2996 ++		case SEQ_BLOCK_CHECK:
  1.2997 ++			if (s->check_type == XZ_CHECK_CRC32) {
  1.2998 ++				ret = crc32_validate(s, b);
  1.2999 ++				if (ret != XZ_STREAM_END)
  1.3000 ++					return ret;
  1.3001 ++			}
  1.3002 ++#ifdef XZ_DEC_ANY_CHECK
  1.3003 ++			else if (!check_skip(s, b)) {
  1.3004 ++				return XZ_OK;
  1.3005 ++			}
  1.3006 ++#endif
  1.3007 ++
  1.3008 ++			s->sequence = SEQ_BLOCK_START;
  1.3009 ++			break;
  1.3010 ++
  1.3011 ++		case SEQ_INDEX:
  1.3012 ++			ret = dec_index(s, b);
  1.3013 ++			if (ret != XZ_STREAM_END)
  1.3014 ++				return ret;
  1.3015 ++
  1.3016 ++			s->sequence = SEQ_INDEX_PADDING;
  1.3017 ++
  1.3018 ++		case SEQ_INDEX_PADDING:
  1.3019 ++			while ((s->index.size + (b->in_pos - s->in_start))
  1.3020 ++					& 3) {
  1.3021 ++				if (b->in_pos == b->in_size) {
  1.3022 ++					index_update(s, b);
  1.3023 ++					return XZ_OK;
  1.3024 ++				}
  1.3025 ++
  1.3026 ++				if (b->in[b->in_pos++] != 0)
  1.3027 ++					return XZ_DATA_ERROR;
  1.3028 ++			}
  1.3029 ++
  1.3030 ++			/* Finish the CRC32 value and Index size. */
  1.3031 ++			index_update(s, b);
  1.3032 ++
  1.3033 ++			/* Compare the hashes to validate the Index field. */
  1.3034 ++			if (!memeq(&s->block.hash, &s->index.hash,
  1.3035 ++					sizeof(s->block.hash)))
  1.3036 ++				return XZ_DATA_ERROR;
  1.3037 ++
  1.3038 ++			s->sequence = SEQ_INDEX_CRC32;
  1.3039 ++
  1.3040 ++		case SEQ_INDEX_CRC32:
  1.3041 ++			ret = crc32_validate(s, b);
  1.3042 ++			if (ret != XZ_STREAM_END)
  1.3043 ++				return ret;
  1.3044 ++
  1.3045 ++			s->temp.size = STREAM_HEADER_SIZE;
  1.3046 ++			s->sequence = SEQ_STREAM_FOOTER;
  1.3047 ++
  1.3048 ++		case SEQ_STREAM_FOOTER:
  1.3049 ++			if (!fill_temp(s, b))
  1.3050 ++				return XZ_OK;
  1.3051 ++
  1.3052 ++			return dec_stream_footer(s);
  1.3053 ++		}
  1.3054 ++	}
  1.3055 ++
  1.3056 ++	/* Never reached */
  1.3057 ++}
  1.3058 ++
  1.3059 ++/*
  1.3060 ++ * xz_dec_run() is a wrapper for dec_main() to handle some special cases in
  1.3061 ++ * multi-call and single-call decoding.
  1.3062 ++ *
  1.3063 ++ * In multi-call mode, we must return XZ_BUF_ERROR when it seems clear that we
  1.3064 ++ * are not going to make any progress anymore. This is to prevent the caller
  1.3065 ++ * from calling us infinitely when the input file is truncated or otherwise
  1.3066 ++ * corrupt. Since the zlib-style API allows the caller to fill the input buffer
  1.3067 ++ * only when the decoder doesn't produce any new output, we have to be careful
  1.3068 ++ * to avoid returning XZ_BUF_ERROR too easily: XZ_BUF_ERROR is returned only
  1.3069 ++ * after the second consecutive call to xz_dec_run() that makes no progress.
  1.3070 ++ *
  1.3071 ++ * In single-call mode, if we couldn't decode everything and no error
  1.3072 ++ * occurred, either the input is truncated or the output buffer is too small.
  1.3073 ++ * Since we know that the last input byte never produces any output, we know
  1.3074 ++ * that if all the input was consumed and decoding wasn't finished, the file
  1.3075 ++ * must be corrupt. Otherwise the output buffer has to be too small, or the
  1.3076 ++ * file is corrupt in a way that decoding it produces output that is too big.
  1.3077 ++ *
  1.3078 ++ * If single-call decoding fails, we reset b->in_pos and b->out_pos back to
  1.3079 ++ * their original values. This is because with some filter chains there won't
  1.3080 ++ * be any valid uncompressed data in the output buffer unless the decoding
  1.3081 ++ * actually succeeds (that's the price to pay for using the output buffer as
  1.3082 ++ * the workspace).
  1.3083 ++ */
  1.3084 ++XZ_EXTERN enum xz_ret xz_dec_run(struct xz_dec *s, struct xz_buf *b)
  1.3085 ++{
  1.3086 ++	size_t in_start;
  1.3087 ++	size_t out_start;
  1.3088 ++	enum xz_ret ret;
  1.3089 ++
  1.3090 ++	if (DEC_IS_SINGLE(s->mode))
  1.3091 ++		xz_dec_reset(s);
  1.3092 ++
  1.3093 ++	in_start = b->in_pos;
  1.3094 ++	out_start = b->out_pos;
  1.3095 ++	ret = dec_main(s, b);
  1.3096 ++
  1.3097 ++	if (DEC_IS_SINGLE(s->mode)) {
  1.3098 ++		if (ret == XZ_OK)
  1.3099 ++			ret = b->in_pos == b->in_size
  1.3100 ++					? XZ_DATA_ERROR : XZ_BUF_ERROR;
  1.3101 ++
  1.3102 ++		if (ret != XZ_STREAM_END) {
  1.3103 ++			b->in_pos = in_start;
  1.3104 ++			b->out_pos = out_start;
  1.3105 ++		}
  1.3106 ++
  1.3107 ++	} else if (ret == XZ_OK && in_start == b->in_pos
  1.3108 ++			&& out_start == b->out_pos) {
  1.3109 ++		if (s->allow_buf_error)
  1.3110 ++			ret = XZ_BUF_ERROR;
  1.3111 ++
  1.3112 ++		s->allow_buf_error = true;
  1.3113 ++	} else {
  1.3114 ++		s->allow_buf_error = false;
  1.3115 ++	}
  1.3116 ++
  1.3117 ++	return ret;
  1.3118 ++}
  1.3119 ++
  1.3120 ++XZ_EXTERN struct xz_dec *xz_dec_init(enum xz_mode mode, uint32_t dict_max)
  1.3121 ++{
  1.3122 ++	struct xz_dec *s = kmalloc(sizeof(*s), GFP_KERNEL);
  1.3123 ++	if (s == NULL)
  1.3124 ++		return NULL;
  1.3125 ++
  1.3126 ++	s->mode = mode;
  1.3127 ++
  1.3128 ++#ifdef XZ_DEC_BCJ
  1.3129 ++	s->bcj = xz_dec_bcj_create(DEC_IS_SINGLE(mode));
  1.3130 ++	if (s->bcj == NULL)
  1.3131 ++		goto error_bcj;
  1.3132 ++#endif
  1.3133 ++
  1.3134 ++	s->lzma2 = xz_dec_lzma2_create(mode, dict_max);
  1.3135 ++	if (s->lzma2 == NULL)
  1.3136 ++		goto error_lzma2;
  1.3137 ++
  1.3138 ++	xz_dec_reset(s);
  1.3139 ++	return s;
  1.3140 ++
  1.3141 ++error_lzma2:
  1.3142 ++#ifdef XZ_DEC_BCJ
  1.3143 ++	xz_dec_bcj_end(s->bcj);
  1.3144 ++error_bcj:
  1.3145 ++#endif
  1.3146 ++	kfree(s);
  1.3147 ++	return NULL;
  1.3148 ++}
  1.3149 ++
  1.3150 ++XZ_EXTERN void xz_dec_reset(struct xz_dec *s)
  1.3151 ++{
  1.3152 ++	s->sequence = SEQ_STREAM_HEADER;
  1.3153 ++	s->allow_buf_error = false;
  1.3154 ++	s->pos = 0;
  1.3155 ++	s->crc32 = 0;
  1.3156 ++	memzero(&s->block, sizeof(s->block));
  1.3157 ++	memzero(&s->index, sizeof(s->index));
  1.3158 ++	s->temp.pos = 0;
  1.3159 ++	s->temp.size = STREAM_HEADER_SIZE;
  1.3160 ++}
  1.3161 ++
  1.3162 ++XZ_EXTERN void xz_dec_end(struct xz_dec *s)
  1.3163 ++{
  1.3164 ++	if (s != NULL) {
  1.3165 ++		xz_dec_lzma2_end(s->lzma2);
  1.3166 ++#ifdef XZ_DEC_BCJ
  1.3167 ++		xz_dec_bcj_end(s->bcj);
  1.3168 ++#endif
  1.3169 ++		kfree(s);
  1.3170 ++	}
  1.3171 ++}
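
The comment above xz_dec_run() spells out the multi-call contract: keep calling while XZ_OK is returned, and treat XZ_BUF_ERROR as the signal that no further progress is possible. A minimal sketch of a caller built on xz_dec_init()/xz_dec_run()/xz_dec_end() is shown below; the function name, the dictionary limit and the single-buffer setup are illustrative assumptions, not part of the patch.

/*
 * Minimal sketch of a multi-call caller for the API above. The helper name,
 * the 64 MiB dictionary limit and the "all input in one buffer" setup are
 * illustrative assumptions only.
 */
#include <linux/errno.h>
#include <linux/xz.h>

static int example_decode(const uint8_t *src, size_t src_len,
			  uint8_t *dst, size_t dst_len)
{
	struct xz_dec *s;
	struct xz_buf b;
	enum xz_ret ret;

	s = xz_dec_init(XZ_DYNALLOC, 1 << 26);	/* allow up to a 64 MiB dictionary */
	if (s == NULL)
		return -ENOMEM;

	b.in = src;
	b.in_pos = 0;
	b.in_size = src_len;
	b.out = dst;
	b.out_pos = 0;
	b.out_size = dst_len;

	/*
	 * Keep calling xz_dec_run() while it reports XZ_OK; a truncated or
	 * oversized stream ends the loop with XZ_BUF_ERROR or another error.
	 */
	do {
		ret = xz_dec_run(s, &b);
	} while (ret == XZ_OK);

	xz_dec_end(s);
	return ret == XZ_STREAM_END ? 0 : -EINVAL;
}
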
  1.3172 +diff --git a/lib/xz/xz_dec_syms.c b/lib/xz/xz_dec_syms.c
  1.3173 +new file mode 100644
  1.3174 +index 0000000..32eb3c0
  1.3175 +--- /dev/null
  1.3176 ++++ b/lib/xz/xz_dec_syms.c
  1.3177 +@@ -0,0 +1,26 @@
  1.3178 ++/*
  1.3179 ++ * XZ decoder module information
  1.3180 ++ *
  1.3181 ++ * Author: Lasse Collin <lasse.collin@tukaani.org>
  1.3182 ++ *
  1.3183 ++ * This file has been put into the public domain.
  1.3184 ++ * You can do whatever you want with this file.
  1.3185 ++ */
  1.3186 ++
  1.3187 ++#include <linux/module.h>
  1.3188 ++#include <linux/xz.h>
  1.3189 ++
  1.3190 ++EXPORT_SYMBOL(xz_dec_init);
  1.3191 ++EXPORT_SYMBOL(xz_dec_reset);
  1.3192 ++EXPORT_SYMBOL(xz_dec_run);
  1.3193 ++EXPORT_SYMBOL(xz_dec_end);
  1.3194 ++
  1.3195 ++MODULE_DESCRIPTION("XZ decompressor");
  1.3196 ++MODULE_VERSION("1.0");
  1.3197 ++MODULE_AUTHOR("Lasse Collin <lasse.collin@tukaani.org> and Igor Pavlov");
  1.3198 ++
  1.3199 ++/*
  1.3200 ++ * This code is in the public domain, but in Linux it's simplest to just
  1.3201 ++ * say it's GPL and consider the authors as the copyright holders.
  1.3202 ++ */
  1.3203 ++MODULE_LICENSE("GPL");
  1.3204 +diff --git a/lib/xz/xz_dec_test.c b/lib/xz/xz_dec_test.c
  1.3205 +new file mode 100644
  1.3206 +index 0000000..da28a19
  1.3207 +--- /dev/null
  1.3208 ++++ b/lib/xz/xz_dec_test.c
  1.3209 +@@ -0,0 +1,220 @@
  1.3210 ++/*
  1.3211 ++ * XZ decoder tester
  1.3212 ++ *
  1.3213 ++ * Author: Lasse Collin <lasse.collin@tukaani.org>
  1.3214 ++ *
  1.3215 ++ * This file has been put into the public domain.
  1.3216 ++ * You can do whatever you want with this file.
  1.3217 ++ */
  1.3218 ++
  1.3219 ++#include <linux/kernel.h>
  1.3220 ++#include <linux/module.h>
  1.3221 ++#include <linux/fs.h>
  1.3222 ++#include <linux/uaccess.h>
  1.3223 ++#include <linux/crc32.h>
  1.3224 ++#include <linux/xz.h>
  1.3225 ++
  1.3226 ++/* Maximum supported dictionary size */
  1.3227 ++#define DICT_MAX (1 << 20)
  1.3228 ++
  1.3229 ++/* Device name to pass to register_chrdev(). */
  1.3230 ++#define DEVICE_NAME "xz_dec_test"
  1.3231 ++
  1.3232 ++/* Dynamically allocated device major number */
  1.3233 ++static int device_major;
  1.3234 ++
  1.3235 ++/*
  1.3236 ++ * We reuse the same decoder state, and thus can decode only one
  1.3237 ++ * file at a time.
  1.3238 ++ */
  1.3239 ++static bool device_is_open;
  1.3240 ++
  1.3241 ++/* XZ decoder state */
  1.3242 ++static struct xz_dec *state;
  1.3243 ++
  1.3244 ++/*
  1.3245 ++ * Return value of xz_dec_run(). We need to avoid calling xz_dec_run() after
  1.3246 ++ * it has returned XZ_STREAM_END, so we make this static.
  1.3247 ++ */
  1.3248 ++static enum xz_ret ret;
  1.3249 ++
  1.3250 ++/*
  1.3251 ++ * Input and output buffers. The input buffer is used as a temporary safe
  1.3252 ++ * place for the data coming from userspace.
  1.3253 ++ */
  1.3254 ++static uint8_t buffer_in[1024];
  1.3255 ++static uint8_t buffer_out[1024];
  1.3256 ++
  1.3257 ++/*
  1.3258 ++ * Structure to pass the input and output buffers to the XZ decoder.
  1.3259 ++ * A few of the fields are never modified so we initialize them here.
  1.3260 ++ */
  1.3261 ++static struct xz_buf buffers = {
  1.3262 ++	.in = buffer_in,
  1.3263 ++	.out = buffer_out,
  1.3264 ++	.out_size = sizeof(buffer_out)
  1.3265 ++};
  1.3266 ++
  1.3267 ++/*
  1.3268 ++ * CRC32 of uncompressed data. This is used to give the user a simple way
  1.3269 ++ * to check that the decoder produces correct output.
  1.3270 ++ */
  1.3271 ++static uint32_t crc;
  1.3272 ++
  1.3273 ++static int xz_dec_test_open(struct inode *i, struct file *f)
  1.3274 ++{
  1.3275 ++	if (device_is_open)
  1.3276 ++		return -EBUSY;
  1.3277 ++
  1.3278 ++	device_is_open = true;
  1.3279 ++
  1.3280 ++	xz_dec_reset(state);
  1.3281 ++	ret = XZ_OK;
  1.3282 ++	crc = 0xFFFFFFFF;
  1.3283 ++
  1.3284 ++	buffers.in_pos = 0;
  1.3285 ++	buffers.in_size = 0;
  1.3286 ++	buffers.out_pos = 0;
  1.3287 ++
  1.3288 ++	printk(KERN_INFO DEVICE_NAME ": opened\n");
  1.3289 ++	return 0;
  1.3290 ++}
  1.3291 ++
  1.3292 ++static int xz_dec_test_release(struct inode *i, struct file *f)
  1.3293 ++{
  1.3294 ++	device_is_open = false;
  1.3295 ++
  1.3296 ++	if (ret == XZ_OK)
  1.3297 ++		printk(KERN_INFO DEVICE_NAME ": input was truncated\n");
  1.3298 ++
  1.3299 ++	printk(KERN_INFO DEVICE_NAME ": closed\n");
  1.3300 ++	return 0;
  1.3301 ++}
  1.3302 ++
  1.3303 ++/*
  1.3304 ++ * Decode the data given to us from userspace. The CRC32 of the uncompressed
  1.3305 ++ * data is calculated and is printed at the end of successful decoding. The
  1.3306 ++ * uncompressed data isn't stored anywhere for further use.
  1.3307 ++ *
  1.3308 ++ * The .xz file must have exactly one Stream and no Stream Padding. The data
  1.3309 ++ * after the first Stream is considered to be garbage.
  1.3310 ++ */
  1.3311 ++static ssize_t xz_dec_test_write(struct file *file, const char __user *buf,
  1.3312 ++				 size_t size, loff_t *pos)
  1.3313 ++{
  1.3314 ++	size_t remaining;
  1.3315 ++
  1.3316 ++	if (ret != XZ_OK) {
  1.3317 ++		if (size > 0)
  1.3318 ++			printk(KERN_INFO DEVICE_NAME ": %zu bytes of "
  1.3319 ++					"garbage at the end of the file\n",
  1.3320 ++					size);
  1.3321 ++
  1.3322 ++		return -ENOSPC;
  1.3323 ++	}
  1.3324 ++
  1.3325 ++	printk(KERN_INFO DEVICE_NAME ": decoding %zu bytes of input\n",
  1.3326 ++			size);
  1.3327 ++
  1.3328 ++	remaining = size;
  1.3329 ++	while ((remaining > 0 || buffers.out_pos == buffers.out_size)
  1.3330 ++			&& ret == XZ_OK) {
  1.3331 ++		if (buffers.in_pos == buffers.in_size) {
  1.3332 ++			buffers.in_pos = 0;
  1.3333 ++			buffers.in_size = min(remaining, sizeof(buffer_in));
  1.3334 ++			if (copy_from_user(buffer_in, buf, buffers.in_size))
  1.3335 ++				return -EFAULT;
  1.3336 ++
  1.3337 ++			buf += buffers.in_size;
  1.3338 ++			remaining -= buffers.in_size;
  1.3339 ++		}
  1.3340 ++
  1.3341 ++		buffers.out_pos = 0;
  1.3342 ++		ret = xz_dec_run(state, &buffers);
  1.3343 ++		crc = crc32(crc, buffer_out, buffers.out_pos);
  1.3344 ++	}
  1.3345 ++
  1.3346 ++	switch (ret) {
  1.3347 ++	case XZ_OK:
  1.3348 ++		printk(KERN_INFO DEVICE_NAME ": XZ_OK\n");
  1.3349 ++		return size;
  1.3350 ++
  1.3351 ++	case XZ_STREAM_END:
  1.3352 ++		printk(KERN_INFO DEVICE_NAME ": XZ_STREAM_END, "
  1.3353 ++				"CRC32 = 0x%08X\n", ~crc);
  1.3354 ++		return size - remaining - (buffers.in_size - buffers.in_pos);
  1.3355 ++
  1.3356 ++	case XZ_MEMLIMIT_ERROR:
  1.3357 ++		printk(KERN_INFO DEVICE_NAME ": XZ_MEMLIMIT_ERROR\n");
  1.3358 ++		break;
  1.3359 ++
  1.3360 ++	case XZ_FORMAT_ERROR:
  1.3361 ++		printk(KERN_INFO DEVICE_NAME ": XZ_FORMAT_ERROR\n");
  1.3362 ++		break;
  1.3363 ++
  1.3364 ++	case XZ_OPTIONS_ERROR:
  1.3365 ++		printk(KERN_INFO DEVICE_NAME ": XZ_OPTIONS_ERROR\n");
  1.3366 ++		break;
  1.3367 ++
  1.3368 ++	case XZ_DATA_ERROR:
  1.3369 ++		printk(KERN_INFO DEVICE_NAME ": XZ_DATA_ERROR\n");
  1.3370 ++		break;
  1.3371 ++
  1.3372 ++	case XZ_BUF_ERROR:
  1.3373 ++		printk(KERN_INFO DEVICE_NAME ": XZ_BUF_ERROR\n");
  1.3374 ++		break;
  1.3375 ++
  1.3376 ++	default:
  1.3377 ++		printk(KERN_INFO DEVICE_NAME ": Bug detected!\n");
  1.3378 ++		break;
  1.3379 ++	}
  1.3380 ++
  1.3381 ++	return -EIO;
  1.3382 ++}
  1.3383 ++
  1.3384 ++/* Allocate the XZ decoder state and register the character device. */
  1.3385 ++static int __init xz_dec_test_init(void)
  1.3386 ++{
  1.3387 ++	static const struct file_operations fileops = {
  1.3388 ++		.owner = THIS_MODULE,
  1.3389 ++		.open = &xz_dec_test_open,
  1.3390 ++		.release = &xz_dec_test_release,
  1.3391 ++		.write = &xz_dec_test_write
  1.3392 ++	};
  1.3393 ++
  1.3394 ++	state = xz_dec_init(XZ_PREALLOC, DICT_MAX);
  1.3395 ++	if (state == NULL)
  1.3396 ++		return -ENOMEM;
  1.3397 ++
  1.3398 ++	device_major = register_chrdev(0, DEVICE_NAME, &fileops);
  1.3399 ++	if (device_major < 0) {
  1.3400 ++		xz_dec_end(state);
  1.3401 ++		return device_major;
  1.3402 ++	}
  1.3403 ++
  1.3404 ++	printk(KERN_INFO DEVICE_NAME ": module loaded\n");
  1.3405 ++	printk(KERN_INFO DEVICE_NAME ": Create a device node with "
  1.3406 ++			"'mknod " DEVICE_NAME " c %d 0' and write .xz files "
  1.3407 ++			"to it.\n", device_major);
  1.3408 ++	return 0;
  1.3409 ++}
  1.3410 ++
  1.3411 ++static void __exit xz_dec_test_exit(void)
  1.3412 ++{
  1.3413 ++	unregister_chrdev(device_major, DEVICE_NAME);
  1.3414 ++	xz_dec_end(state);
  1.3415 ++	printk(KERN_INFO DEVICE_NAME ": module unloaded\n");
  1.3416 ++}
  1.3417 ++
  1.3418 ++module_init(xz_dec_test_init);
  1.3419 ++module_exit(xz_dec_test_exit);
  1.3420 ++
  1.3421 ++MODULE_DESCRIPTION("XZ decompressor tester");
  1.3422 ++MODULE_VERSION("1.0");
  1.3423 ++MODULE_AUTHOR("Lasse Collin <lasse.collin@tukaani.org>");
  1.3424 ++
  1.3425 ++/*
  1.3426 ++ * This code is in the public domain, but in Linux it's simplest to just
  1.3427 ++ * say it's GPL and consider the authors as the copyright holders.
  1.3428 ++ */
  1.3429 ++MODULE_LICENSE("GPL");
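
The module above decodes whatever .xz data is written to its character device and prints the CRC32 of the uncompressed result via printk. A small userspace feeder along the following lines could exercise it; the /dev/xz_dec_test path is an assumption, since the node is created by hand with the mknod command printed at load time.

/*
 * Hypothetical userspace feeder for the test device. The node path is an
 * assumption; it is whatever was created with mknod using the major number
 * printed by the module.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char buf[1024];
	ssize_t n;
	int in, dev;

	if (argc != 2) {
		fprintf(stderr, "usage: %s file.xz\n", argv[0]);
		return 1;
	}

	in = open(argv[1], O_RDONLY);
	dev = open("/dev/xz_dec_test", O_WRONLY);
	if (in < 0 || dev < 0) {
		perror("open");
		return 1;
	}

	/* The module prints the CRC32 of the uncompressed data via printk. */
	while ((n = read(in, buf, sizeof(buf))) > 0)
		if (write(dev, buf, n) != n) {
			perror("write");
			break;
		}

	close(dev);
	close(in);
	return 0;
}
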
  1.3430 +diff --git a/lib/xz/xz_lzma2.h b/lib/xz/xz_lzma2.h
  1.3431 +new file mode 100644
  1.3432 +index 0000000..071d67b
  1.3433 +--- /dev/null
  1.3434 ++++ b/lib/xz/xz_lzma2.h
  1.3435 +@@ -0,0 +1,204 @@
  1.3436 ++/*
  1.3437 ++ * LZMA2 definitions
  1.3438 ++ *
  1.3439 ++ * Authors: Lasse Collin <lasse.collin@tukaani.org>
  1.3440 ++ *          Igor Pavlov <http://7-zip.org/>
  1.3441 ++ *
  1.3442 ++ * This file has been put into the public domain.
  1.3443 ++ * You can do whatever you want with this file.
  1.3444 ++ */
  1.3445 ++
  1.3446 ++#ifndef XZ_LZMA2_H
  1.3447 ++#define XZ_LZMA2_H
  1.3448 ++
  1.3449 ++/* Range coder constants */
  1.3450 ++#define RC_SHIFT_BITS 8
  1.3451 ++#define RC_TOP_BITS 24
  1.3452 ++#define RC_TOP_VALUE (1 << RC_TOP_BITS)
  1.3453 ++#define RC_BIT_MODEL_TOTAL_BITS 11
  1.3454 ++#define RC_BIT_MODEL_TOTAL (1 << RC_BIT_MODEL_TOTAL_BITS)
  1.3455 ++#define RC_MOVE_BITS 5
  1.3456 ++
  1.3457 ++/*
  1.3458 ++ * Maximum number of position states. A position state is the lowest pb
  1.3459 ++ * bits of the current uncompressed offset. In some places there
  1.3460 ++ * are different sets of probabilities for different position states.
  1.3461 ++ */
  1.3462 ++#define POS_STATES_MAX (1 << 4)
  1.3463 ++
  1.3464 ++/*
  1.3465 ++ * This enum is used to track which LZMA symbols have occurred most recently
  1.3466 ++ * and in which order. This information is used to predict the next symbol.
  1.3467 ++ *
  1.3468 ++ * Symbols:
  1.3469 ++ *  - Literal: One 8-bit byte
  1.3470 ++ *  - Match: Repeat a chunk of data at some distance
  1.3471 ++ *  - Long repeat: Multi-byte match at a recently seen distance
  1.3472 ++ *  - Short repeat: One-byte repeat at a recently seen distance
  1.3473 ++ *
  1.3474 ++ * The symbol names are of the form STATE_oldest_older_previous. REP means
  1.3475 ++ * either short or long repeated match, and NONLIT means any non-literal.
  1.3476 ++ */
  1.3477 ++enum lzma_state {
  1.3478 ++	STATE_LIT_LIT,
  1.3479 ++	STATE_MATCH_LIT_LIT,
  1.3480 ++	STATE_REP_LIT_LIT,
  1.3481 ++	STATE_SHORTREP_LIT_LIT,
  1.3482 ++	STATE_MATCH_LIT,
  1.3483 ++	STATE_REP_LIT,
  1.3484 ++	STATE_SHORTREP_LIT,
  1.3485 ++	STATE_LIT_MATCH,
  1.3486 ++	STATE_LIT_LONGREP,
  1.3487 ++	STATE_LIT_SHORTREP,
  1.3488 ++	STATE_NONLIT_MATCH,
  1.3489 ++	STATE_NONLIT_REP
  1.3490 ++};
  1.3491 ++
  1.3492 ++/* Total number of states */
  1.3493 ++#define STATES 12
  1.3494 ++
  1.3495 ++/* The lowest 7 states indicate that the previous state was a literal. */
  1.3496 ++#define LIT_STATES 7
  1.3497 ++
  1.3498 ++/* Indicate that the latest symbol was a literal. */
  1.3499 ++static inline void lzma_state_literal(enum lzma_state *state)
  1.3500 ++{
  1.3501 ++	if (*state <= STATE_SHORTREP_LIT_LIT)
  1.3502 ++		*state = STATE_LIT_LIT;
  1.3503 ++	else if (*state <= STATE_LIT_SHORTREP)
  1.3504 ++		*state -= 3;
  1.3505 ++	else
  1.3506 ++		*state -= 6;
  1.3507 ++}
  1.3508 ++
  1.3509 ++/* Indicate that the latest symbol was a match. */
  1.3510 ++static inline void lzma_state_match(enum lzma_state *state)
  1.3511 ++{
  1.3512 ++	*state = *state < LIT_STATES ? STATE_LIT_MATCH : STATE_NONLIT_MATCH;
  1.3513 ++}
  1.3514 ++
  1.3515 ++/* Indicate that the latest symbol was a long repeated match. */
  1.3516 ++static inline void lzma_state_long_rep(enum lzma_state *state)
  1.3517 ++{
  1.3518 ++	*state = *state < LIT_STATES ? STATE_LIT_LONGREP : STATE_NONLIT_REP;
  1.3519 ++}
  1.3520 ++
  1.3521 ++/* Indicate that the latest symbol was a short repeated match. */
  1.3522 ++static inline void lzma_state_short_rep(enum lzma_state *state)
  1.3523 ++{
  1.3524 ++	*state = *state < LIT_STATES ? STATE_LIT_SHORTREP : STATE_NONLIT_REP;
  1.3525 ++}
  1.3526 ++
  1.3527 ++/* Test if the previous symbol was a literal. */
  1.3528 ++static inline bool lzma_state_is_literal(enum lzma_state state)
  1.3529 ++{
  1.3530 ++	return state < LIT_STATES;
  1.3531 ++}
  1.3532 ++
  1.3533 ++/* Each literal coder is divided into three sections:
  1.3534 ++ *   - 0x001-0x0FF: Without match byte
  1.3535 ++ *   - 0x101-0x1FF: With match byte; match bit is 0
  1.3536 ++ *   - 0x201-0x2FF: With match byte; match bit is 1
  1.3537 ++ *
  1.3538 ++ * Match byte is used when the previous LZMA symbol was something other than
  1.3539 ++ * a literal (that is, it was some kind of match).
  1.3540 ++ */
  1.3541 ++#define LITERAL_CODER_SIZE 0x300
  1.3542 ++
  1.3543 ++/* Maximum number of literal coders */
  1.3544 ++#define LITERAL_CODERS_MAX (1 << 4)
  1.3545 ++
  1.3546 ++/* Minimum length of a match is two bytes. */
  1.3547 ++#define MATCH_LEN_MIN 2
  1.3548 ++
  1.3549 ++/* Match length is encoded with 4, 5, or 10 bits.
  1.3550 ++ *
  1.3551 ++ * Length   Bits
  1.3552 ++ *  2-9      4 = Choice=0 + 3 bits
  1.3553 ++ * 10-17     5 = Choice=1 + Choice2=0 + 3 bits
  1.3554 ++ * 18-273   10 = Choice=1 + Choice2=1 + 8 bits
  1.3555 ++ */
  1.3556 ++#define LEN_LOW_BITS 3
  1.3557 ++#define LEN_LOW_SYMBOLS (1 << LEN_LOW_BITS)
  1.3558 ++#define LEN_MID_BITS 3
  1.3559 ++#define LEN_MID_SYMBOLS (1 << LEN_MID_BITS)
  1.3560 ++#define LEN_HIGH_BITS 8
  1.3561 ++#define LEN_HIGH_SYMBOLS (1 << LEN_HIGH_BITS)
  1.3562 ++#define LEN_SYMBOLS (LEN_LOW_SYMBOLS + LEN_MID_SYMBOLS + LEN_HIGH_SYMBOLS)
  1.3563 ++
  1.3564 ++/*
  1.3565 ++ * Maximum length of a match is 273, which follows from the encoding above:
  1.3566 ++ * 8 + 8 + 256 = 272 length symbols, so 2 + 272 - 1 = 273.
  1.3567 ++ */
  1.3568 ++#define MATCH_LEN_MAX (MATCH_LEN_MIN + LEN_SYMBOLS - 1)
  1.3569 ++
  1.3570 ++/*
  1.3571 ++ * Different sets of probabilities are used for match distances that have
  1.3572 ++ * very short match length: Lengths of 2, 3, and 4 bytes have a separate
  1.3573 ++ * set of probabilities for each length. The matches with longer length
  1.3574 ++ * use a shared set of probabilities.
  1.3575 ++ */
  1.3576 ++#define DIST_STATES 4
  1.3577 ++
  1.3578 ++/*
  1.3579 ++ * Get the index of the appropriate probability array for decoding
  1.3580 ++ * the distance slot.
  1.3581 ++ */
  1.3582 ++static inline uint32_t lzma_get_dist_state(uint32_t len)
  1.3583 ++{
  1.3584 ++	return len < DIST_STATES + MATCH_LEN_MIN
  1.3585 ++			? len - MATCH_LEN_MIN : DIST_STATES - 1;
  1.3586 ++}
  1.3587 ++
  1.3588 ++/*
  1.3589 ++ * The highest two bits of a 32-bit match distance are encoded using six bits.
  1.3590 ++ * This six-bit value is called a distance slot. This way, encoding a 32-bit
  1.3591 ++ * value takes 6-36 bits, with larger values taking more bits.
  1.3592 ++ */
  1.3593 ++#define DIST_SLOT_BITS 6
  1.3594 ++#define DIST_SLOTS (1 << DIST_SLOT_BITS)
  1.3595 ++
  1.3596 ++/* Match distances up to 127 are fully encoded using probabilities. Since
  1.3597 ++ * the highest two bits (distance slot) are always encoded using six bits,
  1.3598 ++ * the distances 0-3 don't need any additional bits to encode, because the
  1.3599 ++ * distance slot itself is the same as the actual distance. DIST_MODEL_START
  1.3600 ++ * indicates the first distance slot where at least one additional bit is
  1.3601 ++ * needed.
  1.3602 ++ */
  1.3603 ++#define DIST_MODEL_START 4
  1.3604 ++
  1.3605 ++/*
  1.3606 ++ * Match distances greater than 127 are encoded in three pieces:
  1.3607 ++ *   - distance slot: the highest two bits
  1.3608 ++ *   - direct bits: 2-26 bits below the highest two bits
  1.3609 ++ *   - alignment bits: four lowest bits
  1.3610 ++ *
  1.3611 ++ * Direct bits don't use any probabilities.
  1.3612 ++ *
  1.3613 ++ * The distance slot value of 14 is for distances 128-191.
  1.3614 ++ */
  1.3615 ++#define DIST_MODEL_END 14
  1.3616 ++
  1.3617 ++/* Distance slots that indicate a distance <= 127. */
  1.3618 ++#define FULL_DISTANCES_BITS (DIST_MODEL_END / 2)
  1.3619 ++#define FULL_DISTANCES (1 << FULL_DISTANCES_BITS)
  1.3620 ++
  1.3621 ++/*
  1.3622 ++ * For match distances greater than 127, only the highest two bits and the
  1.3623 ++ * lowest four bits (alignment) are encoded using probabilities.
  1.3624 ++ */
  1.3625 ++#define ALIGN_BITS 4
  1.3626 ++#define ALIGN_SIZE (1 << ALIGN_BITS)
  1.3627 ++#define ALIGN_MASK (ALIGN_SIZE - 1)
  1.3628 ++
  1.3629 ++/* Total number of all probability variables */
  1.3630 ++#define PROBS_TOTAL (1846 + LITERAL_CODERS_MAX * LITERAL_CODER_SIZE)
  1.3631 ++
  1.3632 ++/*
  1.3633 ++ * LZMA remembers the four most recent match distances. Reusing these
  1.3634 ++ * distances tends to take less space than re-encoding the actual
  1.3635 ++ * distance value.
  1.3636 ++ */
  1.3637 ++#define REPS 4
  1.3638 ++
  1.3639 ++#endif
  1.3640 +diff --git a/lib/xz/xz_private.h b/lib/xz/xz_private.h
  1.3641 +new file mode 100644
  1.3642 +index 0000000..a65633e
  1.3643 +--- /dev/null
  1.3644 ++++ b/lib/xz/xz_private.h
  1.3645 +@@ -0,0 +1,156 @@
  1.3646 ++/*
  1.3647 ++ * Private includes and definitions
  1.3648 ++ *
  1.3649 ++ * Author: Lasse Collin <lasse.collin@tukaani.org>
  1.3650 ++ *
  1.3651 ++ * This file has been put into the public domain.
  1.3652 ++ * You can do whatever you want with this file.
  1.3653 ++ */
  1.3654 ++
  1.3655 ++#ifndef XZ_PRIVATE_H
  1.3656 ++#define XZ_PRIVATE_H
  1.3657 ++
  1.3658 ++#ifdef __KERNEL__
  1.3659 ++#	include <linux/xz.h>
  1.3660 ++#	include <asm/byteorder.h>
  1.3661 ++#	include <asm/unaligned.h>
  1.3662 ++	/* XZ_PREBOOT may be defined only via decompress_unxz.c. */
  1.3663 ++#	ifndef XZ_PREBOOT
  1.3664 ++#		include <linux/slab.h>
  1.3665 ++#		include <linux/vmalloc.h>
  1.3666 ++#		include <linux/string.h>
  1.3667 ++#		ifdef CONFIG_XZ_DEC_X86
  1.3668 ++#			define XZ_DEC_X86
  1.3669 ++#		endif
  1.3670 ++#		ifdef CONFIG_XZ_DEC_POWERPC
  1.3671 ++#			define XZ_DEC_POWERPC
  1.3672 ++#		endif
  1.3673 ++#		ifdef CONFIG_XZ_DEC_IA64
  1.3674 ++#			define XZ_DEC_IA64
  1.3675 ++#		endif
  1.3676 ++#		ifdef CONFIG_XZ_DEC_ARM
  1.3677 ++#			define XZ_DEC_ARM
  1.3678 ++#		endif
  1.3679 ++#		ifdef CONFIG_XZ_DEC_ARMTHUMB
  1.3680 ++#			define XZ_DEC_ARMTHUMB
  1.3681 ++#		endif
  1.3682 ++#		ifdef CONFIG_XZ_DEC_SPARC
  1.3683 ++#			define XZ_DEC_SPARC
  1.3684 ++#		endif
  1.3685 ++#		define memeq(a, b, size) (memcmp(a, b, size) == 0)
  1.3686 ++#		define memzero(buf, size) memset(buf, 0, size)
  1.3687 ++#	endif
  1.3688 ++#	define get_le32(p) le32_to_cpup((const uint32_t *)(p))
  1.3689 ++#else
  1.3690 ++	/*
  1.3691 ++	 * For userspace builds, use a separate header to define the required
  1.3692 ++	 * macros and functions. This makes it easier to adapt the code to
  1.3693 ++	 * different environments and avoids clutter in the Linux kernel tree.
  1.3694 ++	 */
  1.3695 ++#	include "xz_config.h"
  1.3696 ++#endif
  1.3697 ++
  1.3698 ++/* If no specific decoding mode is requested, enable support for all modes. */
  1.3699 ++#if !defined(XZ_DEC_SINGLE) && !defined(XZ_DEC_PREALLOC) \
  1.3700 ++		&& !defined(XZ_DEC_DYNALLOC)
  1.3701 ++#	define XZ_DEC_SINGLE
  1.3702 ++#	define XZ_DEC_PREALLOC
  1.3703 ++#	define XZ_DEC_DYNALLOC
  1.3704 ++#endif
  1.3705 ++
  1.3706 ++/*
  1.3707 ++ * The DEC_IS_foo(mode) macros are used in "if" statements. If only some
  1.3708 ++ * of the supported modes are enabled, these macros will evaluate to true or
  1.3709 ++ * false at compile time and thus allow the compiler to omit unneeded code.
  1.3710 ++ */
  1.3711 ++#ifdef XZ_DEC_SINGLE
  1.3712 ++#	define DEC_IS_SINGLE(mode) ((mode) == XZ_SINGLE)
  1.3713 ++#else
  1.3714 ++#	define DEC_IS_SINGLE(mode) (false)
  1.3715 ++#endif
  1.3716 ++
  1.3717 ++#ifdef XZ_DEC_PREALLOC
  1.3718 ++#	define DEC_IS_PREALLOC(mode) ((mode) == XZ_PREALLOC)
  1.3719 ++#else
  1.3720 ++#	define DEC_IS_PREALLOC(mode) (false)
  1.3721 ++#endif
  1.3722 ++
  1.3723 ++#ifdef XZ_DEC_DYNALLOC
  1.3724 ++#	define DEC_IS_DYNALLOC(mode) ((mode) == XZ_DYNALLOC)
  1.3725 ++#else
  1.3726 ++#	define DEC_IS_DYNALLOC(mode) (false)
  1.3727 ++#endif
  1.3728 ++
  1.3729 ++#if !defined(XZ_DEC_SINGLE)
  1.3730 ++#	define DEC_IS_MULTI(mode) (true)
  1.3731 ++#elif defined(XZ_DEC_PREALLOC) || defined(XZ_DEC_DYNALLOC)
  1.3732 ++#	define DEC_IS_MULTI(mode) ((mode) != XZ_SINGLE)
  1.3733 ++#else
  1.3734 ++#	define DEC_IS_MULTI(mode) (false)
  1.3735 ++#endif
  1.3736 ++
  1.3737 ++/*
  1.3738 ++ * If any of the BCJ filter decoders are wanted, define XZ_DEC_BCJ.
  1.3739 ++ * XZ_DEC_BCJ is used to enable generic support for BCJ decoders.
  1.3740 ++ */
  1.3741 ++#ifndef XZ_DEC_BCJ
  1.3742 ++#	if defined(XZ_DEC_X86) || defined(XZ_DEC_POWERPC) \
  1.3743 ++			|| defined(XZ_DEC_IA64) || defined(XZ_DEC_ARM) \
  1.3744 ++			|| defined(XZ_DEC_ARMTHUMB) \
  1.3745 ++			|| defined(XZ_DEC_SPARC)
  1.3746 ++#		define XZ_DEC_BCJ
  1.3747 ++#	endif
  1.3748 ++#endif
  1.3749 ++
  1.3750 ++/*
  1.3751 ++ * Allocate memory for LZMA2 decoder. xz_dec_lzma2_reset() must be used
  1.3752 ++ * before calling xz_dec_lzma2_run().
  1.3753 ++ */
  1.3754 ++XZ_EXTERN struct xz_dec_lzma2 *xz_dec_lzma2_create(enum xz_mode mode,
  1.3755 ++						   uint32_t dict_max);
  1.3756 ++
  1.3757 ++/*
  1.3758 ++ * Decode the LZMA2 properties (one byte) and reset the decoder. Return
  1.3759 ++ * XZ_OK on success, XZ_MEMLIMIT_ERROR if the preallocated dictionary is not
  1.3760 ++ * big enough, and XZ_OPTIONS_ERROR if props indicates something that this
  1.3761 ++ * decoder doesn't support.
  1.3762 ++ */
  1.3763 ++XZ_EXTERN enum xz_ret xz_dec_lzma2_reset(struct xz_dec_lzma2 *s,
  1.3764 ++					 uint8_t props);
  1.3765 ++
  1.3766 ++/* Decode raw LZMA2 stream from b->in to b->out. */
  1.3767 ++XZ_EXTERN enum xz_ret xz_dec_lzma2_run(struct xz_dec_lzma2 *s,
  1.3768 ++				       struct xz_buf *b);
  1.3769 ++
  1.3770 ++/* Free the memory allocated for the LZMA2 decoder. */
  1.3771 ++XZ_EXTERN void xz_dec_lzma2_end(struct xz_dec_lzma2 *s);
  1.3772 ++
  1.3773 ++#ifdef XZ_DEC_BCJ
  1.3774 ++/*
  1.3775 ++ * Allocate memory for BCJ decoders. xz_dec_bcj_reset() must be used before
  1.3776 ++ * calling xz_dec_bcj_run().
  1.3777 ++ */
  1.3778 ++XZ_EXTERN struct xz_dec_bcj *xz_dec_bcj_create(bool single_call);
  1.3779 ++
  1.3780 ++/*
  1.3781 ++ * Decode the Filter ID of a BCJ filter. This implementation doesn't
  1.3782 ++ * support custom start offsets, so no decoding of Filter Properties
  1.3783 ++ * is needed. Returns XZ_OK if the given Filter ID is supported.
  1.3784 ++ * Otherwise XZ_OPTIONS_ERROR is returned.
  1.3785 ++ */
  1.3786 ++XZ_EXTERN enum xz_ret xz_dec_bcj_reset(struct xz_dec_bcj *s, uint8_t id);
  1.3787 ++
  1.3788 ++/*
  1.3789 ++ * Decode raw BCJ + LZMA2 stream. This must be used only if there actually is
  1.3790 ++ * a BCJ filter in the chain. If the chain has only LZMA2, xz_dec_lzma2_run()
  1.3791 ++ * must be called directly.
  1.3792 ++ */
  1.3793 ++XZ_EXTERN enum xz_ret xz_dec_bcj_run(struct xz_dec_bcj *s,
  1.3794 ++				     struct xz_dec_lzma2 *lzma2,
  1.3795 ++				     struct xz_buf *b);
  1.3796 ++
  1.3797 ++/* Free the memory allocated for the BCJ filters. */
  1.3798 ++#define xz_dec_bcj_end(s) kfree(s)
  1.3799 ++#endif
  1.3800 ++
  1.3801 ++#endif
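
The prototypes above state the rule that xz_dec_bcj_run() may only be used when the filter chain really contains a BCJ filter; with a plain LZMA2 chain, xz_dec_lzma2_run() must be called directly. A sketch of that dispatch is shown below as an illustration of the rule only; the helper name and the have_bcj flag are assumptions, not the patch's actual dec_block() logic.

/*
 * Illustration of the dispatch rule stated above. The helper name and the
 * have_bcj flag are assumptions, not part of the patch.
 */
#include "xz_private.h"

static enum xz_ret run_filter_chain(struct xz_dec_bcj *bcj, bool have_bcj,
				    struct xz_dec_lzma2 *lzma2,
				    struct xz_buf *b)
{
#ifdef XZ_DEC_BCJ
	if (have_bcj)
		return xz_dec_bcj_run(bcj, lzma2, b);
#endif
	return xz_dec_lzma2_run(lzma2, b);
}
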
  1.3802 +diff --git a/lib/xz/xz_stream.h b/lib/xz/xz_stream.h
  1.3803 +new file mode 100644
  1.3804 +index 0000000..66cb5a7
  1.3805 +--- /dev/null
  1.3806 ++++ b/lib/xz/xz_stream.h
  1.3807 +@@ -0,0 +1,62 @@
  1.3808 ++/*
  1.3809 ++ * Definitions for handling the .xz file format
  1.3810 ++ *
  1.3811 ++ * Author: Lasse Collin <lasse.collin@tukaani.org>
  1.3812 ++ *
  1.3813 ++ * This file has been put into the public domain.
  1.3814 ++ * You can do whatever you want with this file.
  1.3815 ++ */
  1.3816 ++
  1.3817 ++#ifndef XZ_STREAM_H
  1.3818 ++#define XZ_STREAM_H
  1.3819 ++
  1.3820 ++#if defined(__KERNEL__) && !XZ_INTERNAL_CRC32
  1.3821 ++#	include <linux/crc32.h>
  1.3822 ++#	undef crc32
  1.3823 ++#	define xz_crc32(buf, size, crc) \
  1.3824 ++		(~crc32_le(~(uint32_t)(crc), buf, size))
  1.3825 ++#endif
  1.3826 ++
  1.3827 ++/*
  1.3828 ++ * See the .xz file format specification at
  1.3829 ++ * http://tukaani.org/xz/xz-file-format.txt
  1.3830 ++ * to understand the container format.
  1.3831 ++ */
  1.3832 ++
  1.3833 ++#define STREAM_HEADER_SIZE 12
  1.3834 ++
  1.3835 ++#define HEADER_MAGIC "\3757zXZ"
  1.3836 ++#define HEADER_MAGIC_SIZE 6
  1.3837 ++
  1.3838 ++#define FOOTER_MAGIC "YZ"
  1.3839 ++#define FOOTER_MAGIC_SIZE 2
  1.3840 ++
  1.3841 ++/*
  1.3842 ++ * A variable-length integer can hold a 63-bit unsigned integer or a special
  1.3843 ++ * value indicating that the value is unknown.
  1.3844 ++ *
  1.3845 ++ * Experimental: vli_type can be defined to uint32_t to save a few bytes
  1.3846 ++ * in code size (no effect on speed). Doing so limits the uncompressed and
  1.3847 ++ * compressed size of the file to less than 256 MiB and may also weaken
  1.3848 ++ * error detection slightly.
  1.3849 ++ */
  1.3850 ++typedef uint64_t vli_type;
  1.3851 ++
  1.3852 ++#define VLI_MAX ((vli_type)-1 / 2)
  1.3853 ++#define VLI_UNKNOWN ((vli_type)-1)
  1.3854 ++
  1.3855 ++/* Maximum encoded size of a VLI */
  1.3856 ++#define VLI_BYTES_MAX (sizeof(vli_type) * 8 / 7)
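
These variable-length integers are stored seven data bits per byte, least significant bits first, with the top bit of each byte set when more bytes follow; that is why VLI_BYTES_MAX works out to 9 for the 64-bit vli_type. The in-kernel decoder consumes them incrementally; the standalone sketch below is only an illustration of the byte layout, with the function name being an assumption.

/*
 * Standalone illustration of the VLI layout described above: seven data
 * bits per byte, least significant bits first, top bit set on all but the
 * last byte. This sketch assumes the whole integer is available in buf.
 */
#include <stddef.h>
#include <stdint.h>

/* Returns the number of bytes consumed, or 0 if buf is truncated or too long. */
static size_t vli_decode(const uint8_t *buf, size_t size, uint64_t *value)
{
	size_t i;

	*value = 0;
	for (i = 0; i < size && i < 9; ++i) {
		*value |= (uint64_t)(buf[i] & 0x7F) << (7 * i);
		if ((buf[i] & 0x80) == 0)
			return i + 1;
	}

	return 0;
}
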
  1.3857 ++
  1.3858 ++/* Integrity Check types */
  1.3859 ++enum xz_check {
  1.3860 ++	XZ_CHECK_NONE = 0,
  1.3861 ++	XZ_CHECK_CRC32 = 1,
  1.3862 ++	XZ_CHECK_CRC64 = 4,
  1.3863 ++	XZ_CHECK_SHA256 = 10
  1.3864 ++};
  1.3865 ++
  1.3866 ++/* Maximum possible Check ID */
  1.3867 ++#define XZ_CHECK_MAX 15
  1.3868 ++
  1.3869 ++#endif
  1.3870 +diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
  1.3871 +index 54fd1b7..b862007 100644
  1.3872 +--- a/scripts/Makefile.lib
  1.3873 ++++ b/scripts/Makefile.lib
  1.3874 +@@ -246,6 +246,34 @@ cmd_lzo = (cat $(filter-out FORCE,$^) | \
  1.3875 + 	lzop -9 && $(call size_append, $(filter-out FORCE,$^))) > $@ || \
  1.3876 + 	(rm -f $@ ; false)
  1.3877 + 
  1.3878 ++# XZ
  1.3879 ++# ---------------------------------------------------------------------------
  1.3880 ++# Use xzkern to compress the kernel image and xzmisc to compress other things.
  1.3881 ++#
  1.3882 ++# xzkern uses a big LZMA2 dictionary since it doesn't increase memory usage
  1.3883 ++# of the kernel decompressor. A BCJ filter is used if it is available for
  1.3884 ++# the target architecture. xzkern also appends the uncompressed size of the data
  1.3885 ++# using size_append. The .xz format has the size information available at
  1.3886 ++# the end of the file too, but it's in a more complex format and it's good to
  1.3887 ++# avoid changing the part of the boot code that reads the uncompressed size.
  1.3888 ++# Note that the bytes added by size_append will make the xz tool think that
  1.3889 ++# the file is corrupt. This is expected.
  1.3890 ++#
  1.3891 ++# xzmisc doesn't use size_append, so it can be used to create normal .xz
  1.3892 ++# files. xzmisc uses a smaller LZMA2 dictionary than xzkern, because a very
  1.3893 ++# big dictionary would increase the memory usage too much in the multi-call
  1.3894 ++# decompression mode. A BCJ filter isn't used either.
  1.3895 ++quiet_cmd_xzkern = XZKERN  $@
  1.3896 ++cmd_xzkern = (cat $(filter-out FORCE,$^) | \
  1.3897 ++	sh $(srctree)/scripts/xz_wrap.sh && \
  1.3898 ++	$(call size_append, $(filter-out FORCE,$^))) > $@ || \
  1.3899 ++	(rm -f $@ ; false)
  1.3900 ++
  1.3901 ++quiet_cmd_xzmisc = XZMISC  $@
  1.3902 ++cmd_xzmisc = (cat $(filter-out FORCE,$^) | \
  1.3903 ++	xz --check=crc32 --lzma2=dict=1MiB) > $@ || \
  1.3904 ++	(rm -f $@ ; false)
  1.3905 ++
  1.3906 + # misc stuff
  1.3907 + # ---------------------------------------------------------------------------
  1.3908 + quote:="
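
For reference, the size_append trailer mentioned in the xzkern comment is emitted by this same Makefile as four little-endian bytes holding the total uncompressed size, which is what the existing boot code reads instead of parsing the .xz container. The sketch below only illustrates that layout; the real per-architecture boot code is not part of this patch and may differ.

/*
 * Illustrative reader for the 4-byte little-endian size trailer that
 * size_append tacks onto the compressed image. The function name is an
 * assumption; this only shows the layout.
 */
#include <stddef.h>
#include <stdint.h>

static uint32_t read_appended_size(const uint8_t *image, size_t image_len)
{
	const uint8_t *p;

	if (image_len < 4)
		return 0;

	p = image + image_len - 4;
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
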
  1.3909 +diff --git a/scripts/xz_wrap.sh b/scripts/xz_wrap.sh
  1.3910 +new file mode 100644
  1.3911 +index 0000000..17a5798
  1.3912 +--- /dev/null
  1.3913 ++++ b/scripts/xz_wrap.sh
  1.3914 +@@ -0,0 +1,23 @@
  1.3915 ++#!/bin/sh
  1.3916 ++#
  1.3917 ++# This is a wrapper for xz to compress the kernel image using appropriate
  1.3918 ++# compression options depending on the architecture.
  1.3919 ++#
  1.3920 ++# Author: Lasse Collin <lasse.collin@tukaani.org>
  1.3921 ++#
  1.3922 ++# This file has been put into the public domain.
  1.3923 ++# You can do whatever you want with this file.
  1.3924 ++#
  1.3925 ++
  1.3926 ++BCJ=
  1.3927 ++LZMA2OPTS=
  1.3928 ++
  1.3929 ++case $ARCH in
  1.3930 ++	x86|x86_64)     BCJ=--x86 ;;
  1.3931 ++	powerpc)        BCJ=--powerpc ;;
  1.3932 ++	ia64)           BCJ=--ia64; LZMA2OPTS=pb=4 ;;
  1.3933 ++	arm)            BCJ=--arm ;;
  1.3934 ++	sparc)          BCJ=--sparc ;;
  1.3935 ++esac
  1.3936 ++
  1.3937 ++exec xz --check=crc32 $BCJ --lzma2=$LZMA2OPTS,dict=32MiB