Commit graph

33 commits

Author SHA1 Message Date
Con Kolivas a7b4708bd2 Use a different failure mode for cases where errno is unlikely to be set, so perror() output would not be meaningful.
Add 2 unlikely wrappers.
2011-02-21 14:51:20 +11:00
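
The distinction described above might look roughly like the sketch below; the two macro names are illustrative, not the project's actual ones. One path reports errno via perror(), the other prints a plain message for errors where errno carries nothing useful.

    /* Illustrative sketch only: one failure path for syscall errors where
     * errno is meaningful, and one for internal errors where perror()
     * output would only mislead.  Macro names are hypothetical. */
    #include <stdio.h>
    #include <stdlib.h>

    #define fatal(msg)   do { perror(msg); exit(1); } while (0)
    #define failure(msg) do { fprintf(stderr, "Fatal: %s\n", msg); exit(1); } while (0)

    /* e.g. fatal("read") after a failed read(), failure("Invalid magic") otherwise. */
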
Con Kolivas 57e25da244 Update copyright year in updated files. 2011-02-20 23:04:44 +11:00
Con Kolivas 7b073160a3 Can't always open fd_out in runzip for integrity testing, so use fd_hist. 2011-02-20 22:44:10 +11:00
Con Kolivas 9b264959f5 Implement the ability to test the integrity of the file written to disk on decompression. 2011-02-20 22:29:49 +11:00
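
A minimal sketch of what testing the written file might involve, assuming the coreutils-style md5_* interface bundled with the tree; the function name and buffer size are illustrative. The descriptor for the written file is rewound, everything it contains is hashed, and the result is compared with the digest computed during decompression.

    /* Sketch: re-read the file just written to disk and verify its MD5
     * against the digest computed during decompression. */
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include "md5.h"    /* coreutils-style md5_* interface assumed */

    static int verify_written_file(int fd, const unsigned char expected[16])
    {
        struct md5_ctx ctx;
        unsigned char digest[16], buf[64 * 1024];
        ssize_t n;

        if (lseek(fd, 0, SEEK_SET) == (off_t)-1)
            return -1;
        md5_init_ctx(&ctx);
        while ((n = read(fd, buf, sizeof buf)) > 0)
            md5_process_bytes(buf, (size_t)n, &ctx);
        if (n < 0)
            return -1;
        md5_finish_ctx(&ctx, digest);
        return memcmp(digest, expected, 16) ? -1 : 0;
    }
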
Con Kolivas 8a27dc5057 Changes to make md5 be used for integrity testing.
Add the md5 value to the end of each archive.
This can then be used for integrity testing instead of crc32.
Keep crc in new archives to maintain compatibility with version 0.5 files.
Prefer md5 integrity testing on decompression when it is available, and disable calculation of crc32.
Display the choice of integrity testing in verbose output and when -i is used.
Display the md5 and crc values when max verbosity, file info, or display hash is enabled.
Store a new flag in the magic header to show that the md5 value is stored at the end of the file.
Update the magic header information document.
2011-02-20 18:01:19 +11:00
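
Under the layout described above, with the 16-byte MD5 digest appended as the very last bytes of the archive and a magic-header flag announcing its presence, verification on decompression could look something like this sketch (names illustrative):

    /* Sketch: read the 16-byte MD5 digest stored at the end of the archive
     * and compare it with the digest computed while decompressing. */
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int check_stored_md5(int fd_in, const unsigned char computed[16])
    {
        unsigned char stored[16];

        if (lseek(fd_in, -16, SEEK_END) == (off_t)-1)
            return -1;
        if (read(fd_in, stored, 16) != 16)
            return -1;
        return memcmp(stored, computed, 16) ? -1 : 0;
    }
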
Con Kolivas 44a279579e Add option to display hash information without enabling verbose mode. 2011-02-20 12:20:05 +11:00
Con Kolivas 744202a47f Remove unused variable. 2011-02-19 10:39:07 +11:00
Con Kolivas 7287ab8a66 Fix md5 process bytes to occur at the same time as crc with the same buffer, saving time. 2011-02-19 10:34:45 +11:00
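
The single-pass idea above amounts to updating both digests from the same buffer before it is discarded. A sketch, using zlib's crc32() purely for illustration (the project carries its own CRC code alongside the coreutils-style md5_* routines):

    /* Sketch: walk each buffer once, feeding it to both CRC32 and MD5. */
    #include <stddef.h>
    #include <zlib.h>
    #include "md5.h"

    struct hashes {
        uLong crc;              /* running CRC32; seed with crc32(0L, NULL, 0) */
        struct md5_ctx md5;     /* running MD5 context */
    };

    static void hash_buffer(struct hashes *h, const unsigned char *buf, size_t len)
    {
        h->crc = crc32(h->crc, buf, (uInt)len);
        md5_process_bytes(buf, len, &h->md5);
    }
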
Con Kolivas fb2a12744a Implement md5 checking on decompression.
Implement hash check flag to determine whether to show md5sum on compression/decompression or not.
2011-02-18 15:16:13 +11:00
Con Kolivas cd8b086bf2 Minimise the number of mallocs in unzip_match. 2011-02-17 09:32:01 +11:00
Con Kolivas f2d33c00f8 Cast the mallocs to their variable type.
Check that read and write actually return greater than zero.
2011-02-11 11:46:58 +11:00
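
Checking the read and write return values mentioned above might take a shape like the sketch below; the retry loop for short writes is an extra nicety of this sketch, not something the commit claims.

    /* Sketch: treat a zero or negative return from write() as failure and
     * keep writing until the whole buffer has been consumed. */
    #include <unistd.h>

    static int write_all(int fd, const unsigned char *buf, size_t len)
    {
        while (len > 0) {
            ssize_t n = write(fd, buf, len);

            if (n <= 0)
                return -1;      /* error, or nothing written */
            buf += n;
            len -= (size_t)n;
        }
        return 0;
    }
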
Con Kolivas 3879807865 Try limiting stream_read in unzip_literal and just returning how much was read. 2011-02-10 16:57:22 +11:00
Con Kolivas 9a3bfe33d1 Revert "Make sure to read the full length asked of unzip_literal."
This reverts commit 499ae18cef.

Wrong fix, revert it.
2011-02-10 16:46:35 +11:00
Con Kolivas 499ae18cef Make sure to read the full length asked of unzip_literal. 2011-02-10 15:30:31 +11:00
Con Kolivas 0a32b5f72d Convert mmaps to malloc in runzip as they may fail if not a multiple of page size! 2011-02-10 13:53:42 +11:00
Con Kolivas bece82a593 Trivial documentation fixes courtesy of Laszlo Ersek. 2011-02-10 13:14:36 +11:00
Con Kolivas 2cabb335cb Update copyright notices courtesy of Jari Aalto. 2010-12-16 09:45:21 +11:00
Con Kolivas 2b08c6e280 Implement massive multithreading decompression.
This is done by reading each stream of data into separate buffers as it comes in, one per thread, for up to as many threads as there are CPUs.
As each thread's data becomes available, it is fed into runzip when it requests more of the stream.
Provided there are enough chunks in the originally compressed data, this provides a massive speedup, potentially proportional to the number of CPUs. The slower the backend compression, the greater the speedup (i.e. zpaq sees the greatest speedup).
Fix the output of zpaq compress and decompress so that it no longer tramples on itself, races, and consumes a lot of CPU time printing to the console.
When limiting cwindow to 6 on 32 bits, ensure that control.window is also set.
When testing for the maximum size of testmalloc, the multiple used was out by one, so increase it.
Minor output tweaks.
2010-11-16 21:25:32 +11:00
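
A heavily reduced sketch of the scheme described above: each worker thread decompresses its chunk into its own buffer, and the reader hands buffers to runzip strictly in order as it asks for more of the stream. The slot structure, the decompress_chunk() stand-in, and the function names are all illustrative.

    /* Sketch: per-thread decompression buffers consumed in order. */
    #include <pthread.h>
    #include <stddef.h>

    struct slot {
        pthread_mutex_t lock;
        pthread_cond_t  done;
        unsigned char   *out;       /* decompressed data for this chunk */
        size_t          out_len;
        int             ready;
    };

    static void *worker(void *arg)
    {
        struct slot *s = arg;

        /* ... decompress_chunk(s) would fill s->out and s->out_len ... */
        pthread_mutex_lock(&s->lock);
        s->ready = 1;
        pthread_cond_signal(&s->done);
        pthread_mutex_unlock(&s->lock);
        return NULL;
    }

    /* Called when runzip requests the next piece of the stream. */
    static unsigned char *next_chunk(struct slot *s, size_t *len)
    {
        pthread_mutex_lock(&s->lock);
        while (!s->ready)
            pthread_cond_wait(&s->done, &s->lock);
        pthread_mutex_unlock(&s->lock);
        *len = s->out_len;
        return s->out;
    }
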
Con Kolivas a66dafe66a Updated benchmark results.
More tidying up.
2010-11-05 14:52:14 +11:00
Con Kolivas 296534921a Unlimited mode is now usable in a meaningful timeframe!
Modify the sliding mmap window to have a 64k smaller buffer which matches the search size, and change the larger lower buffer to slide with the main hash search progress. This makes for a MUCH faster unlimited mode, making it actually usable.
Limit windows to 2GB again on 32 bit, but do it when determining the largest size possible in rzip.c.
Implement a Linux-kernel-like unlikely() wrapper around the compiler's builtin expect, mark most fatal warnings as unlikely, and use it in a few other places where it is also suitable.
Minor cleanups.
2010-11-05 12:16:43 +11:00
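
The unlikely() wrapper mentioned above is the familiar __builtin_expect idiom; a sketch of its shape and a typical use on a fatal error path (checked_malloc() is illustrative):

    /* Linux-kernel-style branch hints: cold error paths are marked
     * unlikely() so the compiler keeps the hot path straight. */
    #include <stdio.h>
    #include <stdlib.h>

    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)

    static void *checked_malloc(size_t len)
    {
        void *buf = malloc(len);

        if (unlikely(!buf)) {       /* fatal path, rarely taken */
            perror("malloc");
            exit(1);
        }
        return buf;
    }
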
Con Kolivas 102140dc2b Reinstate the temporary files for decompression to stdout and for testing, as the damaged line reinstated in the last commit made it look like those combinations worked when they actually didn't.
Compression from stdin still works without temporary files.
2010-11-02 10:52:21 +11:00
Con Kolivas 1e88273ffc Minor fixes. 2010-11-02 00:08:35 +11:00
Con Kolivas 772fbf602e Reinstitute the 2GB window limit on 32 bit; it still doesn't work. However, we can now decompress larger windows.
Do more mmap in place of malloc.
Update docs.
Remove redundant code.
2010-11-01 22:55:59 +11:00
Con Kolivas 49336a5e87 Update magic header info.
Move offset check to after reading chunk width.
Realloc instead of free and malloc.
2010-11-01 15:27:35 +11:00
Con Kolivas 1ed2ce423f Change the byte width to be variable depending on chunk size, and write it as a single char describing the next byte width for decompression.
More minor tidying.
2010-11-01 13:28:49 +11:00
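
A sketch of the variable byte width idea above: the minimum width able to hold any offset within the chunk is computed, announced with a single leading byte, and each offset is then written with only that many bytes. The little-endian order and the function names are assumptions of this sketch.

    /* Sketch: write offsets using only as many bytes as the chunk needs. */
    #include <stdint.h>
    #include <unistd.h>

    static int bytes_needed(uint64_t chunk_size)
    {
        int width = 1;

        while (chunk_size >>= 8)
            width++;
        return width;               /* 1..8 */
    }

    static void write_offset(int fd, uint64_t off, int width)
    {
        unsigned char buf[8];
        int i;

        for (i = 0; i < width; i++)
            buf[i] = (off >> (8 * i)) & 0xff;
        write(fd, buf, width);      /* error handling omitted in this sketch */
    }

The decompressor reads the single width byte first, and from then on knows how many bytes each subsequent offset occupies.
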
Con Kolivas 3a22eb09b3 Fix output to the screen to work correctly when stdout is selected.
Write decompressed output directly to stdout without the need for temporary files, since there is no need to seek backwards.
Make file testing not actually write the file during test.
More tidying up.
2010-11-01 11:18:58 +11:00
Con Kolivas a9ad1aef0e Start cleaning up all the flag testing with neat macros. 2010-11-01 04:53:53 +11:00
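
The sort of macro cleanup described above, with hypothetical flag names and a hypothetical global control struct; not the project's actual definitions:

    /* Illustrative flag-test macros over a global control structure. */
    struct rzip_control {
        unsigned int flags;
    } control;

    #define FLAG_VERBOSE    (1U << 0)
    #define FLAG_STDOUT     (1U << 1)
    #define FLAG_TEST_ONLY  (1U << 2)

    #define TEST_FLAG(f)    (control.flags & (f))
    #define VERBOSE         TEST_FLAG(FLAG_VERBOSE)
    #define STDOUT          TEST_FLAG(FLAG_STDOUT)
    #define TEST_ONLY       TEST_FLAG(FLAG_TEST_ONLY)
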
Con Kolivas 2bacbc60d2 We were attempting to truncate the mmap length to a multiple of the page size when only the offset needed to be aligned.
Fix the longstanding limit on 32 bits that allowed us to allocate only 2GB of ram by moving the big malloc calls to mmap equivalents which allow us to mmap up to 2^44 bytes of anonymous space.
Use progressively smaller preallocations to try to defragment ram prior to the real mmap call, increasing the success rate of allocating ram when it is a significant proportion of total ram.
Don't fail if preallocation is unsuccessful.
Add more detailed error reporting.
Minor cleanups.
2010-11-01 00:19:39 +11:00
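
A sketch of the two allocation ideas above, assuming POSIX/Linux anonymous mmap: a warm-up pass maps and releases progressively smaller regions to coax the VM into making space, and its failure is deliberately ignored before the real mapping is attempted. The shrinking schedule here is an assumption for illustration.

    /* Sketch: anonymous mmap in place of a huge malloc, preceded by a
     * best-effort prealloc pass whose failure does not matter. */
    #include <stddef.h>
    #include <sys/mman.h>

    static void *big_alloc(size_t len)
    {
        size_t try_len;
        void *buf;
        int i;

        /* Best-effort warm-up: find a size that maps right now, release it
         * again, and hope the real mapping below then succeeds. */
        for (i = 10; i > 0; i--) {
            try_len = len / 10 * i;
            buf = mmap(NULL, try_len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf != MAP_FAILED) {
                munmap(buf, try_len);
                break;
            }
        }

        buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return buf == MAP_FAILED ? NULL : buf;
    }
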
Con Kolivas 25705aec28 Minor tidying. 2010-10-31 15:17:04 +11:00
Con Kolivas 8d9b64e1ec Change byte width to be dependent on file size.
This will increase compression speed and generate a smaller file, but is not backward compatible.
Tweak the way memory is allocated to optimise chances of success and minimise slowdown for the machine.
fsync to flush dirty data before allocating large ram, increasing the chance of the memory allocation succeeding and decreasing disk thrash from writes competing with reads.
Add lots more information to verbose mode.
Lots of code tidying and minor tweaks.
2010-10-31 15:09:05 +11:00
Con Kolivas c5da3a1adb Add more robust checking.
Premalloc ram to improve early detection of being unable to allocate that much ram.
Make sure to always make chunk size a multiple of page size for mmap to work.
Begin changes to make variable byte width offsets in rzip chunks.
Decrease header entries to only 2 byte wide as per original rzip.
Random other tidying.
2010-10-31 10:35:04 +11:00
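
Two of the points above, sketched: a pre-allocation check so an impossible window size is reported before any work starts, and rounding a chunk size to a multiple of the page size. Rounding up is an assumption of this sketch; the direction would depend on the caller.

    /* Sketch: early allocation check and page-size rounding for mmap. */
    #include <stdlib.h>
    #include <unistd.h>

    static int can_allocate(size_t len)
    {
        void *buf = malloc(len);

        if (!buf)
            return 0;           /* report the problem before doing any work */
        free(buf);
        return 1;
    }

    static size_t round_to_page(size_t len)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);

        return (len + page - 1) / page * page;
    }
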
Con Kolivas d972496aa8 Make messages come via stdout instead of stderr, courtesy of Alexander Saprykin 2010-04-25 16:26:00 +10:00
Con Kolivas 6dcceb0b1b Initial import 2010-03-29 10:07:08 +11:00