Since we can work out what the compression overhead is, we can factor it into testing how much RAM we can allocate.
There is no advantage to running multiple threads when there is no compression back end, so drop to a single thread.
Limit RAM for the compression back end to 1/3 of RAM regardless, for when the OS lies about available memory due to heavy overcommit.
Add one more thread on compression and decompression to account for the staggered nature of thread recruitment.
Make the initial buffer slightly smaller and grow it progressively, thus recruiting threads sooner and more evenly.
This also speeds up decompression for the same reason.
Check the amount of memory being used by each thread on decompression to ensure we don't try to allocate too much RAM.
Update main help screen to include environment settings.
Update to respect the $TMP environment variable for temporary files.
Update the control structure to include a tmpdir pointer.
Update the lrzip.conf parser to respect the -U and -M options.
Update lrzip.conf example to include new parameters.
Reorder the main switch loop in main.c for readability.
Make MAXRAM and control.window mutually exclusive; MAXRAM wins.
Make UNLIMITED and control.window mutually exclusive; UNLIMITED wins.
Make UNLIMITED and MAXRAM mutually exclusive; UNLIMITED wins.
Correct the heuristic computation in rzip.c which would override MAXRAM or UNLIMITED if control.window was set.
Show heuristically computed control.window when computed.
Remove the compression level display from control.window verbose output.
Update the print_verbose format for "Testing for incompressible data" in stream.c to omit an extra \n.
Changes by Peter Hyman <pete@peterhyman.com>
This is done by making the semaphore wrappers null functions on OSX and closing the thread in the creation wrapper.
Move the wrappers to rzip.h to make this change clean.
Instead of failing completely, detect when a failure has occurred and serialise work for that thread to decrease the memory required to complete compression/decompression.
Do this by waiting for the previous thread to complete before trying to work on that block again.
Check internally when lzma compress has failed and try a lower compression level before falling back to bzip2 compression.
This overlapping of compression streams means that when files are large enough to be split into multiple blocks, all CPUs are used more effectively throughout the compression, affording a nice speedup.
Move the writing of the chunk byte size and initial headers into the compthread to prevent any races occurring.
Fix a few dodgy callocs that may have been overflowing!
The previous commit reverts were done because the changes designed to speed it up actually slowed it down instead.
Scale the 9 levels down to the 7 that lzma has.
This makes the default lzma compression level 5, which is what lzma normally uses; it needs far less RAM and is significantly faster than before, at the cost of slightly less compression.
Change suggested maximum compression in README to disable threading with -p 1.
Use bzip2 as fallback compression when lzma fails due to internal memory errors, as may happen on 32 bit.
Limit lzma windows to 300MB in the right place on 32 bit only.
Make the main process less nice than the back end threads since it tends to be the rate-limiting step.
Choose sane defaults for memory usage since Linux ludicrously overcommits.
Use sliding mmap for any compression windows greater than 2/3 of RAM.
Consolidate and simplify testing of allocatable RAM.
Minor tweaks to output.
Round up the size of the high buffer in sliding mmap to one page.
Squeeze a little more out of 32 bit compression windows.