Reinstate the temporary files for decompression to stdout and testing, as the damaged line reinstated in the last commit made it look like those combinations worked when they actually didn't.

Compression from stdin still works without temporary files.
Con Kolivas, 2010-11-02 10:52:21 +11:00
parent c464975b8d
commit 102140dc2b
4 changed files with 23 additions and 24 deletions

README

@@ -62,9 +62,9 @@ lrztar wrapper to fake a complete archiver.
 2. It requires a lot of memory to get the best performance out of, and is not
 really usable (for compression) with less than 256MB. Decompression requires
 less ram and works on smaller ram machines.
-3. stdin on decompression and stdout on compression work but in a very
-inefficient manner generating temporary files on disk so this method of using
-lrzip is not recommended (the other combinations of stdin/out work nicely).
+3. Only stdin in compression works well. The other combinations of
+stdin/stdout work but in a very inefficient manner generating temporary files
+on disk so this method of using lrzip is not recommended.
 See the file README.benchmarks in doc/ for performance examples and what kind
 of data lrzip is very good with.
@@ -118,11 +118,9 @@ Q. Other operating systems?
 A. Patches are welcome. Version 0.43+ should build on MacOSX 10.5+
 
 Q. Does it work on stdin/stdout?
-A. Yes it does. The two most common uses, compression from stdin and
-decompression to stdout work nicely. However the other combinations of
-decompression from stdin and compression to stdout use temporary files on
-disk because of seeking requirements so the performance of these mode is low.
-Not recommended!
+A. Yes it does. Compression from stdin works nicely. However the other
+combinations of stdin and stdout use temporary files on disk because of
+seeking requirements so the performance of these mode is low. Not recommended!
 
 Q. I have another compression format that is even better than zpaq, can you
 use that?

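The "seeking requirements" mentioned in the README are the reason stdin cannot be decompressed directly: the decompressor needs random access to the archive, and a pipe cannot seek. A minimal sketch of the spool-to-temp-file workaround such a mode implies (hypothetical helper name, not lrzip's actual code):

```c
#include <stdio.h>

/* Spool a non-seekable stream into an anonymous temporary file so the
 * caller can seek in it. Returns a seekable FILE* rewound to the start,
 * or NULL on error. Hypothetical sketch, not lrzip's implementation. */
FILE *spool_to_tmpfile(FILE *in)
{
	FILE *tmp = tmpfile();	/* unlinked automatically on close */
	char buf[8192];
	size_t n;

	if (!tmp)
		return NULL;
	while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
		if (fwrite(buf, 1, n, tmp) != n) {
			fclose(tmp);
			return NULL;
		}
	}
	rewind(tmp);	/* random access now works, unlike on a pipe */
	return tmp;
}
```

This is also why the mode is slow and not recommended: every byte is written to disk and read back before decompression even starts.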

@@ -3,10 +3,9 @@ lrzip-0.50
 Rewrote the file format to be up to 5% more compact and slightly faster.
 Made the memory initialisation much more robust, with attempted fallback
 to still work even when initial settings fail.
-Fixed a lot of the stdin/stdout code.
-The two most common scenarios now work without temporary files:
-Compression from stdin and decompression to stdout.
-Testing of archive integrity no longer writes any temporary files.
+Updated a lot of the stdin code.
+The most common scenario of compression from stdin now works without
+temporary files.
 Lots more meaningful warnings if failure occurs.
 Lots of code cleanups and tidying.

main.c

@@ -291,31 +291,33 @@ static void decompress_file(void)
 		if (!NO_SET_PERMS)
 			preserve_perms(fd_in, fd_out);
-		fd_hist = open(control.outfile, O_RDONLY);
-		if (fd_hist == -1)
-			fatal("Failed to open history file %s\n", control.outfile);
-	} else if (TEST_ONLY) {
-		fd_out = open("/dev/null", O_WRONLY);
-		fd_hist = open("/dev/zero", O_RDONLY);
-	} else if (STDOUT) {
-		fd_out = 1;
-		fd_hist = 1;
-	}
+	} else
+		fd_out = open_tmpoutfile();
+	fd_hist = open(control.outfile, O_RDONLY);
+	if (fd_hist == -1)
+		fatal("Failed to open history file %s\n", control.outfile);
 
 	read_magic(fd_in, &expected_size);
 	print_progress("Decompressing...");
 
 	runzip_fd(fd_in, fd_out, fd_hist, expected_size);
 
+	if (STDOUT)
+		dump_tmpoutfile(fd_out);
+
 	/* if we get here, no fatal errors during decompression */
 	print_progress("\r");
 	if (!(STDOUT | TEST_ONLY))
 		print_output("Output filename is: %s: ", control.outfile);
 	print_progress("[OK] - %lld bytes \n", expected_size);
 
-	if (!STDOUT) {
-		if (close(fd_hist) != 0 || close(fd_out) != 0)
-			fatal("Failed to close files\n");
-	}
+	if (close(fd_hist) != 0 || close(fd_out) != 0)
+		fatal("Failed to close files\n");
+
+	if (TEST_ONLY | STDOUT) {
+		/* Delete temporary files generated for testing or faking stdout */
+		if (unlink(control.outfile) != 0)
+			fatal("Failed to unlink tmpfile: %s\n", strerror(errno));
+	}
 
 	close(fd_in);

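The reinstated flow in decompress_file() is: decompress into a temp file, and in stdout mode copy that file to stdout afterwards (dump_tmpoutfile), then unlink it for both test and stdout modes. The copy step can be sketched in isolation; this is a simplified stand-in with a hypothetical name, not lrzip's dump_tmpoutfile() itself:

```c
#include <unistd.h>

/* Rewind a temporary output file and copy its whole contents to another
 * descriptor. lrzip's dump_tmpoutfile() does the equivalent with the
 * destination being stdout (fd 1) after decompressing to a temp file.
 * Sketch only; returns 0 on success, -1 on error. */
int dump_tmpfile(int fd_tmp, int fd_dest)
{
	char buf[8192];
	ssize_t n;

	if (lseek(fd_tmp, 0, SEEK_SET) == -1)	/* temp file is seekable */
		return -1;
	while ((n = read(fd_tmp, buf, sizeof buf)) > 0) {
		if (write(fd_dest, buf, (size_t)n) != n)
			return -1;
	}
	return n < 0 ? -1 : 0;
}
```

The temp file is necessary here because runzip_fd() seeks in fd_out and fd_hist while reconstructing matches, which a pipe to stdout cannot support; only after decompression completes can the result be streamed out sequentially.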

@@ -197,7 +197,7 @@ static i64 runzip_chunk(int fd_in, int fd_out, int fd_hist, i64 expected_size, i
 			break;
 		}
 		p = 100 * ((double)(tally + total) / (double)expected_size);
-		if ( p != l ) {
+		if (p != l) {
 			prog_done = (double)(tally + total) / (double)divisor[divisor_index];
 			print_progress("%3d%% %9.2f / %9.2f %s\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b",
 				p, prog_done, prog_tsize, suffix[divisor_index] );
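The runzip_chunk() hunk only touches whitespace, but the surrounding logic is worth noting: the progress line is redrawn only when the integer percentage `p` differs from the last printed value `l`, which keeps the backspace-heavy line from being rewritten on every read. A reduced sketch of that throttling calculation (hypothetical function name):

```c
/* Integer percentage of work done, clamped to 0..100. The caller
 * redraws the progress line only when this value changes, mirroring
 * the "if (p != l)" throttle in runzip_chunk(). Sketch only. */
int progress_pct(long long done, long long total)
{
	long long p;

	if (total <= 0)
		return 100;	/* nothing left to do */
	p = 100 * done / total;
	if (p < 0)
		p = 0;
	if (p > 100)
		p = 100;
	return (int)p;
}
```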