Description
While adding more unit tests, I found a corner case when trying to compress a document with a `maxDstSize` that is exactly the expected compressed size (found by a previous compression attempt): in this test, `FSE_compress(..)` returns 0 (incompressible data), while it was previously able to compress the same input given a larger destination buffer.
I'm doing a two-step process (see the sketch after this list):
- Allocate a buffer large enough (using `FSE_compressBound`), compress the source by calling `FSE_compress(..., maxDstSize: FSE_compressBound(ORIGINAL_SIZE))`, and measure the final compressed size `COMPRESSED_SIZE`.
- Repeat the process on the same buffer and payload, but this time call `FSE_compress(..., maxDstSize: COMPRESSED_SIZE)`. I get a result code of 0, which means the data is incompressible.
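For reference, here is a minimal sketch of the repro, assuming the public `fse.h` API (`FSE_compress(dst, dstCapacity, src, srcSize)` and `FSE_compressBound(srcSize)`); the payload below is just a stand-in for my real data:

```c
#include <stdio.h>
#include <stdlib.h>
#include "fse.h"

int main(void)
{
    /* Stand-in for my real 4,288-byte payload: anything compressible works. */
    size_t const ORIGINAL_SIZE = 4288;
    unsigned char* const src = malloc(ORIGINAL_SIZE);
    for (size_t i = 0; i < ORIGINAL_SIZE; i++) src[i] = (unsigned char)(i % 16);

    /* Step 1: compress into a buffer sized by FSE_compressBound(). */
    size_t const bound = FSE_compressBound(ORIGINAL_SIZE);
    void* const dst = malloc(bound);
    size_t const COMPRESSED_SIZE = FSE_compress(dst, bound, src, ORIGINAL_SIZE);
    printf("bound=%zu compressed=%zu\n", bound, COMPRESSED_SIZE);

    /* Step 2: same payload, but maxDstSize is exactly the size just measured.
       Expected: same result; observed: 0, i.e. "not compressible". */
    size_t const second = FSE_compress(dst, COMPRESSED_SIZE, src, ORIGINAL_SIZE);
    printf("second attempt=%zu (0 means incompressible)\n", second);

    free(dst);
    free(src);
    return 0;
}
```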
I tried probing for the minimum `maxDstSize` that still allows compressing the buffer (which is known to be compressible), and each time I need to call `FSE_compress(..)` with at least `COMPRESSED_SIZE + 8`. At first I thought it could be a pointer alignment issue, but it is always +8 bytes whatever the compressed size is (I varied the source a bit to check). The probe loop is sketched below.
In my test, the raw original size is 4,288 bytes, `FSE_compressBound` returns 4,833 bytes, and the compressed size is 2,821 bytes. I need to pass at least 2,821 + 8 = 2,829 bytes for `FSE_compress` to succeed (return value > 0).
Is this expected behavior? I'm not sure if the "+8 rule" is true, or if this is random chance with the inputs I'm passing in.