Zstd bomb fix #937
Conversation
```diff
- bcs::from_bytes(decompressed.as_slice()).context("failed to deserialize blob")?;
+ // decompress the blob and deserialize the data with bcs
+ let decoder = zstd::Decoder::new(blob.data.as_slice())?;
+ let blob = bcs::from_reader(decoder).context("failed to deserialize blob")?;
```
Do you know how fill_buf would be called here? I'm wondering whether this is in fact going to raise an Error, or whether it might just panic if the buffer size is restricted.
I don't think I fully understand this question; we didn't restrict the size of the internal buffer.
But you did bring to my attention that the slice also implements BufRead, so we can take advantage of that and bypass the intermediate buffer.
Yes, bcs would return an Error.
I have implemented a customizable limit on the length of variable-length fields in zefchain/bcs#12.
Use streamed decompression API to avoid potentially huge allocations for blob data decompressed from zstd. The bcs decoder enforces the length limit of 2^31 - 1 on byte arrays.
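The shape of the fix can be sketched with standard-library tools: a "zstd bomb" is a tiny compressed payload that expands enormously on decompression, and streaming the decompression lets the reader enforce an output cap before the huge allocation ever happens. A minimal sketch, with zlib standing in for zstd (the real code wires `zstd::Decoder` into `bcs::from_reader`); `decompress_capped` and `MAX_OUTPUT` are hypothetical names, not part of this PR:

```python
import zlib

# A decompression bomb: a small compressed payload that expands enormously.
# zlib stands in for zstd here; the principle is the same.
bomb = zlib.compress(b"\x00" * (64 * 1024 * 1024))  # 64 MiB of zeros

MAX_OUTPUT = 1024 * 1024  # hypothetical 1 MiB cap on decompressed size


def decompress_capped(payload: bytes, limit: int) -> bytes:
    """Stream-decompress `payload`, failing as soon as output exceeds `limit`."""
    d = zlib.decompressobj()
    out = bytearray()
    # Feed the input in small chunks and bound how much output we accept
    # per step, so memory use never spikes past the cap.
    for i in range(0, len(payload), 4096):
        out += d.decompress(payload[i:i + 4096], limit + 1 - len(out))
        if len(out) > limit or d.unconsumed_tail:
            raise ValueError("decompressed data exceeds limit")
    return bytes(out)


print(len(bomb) < MAX_OUTPUT)   # True: the bomb itself is small on the wire
try:
    decompress_capped(bomb, MAX_OUTPUT)
except ValueError as err:
    print(err)  # decompressed data exceeds limit
```

The point of the streamed variant is that the caller's memory use is bounded by the cap regardless of what the attacker puts on the wire, which is what the switch to `bcs::from_reader` plus the bcs length limit achieves in the actual code.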
Force-pushed from 16ea004 to b2fd870
Avoid an intermediate buffer when serializing and compressing for CelestiaBlob, plugging the encoder into the serializer.
The data slice is already BufRead, just use it directly.
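The encode-side change is symmetrical: instead of serializing into a buffer and then compressing that buffer, the commit plugs the encoder into the serializer so bytes are compressed as they are produced. A hedged sketch of that shape, with zlib standing in for `zstd::Encoder` and a hand-rolled generator standing in for the bcs serializer (the length-prefix format below is made up for illustration, not real bcs encoding):

```python
import zlib


def compress_streamed(chunks) -> bytes:
    """Compress an iterable of byte chunks without first joining them
    into one large intermediate buffer."""
    c = zlib.compressobj()
    out = bytearray()
    for chunk in chunks:
        out += c.compress(chunk)
    out += c.flush()
    return bytes(out)


def serialize_fields():
    # Stand-in for a serializer emitting field by field; the 4-byte
    # little-endian length prefix is hypothetical, not bcs's format.
    payload = b"hello"
    yield len(payload).to_bytes(4, "little")
    yield payload


compressed = compress_streamed(serialize_fields())
print(zlib.decompress(compressed))  # b'\x05\x00\x00\x00hello'
```

Because the compressor consumes each chunk as it is emitted, the full serialized blob never exists in memory at once.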
Use a crafted zstd payload as submitted in #876 (comment)
Need this to enable length limit enforcement on deserialization of blobs.
GenericArray has panicky From conversions, who knew?
It looks like there was a simple container conflict. I still have the same reservation that we can effectively still be …
Fixes #876 (but see outstanding issues below), #1008.
Dependencies
movementlabsxyz/aptos-core#111
Summary
Use streamed decompression API to avoid potentially huge allocations
for blob data decompressed from zstd. The bcs decoder enforces the
length limit of 2^31 - 1 on byte arrays.
Changelog
Testing
Added unit tests to movement-celestia-da-util to exercise compression and decompression,
with both over-limit and valid lengths for the compressed payload and the data field. The test that creates legitimate data structures with super-long blobs is ignored in default runs because of its memory and processing requirements.
Outstanding issues
As commented in #876 (comment), there are four byte array fields in the blob data structure that can each be bloated up to 2 GiB.
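To put a number on that residual exposure: the bcs limit bounds each field but not their sum, so with four independently capped byte-array fields a crafted blob could still demand allocations approaching 8 GiB. Plain arithmetic, with the field count taken from the comment above:

```python
PER_FIELD_LIMIT = 2**31 - 1  # bcs cap on a single byte-array field
FIELD_COUNT = 4              # byte-array fields in the blob structure

worst_case = FIELD_COUNT * PER_FIELD_LIMIT
print(worst_case)  # 8589934588, just under 8 GiB in total
```

This is why the per-field limit alone closes the single-allocation bomb but leaves an aggregate-size issue open.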