Change storage layout to allow mapping #62
Conversation
📝 Walkthrough
Refactors BaseBuffer to store block hashes in a mapping plus a circular buffer, and updates multiple ChildToParentProver contracts and tests to use the new buffer layout.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
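For orientation, here is a minimal self-contained sketch of the layout the walkthrough describes, mirroring the names used in the diff further down (_blockHashes, _buffer, _BUFFER_SIZE, blockHashesLength). The buffer size, the _writeHashes/getBlockHash functions, and the abstract-contract shape are illustrative assumptions, not the real BaseBuffer:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Sketch only: illustrates the mapping-plus-circular-buffer idea, not the actual contract.
abstract contract BaseBufferSketch {
    // assumed size; the real buffer size may differ
    uint256 private constant _BUFFER_SIZE = 8191;

    // block number => block hash, so individual hashes can be read via storage proofs
    mapping(uint256 blockNumber => bytes32 blockHash) private _blockHashes;

    // circular buffer of block numbers, used to evict stale hashes from the mapping
    uint256[_BUFFER_SIZE] private _buffer;

    function _writeHashes(uint256 firstBlockNumber, bytes32[] calldata blockHashes) internal {
        uint256 blockHashesLength = blockHashes.length;
        for (uint256 i = 0; i < blockHashesLength; i++) {
            uint256 blockNumber = firstBlockNumber + i;
            uint256 bufferIndex = blockNumber % _BUFFER_SIZE;
            uint256 existingBlockNumber = _buffer[bufferIndex];

            // never overwrite an entry for an equal or newer block
            if (blockNumber <= existingBlockNumber) {
                continue;
            }

            // evict the hash of the block that previously occupied this buffer slot
            if (existingBlockNumber != 0) {
                _blockHashes[existingBlockNumber] = 0;
            }

            _blockHashes[blockNumber] = blockHashes[i];
            _buffer[bufferIndex] = blockNumber;
        }
    }

    // direct lookup by block number; evicted or never-pushed blocks return zero here
    function getBlockHash(uint256 blockNumber) public view returns (bytes32) {
        return _blockHashes[blockNumber];
    }
}

The mapping is what makes per-block storage proofs practical: a prover only needs the mapping's base slot and the block number to derive the exact storage slot holding the hash.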
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/contracts/block-hash-pusher/BaseBuffer.sol`:
- Around lines 49-65: The loop currently writes block numbers into _buffer even
when the corresponding block hash in blockHashes is zero, creating an
irreversible tombstone because future writes for that block are skipped; update
the write path in BaseBuffer (the loop that iterates over blockHashes, uses
variables blockNumber, bufferIndex, existingBlockNumber, and writes to _buffer
and _blockHashes) to reject zero hashes by adding a guard such as
require(blockHashes[i] != bytes32(0), "zero block hash not allowed") (or
otherwise revert) before mutating _buffer/_blockHashes so zero entries cannot be
recorded and corrupt future writes.
  // write the hashes to both the mapping and circular buffer
  for (uint256 i = 0; i < blockHashesLength; i++) {
      uint256 blockNumber = firstBlockNumber + i;
      uint256 bufferIndex = blockNumber % _BUFFER_SIZE;
-     BufferSlot storage bufferSlot = _buffer[bufferIndex];
-     if (blockNumber <= bufferSlot.blockNumber) {
+     uint256 existingBlockNumber = _buffer[bufferIndex];
+     if (blockNumber <= existingBlockNumber) {
          // noop
          continue;
      }

-     bufferSlot.blockNumber = blockNumber;
-     bufferSlot.blockHash = blockHashes[i];
+     if (existingBlockNumber != 0) {
+         _blockHashes[existingBlockNumber] = 0;
+     }
+
+     _blockHashes[blockNumber] = blockHashes[i];
+     _buffer[bufferIndex] = blockNumber;
  }
Guard against zero block hashes to avoid irreversible tombstones.
If a zero hash is pushed, _buffer records the block number but the mapping is left empty. Later pushes for the same block number will be skipped, making it impossible to correct that entry. Consider rejecting zero hashes at write time.
🔧 Proposed fix
  for (uint256 i = 0; i < blockHashesLength; i++) {
      uint256 blockNumber = firstBlockNumber + i;
+     require(blockHashes[i] != bytes32(0), "ZERO_BLOCK_HASH");
      uint256 bufferIndex = blockNumber % _BUFFER_SIZE;
      uint256 existingBlockNumber = _buffer[bufferIndex];
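To make the tombstone scenario and the proposed guard concrete, here is a hypothetical Foundry-style test against an inline stand-in for the write path; the contract, the push function, and the ZERO_BLOCK_HASH revert string are assumptions for illustration, not the project's actual API:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Test} from "forge-std/Test.sol";

// Minimal stand-in for the BaseBuffer write path under discussion; not the real contract.
contract GuardedBufferSketch {
    uint256 private constant _BUFFER_SIZE = 8191;

    mapping(uint256 => bytes32) public blockHashes;
    uint256[_BUFFER_SIZE] private _buffer;

    function push(uint256 firstBlockNumber, bytes32[] calldata hashes) external {
        for (uint256 i = 0; i < hashes.length; i++) {
            uint256 blockNumber = firstBlockNumber + i;
            // the guard suggested in the review comment
            require(hashes[i] != bytes32(0), "ZERO_BLOCK_HASH");
            uint256 bufferIndex = blockNumber % _BUFFER_SIZE;
            uint256 existing = _buffer[bufferIndex];
            if (blockNumber <= existing) continue;
            if (existing != 0) blockHashes[existing] = 0;
            blockHashes[blockNumber] = hashes[i];
            _buffer[bufferIndex] = blockNumber;
        }
    }
}

contract GuardedBufferSketchTest is Test {
    GuardedBufferSketch internal buffer;

    function setUp() public {
        buffer = new GuardedBufferSketch();
    }

    // A zero hash is rejected outright, so it can never occupy a buffer slot
    // and block a later, correct push for the same block number.
    function test_zeroHashRejected() public {
        bytes32[] memory hashes = new bytes32[](1);
        hashes[0] = bytes32(0);
        vm.expectRevert(bytes("ZERO_BLOCK_HASH"));
        buffer.push(100, hashes);
    }

    function test_validHashIsStored() public {
        bytes32[] memory hashes = new bytes32[](1);
        hashes[0] = keccak256("block 100");
        buffer.push(100, hashes);
        assertEq(buffer.blockHashes(100), hashes[0]);
    }
}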
Reverts 62185dd
+ mapping(uint256 blockNumber => bytes32 blockHash) private _blockHashes;

- BufferSlot[_BUFFER_SIZE] private _buffer;
+ uint256[_BUFFER_SIZE] private _buffer;
Let's call this _blockNumberBuffer as it used to be.
frangio left a comment
Ok after above comment is addressed.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test/mocks/BufferMock.sol (1)
9-16: Remove the lingering uint64 cast to avoid truncation.
Line 16 still narrows the value even though newestBlockNumber is now uint256. This can silently truncate large block numbers.
✅ Proposed fix
- newestBlockNumber = uint64(firstBlockNumber + blockHashes.length - 1);
+ newestBlockNumber = firstBlockNumber + blockHashes.length - 1;
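As a side note, here is a tiny hypothetical demo of why the narrowing cast is risky; the contract name and values are chosen only to show the silent wrap-around:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Explicit downcasts in Solidity 0.8 do not revert; they silently drop the high bits.
contract TruncationDemo {
    function demo() external pure returns (uint256 original, uint64 narrowed) {
        original = uint256(type(uint64).max) + 1; // 2**64
        narrowed = uint64(original);              // wraps to 0, no revert
    }
}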
🧹 Nitpick comments (2)
src/contracts/provers/linea/ChildToParentProver.sol (1)
45-47: Update the stale URL comment to reference the local BaseBuffer.
The comment on line 46 still references https://github.com/OffchainLabs/block-hash-pusher/... but this prover now uses a local BaseBuffer with a different storage layout. Consider updating the comment to point to the local BaseBuffer.sol path for accuracy.
📝 Suggested comment update
  // calculate the slot based on the provided block number
- // see: https://github.com/OffchainLabs/block-hash-pusher/blob/a1e26f2e42e6306d1e7f03c5d20fa6aa64ff7a12/contracts/Buffer.sol#L32
+ // see: src/contracts/block-hash-pusher/BaseBuffer.sol
  uint256 slot = uint256(SlotDerivation.deriveMapping(bytes32(blockHashMappingSlot), targetBlockNumber));

src/contracts/provers/scroll/ChildToParentProver.sol (1)
16-18: Verify the mapping slot aligns with BaseBuffer’s new layout.
Storage proofs will fail if Line 18 drifts from the actual mapping slot; please confirm it's truly 1 in the updated BaseBuffer layout. If feasible, consider sourcing the slot from a shared constant to avoid future drift.
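For reference, the slot those provers target follows the standard Solidity rule for mapping values: keccak256(abi.encode(key, baseSlot)). Below is a minimal sketch of that derivation next to the OpenZeppelin helper named in the diff above; the base slot value of 1 is an assumption here (exactly what the comment asks to confirm), and the import assumes OpenZeppelin Contracts v5.1+ for SlotDerivation:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {SlotDerivation} from "@openzeppelin/contracts/utils/SlotDerivation.sol";

library BlockHashSlotSketch {
    using SlotDerivation for bytes32;

    // Assumed base slot of BaseBuffer's _blockHashes mapping; it must match the
    // contract's real storage layout or storage proofs will not verify.
    uint256 internal constant BLOCK_HASH_MAPPING_SLOT = 1;

    // Raw derivation: mapping[key] lives at keccak256(abi.encode(key, baseSlot)).
    function rawSlot(uint256 blockNumber) internal pure returns (bytes32) {
        return keccak256(abi.encode(blockNumber, BLOCK_HASH_MAPPING_SLOT));
    }

    // The same value via OpenZeppelin's SlotDerivation, as the provers compute it.
    function helperSlot(uint256 blockNumber) internal pure returns (bytes32) {
        return bytes32(BLOCK_HASH_MAPPING_SLOT).deriveMapping(blockNumber);
    }
}

Exporting the constant (and either helper) from one shared location, as the comment suggests, would keep every prover pinned to the same slot and remove the risk of drift.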
This PR updates the BaseBuffer contract to have a mapping of blockNumber to blockHash. This storage layout change is done to allow for storage proofs on the buffer contract.

Summary by CodeRabbit
Bug Fixes
Chores
Tests