@RogerPodacter RogerPodacter commented Sep 9, 2025

Summary

This PR implements the Facet V2 batching protocol, enabling significant cost reductions through transaction batching and EIP-4844 blob support. This is the first rollup protocol to support both permissionless batching and priority sequencing simultaneously.

What's Implemented

Core Features

  • Batch submission via multiple transports: Calldata and EIP-4844 blobs (events only support V1 single transactions)
  • RLP encoding for all batch data structures
  • Dual-role system: Priority batches (authorized sequencer) and forced batches (permissionless)
  • Gas validation for priority batches to ensure they don't exceed their 80% allocation
  • EIP-4844 blob support through beacon node integration
  • Content-based deduplication of identical batches (see the sketch after this list)
  • Full backwards compatibility with V1 single transaction format
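
A minimal sketch of the content-based deduplication rule above, assuming batches are keyed on a digest of their raw payload bytes. SHA-256 stands in here for whatever content hash the collector actually uses, and the class and method names are illustrative, not taken from this PR:

```ruby
require "set"
require "digest"

# Minimal sketch: content-based deduplication of batches.
# Assumes batches are keyed by a digest of their raw payload bytes; the real
# implementation presumably uses a Keccak-256 content hash rather than SHA-256.
class BatchDeduper
  def initialize
    @seen = Set.new
  end

  # Returns true the first time a payload is seen, false for identical repeats.
  def new_batch?(raw_payload)
    !@seen.add?(Digest::SHA256.digest(raw_payload)).nil?
  end
end

deduper = BatchDeduper.new
deduper.new_batch?("batch-bytes") # => true
deduper.new_batch?("batch-bytes") # => false (identical batch ignored)
```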

Key Components

  • FacetBatchCollector: Scans L1 blocks for transactions from all sources (V1 singles, V2 batches)
  • FacetBatchParser: Parses and validates RLP-encoded batch format
  • FacetBlockBuilder: Implements transaction ordering rules (priority first, then permissionless)
  • StandardL2Transaction: Handles standard EIP-2718 typed transactions (EIP-1559, EIP-2930, legacy)
  • BlobProvider: Fetches EIP-4844 blobs from beacon nodes
  • BatchSignatureVerifier: Placeholder for EIP-712 signature verification (not yet implemented)

Configuration

  • FACET_BATCH_V2_ENABLED: Feature flag to enable V2 protocol
  • PRIORITY_SHARE_BPS: Priority sequencer gas allocation in basis points (default 8000 = 80%; see the sketch after this list)
  • MAX_BATCH_BYTES: Maximum batch size (128KB default)
  • MAX_TXS_PER_BATCH: Maximum transactions per batch (1000)
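
The PRIORITY_SHARE_BPS allocation and the 80% gas check can be illustrated with a small sketch; the method names, variable names, and the ENV-based lookup are assumptions for illustration, not taken from this PR:

```ruby
# Minimal sketch of the priority gas check implied by PRIORITY_SHARE_BPS.
PRIORITY_SHARE_BPS = Integer(ENV.fetch("PRIORITY_SHARE_BPS", "8000"))

# 8000 bps == 80% of the L2 block gas limit reserved for the priority sequencer.
def priority_gas_allowance(l2_block_gas_limit)
  l2_block_gas_limit * PRIORITY_SHARE_BPS / 10_000
end

def priority_batch_within_allocation?(batch_gas, l2_block_gas_limit)
  batch_gas <= priority_gas_allowance(l2_block_gas_limit)
end

priority_gas_allowance(30_000_000)                        # => 24_000_000
priority_batch_within_allocation?(25_000_000, 30_000_000) # => false, batch rejected
```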

Testing

Comprehensive test coverage including:

  • End-to-end blob integration tests
  • Mixed V1/V2 transaction ordering scenarios
  • Gas validation and priority batch rejection
  • Batch parsing and deduplication
  • Malformed data handling
  • Multi-batch blob support

Migration

Fully backwards compatible: existing V1 transactions continue to work unchanged. Enable with FACET_BATCH_V2_ENABLED=true.


Note

Implements Facet V2 batching (calldata + blob) with parsing/ordering, beacon-node blob integration, a new TypeScript micro‑sequencer, CI/Dockerization, and extensive tests/refactors across importer, minting, and infra.

  • Protocol/Backend:
    • Facet V2 Batching: Add FacetBatchCollector, FacetBatchParser, FacetBlockBuilder, FacetBatchConstants, ParsedBatch, StandardL2Transaction, and BatchSignatureVerifier with priority/permissionless roles, gas checks, and content-hash dedupe.
    • Blob Support (EIP-4844): Add BlobProvider, EthereumBeaconNodeClient, and BlobUtils for blob fetching/encoding; integrate blob batch collection.
    • Importer/Infra: Enhance L1RpcPrefetcher and EthBlockImporter (prefetch flow, error handling, shutdown), add ImportProfiler, and improve GethDriver logging/flow.
    • Minting: Update FctMintCalculator, FctMintCalculatorAlbatross, and MintPeriod to ignore standard L2 txs for mint and handle fork-period logic.
    • Config: Add PriorityRegistry; expand env/network handling (e.g., hoodi); minor sysconfig/chain updates.
  • Sequencer (Node/TS):
    • Add lightweight micro‑sequencer (src/ services for batching, posting via direct/DA Builder, monitoring, API), scripts, and Dockerfile.
  • CI/DevOps:
    • Add GitHub Actions to build/publish images (build-images.yml, build-sequencer.yml).
    • Update docker-compose to include sequencer service; add env examples.
  • Tests/Docs:
    • Add comprehensive specs for blobs, batch parsing/ordering, reorg handling, minting; add test pages & utilities.
  • Misc:
    • Minor project hygiene (ignore files, gems, logging).

Written by Cursor Bugbot for commit 271e854.

This PR introduces the V2 batch system for Facet, enabling efficient transaction batching through both calldata and EIP-4844 blobs. This is a work in progress toward a more scalable architecture.

## Overview

The V2 system allows transactions to be submitted in batches rather than individually, with support for:
- **Calldata batches**: Traditional L1 calldata submissions with magic byte detection
- **Blob batches**: EIP-4844 blob submissions for reduced L1 costs
- **Dual roles**: Sequencer (priority) and Forced (permissionless) batch submission

## Key Components

### Batch Collection & Parsing
- `FacetBatchCollector`: Scans L1 blocks for Facet batches in both calldata and blobs
- `FacetBatchParser`: Parses raw payloads using magic bytes (0x00000000000012345) to identify Facet data (a detection sketch follows this list)
- `ParsedBatch`: Represents a parsed batch with role, transactions, and metadata
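
A minimal sketch of how magic-prefix detection might look. The prefix value and helper names below are placeholders (the real constant lives in `FacetBatchConstants` and differs from this one), and in aggregated submissions the prefix may be located by scanning rather than only at offset zero:

```ruby
# Minimal sketch of magic-prefix detection; MAGIC_PREFIX is a stand-in value.
MAGIC_PREFIX = "\xFA\xCE\x70\x02".b # hypothetical 4-byte prefix for illustration

# A calldata or blob payload is treated as Facet data only if it carries the prefix.
def facet_payload?(raw_bytes)
  raw_bytes.b.start_with?(MAGIC_PREFIX)
end

# Strip the prefix before handing the remainder to FacetBatchParser.
def batch_body(raw_bytes)
  return nil unless facet_payload?(raw_bytes)

  raw_bytes.b.byteslice(MAGIC_PREFIX.bytesize, raw_bytes.bytesize - MAGIC_PREFIX.bytesize)
end
```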

### Blob Support
- `BlobProvider`: Fetches blob data from Ethereum beacon nodes
- `EthereumBeaconNodeClient`: Interfaces with beacon API for blob sidecars
- `BlobUtils`: Handles EIP-4844 encoding/decoding (field element packing; sketched after this list)
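
A minimal sketch of the field-element packing EIP-4844 blobs require: each 32-byte field element carries a 0x00 lead byte plus 31 data bytes so the element stays below the BLS modulus. The terminator/padding scheme of the actual `BlobUtils` (the review thread below mentions an 0x80 terminator) is omitted here, and the helper name is illustrative:

```ruby
# Minimal sketch of packing arbitrary bytes into a single EIP-4844 blob.
BYTES_PER_FIELD_ELEMENT = 32
USABLE_BYTES_PER_ELEMENT = BYTES_PER_FIELD_ELEMENT - 1 # 31
FIELD_ELEMENTS_PER_BLOB = 4096                         # 131,072-byte blob

def pack_into_blob(data)
  max = USABLE_BYTES_PER_ELEMENT * FIELD_ELEMENTS_PER_BLOB
  raise ArgumentError, "payload exceeds one blob (#{max} bytes)" if data.bytesize > max

  blob = +"".b
  data.bytes.each_slice(USABLE_BYTES_PER_ELEMENT) do |chunk|
    blob << "\x00".b        # lead byte keeps the field element canonical
    blob << chunk.pack("C*")
  end
  # Pad to the fixed blob size; the real encoder uses an explicit terminator scheme.
  blob.ljust(BYTES_PER_FIELD_ELEMENT * FIELD_ELEMENTS_PER_BLOB, "\x00".b)
end
```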

### Transaction Types
- `StandardL2Transaction`: Standard EIP-2718 typed transactions (legacy, EIP-2930, EIP-1559); type detection is sketched after this list
- Maintains backward compatibility with V1 single transactions
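
A minimal sketch of EIP-2718 type detection as it might apply to raw transactions carried in a batch; the method name and the `:unsupported` fallback are illustrative:

```ruby
# Minimal sketch of EIP-2718 type detection. A first byte of 0x01 or 0x02 marks
# a typed-transaction envelope; a first byte that is an RLP list prefix
# (0xc0..0xff) means a legacy transaction.
def l2_tx_kind(raw_tx_bytes)
  case raw_tx_bytes.getbyte(0)
  when 0x01       then :eip2930
  when 0x02       then :eip1559
  when 0xc0..0xff then :legacy
  else                 :unsupported
  end
end
```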

### Block Building
- `FacetBlockBuilder`: Constructs L2 blocks from collected batches
- `PriorityRegistry`: Manages authorized sequencers
- `BatchSignatureVerifier`: Validates batch signatures

## How It Works

1. **Submission**: Batches are submitted to L1 either as calldata or blobs
2. **Detection**: Magic bytes identify Facet data within aggregated submissions
3. **Collection**: Collector scans all sources (calldata, events, blobs) for batches
4. **Parsing**: Parser extracts transactions from identified batches
5. **Building**: Builder orders transactions (priority first, then forced); see the ordering sketch below
6. **Execution**: Transactions are executed in the determined order
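
A minimal sketch of the ordering rule in step 5, assuming `ParsedBatch` exposes `is_priority?` (it appears in the test excerpts further down) and some L1-position accessor, here called `l1_tx_index` as a placeholder:

```ruby
# Minimal sketch: priority batches first, then forced (permissionless) batches
# in the order they appeared on L1. Attribute names are assumptions.
def order_batches(batches)
  priority, forced = batches.partition(&:is_priority?)
  priority + forced.sort_by(&:l1_tx_index)
end
```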

## Current Status

This is a work in progress. The foundation is in place for:
- ✅ Batch detection and parsing
- ✅ Blob fetching from beacon nodes
- ✅ Transaction ordering and block building
- ✅ Backward compatibility with V1

Still in development:
- Full integration with sequencer infrastructure
- Production blob aggregation strategies
- Performance optimizations
Copilot AI left a comment

Pull Request Overview

This PR implements the Facet V2 batching protocol, introducing a new transaction batching system that significantly reduces costs through EIP-4844 blob support and permissionless batching. The implementation enables dual-role sequencing with both priority batches (authorized sequencer) and forced batches (permissionless) while maintaining full backwards compatibility with V1 single transactions.

Key changes include:

  • Multi-transport batch support: Calldata and EIP-4844 blobs with RLP encoding
  • Transaction ordering system: Priority batches processed first, then forced batches by L1 index
  • Blob integration infrastructure: Beacon node client, blob utils, and provider components
  • Gas validation and deduplication: Priority batch limits and content-based batch deduplication

Reviewed Changes

Copilot reviewed 29 out of 29 changed files in this pull request and generated 6 comments.

| File | Description |
| --- | --- |
| spec/support/facet_transaction_helper.rb | Enhanced test helper to support V2 batch transactions alongside V1 singles |
| spec/support/blob_test_helper.rb | New comprehensive blob testing utilities for EIP-4844 integration |
| spec/services/*.rb | Test coverage for all batch processing components (collector, parser, builder) |
| spec/integration/*.rb | End-to-end integration tests for blob processing and mixed transaction scenarios |
| lib/sys_config.rb | Configuration for V2 protocol features and priority sequencer settings |
| lib/blob_utils.rb | EIP-4844 blob encoding/decoding utilities with proper field element handling |
| app/services/facet_batch_*.rb | Core batch processing pipeline (collector, parser, builder) |
| app/services/blob_provider.rb | Beacon node integration for blob fetching |
| app/models/*.rb | Data structures for parsed batches, constants, and standard L2 transactions |


Comment on lines +50 to 57

```ruby
# NOTE: This only returns receipts for V1 single transactions that have a direct sourceHash mapping
# Batch transactions (StandardL2Transaction) don't have a 1:1 mapping with L1 transactions
# and should be queried directly from the L2 block if needed
res = eth_transactions.map do |eth_tx|
  tx_in_geth = latest_l2_block['transactions'].find do |tx|
    next false if tx['sourceHash'].nil?
    eth_tx.facet_tx_source_hash == Hash32.from_hex(tx['sourceHash'])
  end
```

Copilot AI Sep 9, 2025

[nitpick] The comment accurately explains the limitation but could be improved. Consider documenting the specific use case where this distinction matters and provide guidance on how to query batch transactions from L2 blocks.

Comment on lines +102 to +111

```ruby
before do
  # Mock gas calculation to exceed limit for priority batch only
  allow_any_instance_of(described_class).to receive(:calculate_batch_gas) do |instance, batch|
    if batch.is_priority?
      l2_block_gas_limit + 1 # Over limit
    else
      100 # Under limit
    end
  end
end
```

Copilot AI Sep 9, 2025

[nitpick] Using `allow_any_instance_of` can lead to brittle tests. Consider injecting the gas calculator as a dependency or using a more targeted stub approach to improve test maintainability.

Suggested change

```diff
-before do
-  # Mock gas calculation to exceed limit for priority batch only
-  allow_any_instance_of(described_class).to receive(:calculate_batch_gas) do |instance, batch|
-    if batch.is_priority?
-      l2_block_gas_limit + 1 # Over limit
-    else
-      100 # Under limit
-    end
-  end
-end
+let(:batch_gas_calculator) do
+  ->(batch) {
+    if batch.is_priority?
+      l2_block_gas_limit + 1 # Over limit
+    else
+      100 # Under limit
+    end
+  }
+end
+
+let(:builder) do
+  described_class.new(
+    collected: collected,
+    l2_block_gas_limit: l2_block_gas_limit,
+    get_authorized_signer: ->(block) { authorized_signer },
+    batch_gas_calculator: batch_gas_calculator
+  )
+end
```

Comment on lines +92 to +111

```ruby
while active && pos < bytes.length
  # Skip leading 0x00 of the field element
  pos += 1

  consume = [BYTES_PER_FIELD_ELEMENT - 1, bytes.length - pos].min
  consume.times do
    byte = bytes[pos]
    pos += 1

    remaining = bytes[pos..-1] || []
    # Match viem: terminator if this byte is 0x80 and there is no other 0x80 in the rest of the current blob
    is_terminator = (byte == 0x80) && !remaining.include?(0x80)
    if is_terminator
      active = false
      break
    end

    out << byte
  end
end
```

Copilot AI Sep 9, 2025

The `remaining.include?(0x80)` operation scans the entire remaining array for each byte, creating O(n²) complexity. Consider a more efficient approach, such as tracking whether a terminator has been found or scanning once to find the terminator position.

Suggested change

```diff
-while active && pos < bytes.length
-  # Skip leading 0x00 of the field element
-  pos += 1
-  consume = [BYTES_PER_FIELD_ELEMENT - 1, bytes.length - pos].min
-  consume.times do
-    byte = bytes[pos]
-    pos += 1
-    remaining = bytes[pos..-1] || []
-    # Match viem: terminator if this byte is 0x80 and there is no other 0x80 in the rest of the current blob
-    is_terminator = (byte == 0x80) && !remaining.include?(0x80)
-    if is_terminator
-      active = false
-      break
-    end
-    out << byte
-  end
-end
+# Find the position of the first 0x80 (terminator) in the blob
+terminator_index = bytes.index(0x80)
+# If no terminator, process the whole blob; else, process up to the terminator
+process_up_to = terminator_index ? terminator_index : bytes.length
+while active && pos < process_up_to
+  # Skip leading 0x00 of the field element
+  pos += 1
+  consume = [BYTES_PER_FIELD_ELEMENT - 1, process_up_to - pos].min
+  consume.times do
+    break if pos >= process_up_to
+    byte = bytes[pos]
+    pos += 1
+    out << byte
+  end
+end
+# If a terminator was found, stop processing further blobs
+if terminator_index
+  active = false
+end
```

Comment on lines 221 to 224

```ruby
  # TODO: Implement EIP-712 signature verification
  # For now, return nil (signature not verified)
  nil
end
```

Copilot AI Sep 9, 2025

The signature verification is not implemented, which is a security concern for priority batches. Consider either implementing the verification or adding stronger documentation about this limitation and its security implications.

Comment on lines 298 to 302

```ruby
facet_txs = if SysConfig.facet_batch_v2_enabled?
  collect_facet_transactions_v2(block_result, receipt_result)
else
  EthTransaction.facet_txs_from_rpc_results(block_result, receipt_result)
end
```

Copilot AI Sep 9, 2025

[nitpick] Consider extracting the transaction collection logic into a strategy pattern or separate service class to reduce complexity in the importer and improve testability of the V2 collection logic.

Comment on lines 193 to 196

```ruby
rescue => e
  Rails.logger.error "Failed to recover EIP-1559 address: #{e.message}"
  Address20.from_hex("0x" + "0" * 40)
end
```

Copilot AI Sep 9, 2025

Returning a zero address on signature recovery failure could mask serious issues. Consider either raising an exception or returning a more specific error indication that doesn't look like a valid address.

RogerPodacter and others added 3 commits September 30, 2025 13:09
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>