Conversation
…mpounding vals, consolidation, or exits and run tenderly simulation
📊 Forge Coverage Report (generated by workflow run #642)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 37bf53fa22
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```python
if args.txns:
    env['TXNS'] = args.txns
if args.schedule:
    env['SCHEDULE_FILE'] = args.schedule
if args.execute:
```
Forge sim omits TXNS env for timelock runs
When running the forge-based simulation with --schedule/--execute, run_forge_simulation never sets the TXNS environment variable (lines 672-676), so SimulateTransactions.s.sol’s run() immediately fails at vm.envString("TXNS") (line 46) before any transactions execute. This makes the documented timelock workflow (simulate.py --schedule ... --execute ...) unusable in forge mode. Compose TXNS from the schedule/execute/then files when those flags are used.
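One way to compose TXNS, sketched in Python; the function name and the argparse attributes (`txns`, `schedule`, `execute`, `then`) are assumptions based on the flags described above, not the repository's actual code:

```python
def compose_txns_env(args):
    """Build the TXNS value the forge script expects from the timelock flags.

    Assumes `args` carries optional .txns, .schedule, .execute, and .then
    attributes mirroring the CLI flags mentioned in the review comment.
    """
    if getattr(args, "txns", None):
        return args.txns
    # Fall back to the schedule/execute/then files so vm.envString("TXNS")
    # in SimulateTransactions.s.sol is populated in timelock mode too.
    candidates = [getattr(args, name, None) for name in ("schedule", "execute", "then")]
    files = [path for path in candidates if path]
    return ",".join(files) if files else None
```

The result could then be assigned to `env['TXNS']` before invoking forge, alongside the existing SCHEDULE_FILE handling.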
…g 2: Signing data nonce for batched transactions (AutoCompound.s.sol)
- Reverts when gas consumed exceeds 10 million
- State syncing turned off, because it is not necessary for our use case
…d generate separate consolidation transactions
…s for auto-compound workflows
… auto-compound script
- Added functionality to distribute validators across withdrawal time buckets for optimal consolidation.
- Introduced command-line options for bucket size and enabled round-robin distribution.
- Enhanced error handling for validators without valid withdrawal credentials.
…in auto-compound script
- Implemented a check to ensure --bucket-hours is a positive integer.
- Added a warning for excluded validators due to missing beacon index, enhancing user feedback on validator selection.
…chars) and full withdrawal credentials (66 chars).
…r each consolidation transaction and update simulation handling for multiple files. Enhanced simulate.py to support comma-separated transaction file inputs.
… to a dedicated 'txns' directory for better organization and clarity in file management.
… transaction scheduling and execution capabilities. Updated _buildTimelockCalldata function to utilize EtherFiTimelock for batch scheduling and execution of transactions.
…ents-tooling-validators-1 Improvements for Compounding Validator Tooling
…ions in submarine withdrawal process
…TH withdrawals from pod balances
…ce for improved accuracy
```shell
    echo -e "${RED}Error: Linking required but no consolidation files found${NC}"
    exit 1
fi
SIMULATION_EXIT_CODE=$?
```
set -e makes simulation exit code check unreachable
Low Severity
The script uses set -e (line 18), which causes immediate exit on any non-zero command return. The eval "$CMD" calls at lines 191 and 216 would terminate the script before SIMULATION_EXIT_CODE=$? is reached on a failure. Consequently, SIMULATION_EXIT_CODE can only ever be 0, making the error-handling check at line 225 dead code. The intended error message about failed simulations never displays.
Additional Locations (1)
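A common fix is to bracket the command with `set +e` / `set -e` so the non-zero status survives long enough to be captured. A minimal sketch, where `CMD="false"` stands in for the real simulation invocation:

```shell
#!/usr/bin/env bash
set -e

CMD="false"   # stand-in for the real simulation command

# Temporarily disable errexit so a failing command does not abort the
# script before its exit status can be recorded.
set +e
eval "$CMD"
SIMULATION_EXIT_CODE=$?
set -e

echo "exit=$SIMULATION_EXIT_CODE"
```

Note that the tempting `if ! eval "$CMD"; then SIMULATION_EXIT_CODE=$?; fi` pattern does not work: inside the branch, `$?` reflects the negated condition (0), not the command's real status.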
```shell
OUTPUT_FILE="txns.json" \
SAFE_NONCE="$NONCE" \
forge script script/operations/auto-compound/AutoCompound.s.sol:AutoCompound \
  --fork-url "$MAINNET_RPC_URL" -vvvv 2>&1 | tee "$OUTPUT_DIR/forge.log"
```
Missing pipefail causes silent forge script failures
High Severity
The forge script ... 2>&1 | tee "$OUTPUT_DIR/forge.log" pipe discards the exit code of forge script because bash captures the exit code of the rightmost command (tee) by default. Without set -o pipefail, if the Forge script fails (compilation error, RPC failure, revert, etc.), the error is silently swallowed. The script then proceeds to move non-existent transaction files (suppressed by || true) and eventually attempts Tenderly simulation with no valid inputs.
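The standard remedy is `set -o pipefail`, which makes a pipeline report the first failing command's status instead of `tee`'s. A minimal demonstration, where the `(exit 3)` subshell stands in for a failing forge run:

```shell
#!/usr/bin/env bash
set -o pipefail

# Without pipefail this pipeline would report tee's status (0);
# with it, the stand-in forge failure (3) propagates as the pipeline status.
(exit 3) | tee /dev/null
PIPE_STATUS=$?
echo "status=$PIPE_STATUS"
```

Combined with `set -e`, the failing pipeline would abort the script immediately; combined with the `set +e` capture pattern, the status can instead be inspected and reported.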
```toml
{ access = "read-write", path = "./script/operations/exits" },
{ access = "read-write", path = "./script/operations/utils" },
{ access = "read", path = "./script/operations/data" },
{ access = "read", path = "./" }
```
Overly broad read access to project root in foundry.toml
Medium Severity
The fs_permissions entry { access = "read", path = "./" } grants read access to the entire project root for all Forge scripts. This allows any script to read sensitive files like .env (which contains TENDERLY_API_ACCESS_TOKEN, database credentials, and RPC URLs). It also makes all other specific read-only permission entries redundant.
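A narrower permission set might look like the following sketch. The paths are illustrative (the `./out` entry is an assumption about where build artifacts live); the point is that no entry covers the project root, so `.env` stays unreadable to scripts:

```toml
fs_permissions = [
    { access = "read-write", path = "./script/operations/exits" },
    { access = "read-write", path = "./script/operations/utils" },
    { access = "read", path = "./script/operations/data" },
    { access = "read", path = "./out" },
]
```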
```python
summary = {
    'total_targets': len(consolidations),
    'total_sources': sum(len(c['sources']) for c in consolidations),
```
Summary miscounts sources by including targets in total
Low Severity
The summary's total_sources counts all entries in each consolidation's sources list (including the target validator placed at sources[0]), while the selection loop's total_sources counter only counts actual batch_sources (excluding the target). This inconsistency means the summary over-reports the source count by the number of consolidation batches, potentially confusing operators reviewing the plan.
Additional Locations (1)
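A consistent count excludes the target validator stored at `sources[0]`. A sketch, assuming the consolidation-plan shape implied by the snippet above:

```python
def summarize(consolidations):
    """Count sources consistently with the selection loop: the target
    validator at sources[0] of each batch is excluded from total_sources."""
    return {
        "total_targets": len(consolidations),
        "total_sources": sum(len(c["sources"]) - 1 for c in consolidations),
    }
```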
```solidity
vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();
```
Broadcast mode double-executes consolidation on fork state
Medium Severity
In broadcast mode, _executeOrWriteTx first executes the consolidation transaction on the fork via vm.prank for gas estimation (line 304), which mutates the fork state (consuming rate limits, spending ETH, submitting requests). It then executes the same transaction again inside vm.startBroadcast (line 315) against the already-modified state. The second execution is likely to fail due to exhausted rate limits or duplicated consolidation requests.
…mounts in both gwei and wei, and improve node address resolution logic
```solidity
vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();
```
Broadcast mode double-executes consolidation on fork, inflating fees
Medium Severity
In broadcast mode, _executeOrWriteTx executes the consolidation call twice on the local fork: once via vm.prank for gas estimation (line 304) and once via vm.startBroadcast (line 315). Since consolidation request fees increase exponentially (EIP-7251), the first execution inflates the on-fork fee state. When subsequent batches call _getConsolidationFee, the returned fee reflects double the expected number of prior requests, causing later transactions to overpay. If the admin and broadcaster are the same address, the prank call also drains ETH from the balance needed for broadcast.
```solidity
vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();
```
Gas estimation call corrupts fork state before broadcast
Medium Severity
In broadcast mode, _executeOrWriteTx executes the consolidation via vm.prank + to.call{value: value}(data) for gas estimation, which modifies the fork state (submits consolidation requests, consumes fees). The subsequent vm.startBroadcast() call then tries the identical operation on the already-modified fork state. Because consolidation request fees increase after each submission (EIP-7251 exponential fee), the broadcast call uses a now-stale fee value that may be insufficient, causing it to revert. The gas estimation needs a vm.snapshot()/vm.revertTo() around it to avoid polluting the fork state.
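A sketch of the snapshot-based fix the comment suggests. The identifiers (`admin`, `to`, `value`, `data`) follow the snippet above; this is illustrative, not the repository's code, and assumes forge-std's `vm.snapshot`/`vm.revertTo` cheatcodes:

```solidity
// Run the gas-estimation call against a snapshot, then roll the fork
// back so the broadcast below executes against unmodified state and an
// unchanged EIP-7251 fee.
uint256 snapshotId = vm.snapshot();
vm.prank(admin);
(bool estOk, ) = to.call{value: value}(data);
require(estOk, "Gas estimation call failed");
vm.revertTo(snapshotId);

vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();
```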
```solidity
function _loadConfig() internal view returns (Config memory config) {
    config.jsonFile = vm.envString("JSON_FILE");
    config.outputFile = vm.envOr("OUTPUT_FILE", string(DEFAULT_OUTPUT_FILE));
    config.batchSize = vm.envOr("BATCH_SIZE", DEFAULT_BATCH_SIZE);
```
Documented OUTPUT_FILE and BATCH_SIZE env vars are unused
Medium Severity
config.outputFile and config.batchSize are loaded from OUTPUT_FILE and BATCH_SIZE environment variables but never referenced anywhere else in the code. The README documents these as supported options ("Validators per transaction" and "Output filename"), but setting them has no effect. Output filenames are hardcoded using nonce prefixes, and consolidation transactions are generated one per EigenPod with no sub-batching.
Additional Locations (1)
```solidity
vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();
```
Gas estimation executes state-changing call before broadcast
High Severity
In broadcast mode, the gas estimation at lines 302–306 uses vm.prank + .call which actually executes the consolidation on the fork, modifying state. The subsequent vm.startBroadcast + .call then attempts the same consolidation on already-modified fork state, which will fail because the consolidation was already processed. This makes broadcast mode non-functional.
…t LiquidityPool's total pooled ether
…act; enhance error handling in Python scripts
…t validators and improve grouping by withdrawal credentials
```solidity
string memory postSweepDir = string.concat(config.outputDir, "/post-sweep");
string memory filePath = string.concat(postSweepDir, "/queue-withdrawals.json");
vm.writeFile(filePath, jsonContent);
console2.log("  Written: queue-withdrawals.json");
```
Post-sweep output directory not created
Medium Severity
_executeQueueETHWithdrawals writes queue-withdrawals.json into config.outputDir + "/post-sweep" but never creates that subdirectory. When withdrawals exist, vm.writeFile targets a path that may not exist, which can fail and stop the consolidation workflow.
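One fix is to create the directory before writing. This sketch assumes forge-std's `vm.createDir` cheatcode (with the recursive flag) and reuses the identifiers from the snippet above; a matching read-write `fs_permissions` entry in `foundry.toml` would also be required:

```solidity
string memory postSweepDir = string.concat(config.outputDir, "/post-sweep");
// Ensure the subdirectory exists (created recursively) before writing.
vm.createDir(postSweepDir, true);
string memory filePath = string.concat(postSweepDir, "/queue-withdrawals.json");
vm.writeFile(filePath, jsonContent);
```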
```shell
mv "script/operations/auto-compound/txns/$SCHEDULE_FILE" "$OUTPUT_DIR/" 2>/dev/null || true
mv "script/operations/auto-compound/txns/$EXECUTE_FILE" "$OUTPUT_DIR/" 2>/dev/null || true
# Move all consolidation files (may be multiple with incrementing nonces)
mv "script/operations/auto-compound/txns/"*-consolidation.json "$OUTPUT_DIR/" 2>/dev/null || true
```
```solidity
    uint256 feePerRequest,
    uint256 batchSize
) internal pure returns (ConsolidationTx[] memory transactions) {
    uint256 numBatches = (pubkeys.length + batchSize - 1) / batchSize;
```
Missing batch size guard can crash generator
Low Severity
BATCH_SIZE is accepted directly from env and used as a divisor in both transaction generation paths. Setting BATCH_SIZE to 0 triggers division-by-zero and the script fails before producing output.
Additional Locations (2)
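The guard plus ceiling division can be illustrated in Python (names are illustrative; the Solidity equivalent would be a `require(batchSize > 0, ...)` before the division):

```python
def num_batches(num_pubkeys: int, batch_size: int) -> int:
    """Ceiling division with an explicit guard, mirroring
    (pubkeys.length + batchSize - 1) / batchSize from the snippet above."""
    if batch_size <= 0:
        raise ValueError("BATCH_SIZE must be a positive integer")
    return (num_pubkeys + batch_size - 1) // batch_size
```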
…idators to prevent fallback on default balances
… with detailed configuration and error handling
The forge script was hanging for 30+ minutes on large runs (1000+ validators) due to two bottlenecks:
1. JSON parsing: _countConsolidations and _countSources iterated one-by-one calling stdJson.keyExists ~2000+ times, each call re-parsing the entire JSON. Now reads pre-computed num_consolidations and source_count fields directly.
2. Gas estimation: in broadcast mode, each batch was manually simulated via vm.prank before broadcasting, even with forge --skip-simulation. Now skippable via the SKIP_GAS_ESTIMATE env var.
Shell script changes:
- Add --skip-forge-sim flag (passes --skip-simulation + SKIP_GAS_ESTIMATE to forge)
- Add --verbose/-v flag (opt-in -vvvv traces; was previously always-on)
- Default forge runs are now minimal verbosity
Co-authored-by: Cursor <cursoragent@cursor.com>
…r enhanced validator consolidation workflow
…forge-perf fix: speed up consolidation forge script and add --skip-forge-sim flag
```solidity
vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();
```
Gas estimation executes state-changing call before broadcast
High Severity
In broadcast mode with gas estimation enabled, _executeOrWriteTx executes the consolidation call via vm.prank + to.call{value: value}(data) to estimate gas, which actually processes the consolidation requests on the fork state. The subsequent vm.startBroadcast() call then re-executes the same consolidation on the now-modified fork. Because the EIP-7251 consolidation fee increases with each request, and the requests may have already been processed on the fork, the broadcast execution is likely to fail or produce incorrect results.
```solidity
    podPubkeys,
    feePerRequest,
    address(nodesManager)
);
```
BATCH_SIZE is documented but completely ignored for consolidation
Medium Severity
The BATCH_SIZE environment variable is loaded into config.batchSize and documented as "Number of validators per EigenPod transaction", but _generateConsolidationTransactionsByPod never uses it. All validators for a given EigenPod are placed into a single transaction regardless of count. If a pod has many validators (e.g., 200), this could produce a transaction that exceeds the block gas limit and cannot be executed on mainnet.
Additional Locations (1)
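Sub-batching per pod could be sketched as follows (Python for illustration; the function and parameter names are assumptions, not the repository's code):

```python
def chunk_pod_pubkeys(pod_pubkeys, batch_size):
    """Split one pod's validator pubkeys into BATCH_SIZE-sized groups so a
    single consolidation transaction cannot grow past the configured limit."""
    if batch_size <= 0:
        raise ValueError("batch_size must be positive")
    return [pod_pubkeys[i:i + batch_size]
            for i in range(0, len(pod_pubkeys), batch_size)]
```

Each chunk would then become its own consolidation transaction for that pod, keeping every transaction under the gas ceiling regardless of pod size.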
Introduced a new constant for the consolidation gas limit and updated the `cast_send_raw` function to accept an optional gas limit parameter. This allows for more flexible transaction management during the consolidation process.
Added a constant for transaction delay and implemented it in the broadcasting functions to ensure smoother transaction processing during consolidation and withdrawal operations.
```solidity
vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();
```
Gas estimation double-executes consolidation in broadcast mode
Medium Severity
In broadcast mode with gas estimation enabled (the default), _executeOrWriteTx executes the consolidation transaction on the fork via vm.prank for gas estimation, then executes it again via vm.startBroadcast. The first execution modifies fork state, consuming rate limits and increasing the EIP-7251 consolidation fee, so the second execution is likely to revert with InsufficientConsolidationFees or rate limit errors.
… script Updated the `parse_int_hex_or_decimal` function to improve handling of various integer formats, including scientific notation and hexadecimal. This change ensures more robust parsing of transaction values from JSON, enhancing the reliability of the consolidation process.
```solidity
vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();
```
Gas estimation mutates state before broadcast call
Medium Severity
In _executeOrWriteTx, when broadcast=true and skipGasEstimate=false (the default), the gas estimation call via vm.prank + to.call actually executes requestConsolidation on the fork, mutating state (consuming the rate limiter, submitting consolidation to the EigenPod). The subsequent vm.startBroadcast() + to.call then attempts the same call on the already-modified fork state, which can cause the rate limiter to be consumed twice or the consolidation to fail.
Enhanced the `parse_int_hex_or_decimal` function to handle empty strings, scientific notation, and mixed formats more effectively. This update ensures reliable parsing of transaction values, contributing to the overall stability of the consolidation process.
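The actual implementation of `parse_int_hex_or_decimal` is not shown in this thread; the following Python sketch is an assumed reconstruction of the behavior described (empty strings, hex, decimals, and scientific notation):

```python
def parse_int_hex_or_decimal(value) -> int:
    """Parse transaction values that may arrive as ints, hex strings,
    plain decimals, or scientific notation; empty strings become 0."""
    if isinstance(value, int):
        return value
    s = str(value).strip()
    if not s:
        return 0
    if s.lower().startswith("0x"):
        return int(s, 16)
    # float() covers scientific notation such as "1e18"
    if any(ch in s for ch in ".eE"):
        return int(float(s))
    return int(s)
```

Note the float round-trip is only exact for values representable in a double; very large wei amounts in scientific notation could lose precision, which is one reason plain decimal or hex inputs are safer.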
Introduced the `--ignore-pending-withdrawals` flag in both the shell and Python scripts for unrestaking validators. This allows users to skip the pending withdrawal check and treat the full balance as available, enhancing flexibility in the unrestaking process.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
```solidity
    podPubkeys,
    feePerRequest,
    address(nodesManager)
);
```
BATCH_SIZE ignored for consolidation transaction grouping
Medium Severity
The _generateConsolidationTransactionsByPod function places ALL validators from a given EigenPod into a single consolidation transaction, ignoring the BATCH_SIZE configuration. The config parameter is passed but config.batchSize is never used for sub-batching. If a pod has many validators (e.g., 100+), the resulting transaction could exceed block gas limits. The README documents BATCH_SIZE as controlling "Validators per transaction," but this isn't enforced for consolidation.
generate txns to convert to auto-compounding vals, consolidation, or exits and run Tenderly simulation
An example usage is to generate txns for auto-compounding vals:
if the Safe's last nonce was 661.
For details, refer to the README.
Note
Medium Risk
No protocol contract logic changes, but this adds/adjusts mainnet transaction generation and optional broadcast flows (timelock + linking + consolidation), where mistakes in inputs/fees/nonces could lead to incorrect or costly on-chain operations.
Overview
Adds a new script/operations toolkit to generate, batch, and simulate validator operation transactions (auto-compounding, consolidation-to-target, and EL-triggered exits), including a detailed README workflow.

Introduces Forge scripts and Python/bash helpers to (a) query/plan validators from a DB + beacon API, (b) generate Gnosis Safe JSON (and optional raw) transactions, (c) handle linking of legacy validator IDs (timelock-gated for auto-compound; direct/EOA flow for consolidation-to-target), and (d) optionally simulate sequences locally or on Tenderly VNets.

Also updates infra/config for this tooling: expands foundry.toml filesystem permissions, extends .gitignore for generated ops artifacts/logs, refreshes Deployed.s.sol with new mainnet addresses, annotates selector constants in ELExitsTransactions.s.sol, and adds deployment metadata JSONs for RestakingRewardsRouter.

Written by Cursor Bugbot for commit 5973e5f. This will update automatically on new commits.