feat: tooling for validator management#347

Open
seongyun-ko wants to merge 146 commits into master from syko/feature/tooling-validators

Conversation


@seongyun-ko seongyun-ko commented Jan 6, 2026

Generate transactions to convert validators to auto-compounding, consolidate them, or exit them, and run a Tenderly simulation.

An example usage is to generate transactions for auto-compounding validators:

1. Query validators:

# Query 50 validators for an operator (by name)
python3 script/operations/auto-compound/query_validators.py \
  --operator "validation_cloud" \
  --count 50 \
  --output script/operations/auto-compound/validators.json

2. Generate Gnosis Safe transactions with hashes (assuming the Safe's last nonce was 661):

JSON_FILE=validators.json SAFE_NONCE=662 forge script \
  script/operations/auto-compound/AutoCompound.s.sol:AutoCompound \
  --fork-url $MAINNET_RPC_URL -vvvv

3. Simulate on Tenderly:

python3 script/operations/utils/simulate.py --tenderly \
  --schedule script/operations/auto-compound/auto-compound-txns-link-schedule.json \
  --execute script/operations/auto-compound/auto-compound-txns-link-execute.json \
  --then script/operations/auto-compound/auto-compound-txns-consolidation.json \
  --delay 8h \
  --vnet-name "ValidationCloud-AutoCompound"

For details, refer to the README.


Note

Medium Risk
No protocol contract logic changes, but this adds/adjusts mainnet transaction generation and optional broadcast flows (timelock + linking + consolidation), where mistakes in inputs/fees/nonces could lead to incorrect or costly on-chain operations.

Overview
Adds a new script/operations/ toolkit to generate, batch, and simulate validator operation transactions (auto-compounding, consolidation-to-target, and EL-triggered exits), including a detailed README workflow.

Introduces Forge scripts and Python/bash helpers to (a) query/plan validators from a DB + beacon API, (b) generate Gnosis Safe JSON (and optional raw) transactions, (c) handle linking of legacy validator IDs (timelock-gated for auto-compound; direct/EOA flow for consolidation-to-target), and (d) optionally simulate sequences locally or on Tenderly VNets.

Also updates infra/config for this tooling: expands foundry.toml filesystem permissions, extends .gitignore for generated ops artifacts/logs, refreshes Deployed.s.sol with new mainnet addresses, annotates selector constants in ELExitsTransactions.s.sol, and adds deployment metadata JSONs for RestakingRewardsRouter.

Written by Cursor Bugbot for commit 5973e5f. This will update automatically on new commits. Configure here.

…mpounding vals, consolidation, or exits and run tenderly simulation

github-actions bot commented Jan 6, 2026

📊 Forge Coverage Report

forge coverage failed; see logs above (tests step is the gate).

Generated by workflow run #642


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 37bf53fa22


Comment on lines 672 to 676
if args.txns:
    env['TXNS'] = args.txns
if args.schedule:
    env['SCHEDULE_FILE'] = args.schedule
if args.execute:


P1: Forge sim omits TXNS env for timelock runs

When running the forge-based simulation with --schedule/--execute, run_forge_simulation never sets the TXNS environment variable (lines 672-676), so SimulateTransactions.s.sol’s run() immediately fails at vm.envString("TXNS") (line 46) before any transactions execute. This makes the documented timelock workflow (simulate.py --schedule ... --execute ...) unusable in forge mode. Compose TXNS from the schedule/execute/then files when those flags are used.
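One possible shape of the fix, as a sketch only (the helper name `compose_txns_env` is hypothetical; the actual `run_forge_simulation` may wire this differently):

```python
def compose_txns_env(schedule=None, execute=None, then=None):
    """Build a comma-separated TXNS value from the timelock-mode file
    arguments so vm.envString("TXNS") in SimulateTransactions.s.sol
    always has input. Order matters: schedule, then execute, then any
    follow-up files."""
    files = [f for f in (schedule, execute, then) if f]
    if not files:
        raise ValueError(
            "no transaction files given; pass --txns or --schedule/--execute")
    return ",".join(files)
```

In `run_forge_simulation` this would be set alongside the existing env vars, e.g. `env['TXNS'] = compose_txns_env(args.schedule, args.execute, args.then)`; the comma-separated form matches what simulate.py already accepts for transaction file inputs.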


…g 2: Signing data nonce for batched transactions (AutoCompound.s.sol)
pankajjagtapp and others added 8 commits January 8, 2026 13:45
- Reverts for gas consumed more than 10 Mil
- State Syncing turned off, because not necessary for our use case
…d generate separate consolidation transactions
… auto-compound script

- Added functionality to distribute validators across withdrawal time buckets for optimal consolidation.
- Introduced command-line options for bucket size and enabled round-robin distribution.
- Enhanced error handling for validators without valid withdrawal credentials.
…in auto-compound script

- Implemented a check to ensure --bucket-hours is a positive integer.
- Added warning for excluded validators due to missing beacon index, enhancing user feedback on validator selection.
…chars) and full withdrawal credentials (66 chars).
…r each consolidation transaction and update simulation handling for multiple files. Enhanced simulate.py to support comma-separated transaction file inputs.
… to a dedicated 'txns' directory for better organization and clarity in file management.
… transaction scheduling and execution capabilities. Updated _buildTimelockCalldata function to utilize EtherFiTimelock for batch scheduling and execution of transactions.
…ents-tooling-validators-1

Improvements for Compounding Validator Tooling
  echo -e "${RED}Error: Linking required but no consolidation files found${NC}"
  exit 1
fi
SIMULATION_EXIT_CODE=$?

set -e makes simulation exit code check unreachable

Low Severity

The script uses set -e (line 18), which causes immediate exit on any non-zero command return. The eval "$CMD" calls at lines 191 and 216 would terminate the script before SIMULATION_EXIT_CODE=$? is reached on a failure. Consequently, SIMULATION_EXIT_CODE can only ever be 0, making the error-handling check at line 225 dead code. The intended error message about failed simulations never displays.

Additional Locations (1)


OUTPUT_FILE="txns.json" \
SAFE_NONCE="$NONCE" \
forge script script/operations/auto-compound/AutoCompound.s.sol:AutoCompound \
--fork-url "$MAINNET_RPC_URL" -vvvv 2>&1 | tee "$OUTPUT_DIR/forge.log"

Missing pipefail causes silent forge script failures

High Severity

The forge script ... 2>&1 | tee "$OUTPUT_DIR/forge.log" pipe discards the exit code of forge script because bash captures the exit code of the rightmost command (tee) by default. Without set -o pipefail, if the Forge script fails (compilation error, RPC failure, revert, etc.), the error is silently swallowed. The script then proceeds to move non-existent transaction files (suppressed by || true) and eventually attempts Tenderly simulation with no valid inputs.


{ access = "read-write", path = "./script/operations/exits" },
{ access = "read-write", path = "./script/operations/utils" },
{ access = "read", path = "./script/operations/data" },
{ access = "read", path = "./" }

Overly broad read access to project root in foundry.toml

Medium Severity

The fs_permissions entry { access = "read", path = "./" } grants read access to the entire project root for all Forge scripts. This allows any script to read sensitive files like .env (which contains TENDERLY_API_ACCESS_TOKEN, database credentials, and RPC URLs). It also makes all other specific read-only permission entries redundant.



summary = {
    'total_targets': len(consolidations),
    'total_sources': sum(len(c['sources']) for c in consolidations),

Summary miscounts sources by including targets in total

Low Severity

The summary's total_sources counts all entries in each consolidation's sources list (including the target validator placed at sources[0]), while the selection loop's total_sources counter only counts actual batch_sources (excluding the target). This inconsistency means the summary over-reports the source count by the number of consolidation batches, potentially confusing operators reviewing the plan.

Additional Locations (1)
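A consistent count would subtract the target from each batch, assuming the target really is stored at sources[0] as described above (function name hypothetical):

```python
def count_sources(consolidations):
    """Count actual source validators across consolidation batches,
    excluding the target validator stored at sources[0] of each batch,
    to match the selection loop's per-batch counting."""
    return sum(len(c['sources']) - 1 for c in consolidations)
```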


vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();

Broadcast mode double-executes consolidation on fork state

Medium Severity

In broadcast mode, _executeOrWriteTx first executes the consolidation transaction on the fork via vm.prank for gas estimation (line 304), which mutates the fork state (consuming rate limits, spending ETH, submitting requests). It then executes the same transaction again inside vm.startBroadcast (line 315) against the already-modified state. The second execution is likely to fail due to exhausted rate limits or duplicated consolidation requests.


vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();

Broadcast mode double-executes consolidation on fork, inflating fees

Medium Severity

In broadcast mode, _executeOrWriteTx executes the consolidation call twice on the local fork: once via vm.prank for gas estimation (line 304) and once via vm.startBroadcast (line 315). Since consolidation request fees increase exponentially (EIP-7251), the first execution inflates the on-fork fee state. When subsequent batches call _getConsolidationFee, the returned fee reflects double the expected number of prior requests, causing later transactions to overpay. If the admin and broadcaster are the same address, the prank call also drains ETH from the balance needed for broadcast.


vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();

Gas estimation call corrupts fork state before broadcast

Medium Severity

In broadcast mode, _executeOrWriteTx executes the consolidation via vm.prank + to.call{value: value}(data) for gas estimation, which modifies the fork state (submits consolidation requests, consumes fees). The subsequent vm.startBroadcast() call then tries the identical operation on the already-modified fork state. Because consolidation request fees increase after each submission (EIP-7251 exponential fee), the broadcast call uses a now-stale fee value that may be insufficient, causing it to revert. The gas estimation needs a vm.snapshot()/vm.revertTo() around it to avoid polluting the fork state.


function _loadConfig() internal view returns (Config memory config) {
    config.jsonFile = vm.envString("JSON_FILE");
    config.outputFile = vm.envOr("OUTPUT_FILE", string(DEFAULT_OUTPUT_FILE));
    config.batchSize = vm.envOr("BATCH_SIZE", DEFAULT_BATCH_SIZE);

Documented OUTPUT_FILE and BATCH_SIZE env vars are unused

Medium Severity

config.outputFile and config.batchSize are loaded from OUTPUT_FILE and BATCH_SIZE environment variables but never referenced anywhere else in the code. The README documents these as supported options ("Validators per transaction" and "Output filename"), but setting them has no effect. Output filenames are hardcoded using nonce prefixes, and consolidation transactions are generated one per EigenPod with no sub-batching.

Additional Locations (1)


vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();

Gas estimation executes state-changing call before broadcast

High Severity

In broadcast mode, the gas estimation at lines 302–306 uses vm.prank + .call which actually executes the consolidation on the fork, modifying state. The subsequent vm.startBroadcast + .call then attempts the same consolidation on already-modified fork state, which will fail because the consolidation was already processed. This makes broadcast mode non-functional.


string memory postSweepDir = string.concat(config.outputDir, "/post-sweep");
string memory filePath = string.concat(postSweepDir, "/queue-withdrawals.json");
vm.writeFile(filePath, jsonContent);
console2.log(" Written: queue-withdrawals.json");

Post-sweep output directory not created

Medium Severity

_executeQueueETHWithdrawals writes queue-withdrawals.json into config.outputDir + "/post-sweep" but never creates that subdirectory. When withdrawals exist, vm.writeFile targets a path that may not exist, which can fail and stop the consolidation workflow.


mv "script/operations/auto-compound/txns/$SCHEDULE_FILE" "$OUTPUT_DIR/" 2>/dev/null || true
mv "script/operations/auto-compound/txns/$EXECUTE_FILE" "$OUTPUT_DIR/" 2>/dev/null || true
# Move all consolidation files (may be multiple with incrementing nonces)
mv "script/operations/auto-compound/txns/"*-consolidation.json "$OUTPUT_DIR/" 2>/dev/null || true

Moves unrelated consolidation files into run output

Medium Severity

The move command uses script/operations/auto-compound/txns/*-consolidation.json without nonce/run scoping, so it can collect leftover consolidation files from other runs and include them in this run’s output and simulation set.


    uint256 feePerRequest,
    uint256 batchSize
) internal pure returns (ConsolidationTx[] memory transactions) {
    uint256 numBatches = (pubkeys.length + batchSize - 1) / batchSize;

Missing batch size guard can crash generator

Low Severity

BATCH_SIZE is accepted directly from env and used as a divisor in both transaction generation paths. Setting BATCH_SIZE to 0 triggers division-by-zero and the script fails before producing output.

Additional Locations (2)


pankajjagtapp and others added 5 commits February 17, 2026 15:05
…idators to prevent fallback on default balances
… with detailed configuration and error handling
The forge script was hanging for 30+ minutes on large runs (1000+ validators)
due to two bottlenecks:

1. JSON parsing: _countConsolidations and _countSources iterated one-by-one
   calling stdJson.keyExists ~2000+ times, each re-parsing the entire JSON.
   Now reads pre-computed num_consolidations and source_count fields directly.

2. Gas estimation: In broadcast mode, each batch was manually simulated via
   vm.prank before broadcasting, even with forge --skip-simulation.
   Now skippable via SKIP_GAS_ESTIMATE env var.

Shell script changes:
- Add --skip-forge-sim flag (passes --skip-simulation + SKIP_GAS_ESTIMATE to forge)
- Add --verbose/-v flag (opt-in -vvvv traces, was previously always-on)
- Default forge runs are now minimal verbosity

Co-authored-by: Cursor <cursoragent@cursor.com>
…forge-perf

fix: speed up consolidation forge script and add --skip-forge-sim flag
vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();

Gas estimation executes state-changing call before broadcast

High Severity

In broadcast mode with gas estimation enabled, _executeOrWriteTx executes the consolidation call via vm.prank + to.call{value: value}(data) to estimate gas, which actually processes the consolidation requests on the fork state. The subsequent vm.startBroadcast() call then re-executes the same consolidation on the now-modified fork. Because the EIP-7251 consolidation fee increases with each request, and the requests may have already been processed on the fork, the broadcast execution is likely to fail or produce incorrect results.


    podPubkeys,
    feePerRequest,
    address(nodesManager)
);

BATCH_SIZE is documented but completely ignored for consolidation

Medium Severity

The BATCH_SIZE environment variable is loaded into config.batchSize and documented as "Number of validators per EigenPod transaction", but _generateConsolidationTransactionsByPod never uses it. All validators for a given EigenPod are placed into a single transaction regardless of count. If a pod has many validators (e.g., 200), this could produce a transaction that exceeds the block gas limit and cannot be executed on mainnet.

Additional Locations (1)
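If sub-batching were enforced, the grouping rule could look like this sketch (function name hypothetical; the actual generator is a Forge script, so this only illustrates the chunking logic, including a guard against the zero batch size flagged elsewhere in this review):

```python
def chunk_pubkeys(pubkeys, batch_size):
    """Split a pod's validator pubkeys into batches of at most
    batch_size, so no single consolidation transaction grows without
    bound and risks exceeding the block gas limit."""
    if batch_size <= 0:
        raise ValueError("batch_size must be a positive integer")
    return [pubkeys[i:i + batch_size]
            for i in range(0, len(pubkeys), batch_size)]
```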


Introduced a new constant for the consolidation gas limit and updated the `cast_send_raw` function to accept an optional gas limit parameter. This allows for more flexible transaction management during the consolidation process.
Added a constant for transaction delay and implemented it in the broadcasting functions to ensure smoother transaction processing during consolidation and withdrawal operations.
vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();

Gas estimation double-executes consolidation in broadcast mode

Medium Severity

In broadcast mode with gas estimation enabled (the default), _executeOrWriteTx executes the consolidation transaction on the fork via vm.prank for gas estimation, then executes it again via vm.startBroadcast. The first execution modifies fork state — consuming rate limits and increasing the EIP-7685 consolidation fee — causing the second execution to likely revert with InsufficientConsolidationFees or rate limit errors.


… script

Updated the `parse_int_hex_or_decimal` function to improve handling of various integer formats, including scientific notation and hexadecimal. This change ensures more robust parsing of transaction values from JSON, enhancing the reliability of the consolidation process.
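The behavior described for this helper could look roughly like the following sketch (assumed semantics; the repository's actual `parse_int_hex_or_decimal` may handle edge cases differently):

```python
def parse_int_hex_or_decimal(value):
    """Parse a JSON transaction value that may arrive as a native int,
    a hex string ("0x..."), a decimal string, or scientific notation
    ("1e18"). Empty strings parse as 0; non-integral values are
    rejected rather than silently truncated."""
    if isinstance(value, int):
        return value
    s = str(value).strip()
    if not s:
        return 0
    if s.lower().startswith("0x"):
        return int(s, 16)
    # float() handles scientific notation; reject non-integral results
    f = float(s)
    i = int(f)
    if f != i:
        raise ValueError(f"non-integral transaction value: {value!r}")
    return i
```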
vm.startBroadcast();
(bool success, ) = to.call{value: value}(data);
require(success, "Consolidation transaction failed");
vm.stopBroadcast();

Gas estimation mutates state before broadcast call

Medium Severity

In _executeOrWriteTx, when broadcast=true and skipGasEstimate=false (the default), the gas estimation call via vm.prank + to.call actually executes requestConsolidation on the fork, mutating state (consuming the rate limiter, submitting consolidation to the EigenPod). The subsequent vm.startBroadcast() + to.call then attempts the same call on the already-modified fork state, which can cause the rate limiter to be consumed twice or the consolidation to fail.


Enhanced the `parse_int_hex_or_decimal` function to handle empty strings, scientific notation, and mixed formats more effectively. This update ensures reliable parsing of transaction values, contributing to the overall stability of the consolidation process.
Introduced the `--ignore-pending-withdrawals` flag in both the shell and Python scripts for unrestaking validators. This allows users to skip the pending withdrawal check and treat the full balance as available, enhancing flexibility in the unrestaking process.

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.

podPubkeys,
feePerRequest,
address(nodesManager)
);

BATCH_SIZE ignored for consolidation transaction grouping

Medium Severity

The _generateConsolidationTransactionsByPod function places ALL validators from a given EigenPod into a single consolidation transaction, ignoring the BATCH_SIZE configuration. The config parameter is passed but config.batchSize is never used for sub-batching. If a pod has many validators (e.g., 100+), the resulting transaction could exceed block gas limits. The README documents BATCH_SIZE as controlling "Validators per transaction," but this isn't enforced for consolidation.

