18 changes: 18 additions & 0 deletions blockchain/contracts/scripts/deploy_lit_node_contracts.js
@@ -538,6 +538,24 @@ async function deployLitNodeContracts(deployNodeConfig) {
tx = await stakingContract.addRealm();
await tx.wait();

// Ensure the default keyset is set
let realmConfig = {
maxConcurrentRequests: 1000,
maxPresignCount: 25,
minPresignCount: 10,
peerCheckingIntervalSecs: 7,
maxPresignConcurrency: 2,
rpcHealthcheckEnabled: true,
minEpochForRewards: 3,
permittedValidatorsOn: false,
defaultKeySet: DEFAULT_KEY_SET_NAME,
};
// 1000n, 25n, 10n,
// 7n, 2n, true,
// 3n, false, ''
tx = await stakingContract.setRealmConfig(1, realmConfig);
await tx.wait();

// set the default keyset config
let defaultKeysetConfig = {
identifier: DEFAULT_KEY_SET_NAME,
1 change: 0 additions & 1 deletion rust/lit-node/lit-node-testnet/src/lib.rs
@@ -269,7 +269,6 @@ impl TestSetupBuilder {
.epoch_length(self.epoch_length)
.max_presign_count_u64(self.max_presign_count)
.min_presign_count_u64(self.min_presign_count)
.default_key_set(Some(DEFAULT_KEY_SET_NAME.to_string()))
.build();

info!(
6 changes: 5 additions & 1 deletion rust/lit-node/lit-node-testnet/src/testnet/datil/mod.rs
@@ -171,7 +171,11 @@ impl DatilTestnet {
let client = Arc::new(SignerMiddleware::new(self.provider.clone(), node_wallet));

let local_pubkey_router = PubkeyRouter::new(pubkey_router_address, client);
info!("Voting for root keys on the Datil chain for staker #{} with node address {:?}", idx + 1, node_account.node_address);
info!(
"Voting for root keys on the Datil chain for staker #{} with node address {:?}",
idx + 1,
node_account.node_address
);
let func = local_pubkey_router.vote_for_root_keys(staking_address, root_keys.clone());
let tx = func.send().await.unwrap();
let _receipt = tx.await.unwrap();
4 changes: 2 additions & 2 deletions rust/lit-node/lit-node/tests/common/lit_actions.rs
@@ -469,9 +469,9 @@ pub async fn generate_pkp_check_is_permitted_pkp_action(
}

let cfg = lit_node_common::config::load_cfg().expect("failed to load LitConfig");
let loaded_config = &cfg.load_full();
let _loaded_config = &cfg.load_full();

let (pkp_pubkey, token_id, _, _) = end_user.first_pkp().info();
let (pkp_pubkey, _token_id, _, _) = end_user.first_pkp().info();

let pkp = end_user.pkp_by_pubkey(pkp_pubkey);
let res = pkp
88 changes: 53 additions & 35 deletions rust/lit-node/lit-node/tests/integration/backup_datil_long.rs
@@ -85,6 +85,7 @@ async fn end_to_end_test(number_of_nodes: usize, recovery_party_size: usize) {
.num_staked_and_joined_validators(number_of_nodes)
.epoch_length(epoch_length)
.include_datil_testnet(DatilTestnetType::NoKeyOverride)
.force_deploy(true)
DashKash54 marked this conversation as resolved.
.build()
.await;

@@ -206,6 +207,7 @@ async fn end_to_end_test(
&client,
&validator_collection2,
&backup_directory,
recovery_party_size,
)
.await;

Expand Down Expand Up @@ -358,6 +360,7 @@ async fn upload_key_backups_to_nodes(
client: &Client,
validator_collection: &ValidatorCollection,
backup_directory: &PathBuf,
recovery_party_size: usize,
Contributor: Is this the number of key shares being restored or the number of parties being restored to? Can we restore 4 key shares to 6 parties? If so, is recovery_party_size here 4 or 6?

Contributor (author): Yes and no. Each backup share (i.e., the ones downloaded by the node ops) can only go to a single new node. In this test, things are a little convoluted, since the backup parties and the node ops with backups are equivalent.

In your example, if we have 4 backup shares and 6 nodes and run this test, it will pass, but only because the network receiving the shares will kick the 2 nodes that don't have them: those 2 nodes can't participate in the DKG. It's all relatively clever. ;-)
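A minimal sketch of the kicking behavior described above, with entirely hypothetical names (this is not the lit-node API): nodes holding no backup share cannot take part in the DKG, so the network proceeds with only the share-holders.

```rust
// Hypothetical sketch: 6 candidate nodes but only 4 distributed backup
// shares. Nodes without a share are excluded ("kicked") and the DKG
// runs with the remaining 4.
fn dkg_participants(nodes: &[&str], share_holders: &[&str]) -> Vec<String> {
    nodes
        .iter()
        .copied()
        // Keep only nodes that actually hold a backup share.
        .filter(|n| share_holders.contains(n))
        .map(|n| n.to_string())
        .collect()
}

fn main() {
    let nodes = ["n1", "n2", "n3", "n4", "n5", "n6"];
    let share_holders = ["n1", "n2", "n3", "n4"];
    let participants = dkg_participants(&nodes, &share_holders);
    // The 2 share-less nodes are excluded, so the DKG proceeds with 4.
    assert_eq!(participants.len(), 4);
    println!("participants: {:?}", participants);
}
```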

Contributor: But we don't have a test where the backups < recovery parties, so no one gets kicked in the existing test. Can we add this additional test as well?

Contributor (author): For Naga Prod we only have 6 backup shares, from what I understand. This means we'll need to reduce the network size, regardless of the number of backup parties.

However, I can write a test that uses 5 backup shares, 5 new nodes, and 3 recovery parties. This should work, as I believe we only need a threshold of the recovery parties to authorize the recovery to proceed. It won't be a short task, though: most of these tests took Mike and Ege days to write, and they had prior knowledge of the backup system. I'm going in cold ;-)
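The threshold rule the author assumes above (a threshold of the recovery parties must authorize before the recovery proceeds) can be sketched as follows. The strict-majority threshold is an illustrative assumption, not the actual lit-node policy, and the function name is hypothetical.

```rust
// Hypothetical sketch: recovery proceeds once a threshold of the
// recovery parties has authorized it. The strict-majority threshold
// below is an assumption for illustration only.
fn recovery_authorized(authorizing_parties: usize, recovery_party_size: usize) -> bool {
    // Assumed threshold: strict majority of the recovery party.
    let threshold = recovery_party_size / 2 + 1;
    authorizing_parties >= threshold
}

fn main() {
    // 3 of 5 recovery parties authorize: majority reached, recovery runs.
    assert!(recovery_authorized(3, 5));
    // 2 of 5 falls below the assumed threshold, so recovery is blocked.
    assert!(!recovery_authorized(2, 5));
    println!("threshold sketch ok");
}
```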

) {
let validators = validator_collection.get_active_validators().await.unwrap();
let mut join_set = JoinSet::new();
@@ -376,7 +379,15 @@

let tar_file =
backup_directory.join(format!("{public_address}{BACKUP_ENCRYPTED_KEYS}"));
let file = tokio::fs::File::open(tar_file).await.unwrap();
let file = tokio::fs::File::open(tar_file).await;

let file = match file {
Ok(file) => file,
Err(e) => {
error!("No file for: {}", e);
Copilot AI (Jan 20, 2026): The error message "No file for: {}" is unclear and unhelpful. The variable e is the file-open error, not the file itself. Consider something more descriptive, such as "Failed to open backup file", to accurately reflect what went wrong.

Suggested change:
-                error!("No file for: {}", e);
+                error!("Failed to open backup file {}: {}", tar_file.display(), e);
return (public_address, false);
}
};

info!("Uploading backup for validator {}", public_address);
let response = client
@@ -405,11 +416,15 @@
(public_address, success)
});
}
let mut success_count = 0;
while let Some(node_info) = join_set.join_next().await {
let (public_address, success) = node_info.unwrap();
info!("Node {} received tar backup: {}", public_address, success);
assert!(success);
if success {
success_count += 1;
}
}
assert!(success_count == recovery_party_size);
Copilot AI (Jan 20, 2026): The assertion assert!(success_count == recovery_party_size) should use the equality assertion assert_eq!(success_count, recovery_party_size) for a better failure message: when it fails, assert_eq! prints the actual values of both sides, making debugging easier.

Suggested change:
-    assert!(success_count == recovery_party_size);
+    assert_eq!(success_count, recovery_party_size);
}

#[derive(Clone, Default, Serialize)]
@@ -453,40 +468,43 @@ async fn upload_blinders_to_nodes(
let admin_signing_key = admin_signing_key.clone();
let chain_id = testnet.chain_id;
let client = client.clone();
let blinders = downloaded_blinders[&public_address].clone();

join_set.spawn(async move {
// Send the blinders to the node operators
let url = format!("http://{public_address}/web/admin/set_blinders");
let auth_sig =
generate_admin_auth_sig(&admin_signing_key, chain_id, &url, &public_address);
let auth_sig = serde_json::to_string(&auth_sig.auth_sig).unwrap();

let json_body = serde_json::to_string(&blinders).unwrap();

info!(
"{} Sending blinders: {}",
public_address,
serde_json::to_string_pretty(&blinders).unwrap()
);
info!("Sending blinders to validator: {}", url);
let response = client
.post(url)
.header("Content-Type", "application/octet-stream")
.header(
"x-auth-sig",
data_encoding::BASE64URL.encode(auth_sig.as_bytes()),
)
.body(json_body)
.send()
.await
.unwrap()
.text()
.await
.unwrap();
info!("Response: {}", response);
public_address
});
if downloaded_blinders.contains_key(&public_address) {
Contributor: Where is the test to simulate a 4 -> 6 restore?

Contributor (author): These are all pre-existing tests. I just made them work when the backup counts don't match the restore nodes.

Currently we don't have enough blinders. I'm trying to generate more tomorrow and can update the test, but there isn't any good reason to do so: 3 -> 5 is no different from 4 -> 6 in terms of what the software does, it's just slower.

Contributor: Yeah, but we should still have a test that validates the kicking of the remaining nodes and the eventual success of the DKG, which seems to be missing right now?

Contributor (author): Yes, this test does just that, though you'd have to change the parameters. I flipped them back to "normal" since it's more efficient to run in CI.

let blinders = downloaded_blinders[&public_address].clone();

join_set.spawn(async move {
// Send the blinders to the node operators
let url = format!("http://{public_address}/web/admin/set_blinders");
let auth_sig =
generate_admin_auth_sig(&admin_signing_key, chain_id, &url, &public_address);
let auth_sig = serde_json::to_string(&auth_sig.auth_sig).unwrap();

let json_body = serde_json::to_string(&blinders).unwrap();

info!(
"{} Sending blinders: {}",
public_address,
serde_json::to_string_pretty(&blinders).unwrap()
);
info!("Sending blinders to validator: {}", url);
let response = client
.post(url)
.header("Content-Type", "application/octet-stream")
.header(
"x-auth-sig",
data_encoding::BASE64URL.encode(auth_sig.as_bytes()),
)
.body(json_body)
.send()
.await
.unwrap()
.text()
.await
.unwrap();
info!("Response: {}", response);
public_address
});
};
Copilot AI (Jan 20, 2026): The semicolon after the closing brace on this line is unnecessary. An if statement does not require a semicolon after its closing brace unless it is in expression position. Remove the semicolon for cleaner code.

Suggested change:
-        };
+        }
}
while let Some(node_info) = join_set.join_next().await {
let public_address = node_info.unwrap();
5 changes: 5 additions & 0 deletions rust/lit-node/lit-node/tests/upgrades/version_upgrades.rs
@@ -93,6 +93,11 @@ async fn test_version_upgrade_against_old_version(
upgrade_step_data.initial_node_versions
);

info!(
"Initial node count: {}",
upgrade_step_data.initial_node_count
);

// Assert all node versions are the same.
assert!(
upgrade_step_data