Conversation

@czareko
Collaborator

@czareko czareko commented Jan 29, 2026

No description provided.

@czareko czareko marked this pull request as ready for review January 29, 2026 08:26
@czareko czareko changed the title from "feat: HS for Multisig" to "feat: High-Security Integration for Multisig Pallet" Jan 29, 2026
@n13
Collaborator

n13 commented Jan 29, 2026

Gemini Review:
Architecture: The pallet-multisig now has a hard dependency on pallet-reversible-transfers for the HighSecurityInspector trait. I recommended inverting this dependency by defining the trait within pallet-multisig to keep the pallet generic.

Security: The implementation includes good defense-in-depth measures, such as checking call size before decoding. The documentation regarding the "Risk Window" during migration is clear and helpful.

Code Quality: The refactoring in transaction_extensions.rs to share the whitelist logic with the multisig configuration is excellent.

1. Architecture & Dependency Management

Observation: pallet-multisig now depends on pallet-reversible-transfers to access the HighSecurityInspector trait.

  • pallets/multisig/Cargo.toml adds pallet-reversible-transfers.
  • pallets/multisig/src/lib.rs imports the trait from pallet-reversible-transfers.

Recommendation: Invert this dependency to keep pallet-multisig generic and decoupled from specific business logic pallets.

  • Action: Define the HighSecurityInspector trait directly inside pallet-multisig (e.g., in traits.rs or lib.rs).
  • Benefit: pallet-multisig will no longer depend on pallet-reversible-transfers. The Runtime will bridge the two by implementing the trait (as it already does in runtime/src/configs/mod.rs), but the trait definition will live in the consumer (multisig) or a shared primitives crate, not the provider (reversible-transfers).
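A minimal Rust sketch of the inverted dependency. The trait lives in the consumer (pallet-multisig) and the runtime bridges it to the provider; all names here (AccountId bound, is_high_security, NoHighSecurity, RuntimeInspector) are illustrative stand-ins, not the actual pallet API:

```rust
/// Defined inside pallet-multisig (e.g. traits.rs), NOT in
/// pallet-reversible-transfers, so multisig stays generic.
pub trait HighSecurityInspector<AccountId> {
    fn is_high_security(who: &AccountId) -> bool;
}

/// No-op default so pallet-multisig still works standalone.
pub struct NoHighSecurity;
impl<AccountId> HighSecurityInspector<AccountId> for NoHighSecurity {
    fn is_high_security(_who: &AccountId) -> bool {
        false
    }
}

/// In the runtime (cf. runtime/src/configs/mod.rs), an adapter forwards
/// to pallet-reversible-transfers' logic. Simulated here with a
/// hard-coded account check instead of real chain state.
pub struct RuntimeInspector;
impl HighSecurityInspector<u64> for RuntimeInspector {
    fn is_high_security(who: &u64) -> bool {
        // Stand-in for querying reversible-transfers storage.
        *who == 42
    }
}

fn main() {
    assert!(RuntimeInspector::is_high_security(&42));
    assert!(!<NoHighSecurity as HighSecurityInspector<u64>>::is_high_security(&42));
    println!("trait lives in the consumer; runtime bridges the pallets");
}
```

The compile-time effect is the point: pallet-multisig's Cargo.toml no longer needs pallet-reversible-transfers, and the edge between the two crates exists only in the runtime.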

It kinda makes sense ^^^

Contributor

@ethan-crypto ethan-crypto left a comment


One issue I noticed is that the benchmarks aren't actually computing the worst case for the cleanup process, because the call data isn't set to the max size for the expired proposals. This could lead to underweighting.

@czareko czareko marked this pull request as draft February 3, 2026 08:34
@czareko czareko marked this pull request as ready for review February 6, 2026 01:36
Contributor

@ethan-crypto ethan-crypto left a comment


  • Right now we aren't making effective use of the r regression coefficient. I suggested changes that account for this by defining cleaned_target = i.min(r), which cleans only a subset of the total proposals per call, resulting in more accurate weighting and a larger refund for proposal calls.

  • The cleanup path remains expensive per proposal as long as the call lives inside the same storage value. We should make the cleanup iteration cost depend only on the small fields we care about. To fix this we could go with a split-storage approach:

 ProposalMeta (small: proposer/status/expiry/deposit/…) used by the cleanup iteration

 ProposalCall (big) only touched on execute / remove

OR store the call as a preimage / hash (call_hash, call_len) and require the bytes only when executing.
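The first suggestion above (capping cleanup at cleaned_target = i.min(r)) can be sketched in plain Rust; cleanup_expired and the Vec stand in for the actual extrinsic and storage, so these names are hypothetical:

```rust
/// Clean at most `r` expired proposals per call, where `r` is the
/// benchmark regression coefficient. Returns how many were cleaned, so
/// the caller can refund weight for work not done.
fn cleanup_expired(expired: &mut Vec<u32>, r: usize) -> usize {
    // Cap the work so the benchmarked weight (linear in `r`)
    // matches the actual iteration cost.
    let cleaned_target = expired.len().min(r);
    expired.drain(..cleaned_target);
    cleaned_target
}

fn main() {
    let mut expired: Vec<u32> = (0..10).collect();

    // Only r = 3 proposals are cleaned; the rest wait for the next call.
    assert_eq!(cleanup_expired(&mut expired, 3), 3);
    assert_eq!(expired.len(), 7);

    // Fewer expired proposals than r: clean them all.
    assert_eq!(cleanup_expired(&mut expired, 100), 7);
    assert!(expired.is_empty());
}
```

Because the per-call work is bounded by r, the worst case the benchmark measures and the worst case the runtime can actually hit coincide.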

@n13
Collaborator

n13 commented Feb 8, 2026

Proposal meta is a really good optimization @ethan-crypto

Basically we don't have to decode the call just to find the expiry date

There are many ways to do this
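The ProposalMeta split discussed above can be sketched with std maps standing in for pallet storage; the struct and field names are illustrative, not the pallet's actual types:

```rust
use std::collections::BTreeMap;

// Small record read by the cleanup loop; no call bytes here.
struct ProposalMeta {
    proposer: u64,
    expiry: u32,
    deposit: u128,
}

struct Proposals {
    meta: BTreeMap<u32, ProposalMeta>, // small: iterated by cleanup
    call: BTreeMap<u32, Vec<u8>>,      // big: touched only on execute/remove
}

impl Proposals {
    /// Remove expired proposals by reading metadata only. The large call
    /// blobs are deleted by key and never loaded or decoded.
    fn cleanup(&mut self, now: u32) -> usize {
        let expired: Vec<u32> = self
            .meta
            .iter()
            .filter(|(_, m)| m.expiry <= now)
            .map(|(id, _)| *id)
            .collect();
        for id in &expired {
            self.meta.remove(id);
            self.call.remove(id);
        }
        expired.len()
    }
}

fn main() {
    let mut p = Proposals { meta: BTreeMap::new(), call: BTreeMap::new() };
    p.meta.insert(1, ProposalMeta { proposer: 10, expiry: 5, deposit: 100 });
    p.call.insert(1, vec![0u8; 1024]);
    p.meta.insert(2, ProposalMeta { proposer: 11, expiry: 50, deposit: 100 });
    p.call.insert(2, vec![0u8; 1024]);

    // At block 10, only proposal 1 has expired.
    assert_eq!(p.cleanup(10), 1);
    assert_eq!(p.meta.len(), 1);
    assert_eq!(p.call.len(), 1);
    println!("cleanup scanned metadata only");
}
```

In a real pallet the same shape falls out of two StorageMaps keyed by the same proposal id; the preimage/hash variant mentioned earlier goes further by keeping only (call_hash, call_len) on-chain and requiring the bytes at execution time.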

@czareko
Collaborator Author

czareko commented Feb 10, 2026

Finally, we decided to simplify the scope again, so the benchmarks and weights are less sophisticated now.
