
fix(platform): default omitted proved query limits #3509

Open

PastaPastaPasta wants to merge 13 commits into v3.1-dev from codex/fix-contested-resources-proof-limit

Conversation

Member

@PastaPastaPasta PastaPastaPasta commented Apr 21, 2026

Issue being fixed or feature implemented

Proof verification for several proved paginated queries could fail when the request omitted count/limit and the server applied its default query limit. This showed up live for contested resources and evonode proposed block range queries as GroveDB invalid-proof errors.

What was done?

  • default omitted proved query limits only on the proof-verification and request-reconstruction side
  • preserve omitted count/limit on the wire so request semantics do not change
  • cover the remaining affected proved query paths in the proof verifier
  • reuse drive::config::DEFAULT_QUERY_LIMIT instead of a duplicated verifier constant
  • tighten the live SDK regression test so omitted-limit queries must verify successfully and match the explicit default-limit result
  • address follow-up review feedback around the regression test naming, shared limit constant usage, and wording
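The defaulting rule described above can be sketched as a standalone function. This mirrors the `proved_request_limit` helper the PR adds to `packages/rs-drive-proof-verifier/src/lib.rs`, with a plain `String` standing in for the crate's `Error::RequestError` type so the sketch is self-contained:

```rust
// Standalone sketch of the omitted-limit defaulting rule. The real helper
// returns the crate's Error type; String is used here as a stand-in.
const DEFAULT_QUERY_LIMIT: u16 = 100; // matches drive::config::DEFAULT_QUERY_LIMIT

fn proved_request_limit(limit: Option<u32>) -> Result<u16, String> {
    match limit {
        // Omitted on the wire: apply the platform default, mirroring the server.
        None => Ok(DEFAULT_QUERY_LIMIT),
        // Explicit limit: must fit in u16, otherwise reject.
        Some(l) => u16::try_from(l).map_err(|_| "query limit exceeds u16::MAX".to_string()),
    }
}

fn main() {
    assert_eq!(proved_request_limit(None), Ok(100));
    assert_eq!(proved_request_limit(Some(42)), Ok(42));
    assert!(proved_request_limit(Some(u16::MAX as u32 + 1)).is_err());
}
```

The key point is that the default is applied only during proof verification and request reconstruction, never when encoding the request onto the wire.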

How Has This Been Tested?

  • cargo test -p drive-proof-verifier
  • cargo test -p dash-sdk --test main --no-run
  • live verification of the contested-resources repro against current mainnet and testnet during investigation

Breaking Changes

None.

Checklist:

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have added or updated relevant unit/integration/functional/e2e tests
  • I have added "!" to the title and described breaking changes in the corresponding section if my code contains any
  • I have made corresponding changes to the documentation if needed

For repository code-owners and collaborators only

  • I have assigned this pull request to a milestone

This pull request was created by Codex.

Summary by CodeRabbit

  • Bug Fixes

    • Implemented consistent query limit validation across all request types, ensuring omitted limits apply a platform default and invalid limits produce clear error messages.
  • Tests

    • Added comprehensive unit tests verifying query limit handling, default behavior, and overflow rejection across various request and proof conversions.

… bug

VotePollsByDocumentTypeQuery with start_index_values=["dash"] on the DPNS parentNameAndLabel index produces an invalid GroveDB proof. The proof is missing data for the query range. Without start_index_values the query works and returns top-level keys. Reproduced on mainnet 2026-04-21.
… bug

VotePollsByDocumentTypeQuery with start_index_values=["dash"] on the DPNS parentNameAndLabel index produces an invalid GroveDB proof when limit is None. Adding any explicit limit (e.g. 100) works around the bug. The proof error is: 'Proof is missing data for query range. Encountered unexpected node type: KVHash(...)'. Reproduced on mainnet 2026-04-21.
Contributor

coderabbitai Bot commented Apr 21, 2026

📝 Walkthrough

Walkthrough

The changes introduce a centralized proved_request_limit() function to standardize conversion of optional gRPC count/limit fields from u32 to u16, applying platform defaults when omitted. This replaces scattered inline casting logic across multiple proof verifier implementations with consistent error handling and validation.

Changes

  • Core Validation Function — packages/rs-drive-proof-verifier/src/lib.rs: Added proved_request_limit() helper function to centralize limit validation and defaulting, converting optional u32 to u16 with overflow handling. Re-exported DEFAULT_QUERY_LIMIT from drive config. Includes unit tests verifying default application, explicit limit preservation, and overflow rejection.
  • Request Conversion Updates — packages/rs-drive-proof-verifier/src/from_request.rs: Replaced direct numeric casting in multiple TryFromRequest::try_from_request implementations with calls to proved_request_limit(). Updated imports and added unit tests verifying that encoding omits count/limit when None and decoding applies DEFAULT_QUERY_LIMIT when absent.
  • Proof Type Implementations — packages/rs-drive-proof-verifier/src/proof.rs, packages/rs-drive-proof-verifier/src/proof/groups.rs, packages/rs-drive-proof-verifier/src/proof/token_pre_programmed_distributions.rs: Updated FromProof implementations to use proved_request_limit() for limit/count validation instead of inline conversions. Changes affect GetEvonodesProposedEpochBlocksByRangeRequest, group queries, and token distribution queries with consistent error propagation.
  • Regression Test & Fixtures — packages/rs-sdk/tests/fetch/contested_resource.rs, packages/rs-sdk/tests/vectors/contested_resources_start_index_values_proof_bug/data_contract-*.json, packages/rs-sdk/tests/vectors/contested_resources_start_index_values_proof_bug/quorum_pubkey-*.json: Added asynchronous regression test should_contested_resources_start_index_values_no_limit_match_explicit_default_limit validating wire-level omission of count and functional equivalence between omitted and explicit default limits. Includes new test data fixtures (contract schema and quorum key).

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 A limit once scattered through files so wide,
Now gathers its wisdom where proved_request_limit resides,
With defaults applied and overflows caught with care,
The proof verifier hops with validation so fair! ✨

🚥 Pre-merge checks: ✅ 5 passed

  • Description Check — Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check — Passed. The title 'fix(platform): default omitted proved query limits' directly and accurately summarizes the main change: applying default limits for omitted proved query count/limit fields during proof verification.
  • Docstring Coverage — Passed. Docstring coverage is 100.00%, above the required threshold of 80.00%.
  • Linked Issues Check — Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check — Passed. Check skipped because no linked issues were found for this pull request.


@github-actions github-actions Bot added this to the v3.1.0 milestone Apr 21, 2026
Collaborator

thepastaclaw commented Apr 21, 2026

✅ Review complete (commit c05b969)

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2a9f1da4-07fd-45b5-9f48-645c92802729

📥 Commits

Reviewing files that changed from the base of the PR and between 4355c15 and a4fd818.

📒 Files selected for processing (11)
  • packages/rs-drive-proof-verifier/src/from_request.rs
  • packages/rs-drive-proof-verifier/src/lib.rs
  • packages/rs-drive-proof-verifier/src/proof.rs
  • packages/rs-drive-proof-verifier/src/proof/groups.rs
  • packages/rs-drive-proof-verifier/src/proof/token_pre_programmed_distributions.rs
  • packages/rs-sdk/tests/fetch/contested_resource.rs
  • packages/rs-sdk/tests/vectors/contested_resources_start_index_values_proof_bug/.gitkeep
  • packages/rs-sdk/tests/vectors/contested_resources_start_index_values_proof_bug/data_contract-e668c659af66aee1e72c186dde7b5b7e0a1d712a09c40d5721f622bf53c53155.json
  • packages/rs-sdk/tests/vectors/contested_resources_start_index_values_proof_bug/msg_GetContestedResourcesRequest_8f4daadf3e41747492cd381cbbd1cf33dd52735f597de4b4c804effd2600d135.json
  • packages/rs-sdk/tests/vectors/contested_resources_start_index_values_proof_bug/msg_GetContestedResourcesRequest_8f71462d5f438e1b12fedf94ad5799af81392b7b0cbb7e86412da37ab13aef4b.json
  • packages/rs-sdk/tests/vectors/contested_resources_start_index_values_proof_bug/quorum_pubkey-6-000000928ce4e3adf20561462b82d40e0804fdbde0e6115133f9a9348267cace.json

Comment on lines +467 to +472
#[tokio::test(flavor = "multi_thread", worker_threads = 1)]
#[cfg_attr(
    all(feature = "network-testing", not(feature = "offline-testing")),
    ignore = "this regression is fixture-backed in offline mode; regenerate vectors instead of running it live"
)]
async fn should_contested_resources_start_index_values_no_limit_match_explicit_default_limit() {


⚠️ Potential issue | 🟠 Major

Ignore this fixture-backed test outside offline mode.

With the current cfg_attr, this test still runs when neither offline-testing nor network-testing is enabled, yet it performs SDK fetches. That can introduce live/network-dependent behavior in default integration test runs.

Proposed fix
 #[tokio::test(flavor = "multi_thread", worker_threads = 1)]
 #[cfg_attr(
-    all(feature = "network-testing", not(feature = "offline-testing")),
-    ignore = "this regression is fixture-backed in offline mode; regenerate vectors instead of running it live"
+    not(feature = "offline-testing"),
+    ignore = "this regression is fixture-backed in offline mode; enable offline-testing to run it"
 )]
 async fn should_contested_resources_start_index_values_no_limit_match_explicit_default_limit() {

As per coding guidelines, Unit and integration tests should not perform network calls; mock dependencies.



codecov Bot commented Apr 21, 2026

Codecov Report

❌ Patch coverage is 94.11765% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 88.29%. Comparing base (c556a86) to head (c05b969).
⚠️ Report is 15 commits behind head on v3.1-dev.

Files with missing lines Patch % Lines
...ckages/rs-drive-proof-verifier/src/from_request.rs 75.00% 1 Missing ⚠️
packages/rs-drive-proof-verifier/src/proof.rs 0.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff            @@
##           v3.1-dev    #3509   +/-   ##
=========================================
  Coverage     88.29%   88.29%           
=========================================
  Files          2474     2474           
  Lines        300927   300927           
=========================================
  Hits         265707   265707           
  Misses        35220    35220           
Components Coverage Δ
dpp 87.97% <ø> (ø)
drive 87.35% <100.00%> (ø)
drive-abci 90.25% <ø> (ø)
sdk ∅ <ø> (∅)
dapi-client ∅ <ø> (∅)
platform-version ∅ <ø> (∅)
platform-value 92.26% <ø> (ø)
platform-wallet ∅ <ø> (∅)
drive-proof-verifier 55.66% <75.00%> (ø)

@github-actions

✅ DashSDKFFI.xcframework built for this PR.

SwiftPM (host the zip at a stable URL, then use):

.binaryTarget(
  name: "DashSDKFFI",
  url: "https://your.cdn.example/DashSDKFFI.xcframework.zip",
  checksum: "9be7091bab975f774bbd13aff5491f8b26a5546ba936a1c74199ae1777fd5024"
)

Xcode manual integration:

  • Download 'DashSDKFFI.xcframework' artifact from the run link above.
  • Drag it into your app target (Frameworks, Libraries & Embedded Content) and set Embed & Sign.
  • If using the Swift wrapper package, point its binaryTarget to the xcframework location or add the package and place the xcframework at the expected path.


@thepastaclaw thepastaclaw left a comment


Code Review

The PR correctly centralizes default-limit handling in proved_request_limit and applies it across the affected proved paginated query paths, with unit tests and a live regression test. No blocking issues — the main concern (raised independently by both agents) is that the verifier hard-codes DEFAULT_QUERY_LIMIT while the server reads the runtime-configurable DriveConfig::default_query_limit, which leaves a latent mismatch under non-default deployments. Remaining findings are code-quality nitpicks.

Reviewed commit: a4fd818

🟡 3 suggestion(s) | 💬 3 nitpick(s)

2 additional findings

🟡 suggestion: GetIdentityKeysRequest still uses the pre-PR `limit.map(...).transpose()?` pattern

packages/rs-drive-proof-verifier/src/proof.rs (lines 482-483)

This PR set out to cover the remaining proved paginated paths in the verifier and centralize default-limit handling in proved_request_limit. FromProof<GetIdentityKeysRequest> at proof.rs:482-483 is still on the old v0.limit.map(try_u32_to_u16).transpose()? form, which preserves None unchanged. Today the server-side handler (packages/rs-drive-abci/src/query/identity_based_queries/keys/v0/mod.rs:136) also builds IdentityKeysRequest with limit.map(|l| l as u16) and applies no default, so the two sides are symmetric and this path is not broken. The risk is strictly forward-looking: if a future change to prove_identity_keys/verify_identity_keys_by_identity_id ever injects a server-side default, this path will silently regress in exactly the way contested resources did. Either route this path through proved_request_limit for consistency, or add a brief comment documenting that None is intentionally preserved because the server does too.

💬 nitpick: `try_to_request`/`try_from_request` are no longer round-trip symmetric for omitted limits

packages/rs-drive-proof-verifier/src/from_request.rs (lines 83-163)

After this PR, encoding a query with limit: None serializes count: None on the wire (see test_contested_resources_request_preserves_omitted_limit_when_encoding), but decoding that same wire request with try_from_request returns limit: Some(DEFAULT_QUERY_LIMIT) (see test_contested_resources_request_defaults_limit_when_decoding). The asymmetry is intentional and correct for proof reconstruction, but the TryFromRequest trait presents try_from_request/try_to_request as a symmetric encode/decode pair, and callers who construct a query, encode, then decode will now observe a different query. A short doc comment on the trait or the affected impls explaining that try_from_request injects the platform default would prevent future callers from being surprised.
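The intentional asymmetry can be illustrated with a minimal standalone model. The types below are hypothetical stand-ins for the real gRPC request and SDK query; they capture only the limit field:

```rust
// Minimal model of the encode/decode asymmetry: encoding preserves an omitted
// limit on the wire, while decoding injects the platform default to mirror the
// server. Query and WireRequest are hypothetical stand-ins.
const DEFAULT_QUERY_LIMIT: u16 = 100;

#[derive(Debug, PartialEq)]
struct Query {
    limit: Option<u16>,
}

struct WireRequest {
    count: Option<u32>,
}

fn try_to_request(q: &Query) -> WireRequest {
    // Encoding: an omitted limit stays omitted, so request semantics are unchanged.
    WireRequest { count: q.limit.map(u32::from) }
}

fn try_from_request(r: &WireRequest) -> Query {
    // Decoding: an omitted count is defaulted, mirroring the server's behavior.
    Query { limit: Some(r.count.map_or(DEFAULT_QUERY_LIMIT, |c| c as u16)) }
}

fn main() {
    // Round trip is intentionally not the identity for omitted limits.
    let q = Query { limit: None };
    let round_trip = try_from_request(&try_to_request(&q));
    assert_eq!(round_trip, Query { limit: Some(DEFAULT_QUERY_LIMIT) });

    // An explicit limit survives the round trip unchanged.
    let q = Query { limit: Some(42) };
    assert_eq!(try_from_request(&try_to_request(&q)), Query { limit: Some(42) });
}
```

This is why a doc comment on the trait is worth having: the asymmetry is correct for proof reconstruction, but surprising for anyone treating the pair as a lossless encode/decode.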


Comment on lines +28 to +37
pub(crate) use drive::config::DEFAULT_QUERY_LIMIT;

/// Parse a proved request's optional `count`/`limit`, applying Platform's default when omitted.
pub(crate) fn proved_request_limit(limit: Option<u32>) -> Result<u16, Error> {
    limit.map_or(Ok(DEFAULT_QUERY_LIMIT), |limit| {
        u16::try_from(limit).map_err(|_| Error::RequestError {
            error: "query limit exceeds u16::MAX".to_string(),
        })
    })
}


🟡 Suggestion: Verifier hard-codes DEFAULT_QUERY_LIMIT while server reads runtime-configurable default_query_limit

proved_request_limit defaults omitted limits to the compile-time constant drive::config::DEFAULT_QUERY_LIMIT (100), but every affected server handler applies the runtime-configurable DriveConfig::default_query_limit (e.g. packages/rs-drive-abci/src/query/voting/contested_resources/v0/mod.rs:132-146, packages/rs-drive-abci/src/query/voting/vote_polls_by_end_date_query/v0/mod.rs, packages/rs-drive-abci/src/query/group_queries/group_infos/v0/mod.rs:42-53, packages/rs-drive-abci/src/query/token_queries/token_pre_programmed_distributions/v0/mod.rs, packages/rs-drive-abci/src/query/validator_queries/proposed_block_counts_by_range/v0/mod.rs:34-46). Under the default config these agree, but default_query_limit is an operator-settable field in DriveConfig. If any deployment sets a non-default value, the verifier will reconstruct the proved request with a different bound than the server used to generate the proof, and every omitted-limit proved query will once again fail to verify — the exact class of bug this PR set out to eliminate. Consider pinning the default through PlatformVersion (making it consensus-tracked), having the server echo the applied limit in response metadata, or at minimum documenting that clients and servers must share the same default_query_limit.

source: ['claude', 'codex']


Comment on lines +468 to +471
#[cfg_attr(
    all(feature = "network-testing", not(feature = "offline-testing")),
    ignore = "this regression is fixture-backed in offline mode; regenerate vectors instead of running it live"
)]


🟡 Suggestion: Ignore gate also disables the test during generate-test-vectors runs

generate-test-vectors = ["network-testing"] in packages/rs-sdk/Cargo.toml, and scripts/generate_test_vectors.sh invokes cargo test --features generate-test-vectors without --include-ignored. The current cfg_attr(all(feature = "network-testing", not(feature = "offline-testing")), ignore = ...) therefore fires during vector regeneration too, so the new contested_resources_start_index_values_proof_bug fixtures cannot be refreshed by the supported workflow — the regression will be pinned to stale binary fixtures and hard to revalidate against future protocol changes. Exclude generate-test-vectors from the ignore condition so the test runs when vectors are being produced.

💡 Suggested change

Before:

#[cfg_attr(
    all(feature = "network-testing", not(feature = "offline-testing")),
    ignore = "this regression is fixture-backed in offline mode; regenerate vectors instead of running it live"
)]

After:

#[cfg_attr(
    all(
        feature = "network-testing",
        not(any(feature = "offline-testing", feature = "generate-test-vectors"))
    ),
    ignore = "this regression is fixture-backed in offline mode; regenerate vectors instead of running it live"
)]

source: ['codex']


Comment on lines +57 to +60
#[test]
fn proved_request_limit_rejects_u32_overflow() {
    assert!(proved_request_limit(Some(u16::MAX as u32 + 1)).is_err());
}


💬 Nitpick: Overflow test only asserts is_err(), not the specific error variant

proved_request_limit_rejects_u32_overflow only checks is_err(), so a future refactor that maps the failure to a different variant (e.g. a new InvalidArgument instead of RequestError) would silently pass. Since proved_request_limit deliberately constructs an Error::RequestError { error: "query limit exceeds u16::MAX" }, the test should match that variant to lock the contract in.

💡 Suggested change

Before:

#[test]
fn proved_request_limit_rejects_u32_overflow() {
    assert!(proved_request_limit(Some(u16::MAX as u32 + 1)).is_err());
}

After:

#[test]
fn proved_request_limit_rejects_u32_overflow() {
    match proved_request_limit(Some(u16::MAX as u32 + 1)) {
        Err(Error::RequestError { error }) => assert!(error.contains("u16::MAX")),
        other => panic!("expected RequestError, got {:?}", other),
    }
}

source: ['claude']

Comment on lines +491 to +494
let request: GetContestedResourcesRequest = make_query(None)
    .clone()
    .query(true)
    .expect("query should serialize");


💬 Nitpick: Redundant .clone() on freshly constructed query

make_query(None) already returns an owned VotePollsByDocumentTypeQuery (the closure clones existing_document_type_name internally). The trailing .clone() before .query(true) is unnecessary and obscures intent — readers may assume make_query returns a reference or that the clone is load-bearing. Drop it for clarity.

💡 Suggested change

Before:

let request: GetContestedResourcesRequest = make_query(None)
    .clone()
    .query(true)
    .expect("query should serialize");

After:

let request: GetContestedResourcesRequest = make_query(None)
    .query(true)
    .expect("query should serialize");

source: ['claude']

shumkov and others added 2 commits April 29, 2026 19:29
Demonstrates that DriveDocumentQuery has the same proof-mismatch bug
this PR fixes for contested resources, but on the documents path:
TryFrom<&DocumentQuery> for DriveDocumentQuery maps wire `limit==0` to
`limit: None` while the server applies `default_query_limit`. The
asymmetric SizedQuery trips GroveDB's verify_query with the same
"Proof is missing data for query range" failure.

Test inserts >DEFAULT_QUERY_LIMIT documents, proves with
`limit=Some(100)`, verifies with `limit=None`, expects success — fails
today, will pass once `proved_request_limit` (or an equivalent
SDK-side fix) is extended to the documents path.

Left intentionally non-ignored so CI surfaces the gap; resolve before
merge or track as an explicit follow-up.

@thepastaclaw thepastaclaw left a comment


Code Review

PR correctly fixes the omitted-limit proof-mismatch for contested-resource, vote, group, evonode, and token paths by mirroring server-side default-bounded reconstruction in proved_request_limit. However, the PR commits a deliberately non-ignored regression test (document_proof_with_limit_some_prove_vs_limit_none_verify) that the PR's own doc comment states is left failing to surface an unfixed bug on the documents proof path. That test reproduces the same class of bug on getDocuments (SDK TryFrom<&DocumentQuery> maps wire limit == 0 to None while the server defaults it to default_query_limit), which the PR does not patch. Either extend proved_request_limit (or equivalent) to the documents path, or #[ignore] the test with a tracked issue — landing as-is is a guaranteed CI break. Two adjacent design concerns: the verifier defaults to a compile-time constant where the server reads a runtime-configurable DriveConfig::default_query_limit, and the helper accepts 0 / over-max limits that handlers reject.

Reviewed commit: c05b969

🔴 1 blocking | 🟡 2 suggestion(s) | 💬 1 nitpick(s)

🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.

In `packages/rs-drive/src/verify/document/verify_proof_keep_serialized/v0/mod.rs`:
- [BLOCKING] lines 145-242: Deliberately failing, non-ignored regression test breaks CI and signals an unfixed bug on the documents path
  The new `document_proof_with_limit_some_prove_vs_limit_none_verify` test is committed without `#[ignore]` / `#[should_panic]`, and its own doc comment states: *"`proved_request_limit` introduced in this PR is not applied to the document path; that gap should be closed before merge or in a tracked follow-up. Until then this test fails CI to keep the regression visible."* Running `cargo test -p drive document_proof_with_limit_some_prove_vs_limit_none_verify` confirms it panics today at the `expect(...)` on line 231 with `GroveDB(InvalidProof(... Proof is missing data for query range ...))`, and the PR's macOS Rust workspace test job is currently FAILURE for exactly this reason. This is the same proof-asymmetry class of bug the PR fixes elsewhere: server-side `packages/rs-drive-abci/src/query/document_query/v0/mod.rs:123-127` defaults wire `limit == 0` to `self.config.drive.default_query_limit`, but the SDK reconstruction in `packages/rs-sdk/src/platform/documents/document_query.rs:325-329` maps wire `limit == 0` back to `limit: None`, producing a `SizedQuery` GroveDB cannot verify against the server's bounded proof. Either (a) extend the same default-on-reconstruction treatment to the documents path (in `TryFrom<&DocumentQuery> for DriveDocumentQuery` or in `DriveDocumentQuery::construct_path_query`) so production `getDocuments` calls with omitted/zero limit verify correctly, or (b) gate this test with `#[ignore = "tracking issue #NNN"]` and link a follow-up. As written, this PR cannot pass CI and a known production bug is left reachable from any client doing a proved `find` over a document type with more than `DEFAULT_QUERY_LIMIT` documents.

In `packages/rs-drive-proof-verifier/src/lib.rs`:
- [SUGGESTION] lines 24-37: Verifier hardcodes server's runtime-configurable `default_query_limit` as a compile-time constant
  `proved_request_limit` falls back to the compile-time constant `drive::config::DEFAULT_QUERY_LIMIT` (`= 100` per `packages/rs-drive/src/config.rs:16`), but every server path this fix mirrors actually reads the runtime field `self.config.drive.default_query_limit` (e.g. `packages/rs-drive-abci/src/query/voting/contested_resources/v0/mod.rs:133`, `.../contested_resource_vote_state/v0/mod.rs:130`, `.../contested_resource_voters_for_identity/v0/mod.rs:124`, `.../contested_resource_identity_votes/v0/mod.rs:44`, `.../vote_polls_by_end_date_query/v0/mod.rs:63`, `.../validator_queries/proposed_block_counts_by_range/v0/mod.rs:35`, `.../token_queries/token_pre_programmed_distributions/v0/mod.rs:40`, `.../group_queries/group_actions/v0/mod.rs:60`). `DriveConfig::default_query_limit` is a serde-deserialized field on `DriveConfig` (`packages/rs-drive/src/config.rs:44-52`), so any node operator who overrides it in `drive` config will produce proofs the SDK reconstructs against the wrong range — the same `Proof is missing data for query range` failure this PR is meant to eliminate, just silent and undebuggable on the client side. This isn't a soundness issue (no proof forgery), but it is a liveness/availability footgun. Recommend either (i) treating `default_query_limit` as a consensus-pinned protocol constant via `PlatformVersion` (so it is impossible to misconfigure) and asserting agreement at config load, or (ii) returning the effective limit in `ResponseMetadata` and having the SDK use the server's declared value. At minimum, add a doc-comment invariant on `DEFAULT_QUERY_LIMIT` documenting that it must equal `DriveConfig::default_query_limit` for every node on the network.
- [SUGGESTION] lines 31-37: `proved_request_limit` accepts wire values the handlers reject (`0` and over-limit)
  The helper only checks `u16::try_from(limit)`, but the server-side handlers it mirrors apply additional invariants: they reject `limit == 0` and `limit > config.default_query_limit` (e.g. `packages/rs-drive-abci/src/query/voting/contested_resources/v0/mod.rs:137-141`, `.../contested_resource_vote_state/v0/mod.rs:134-138`, `.../contested_resource_voters_for_identity/v0/mod.rs:128-132`, `.../vote_polls_by_end_date_query/v0/mod.rs:62-71`); the evonode path (`.../validator_queries/proposed_block_counts_by_range/v0/mod.rs:34-45`) bounds against `config.max_query_limit`. Post-fix, the verifier will happily reconstruct a request with `count = 0` or `count > default_query_limit` and try to verify a proof against it, even though an honest node would have rejected the request and never produced such a proof. In practice this is unreachable on the wire (no honest node will pair such a value with a proof), so the bug is latent — but it is also the kind of asymmetry this PR is explicitly trying to close. Either reject `Some(0)` and out-of-range values in `proved_request_limit` (returning `Error::RequestError`), or document that the helper trusts upstream validation. As-is the helper's intent ("mirror server-side request reconstruction") is only partially fulfilled.
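To illustrate the stricter variant suggested above, here is a minimal, hedged sketch of a `proved_request_limit` that also enforces the server-side invariants (reject `Some(0)` and over-limit values) instead of trusting upstream validation. It is not the PR's actual helper: `String` stands in for the real `Error::RequestError`, the compile-time `DEFAULT_QUERY_LIMIT = 100` is assumed, and the evonode path would bound against `max_query_limit` instead.

```rust
// Hedged sketch: a `proved_request_limit` variant that also mirrors the
// server-side invariants (limit != 0, limit <= default), so the verifier
// rejects wire values an honest node would never pair with a proof.
const DEFAULT_QUERY_LIMIT: u16 = 100; // assumed to match drive::config::DEFAULT_QUERY_LIMIT

fn proved_request_limit_strict(limit: Option<u32>) -> Result<u16, String> {
    match limit {
        // Omitted on the wire: mirror the server's default.
        None => Ok(DEFAULT_QUERY_LIMIT),
        // Honest handlers reject a zero limit outright.
        Some(0) => Err("query limit must be greater than 0".to_string()),
        Some(raw) => {
            let limit =
                u16::try_from(raw).map_err(|_| "query limit exceeds u16::MAX".to_string())?;
            if limit > DEFAULT_QUERY_LIMIT {
                // Honest handlers also bound the limit, so mirror that too.
                return Err(format!(
                    "query limit {limit} exceeds default limit {DEFAULT_QUERY_LIMIT}"
                ));
            }
            Ok(limit)
        }
    }
}

fn main() {
    assert_eq!(proved_request_limit_strict(None), Ok(DEFAULT_QUERY_LIMIT));
    assert_eq!(proved_request_limit_strict(Some(25)), Ok(25));
    assert!(proved_request_limit_strict(Some(0)).is_err());
    assert!(proved_request_limit_strict(Some(101)).is_err());
}
```

A fixed bound is used here for brevity; call sites whose handlers validate against a different bound (e.g. the evonode range query) would pass the applicable limit in as a parameter.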

Comment on lines +145 to +242
/// Regression test exposing the same proof-mismatch class of bug this PR
/// fixes for contested resources, but on the documents path — which this
/// PR does NOT address.
///
/// Server (`packages/rs-drive-abci/src/query/document_query/v0/mod.rs:124`)
/// applies `default_query_limit` when wire `limit == 0`, while the SDK's
/// `TryFrom<&DocumentQuery> for DriveDocumentQuery`
/// (`packages/rs-sdk/src/platform/documents/document_query.rs:325-329`)
/// maps wire `limit == 0` back to `limit: None`. The asymmetric
/// `SizedQuery` flowing through `DriveDocumentQuery::construct_path_query`
/// (`packages/rs-drive/src/query/mod.rs:1288, 1328, 2113`) then trips
/// GroveDB's `verify_query` with the same "Proof is missing data for
/// query range" failure that motivated this PR.
///
/// `proved_request_limit` introduced in this PR is not applied to the
/// document path; that gap should be closed before merge or in a tracked
/// follow-up. Until then this test fails CI to keep the regression
/// visible.
#[test]
fn document_proof_with_limit_some_prove_vs_limit_none_verify() {
    let drive = setup_drive_with_initial_state_structure(None);
    let platform_version = PlatformVersion::latest();

    let contract = load_system_data_contract(SystemDataContract::DPNS, platform_version)
        .expect("expected to load DPNS contract");

    drive
        .apply_contract(
            &contract,
            BlockInfo::default(),
            true,
            None,
            None,
            platform_version,
        )
        .expect("expected to apply contract");

    let document_type = contract
        .document_type_for_name("preorder")
        .expect("expected preorder document type");

    // Insert > DEFAULT_QUERY_LIMIT documents so the bounded vs unbounded
    // queries cannot trivially produce identical proofs.
    let document_count = 110u64;
    for seed in 1..=document_count {
        let document = document_type
            .random_document(Some(seed), platform_version)
            .expect("expected a random document");

        drive
            .add_document_for_contract(
                DocumentAndContractInfo {
                    owned_document_info: OwnedDocumentInfo {
                        document_info: DocumentRefInfo((&document, None)),
                        owner_id: None,
                    },
                    contract: &contract,
                    document_type,
                },
                false,
                BlockInfo::default(),
                true,
                None,
                platform_version,
                None,
            )
            .expect("expected to insert document");
    }

    // Prover applies `default_query_limit` (server side).
    let prover_query = DriveDocumentQuery::all_items_query(
        &contract,
        document_type,
        Some(crate::config::DEFAULT_QUERY_LIMIT),
    );

    let (proof, _cost) = prover_query
        .execute_with_proof(&drive, None, None, platform_version)
        .expect("expected to execute query with proof");

    // Verifier reconstructs with `limit: None`, mirroring what the SDK
    // does today for omitted document-query limits.
    let verifier_query = DriveDocumentQuery::all_items_query(&contract, document_type, None);

    let (_root, docs) = verifier_query
        .verify_proof_keep_serialized(proof.as_slice(), platform_version)
        .expect(
            "verify with limit=None should succeed against a limit=Some(DEFAULT_QUERY_LIMIT) \
             proof; failure here indicates the document path needs the same \
             `proved_request_limit` treatment the contested-resource paths receive in this PR",
        );

    assert_eq!(
        docs.len(),
        crate::config::DEFAULT_QUERY_LIMIT as usize,
        "expected verifier to return DEFAULT_QUERY_LIMIT documents when prover bounded the result set"
    );
}

🔴 Blocking: Deliberately failing, non-ignored regression test breaks CI and signals an unfixed bug on the documents path


source: ['claude-general', 'codex-general', 'claude-rust-quality']


Comment on lines +24 to +37
/// Default query limit applied by Platform when proved paginated requests omit `count` or `limit`.
///
/// Proof verification must mirror this behavior; otherwise a valid proof for a default-bounded
/// server query is reconstructed locally as an unbounded query and looks truncated.
pub(crate) use drive::config::DEFAULT_QUERY_LIMIT;

/// Parse a proved request's optional `count`/`limit`, applying Platform's default when omitted.
pub(crate) fn proved_request_limit(limit: Option<u32>) -> Result<u16, Error> {
    limit.map_or(Ok(DEFAULT_QUERY_LIMIT), |limit| {
        u16::try_from(limit).map_err(|_| Error::RequestError {
            error: "query limit exceeds u16::MAX".to_string(),
        })
    })
}

🟡 Suggestion: Verifier hardcodes server's runtime-configurable default_query_limit as a compile-time constant


source: ['claude-security-auditor', 'claude-rust-quality']


Comment on lines +31 to +37
pub(crate) fn proved_request_limit(limit: Option<u32>) -> Result<u16, Error> {
    limit.map_or(Ok(DEFAULT_QUERY_LIMIT), |limit| {
        u16::try_from(limit).map_err(|_| Error::RequestError {
            error: "query limit exceeds u16::MAX".to_string(),
        })
    })
}

🟡 Suggestion: proved_request_limit accepts wire values the handlers reject (0 and over-limit)


source: ['codex-rust-quality', 'claude-general']


get_contested_resource_vote_state_request::Version::V0(v) => {
    ContestedDocumentVotePollDriveQuery {
-       limit: v.count.map(|v| v as u16),
+       limit: Some(proved_request_limit(v.count)?),

💬 Nitpick: Previously unbounded omitted-limit FetchMany queries are now silently truncated to 100 verified items

Pre-fix, calling FetchMany with omitted count/limit failed proof verification outright. Post-fix, the same call returns up to DEFAULT_QUERY_LIMIT (100) items as a fully verified set with no indication that more rows may exist server-side. SDK consumers that previously distinguished "got an error" from "got the complete set" — e.g. enumerating votes given by an identity to check whether they have voted on a specific contested resource — could now silently make decisions on a truncated set. The proofs are sound; the concern is presentation. Worth either logging when the wire count/limit was None so the user knows pagination kicked in, or surfacing a truncated/has_more signal on the FetchMany return types so callers can detect the boundary.

source: ['claude-security-auditor']

@thepastaclaw left a comment

Code Review

PR correctly fixes omitted-limit proof reconstruction for contested-resource, vote-poll, group, evonode, and token paths via the new proved_request_limit helper. However, it commits a deliberately failing, non-ignored regression test on the documents path that breaks CI and points at the same unfixed bug in getDocuments. Two design concerns also remain: the helper hardcodes the compile-time DEFAULT_QUERY_LIMIT while servers read the runtime-configurable DriveConfig::default_query_limit, and the helper accepts wire values (Some(0), over-limit) that honest server handlers reject — partially undermining its 'mirror server-side request reconstruction' contract.

Reviewed commit: c05b969

Fresh rerun of the review pipeline for this dispatcher item. The inline threads from the earlier same-SHA review already cover these findings, so I am posting this rerun as a top-level review to avoid duplicating inline comments.

🔴 1 blocking | 🟡 2 suggestion(s) | 💬 1 nitpick(s)

🔴 blocking: Non-ignored regression test is intentionally left failing because the documents proof path is unfixed

packages/rs-drive/src/verify/document/verify_proof_keep_serialized/v0/mod.rs:145-242

document_proof_with_limit_some_prove_vs_limit_none_verify is committed as a plain #[test] (no #[ignore], no #[should_panic]). Its own doc-comment states: "proved_request_limit introduced in this PR is not applied to the document path; that gap should be closed before merge or in a tracked follow-up. Until then this test fails CI to keep the regression visible." The expect(...) at line 231 panics with GroveDB InvalidProof(... Proof is missing data for query range ...) — the same proof-asymmetry class this PR fixes elsewhere.

Root cause confirmed: server packages/rs-drive-abci/src/query/document_query/v0/mod.rs:123-127 substitutes self.config.drive.default_query_limit when wire limit == 0, while SDK reconstruction in packages/rs-sdk/src/platform/documents/document_query.rs:325-329 maps wire limit == 0 back to limit: None, producing a SizedQuery GroveDB cannot verify against the server's bounded proof.

Resolve before merge by either (a) extending the same default-on-reconstruction treatment to the documents path (in TryFrom<&DocumentQuery> for DriveDocumentQuery or in DriveDocumentQuery::construct_path_query), or (b) gating this test with #[ignore = "tracking issue #NNN"] and linking a follow-up. As-is, the PR cannot pass CI and a known production bug is reachable from any client doing a proved find over a document type with more than DEFAULT_QUERY_LIMIT documents.
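As a minimal sketch of fix option (a), the reconstruction side could map wire `limit == 0` ("omitted") to the platform default instead of `None`, while the outgoing request keeps sending `0` so wire semantics are unchanged. `reconstruct_document_limit` is a hypothetical helper for illustration, not the actual `TryFrom<&DocumentQuery>` impl, and the network-wide default of 100 is assumed.

```rust
// Hypothetical sketch of defaulting on the reconstruction side only:
// the wire request still carries limit = 0 ("omitted"), but the verifier
// rebuilds the same bounded SizedQuery the server proved against.
const DEFAULT_QUERY_LIMIT: u16 = 100; // assumed network-wide default

/// Maps the wire limit to the limit the server actually applied.
fn reconstruct_document_limit(wire_limit: u32) -> Result<Option<u16>, String> {
    if wire_limit == 0 {
        // Server substituted default_query_limit; mirror it instead of None.
        return Ok(Some(DEFAULT_QUERY_LIMIT));
    }
    u16::try_from(wire_limit)
        .map(Some)
        .map_err(|_| "limit exceeds u16::MAX".to_string())
}

fn main() {
    // Omitted limit no longer reconstructs as an unbounded query.
    assert_eq!(reconstruct_document_limit(0), Ok(Some(DEFAULT_QUERY_LIMIT)));
    // Explicit limits pass through untouched.
    assert_eq!(reconstruct_document_limit(10), Ok(Some(10)));
}
```

This keeps the change confined to proof verification, matching the approach the PR already takes for the contested-resource paths.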

🟡 suggestion: Verifier hardcodes the server's runtime-configurable `default_query_limit` as a compile-time constant

packages/rs-drive-proof-verifier/src/lib.rs:24-37

proved_request_limit falls back to the compile-time drive::config::DEFAULT_QUERY_LIMIT (= 100, packages/rs-drive/src/config.rs:16), but every server path this fix mirrors reads the runtime field self.config.drive.default_query_limit (e.g. packages/rs-drive-abci/src/query/voting/contested_resources/v0/mod.rs:132-133, .../contested_resource_vote_state/v0/mod.rs, .../contested_resource_voters_for_identity/v0/mod.rs, .../vote_polls_by_end_date_query/v0/mod.rs, .../validator_queries/proposed_block_counts_by_range/v0/mod.rs, .../token_queries/token_pre_programmed_distributions/v0/mod.rs, .../group_queries/group_actions/v0/mod.rs).

DriveConfig::default_query_limit is a serde-deserialized u16 (packages/rs-drive/src/config.rs:44-52), i.e. operator-overridable. Any node running with a non-default value will produce proofs the SDK reconstructs against the wrong range — recreating the silent Proof is missing data for query range failure this PR is meant to eliminate. Not a soundness bug (no proof forgery is possible), but a liveness/availability footgun and a network-fragmentation vector: a single misconfigured masternode can DoS proof-verifying clients on whichever quorum it lands in.

Fix options: (i) treat default_query_limit as a consensus-pinned protocol constant via PlatformVersion and reject divergent config at startup; (ii) return the effective limit in ResponseMetadata and have the SDK reconstruct against the server-declared value. At minimum, document on DEFAULT_QUERY_LIMIT that it must equal DriveConfig::default_query_limit for every node on the network.
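The "reject divergent config at startup" mitigation can be sketched as a load-time check; `DriveConfig` below is a stand-in struct with only the relevant field, not the real serde-deserialized type.

```rust
// Hedged sketch of refusing to start a node whose operator-overridden
// default_query_limit diverges from the compile-time constant that
// proof-verifying clients reconstruct against.
const DEFAULT_QUERY_LIMIT: u16 = 100;

struct DriveConfig {
    default_query_limit: u16,
}

fn validate_drive_config(config: &DriveConfig) -> Result<(), String> {
    if config.default_query_limit != DEFAULT_QUERY_LIMIT {
        return Err(format!(
            "default_query_limit = {} diverges from the network-wide constant {}; \
             proof-verifying clients would hit 'Proof is missing data for query range'",
            config.default_query_limit, DEFAULT_QUERY_LIMIT
        ));
    }
    Ok(())
}

fn main() {
    assert!(validate_drive_config(&DriveConfig { default_query_limit: 100 }).is_ok());
    assert!(validate_drive_config(&DriveConfig { default_query_limit: 50 }).is_err());
}
```

Pinning the value via `PlatformVersion` would be stronger still, since it removes the field from operator control entirely rather than failing fast on divergence.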

🟡 suggestion: `proved_request_limit` accepts wire values that honest handlers reject (`Some(0)` and over-limit)

packages/rs-drive-proof-verifier/src/lib.rs:30-37

The helper only checks u16::try_from(limit), but every server handler it mirrors imposes additional invariants: limit == 0 is rejected and limit > config.default_query_limit is rejected (e.g. packages/rs-drive-abci/src/query/voting/contested_resources/v0/mod.rs:137-141, which does `if limit == 0 || limit > config.default_query_limit { return InvalidArgument }`; same shape in contested_resource_vote_state/v0/mod.rs, contested_resource_voters_for_identity/v0/mod.rs, vote_polls_by_end_date_query/v0/mod.rs, group_actions/v0/mod.rs, token_pre_programmed_distributions/v0/mod.rs, proposed_block_counts_by_range/v0/mod.rs); the evonode path bounds against config.max_query_limit.

Post-fix, the verifier will happily reconstruct a SizedQuery with count = Some(0) or count > default_query_limit and attempt to verify a proof against it. In practice unreachable on the wire today (no honest node pairs such a value with a proof), so this is latent rather than exploitable — but it is exactly the kind of client/server asymmetry this PR exists to close, and it weakens the helper's stated contract ("mirror server-side request reconstruction"). If a future server change relaxes one of these bounds, the verifier silently inherits the looser policy.

Reject Some(0) and values exceeding DEFAULT_QUERY_LIMIT (or DEFAULT_MAX_QUERY_LIMIT for the evonode caller) in proved_request_limit with Error::RequestError, or document explicitly that the helper trusts upstream validation.

💬 nitpick: Previously-unbounded omitted-limit FetchMany queries are now silently truncated to `DEFAULT_QUERY_LIMIT`

packages/rs-drive-proof-verifier/src/from_request.rs:85-91

Pre-fix, calling FetchMany with omitted count/limit failed proof verification outright. Post-fix, the same call returns up to DEFAULT_QUERY_LIMIT (100) items as a fully verified set with no indication that more rows may exist server-side. SDK consumers that previously distinguished "got an error" from "got the complete set" — e.g. enumerating votes given by an identity to check whether they have voted on a specific contested resource — could now silently make decisions on a truncated set. Proofs are sound; the concern is presentation. Worth either logging at debug when the wire count/limit was None, or surfacing a truncated/has_more signal on the FetchMany return types so callers can detect the boundary.
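A caller-side boundary check of the kind suggested here can be sketched under the assumption that a verified page whose length reaches the effective limit may be incomplete; the FetchMany types are elided and only the result length matters.

```rust
// Sketch of a "page may be truncated" heuristic: a full page means the
// server's bound was hit, so more rows may exist; a short page is
// definitively the complete set.
const DEFAULT_QUERY_LIMIT: u16 = 100;

/// True when the server's effective limit was reached.
fn may_have_more(requested_limit: Option<u16>, returned_len: usize) -> bool {
    let effective = usize::from(requested_limit.unwrap_or(DEFAULT_QUERY_LIMIT));
    returned_len >= effective
}

fn main() {
    // Omitted limit: 100 verified items means the default bound kicked in.
    assert!(may_have_more(None, 100));
    // A short page is complete.
    assert!(!may_have_more(None, 42));
    // Explicit limits use the caller's bound instead.
    assert!(may_have_more(Some(10), 10));
}
```

A consumer enumerating e.g. an identity's votes would loop with a start key while `may_have_more` returns true, rather than treating the first verified page as the full set.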

