Merged
1 change: 1 addition & 0 deletions Cargo.lock


1 change: 1 addition & 0 deletions TODO.md
```diff
@@ -16,6 +16,7 @@
 - Formal Verification
 - Pad with noop cycles to always ensure memory size >= bytecode size (liveness), and ensure this condition is checked by the verifier (soundness)
 - Rewrite the compiler, it's bad right now.
+- double check type 1 / type 2 dispatch, and try to simplify the various data layouts

 # Ideas
```
1 change: 1 addition & 0 deletions crates/lean_prover/Cargo.toml
```diff
@@ -22,6 +22,7 @@
 lean_vm.workspace = true
 lean_compiler.workspace = true
 backend.workspace = true
 itertools.workspace = true
+serde.workspace = true

 [dev-dependencies]
 xmss.workspace = true
```
2 changes: 2 additions & 0 deletions crates/lean_prover/src/lib.rs
```diff
@@ -32,6 +32,8 @@ pub const SNARK_DOMAIN_SEP: [F; 8] = F::new_array([
 ]);

 pub fn default_whir_config(starting_log_inv_rate: usize) -> WhirConfigBuilder {
+    assert!(0 < starting_log_inv_rate);
+    assert!(starting_log_inv_rate <= MAX_WHIR_LOG_INV_RATE);
     WhirConfigBuilder {
         folding_factor: FoldingFactor::new(WHIR_INITIAL_FOLDING_FACTOR, WHIR_SUBSEQUENT_FOLDING_FACTOR),
         soundness_type: if cfg!(feature = "prox-gaps-conjecture") {
```
8 changes: 5 additions & 3 deletions crates/lean_prover/src/prove_execution.rs
```diff
@@ -3,16 +3,18 @@ use std::collections::BTreeMap;

 use crate::*;
 use lean_vm::*;

+use serde::{Deserialize, Serialize};
 use sub_protocols::*;
 use tracing::info_span;
 use utils::ansi::Colorize;
 use utils::{build_prover_state, from_end};

-#[derive(Debug)]
+#[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct ExecutionProof {
     pub proof: Proof<F>,
     // benchmark / debug purpose
-    pub metadata: ExecutionMetadata,
+    #[serde(skip, default)]
+    pub metadata: Option<ExecutionMetadata>,
 }

 pub fn prove_execution(
@@ -265,6 +267,6 @@ pub fn prove_execution(

     Ok(ExecutionProof {
         proof: prover_state.into_proof(),
-        metadata,
+        metadata: Some(metadata),
     })
 }
```
2 changes: 1 addition & 1 deletion crates/lean_prover/src/test_zkvm.rs
```diff
@@ -248,6 +248,6 @@ fn test_zk_vm_helper(program_str: &str, public_input: &[F]) {
         .unwrap();
     let proof_time = time.elapsed();
     verify_execution(&bytecode, public_input, proof.proof).unwrap();
-    println!("{}", proof.metadata.display());
+    println!("{}", proof.metadata.as_ref().unwrap().display());
     println!("Proof time: {:.3} s", proof_time.as_secs_f32());
 }
```
10 changes: 9 additions & 1 deletion crates/lean_vm/src/isa/bytecode.rs
```diff
@@ -2,7 +2,7 @@

 use backend::*;

-use crate::{EF, F, FileId, FunctionName, Hint, SourceLocation};
+use crate::{DIMENSION, EF, F, FileId, FunctionName, Hint, N_INSTRUCTION_COLUMNS, SourceLocation};

 use super::Instruction;
 use std::collections::BTreeMap;
@@ -41,6 +41,14 @@ impl Bytecode {
     pub fn log_size(&self) -> usize {
         log2_ceil_usize(self.size())
     }
+
+    pub fn cumulated_n_vars(&self) -> usize {
+        self.log_size() + log2_ceil_usize(N_INSTRUCTION_COLUMNS)
+    }
+
+    pub fn bytecode_claim_size(&self) -> usize {
+        (self.cumulated_n_vars() + 1) * DIMENSION
+    }
 }

 impl Display for Bytecode {
```
63 changes: 63 additions & 0 deletions crates/rec_aggregation/TYPE1_TYPE2_LAYOUT.md
@@ -0,0 +1,63 @@
# Type-1 / Type-2 public-input layout

**Type-1 — single `(message, slot)` aggregation.** A Type-1 multi-signature attests that *every* public key in a given list signed the **same** `message` at the **same** `slot`. A Type-1 proof can aggregate any mix of (a) raw XMSS signatures verified directly inside the snark, and (b) child Type-1 multi-signatures verified recursively.

**Type-2 — bundle of `n` independent Type-1 multi-signatures.** A Type-2 proof bundles `n` *unrelated* Type-1 multi-signatures into a single snark. Each component may have its own `(message, slot)` and its own pubkey set.

All sizes below are in field elements (FE). One **chunk** = 8 FE. The buffer is hashed chunk-by-chunk with Poseidon (zero IV) to produce the public-input digest.

Worked numbers below assume the bytecode has `log_size = 19` (the current value).
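For illustration, the chunk-by-chunk digest with a zero IV can be sketched as below. Here `compress` is a hypothetical stand-in for the 16-to-8 Poseidon compression (the real permutation operates over the base field); only the folding structure is meant to match:

```python
CHUNK = 8  # field elements per chunk

def compress(left, right):
    # Hypothetical placeholder for the Poseidon 16-to-8 compression;
    # the real function permutes 16 base-field elements into 8.
    return [l + r for l, r in zip(left, right)]

def public_input_digest(buffer):
    # Fold the buffer into the state one 8-FE chunk at a time,
    # starting from the all-zero IV.
    assert len(buffer) % CHUNK == 0 and len(buffer) > 0
    state = [0] * CHUNK  # zero IV
    for i in range(0, len(buffer), CHUNK):
        state = compress(state, buffer[i:i + CHUNK])
    return state
```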

## Flags

| Flag value | Type | Meaning |
| ---------- | ------ | ------------------------------------- |
| `1` | Type-1 | Single `(message, slot)` aggregation |
| `0` | Type-2 | Bundle of `n` Type-1 multi-signatures |

## Common header

| Offset | Size | Contents |
| ------ | ----- | ---------------------------------------------- |
| `0` | `8` | `[flag, count, 0, 0, 0, 0, 0, 0]` |
| `8` | `120` | Bytecode evaluation claim (padded up to chunk) |
| `128` | `8` | Bytecode-hash domain separator |
| `136` | … | Component data — layout depends on the flag |

`count` is `n_sigs` for Type-1, `n_components` for Type-2.

The bytecode-claim region encodes a multilinear evaluation: a point + the resulting value, all over the extension field. Its size is `((log_size + 4 + 1) · 5)` rounded up to a multiple of 8:

- The bytecode is a multilinear polynomial in `log_size + 4` variables. The `+4` comes from `ceil_log2(12)` because each "instruction" occupies 12 columns, padded to 16 = 2⁴ — so addressing one column adds 4 extra variables on top of `log_size`.
- The `+1` adds room for the **value** of the polynomial at that point, alongside the point's coordinates: `(log_size + 4)` coordinates + `1` value = `log_size + 5` extension-field elements.
- The outer `· 5` is the **extension-field degree**: each extension element is 5 base-field elements.
- For `log_size = 19`: `(19 + 4 + 1) · 5 = 24 · 5 = 120` (already a multiple of 8; otherwise it would be padded with zeros).
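The arithmetic above can be checked with a small sketch (constant names mirror the Rust side; the round-up to a whole chunk is taken from the text):

```python
from math import ceil, log2

DIMENSION = 5               # extension-field degree over the base field
N_INSTRUCTION_COLUMNS = 12  # columns per instruction, padded to 16 = 2**4
CHUNK = 8                   # field elements per chunk

def bytecode_claim_size(log_size):
    column_vars = ceil(log2(N_INSTRUCTION_COLUMNS))  # 4 extra variables
    n_vars = log_size + column_vars                  # point coordinates
    raw = (n_vars + 1) * DIMENSION                   # coordinates + claimed value
    return (raw + CHUNK - 1) // CHUNK * CHUNK        # pad up to a whole chunk
```

For `log_size = 19` this returns 120 FE, matching the header table.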

## Type-1 component data (fixed, 4 chunks = 32 FE)

| Offset | Size | Contents |
| ------ | ---- | ---------------------------------- |
| `136` | `8` | Hash of all aggregated public keys |
| `144` | `8` | Message |
| `152` | `8` | Merkle chunks identifying the slot |
| `160` | `8` | Tweak-table hash |

**Total Type-1 buffer = 168 FE = 21 chunks** (independent of `n_sigs`).

## Type-2 component data (variable, `n_components` chunks)

| Offset | Size | Contents |
| ------ | ------------------ | -------------------------------------------------------- |
| `136` | `n_components · 8` | One 8-FE digest per inner Type-1 (its public-input hash) |

**Total Type-2 buffer = `(n_components + 17) · 8` FE**, where `17 = 15` bytecode-claim chunks `+ 1` prefix `+ 1` domsep.
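A quick sanity check of both totals, as a sketch with constants taken from the tables above:

```python
CHUNK = 8
# 1 prefix chunk + 15 bytecode-claim chunks (120 FE) + 1 domain-separator chunk
HEADER_CHUNKS = 1 + 15 + 1
# pubkeys hash, message, merkle chunks, tweak-table hash
TYPE1_COMPONENT_CHUNKS = 4

def type1_buffer_size():
    # Fixed size, independent of n_sigs.
    return (HEADER_CHUNKS + TYPE1_COMPONENT_CHUNKS) * CHUNK

def type2_buffer_size(n_components):
    # One 8-FE digest chunk per inner Type-1 multi-signature.
    return (HEADER_CHUNKS + n_components) * CHUNK
```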

## Picture

```
Type-1 (168 FE):
[flag=1 | n_sigs | 0×6] [bytecode claim, 120 FE] [domsep, 8 FE] [pubkeys_hash | message | merkle_chunks | tweaks_hash]

Type-2 ((n+17)·8 FE):
[flag=0 | n | 0×6] [bytecode claim, 120 FE] [domsep, 8 FE] [digest_0] [digest_1] … [digest_{n-1}]
```
18 changes: 14 additions & 4 deletions crates/rec_aggregation/hashing.py
```diff
@@ -72,15 +72,25 @@ def slice_hash(data, num_chunks):
         poseidon16_compress(states + (j - 1) * DIGEST_LEN, data + (j + 1) * DIGEST_LEN, states + j * DIGEST_LEN)
     return states + (num_chunks - 2) * DIGEST_LEN

+def slice_hash_with_iv_range(data, num_chunks, dest):
+    debug_assert(0 < num_chunks)
+    debug_assert(2 < num_chunks)
+    states = Array((num_chunks - 1) * DIGEST_LEN)
+    poseidon16_compress(ZERO_VEC_PTR, data, states)
+    for j in range(1, num_chunks - 1):
+        poseidon16_compress(states + (j - 1) * DIGEST_LEN, data + j * DIGEST_LEN, states + j * DIGEST_LEN)
+    poseidon16_compress(states + (num_chunks - 2) * DIGEST_LEN, data + (num_chunks - 1) * DIGEST_LEN, dest)
+    return
+
 @inline
-def slice_hash_with_iv(data, num_chunks):
-    debug_assert(0 < num_chunks)
+def slice_hash_with_iv(data, num_chunks, dest):
+    debug_assert(2 <= num_chunks)
     states = Array(num_chunks * DIGEST_LEN)
     poseidon16_compress(ZERO_VEC_PTR, data, states)
-    for j in unroll(1, num_chunks):
+    for j in unroll(1, num_chunks - 1):
         poseidon16_compress(states + (j - 1) * DIGEST_LEN, data + j * DIGEST_LEN, states + j * DIGEST_LEN)
-    return states + (num_chunks - 1) * DIGEST_LEN
+    poseidon16_compress(states + (num_chunks - 2) * DIGEST_LEN, data + (num_chunks - 1) * DIGEST_LEN, dest)
+    return


 def slice_hash_with_iv_dynamic_unroll(data, num_chunks, num_chunks_bits: Const):
```