KIP-227: Candidate and Validator Evaluation #27
base: main
Conversation
CLA Assistant Lite bot: All contributors have signed the CLA ✍️ ✅

I have read the CLA Document and I hereby sign the CLA

@jiseongnoh I assigned 227 to this KIP according to the numbering rule in KIP-201. Thanks.
hyunsooda
left a comment
I understood that this KIP addresses how to measure a node's performance. How to use the measured numbers is out of scope for this KIP and is delegated to KIP-286 (validator lifecycle). Is that correct?
KIPs/kip-227.md
Outdated
> | Constant | Value/Definition |
> | :--- | :--- |
> | `FORK_BLOCK` | TBD |
> | `CANDIDATE_READY_TIMEOUT` | 200 milliseconds (0.2 seconds) |
Does this timeout apply to each message type individually? Specifically, is there a 200ms limit for each stage: Pre-prepare, Prepare, and Commit?
Thanks for the question.
CANDIDATE_READY_TIMEOUT is evaluation-only and applies only to the VRank CandidateReady reception deadline. It is not related to IBFT consensus and does not apply to IBFT message types.
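For concreteness, a minimal Python sketch of this evaluation-only deadline check (function and parameter names are hypothetical; only the 200 ms constant comes from the KIP):

```python
CANDIDATE_READY_TIMEOUT_MS = 200  # from the KIP: 200 milliseconds (0.2 s)

def candidate_ready_on_time(proposal_time_ms, receive_time_ms):
    """Return True if the CandidateReady message arrived within the
    evaluation-only deadline measured from the block proposal."""
    return receive_time_ms - proposal_time_ms <= CANDIDATE_READY_TIMEOUT_MS
```

Again, this deadline is independent of the IBFT Pre-prepare/Prepare/Commit stages.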
Ah, I see. I previously assumed candidates participated in consensus without voting power, but I was mistaken. Is that message just an artificial message, like a ping-pong?
Yes, CandidateReady is synthetic in the sense that it exists purely for measurement. However, a reporter cannot fabricate a success because it requires a valid candidate signature bound to the target block.
Understood. How are message transmission and reception processed? I couldn’t find a minimal protocol for this, so I assume message testing occurs over the full EPOCH_LENGTH (one day). Additionally, if these are just artificial 'ping-pong' messages, it would be difficult to infer a candidate's machine specifications. Verifying computational capability may not align with simple message transfers. Are there any major parts that I've missed?
> **Definition**: Measures the number of times a candidate fails to transmit the expected `CandidateReady` message during a block proposal cycle, after removing the highest `F` of the failure counts to address measurement distortions.
>
> **Measurement Method**: During the evaluation period, if the next proposer `N+1` receives a block proposal from proposer `N` and does not receive the `CandidateReady` message within the specified timeout (`CANDIDATE_READY_TIMEOUT`), the total failure count for `C` increases by 1.
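As a rough illustration, the per-block counting by the next proposer could look like this (all names are hypothetical; the set of candidates that delivered `CandidateReady` in time is assumed to be collected before the timeout expires):

```python
def update_failure_counts(counts, candidates, received):
    """Increment the failure count of every candidate whose CandidateReady
    message was not received within CANDIDATE_READY_TIMEOUT for this block.

    counts: dict mapping candidate id -> running failure count
    candidates: list of candidate ids expected to respond
    received: set of candidate ids that responded in time
    """
    for c in candidates:
        if c not in received:
            counts[c] = counts.get(c, 0) + 1
    return counts
```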
As I understand it, only one of the validators measures the candidate, and that node is the next block proposer.
- Is it correct that only one validator measures the candidate's performance for each block?
- Why does the next block proposer measure, rather than the current proposer? (Maybe relevant to the round change.)
mfReport for block N is committed in block N+1 by the proposer of the next finalized block (N+1).
Reporters rotate every block, so candidates are observed by many validators over time, which makes per-reporter counts and TMFS top-F filtering meaningful against Byzantine reporters.
We commit in N+1 for consistency with pfReport and to avoid a timing race. CandidateReady has a strict deadline (200ms), and including it in block N would force delaying block production.
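A hedged sketch of the top-F filtering mentioned here, assuming each reporter contributes one failure count per candidate over the epoch (function name and data shape are illustrative, not from the KIP):

```python
def tmfs(per_reporter_failures, f):
    """Sum one candidate's per-reporter failure counts after dropping
    the highest F counts, limiting the influence of up to F Byzantine
    reporters who might fabricate failures."""
    kept = sorted(per_reporter_failures)[:-f] if f > 0 else list(per_reporter_failures)
    return sum(kept)
```

Because reporters rotate every block, a single malicious reporter's inflated count lands in the dropped top-F slice.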
Thanks for the details. I see that both are committed in N+1.
- mfReport: Wondering why it is committed in N+1? Even if it's committed in N, I believe the distribution of diversity would be similar.
- pfReport: Got it. Is that specific number 200 derived from 250 ms of the execution time, because the response must be guaranteed to arrive before making the proposal?
We commit mfReport/crReport in N+1 to protect block time for N. If embedded in N, the proposer must wait up to CANDIDATE_READY_TIMEOUT before finalizing the header, which delays block time.
By committing in N+1, we keep block N production on time and treat candidate readiness as an asynchronous measurement artifact that can be recorded in the next block without delaying block time.
Yes, that is correct. How these measured metrics are used for validator/candidate lifecycle decisions is out of scope for this KIP and is expected to be defined in a separate proposal.
KIPs/kip-227.md
Outdated
> ### Changes to Block Validation Process
>
> Once `FORK_BLOCK` is reached, validators must validate the newly added `vrank` field in the block header. The values of the subfields (`pfReport` and `mfReport`) are used to evaluate node performance using the components of the VRank framework.
> validators must validate the newly added `vrank` field

Validate what? For pfReport, validate that it only contains the proposers that caused RC? How about mfReport? Signature validation?
Let’s add explicit validation rules for vrank after FORK_BLOCK:
(i) pfReport MUST be a deterministic, verifiable list of round-change proposers in round order;
(ii) crReport MUST contain at most one entry per candidate, and each entry MUST carry a valid CandidateReady signature bound to (block_number, proposal_hash).
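These two rules could be sketched as follows (the data shapes and the `verify_sig` callback are assumptions for illustration, not part of the KIP):

```python
def validate_vrank(pf_report, cr_report, verify_sig):
    """Sketch of the proposed vrank validation rules:
    (i)  pfReport round numbers must be strictly increasing, so the list
         is a deterministic record of round-change proposers in round order;
    (ii) crReport must hold at most one entry per candidate, each carrying
         a valid CandidateReady signature bound to (block_number, proposal_hash),
         which verify_sig is assumed to check."""
    rounds = [r for r, _proposer in pf_report]
    if any(b <= a for a, b in zip(rounds, rounds[1:])):
        return False  # rounds out of order or duplicated
    seen = set()
    for entry in cr_report:
        if entry["candidate"] in seen or not verify_sig(entry):
            return False  # duplicate candidate or bad signature
        seen.add(entry["candidate"])
    return True
```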
KIPs/kip-227.md
Outdated
> ## VRank Score Components
>
> The VRank framework evaluates node performance using three independent metrics. These metrics apply separately to validators and candidates, allowing for a more focused assessment of each role's responsibilities.
Suggested change:
> The VRank framework evaluates node performance using three independent metrics. Each metric measures events that occurred in an epoch and resets at the epoch start block. These metrics apply separately to validators and candidates, allowing for a more focused assessment of each role's responsibilities.
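The suggested per-epoch reset could be sketched like this (the `EPOCH_LENGTH` value and function name are illustrative assumptions):

```python
EPOCH_LENGTH = 86400  # illustrative: one-day epoch at 1-second block time

def metrics_at_block(block_number, metrics):
    """Return per-epoch metrics for this block: all metrics reset to zero
    at the epoch start block, otherwise carry over unchanged."""
    if block_number % EPOCH_LENGTH == 0:
        return {name: 0 for name in metrics}
    return metrics
```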
KIPs/kip-227.md
Outdated
> **Measurement Method**: If a validator fails to propose a block, resulting in a round change, the proposal failure count increases by one.
>
> **Consensus Method**: The proposer of block `N+1` records the proposal failure information of block `N` in the form of a list of `(round number, proposer)` within the block header. Validators compare their own records of proposal failures in block `N` with the records in the header of block `N+1` to reach consensus.
I think we should record all round changes of block N in the block N (not block N+1), so we can remove it from the council when generating block N+1. cc @hyeonLewis
> ```python
> # Reset the start point
> state['cf_start'] = 0
> ```
Let's define CandidateReady something like this:

```
message CandidateReady {
    uint64 block_number;
    byte[32] proposal_hash;
    signature sig; // sign_ecdsa(proposal_hash)
}
```

- pfReport recorded in block N header
- crReport recorded in block N+1 header for target N
- Define missing entry = 1 failure and EPOCH_LENGTH-1 targets
- Define vrank empty bytes as pfReport=[] and crReport=[]
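Under the "missing entry = 1 failure, EPOCH_LENGTH - 1 targets" definition above, per-candidate counting might look like this (the data layout is an assumption for illustration):

```python
def candidate_failures(cr_entries_by_target, candidate, epoch_length):
    """Count a candidate's failures over one epoch: one failure per target
    block whose crReport lacks an entry for the candidate, over the
    EPOCH_LENGTH - 1 targets (the epoch's first block has no prior target).
    An absent crReport (empty vrank) counts as a failure for every candidate.

    cr_entries_by_target: dict mapping target block offset -> set of
    candidate ids that appear in that target's crReport.
    """
    failures = 0
    for target in range(1, epoch_length):
        entries = cr_entries_by_target.get(target, set())
        if candidate not in entries:
            failures += 1
    return failures
```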
> **Measurement Method**: If a validator fails to propose a block, resulting in a round change, the proposal failure count increases by one.
>
> **Consensus Method**: The proposer of block `N` MUST record the proposal failure information of block `N` in the `pfReport` field of the block header at height `N`, as an ordered list of `(round number, proposer)` for each failed round. Validators MUST compare `pfReport(N)` with their local round-change record for block `N` to reach consensus.
Suggested change:
> **Consensus Method**: `pfReport` is not recorded in a block. Instead, a validator must derive `pfReport` from its round number and previous rounds' proposers.
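The derived (non-recorded) pfReport in this suggestion could be sketched as follows (the `proposer_at` schedule lookup is hypothetical):

```python
def derive_pf_report(committed_round, proposer_at):
    """Derive pfReport locally from the committed round number: every round
    before the final committed round corresponds to one failed proposal by
    that round's scheduled proposer, so no header field is needed."""
    return [(r, proposer_at(r)) for r in range(committed_round)]
```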
> **Consensus Method**: The proposer of block `N` MUST record the proposal failure information of block `N` in the `pfReport` field of the block header at height `N`, as an ordered list of `(round number, proposer)` for each failed round. Validators MUST compare `pfReport(N)` with their local round-change record for block `N` to reach consensus.
>
> **Score (Aggregation)**: For epoch index `k`, PFS MUST be computed from headers `H ∈ [k*EPOCH_LENGTH, (k+1)*EPOCH_LENGTH - 1]` by counting each validator's proposal failures from `pfReport`.
Suggested change:
> **Score (Aggregation)**: For epoch index `k`, PFS MUST be computed from headers `H ∈ [k*EPOCH_LENGTH, (k+1)*EPOCH_LENGTH - 1]` by counting each validator's proposal failures from round information of `header.ExtraData`.
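The suggested aggregation from header round information could be sketched as below; the `round_changes_at` accessor, standing in for decoding `header.ExtraData` into a `(round, proposer)` list, is an assumption:

```python
def pfs_for_epoch(epoch_index, epoch_length, round_changes_at):
    """Count each validator's proposal failures over epoch k by walking the
    epoch's headers [k*EPOCH_LENGTH, (k+1)*EPOCH_LENGTH - 1] and reading the
    per-header round-change list derived from header.ExtraData."""
    scores = {}
    start = epoch_index * epoch_length
    for height in range(start, start + epoch_length):
        for _round, proposer in round_changes_at(height):
            scores[proposer] = scores.get(proposer, 0) + 1
    return scores
```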
> **Score (Aggregation)**: For epoch index `k`, PFS MUST be computed from headers `H ∈ [k*EPOCH_LENGTH, (k+1)*EPOCH_LENGTH - 1]` by counting each validator's proposal failures from `pfReport`.
>
> ### 2. Message Transmission Failure Score (MFS)
Suggested change:
> ### 2. Candidate Failure Score (CFS)
> ### 2. Message Transmission Failure Score (MFS)
>
> The Message Transmission Failure Score is divided into two components:
Suggested change:
> The Candidate Failure Score is divided into two components:
> #### 2.1 Total Message Transmission Failure Score (TMFS)
>
> **Definition**: Measures the number of times a candidate fails to transmit the expected `CandidateReady` message during a block proposal cycle, after removing the highest `F` of the failure counts to address measurement distortions.
Suggested change:
> **Definition**: Measures the number of times a candidate fails to transmit the expected `VRankCandidate` message during a block proposal cycle, after removing the highest `F` of the failure counts to address measurement distortions.
Proposed changes
Propose a framework for quantitatively assessing the performance and stability of candidates and validators in the Kaia Chain network
Types of changes
Please put an x in the boxes related to your change.
Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code.
I have read the CLA Document and I hereby sign the CLA (first-time contribution)

Related issues
Further comments
If this is a relatively large or complex change, kick off the discussion by explaining why you chose the solution you proposed and what alternatives you have considered, etc.