Problem
The LLMReview and CodeDiff validators call external LLM APIs, which consume tokens and are expensive to run. Currently, these validators run regardless of whether other validators have already found errors in the plugin.
When a plugin already has errors reported by other validators, running LLM-based validation is wasteful: the plugin will need to be fixed and resubmitted anyway.
Proposed Solution
Add a mechanism that lets a validator declare that it should run only when all other validators have passed (i.e. reported no errors). This would involve:
- A property/flag on the validator itself that marks it as "run only on clean results"
- Architecture changes to support running validators in phases, or checking accumulated errors before running certain validators (see the sketch after this list)
- "Errors" in this context means errors reported by validators in their reports - not warnings, not info-level messages, and not Go runtime errors
Validators affected
- LLMReview
- CodeDiff
Tasks
- Research current validator architecture to understand execution flow
- Design approach for conditional validator execution
- Implement the mechanism for validators to declare "run only on clean" behavior
- Update LLMReview and CodeDiff validators to use this mechanism
- Add tests covering the gating behavior (a sketch follows below)
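
One possible test of the skip behavior, reusing the hypothetical types from the sketch above:

```go
package validation

import "testing"

// stubValidator is a test double; cleanOnly controls whether it
// opts into the proposed "run only on clean" behavior.
type stubValidator struct {
	name      string
	cleanOnly bool
	findings  []Finding
	ran       bool
}

func (s *stubValidator) Name() string         { return s.name }
func (s *stubValidator) RunOnlyOnClean() bool { return s.cleanOnly }
func (s *stubValidator) Validate(string) ([]Finding, error) {
	s.ran = true
	return s.findings, nil
}

func TestCleanOnlyValidatorSkippedWhenErrorsExist(t *testing.T) {
	failing := &stubValidator{
		name:     "manifest",
		findings: []Finding{{Severity: SeverityError, Message: "bad manifest"}},
	}
	llm := &stubValidator{name: "LLMReview", cleanOnly: true}

	if _, err := RunAll([]Validator{failing, llm}, "testdata/plugin"); err != nil {
		t.Fatal(err)
	}
	if llm.ran {
		t.Error("clean-only validator ran despite reported errors")
	}
}
```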