Skip to content

feat: make AI reviewer metadata-first#15

Merged
ftchvs merged 4 commits into
mainfrom
feat/ai-reviewer-clarity
May 10, 2026
Merged

feat: make AI reviewer metadata-first#15
ftchvs merged 4 commits into
mainfrom
feat/ai-reviewer-clarity

Conversation

@ftchvs
Copy link
Copy Markdown
Owner

@ftchvs ftchvs commented May 10, 2026

Summary

Makes the optional Ollama-compatible AI reviewer safer and more measurable:

  • Keeps deterministic policy rules as the trusted baseline.
  • Makes model review metadata-only by default.
  • Adds explicit model_affects_score opt-in for letting valid model findings join policy_hits and affect scoring.
  • Treats invalid model JSON/schema responses as ignored runtime metadata, not review hits.
  • Passes bounded landing-page snapshot context into the model prompt with explicit untrusted-content boundaries.
  • Passes extracted landing-page context into model-only evals as well as hybrid/product scans.
  • Adds UI/API/CLI controls and docs for rule-only, metadata-only, and score-impact modes.
  • Adds model reviewer usefulness metrics for measuring useful model-added notes vs false review burden.

Validation

  • make test → 193 passed
  • make eval → 58 examples, no FP/FN notes
  • make benchmark → 213 examples, no FP/FN notes
  • make pr-preflight → OK
  • GitHub CI Python 3.11/3.12 → passed

Notes

This intentionally does not claim legal compliance or platform approval accuracy. The model layer is positioned as a local, optional reviewer assist; deterministic rules remain the default decision path.

@ftchvs ftchvs merged commit 59eec91 into main May 10, 2026
2 checks passed
@ftchvs ftchvs deleted the feat/ai-reviewer-clarity branch May 13, 2026 14:12
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant