---
name: advocacy-code-review
description: Layered code review pipeline — automated checks first, then AI-assisted review, then human review focused on Ousterhout red flags, AI failure patterns, silent failures, and advocacy-specific concerns
---
# Code Review

## When to Use
- Reviewing any code before merge — especially AI-generated code
- Preparing your own code for review
- When a PR is tagged AI-Assisted
- When changes touch investigation data, coalition boundaries, or emotional safety features
| 13 | +## Process |
| 14 | + |
| 15 | +### Layer 1: Automated Checks (Zero Human Effort) |
| 16 | +Before any human looks at the code, verify these pass: |
| 17 | +- Formatting and linting — automated, enforced in CI, not discussed in review |
- Static analysis and type checking — catches structural issues and type errors
- Security scanning — detects hardcoded secrets, known vulnerability patterns, dependency issues
- Test suite — all tests pass, no regressions

If any automated check fails, fix it before requesting review. Do not submit "I'll fix the tests later" PRs.
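The gate above can be sketched as a small local script that runs before you open a PR. A minimal sketch, assuming a Python project checked with ruff, mypy, and pytest (the tool names are illustrative; substitute your own toolchain):

```python
import subprocess
import sys

# Illustrative check commands; substitute your project's actual tools.
CHECKS = [
    ("format/lint", ["ruff", "check", "."]),
    ("types", ["mypy", "."]),
    ("tests", ["pytest", "-q"]),
]

def run_checks(checks=CHECKS):
    """Run each check command; return the names of the checks that failed."""
    failed = []
    for name, cmd in checks:
        try:
            ok = subprocess.run(cmd, capture_output=True).returncode == 0
        except FileNotFoundError:
            ok = False  # a missing tool counts as a failed check
        if not ok:
            failed.append(name)
    return failed

def gate(checks=CHECKS):
    """Exit nonzero if any check fails: do not request review yet."""
    failed = run_checks(checks)
    if failed:
        print("Fix before requesting review:", ", ".join(failed))
        sys.exit(1)
    print("Automated checks pass; ready for review.")
```

Running `gate()` as a pre-push hook keeps "I'll fix the tests later" PRs from ever being opened.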

### Layer 2: AI-Assisted First-Pass Review
Use AI to flag potential issues. Treat AI review flags as suggestions, not verdicts.

AI catches well:
- Inconsistent error handling and missing null checks
- Unused imports
- Common security patterns
- Deviations from project conventions
- Performance anti-patterns

AI misses:
- Whether the approach is correct
- Whether business logic matches requirements
- Whether the code will be maintainable
- Whether tests verify meaningful properties
- Subtle concurrency issues
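A minimal illustration of this split, with hypothetical functions: an AI reviewer reliably flags the mechanical defect in the first function, but not the requirements mismatch in the second.

```python
def first_item(items):
    # An AI reviewer flags this reliably: no empty-list guard,
    # so callers get an IndexError on [].
    return items[0]

def apply_discount(price, is_member):
    # An AI reviewer rarely flags this: it runs, type-checks, and looks
    # conventional. But if the requirement was a 10% member discount,
    # the 25% here is a business-logic bug that only a human comparing
    # the code against the requirements will catch.
    return price * 0.75 if is_member else price
```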

### Layer 3: Human Review — Design Quality Red Flags
Walk through the Ousterhout red flags checklist. These are the structural problems most common in AI-generated code:

- **Shallow module** — interface is as complex as the implementation; the abstraction hides nothing
- **Information leakage** — implementation details escape through the interface; callers depend on internals
- **Temporal decomposition** — code structured by execution order rather than conceptual boundaries
- **Pass-through method** — method that does nothing except call another method with the same signature
- **Repetition** — same logic in multiple places; AI duplicates at 4x the normal rate
- **Special-general mixture** — general-purpose code polluted with special-case handling
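As a sketch of the first red flag (all names hypothetical), compare a shallow helper with a deeper alternative that actually hides decisions:

```python
# Shallow module: the function's interface is as complex as its body,
# so the abstraction hides nothing and callers gain nothing.
def add_user_to_dict(users, key, name):
    users[key] = name  # the caller could have written this line directly

# Deeper alternative: a small interface that hides real decisions
# (key normalization, duplicate policy) behind one call.
class UserRegistry:
    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        key = user_id.strip().lower()  # hidden policy: normalization
        if key in self._users:
            raise ValueError(f"duplicate user: {key}")
        self._users[key] = name

    def get(self, user_id):
        return self._users.get(user_id.strip().lower())
```

The test in review is the same either way: if deleting the abstraction and inlining its body would not lose anything, the module is shallow.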

### Layer 4: Human Review — AI-Specific Failure Patterns
Check specifically for patterns AI agents introduce:

- **DRY violations** — does this duplicate something that already exists in the codebase? Search before accepting.
- **Multi-responsibility functions** — does any function do more than one thing at one level of abstraction? Split it.
- **Suppressed errors** — has the AI removed safety checks, caught exceptions too broadly, or silently swallowed failures? Review every error handling path.
- **Hallucinated APIs** — does the code call libraries, methods, or endpoints that do not exist? Verify every external dependency.
- **Over-patterning** — has the AI applied Strategy, Factory, or Observer where a plain function and conditional would suffice?
- **Silent failure pattern** — AI may remove safety checks to make code appear to work, create fake output matching desired formats, or edit tests to pass rather than fixing the underlying code. Verify ALL safety checks from the original code are preserved in the new code. Compare the error handling paths between old and new versions explicitly.
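The suppressed-error and silent-failure items can be sketched side by side, using a hypothetical record parser: the first version is the shape AI agents often produce to make code appear to work; the second keeps failures narrow and loud.

```python
import json

def parse_record_silent(raw):
    try:
        return json.loads(raw)
    except Exception:  # red flag: swallows every failure, not just bad JSON
        return {}      # red flag: fake output in the desired shape

def parse_record(raw):
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:  # narrow: only the expected failure
        # Loud: the caller decides what to do; nothing is silently lost.
        raise ValueError(f"unparseable record: {raw[:40]!r}") from exc
```

In a diff, the move from the second shape to the first is exactly the "compare error handling paths between old and new versions" check this layer calls for.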

### Layer 5: Advocacy-Specific Review
For any code in an advocacy project, also verify:

- **Data leak vectors** — does this change create any new path for sensitive data to leave the system? Check logging, error messages, telemetry, API responses, and serialization output for investigation data, witness identities, or activist PII.
- **Surveillance surface area** — does this change increase the metadata footprint? New timestamps, access logs, IP recording, or device fingerprinting could be used to identify activists under legal discovery.
- **Emotional safety** — if this code displays content to users, does it respect progressive disclosure? Is graphic content behind explicit opt-in? Are content warnings specific enough?
- **Coalition boundary violations** — does this change allow data to cross organizational boundaries without going through an anti-corruption layer? AI agents optimize for expedience and import directly across bounded contexts.
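For the data-leak item, one concrete check is whether everything that reaches a log passes through a redaction step. A minimal sketch, with illustrative field names (a real project would maintain the set alongside its data classification policy):

```python
# Hypothetical field names for illustration only.
SENSITIVE_FIELDS = {"witness_name", "source_email", "activist_id"}

def redact_for_logging(record):
    """Return a copy safe to log: sensitive values are replaced rather
    than dropped, so logs show that a field existed without its value."""
    return {
        key: "[REDACTED]" if key in SENSITIVE_FIELDS else value
        for key, value in record.items()
    }
```

In review, the question becomes mechanical: does any logging or error-message path touch a record without going through `redact_for_logging` (or its equivalent)?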

### Render Verdict
Summarize findings by layer. Distinguish between blocking issues (security vulnerabilities, data leaks, silent failures, broken tests) and suggestions (style, naming, minor refactoring). For primarily AI-generated PRs, require two human approvals.
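The blocking-versus-suggestion split can be captured in a small structure, sketched here with hypothetical names:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewVerdict:
    blocking: list = field(default_factory=list)     # must fix before merge
    suggestions: list = field(default_factory=list)  # author's discretion

    def approvable(self):
        # Suggestions never block; any single blocking finding does.
        return not self.blocking
```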