AI Dev OS resolves rule conflicts using the Specificity Cascade, inspired by CSS specificity:
1. [Highest] `frameworks/[stack]/*` ← Most specific
2. [High] `common/*` ← Common but concrete
3. [Medium] `project-specific/*` ← Project context
4. [Low] `02_decision-criteria/*` ← Abstract criteria
5. [Lowest] `01_philosophy/*` ← Most abstract
Note:
`project-specific/*` refers to guidelines in the user's project (e.g., `docs/guidelines/billing.md`), NOT in the rules repository. The rules repository contains only `frameworks/` and `common/`. Project-specific guidelines are created by the user for domain logic, external integrations, legal requirements, etc.
- When coding, the AI first looks for framework-specific rules (the most specific layer)
- If no framework rule exists, it falls back to common rules
- If no common rule exists, it falls back to project-specific conventions
- If nothing explicit exists, it applies the decision criteria to reason about the best approach
- As a last resort, it falls back to philosophy for value-based judgment
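The fallback walk above can be sketched as a simple ordered lookup. This is an illustrative sketch, not AI Dev OS code: the names `RuleStore`, `LAYERS`, and `findRule` are assumptions, and the layer keys are simplified (e.g., `frameworks` stands in for `frameworks/[stack]`).

```typescript
// Hypothetical sketch: rules keyed by "layer/topic", checked from the
// most specific layer down to the most abstract.
type RuleStore = Map<string, string>; // "layer/topic" -> rule text

// Order mirrors the Specificity Cascade, highest priority first.
const LAYERS = [
  "frameworks",        // framework-specific rules
  "common",            // concrete cross-framework rules
  "project-specific",  // the user's own conventions
  "decision-criteria", // abstract criteria for reasoning
  "philosophy",        // value-based judgment, last resort
];

function findRule(store: RuleStore, topic: string): string | undefined {
  for (const layer of LAYERS) {
    const rule = store.get(`${layer}/${topic}`);
    if (rule !== undefined) return rule; // first (most specific) hit wins
  }
  return undefined; // no guidance at any layer
}
```

With only a `common/` entry present, `findRule` falls through to it; adding a `frameworks/` entry for the same topic makes the more specific rule win.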
Suppose the AI encounters an error-handling decision in a Next.js Server Action:
- `frameworks/nextjs/server-actions.md` says "Use ActionResult pattern" → Apply this
- `common/error-handling.md` says "Classify errors into user/system/validation" → Also apply
- `02_decision-criteria/error-strategy.md` says "Prefer explicit error types over exceptions" → Informs the approach
- `01_philosophy/principles.md` says "Correctness over convenience" → Underlying value
All layers contribute, but the most specific rule wins when there is a conflict.