Thank you for your interest in contributing to DocGuard! This document provides guidelines for contributing.
- Fork the repository
- Clone your fork: `git clone https://github.com/YOUR_USERNAME/docguard.git`
- Install: `npm install` (only dev dependencies; DocGuard itself has zero runtime deps)
- Run tests: `npm test`
- Run DocGuard on itself: `node cli/docguard.mjs guard`
DocGuard follows Canonical-Driven Development (CDD). Before making changes:
```bash
# 1. Check current compliance
node cli/docguard.mjs guard

# 2. Make your changes

# 3. Run tests
npm test

# 4. Verify docs still pass
node cli/docguard.mjs guard

# 5. Update CHANGELOG.md with your changes
```
Project layout:

```
cli/
  docguard.mjs       ← Entry point, config loading, command routing
  commands/          ← 11 user-facing commands
  validators/        ← 9 independent validation modules
  templates/         ← CDD document templates + slash commands
vscode-extension/    ← VS Code extension
tests/               ← Integration tests
docs-canonical/      ← DocGuard's own CDD documentation
```
Design principles:

- Zero dependencies: DocGuard has no runtime deps in `node_modules`. Keep it that way.
- Validators are pure: Each validator receives `(projectDir, config)` and returns results, with no side effects (see the sketch below).
- Commands don't cross-import: Commands import from validators, never from other commands.
- AI is the author: The CLI flags problems and generates prompts. It never writes doc content.
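As a rough illustration of the "pure validator" principle, a validator is just a pure function of the project directory and config. The module name, config key, and issue shape below are assumptions for the sketch, not DocGuard's actual API:

```js
// Hypothetical validator following the pure (projectDir, config) contract:
// read from disk, consult config, return results. No writes, no shared state.
import { existsSync } from 'node:fs';
import { join } from 'node:path';

export function validateRequiredDocs(projectDir, config) {
  // `requiredDocs` is an invented config key used only for this sketch.
  const required = config?.requiredDocs ?? ['README.md'];
  const issues = required
    .filter((doc) => !existsSync(join(projectDir, doc)))
    .map((doc) => ({ severity: 'HIGH', message: `${doc} not found` }));
  return { validator: 'required-docs', issues };
}
```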
To add a new command (a sketch follows these steps):

- Create `cli/commands/your-command.mjs` with an exported `runYourCommand(projectDir, config, flags)` function
- Import it in `cli/docguard.mjs`
- Add it to the help text, command routing switch, and argument parsing
- Add tests in `tests/commands.test.mjs`
- Update `CHANGELOG.md`
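A minimal sketch of such a command module, assuming `flags` is the array of remaining CLI arguments; the file name, flag, and return value are illustrative, and only the `runYourCommand(projectDir, config, flags)` signature comes from the steps above:

```js
// cli/commands/hello.mjs: hypothetical example command (not part of DocGuard).
// Commands may import from validators, but never from other commands.
import { readFileSync } from 'node:fs';
import { join } from 'node:path';

export function runHello(projectDir, config, flags) {
  const verbose = flags.includes('--verbose');
  const pkg = JSON.parse(readFileSync(join(projectDir, 'package.json'), 'utf8'));
  console.log(`Hello from ${pkg.name ?? 'unnamed project'}`);
  if (verbose) console.log(`Loaded config keys: ${Object.keys(config).join(', ')}`);
  return 0; // exit code, by convention in this sketch
}
```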
To add a new validator (a sketch of the guard wiring follows these steps):

- Create `cli/validators/your-validator.mjs`
- Import it in `cli/commands/guard.mjs`
- Add enable/disable support in the `.docguard.json` validators config
- Add tests
- Update `docs-canonical/ARCHITECTURE.md` with the new validator
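The enable/disable step might look roughly like this inside `cli/commands/guard.mjs`, assuming the loaded `.docguard.json` exposes a `validators` map of booleans; the real config shape and function names may differ:

```js
// Hypothetical wiring inside cli/commands/guard.mjs (sketch only).
import { validateYourValidator } from '../validators/your-validator.mjs';

export function runGuard(projectDir, config, flags) {
  const results = [];
  // Assumed config shape: { "validators": { "your-validator": false } }
  if (config?.validators?.['your-validator'] !== false) {
    results.push(validateYourValidator(projectDir, config));
  }
  return results;
}
```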
Follow Conventional Commits:
```
feat: add new validator for API docs
fix: handle missing package.json gracefully
docs: update ARCHITECTURE.md with new component
refactor: extract scoring logic into shared function
test: add edge case tests for score command
```
Before opening a pull request:

- Ensure `npm test` passes with no failures
- Ensure `node cli/docguard.mjs guard` passes
- Update `CHANGELOG.md` under `[Unreleased]`
- Update relevant docs in `docs-canonical/` if the architecture changed
- Request review
Code style:

- ES Modules (`import`/`export`) throughout
- Node.js built-ins only (`node:fs`, `node:path`, `node:child_process`, `node:test`)
- ANSI colors via the shared `c` object from `docguard.mjs` (see the sketch below)
- No TypeScript; plain JavaScript for maximum portability
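Taken together, a snippet in this style might look like the following; the `green`/`reset` fields on `c` are assumptions for the sketch, so check `docguard.mjs` for the real helper names:

```js
// Hypothetical snippet following the style rules: ES modules, node: builtins
// only, colors via the shared `c` object (stubbed here for the sketch).
import { readFileSync } from 'node:fs';
// import { c } from './docguard.mjs';  // the real shared color object
const c = { green: '\x1b[32m', reset: '\x1b[0m' }; // stand-in with assumed fields

const pkg = JSON.parse(readFileSync('package.json', 'utf8'));
console.log(`${c.green}ok${c.reset} ${pkg.name} sticks to ES modules and node: builtins`);
```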
To report a bug, open a GitHub issue with:

- DocGuard version (`docguard --version`)
- Node.js version (`node --version`)
- OS and version
- Steps to reproduce
- Expected vs. actual behavior
DocGuard's architecture is informed by peer-reviewed research in AI-driven documentation generation and multi-agent quality evaluation. We gratefully acknowledge the following contributions:
- Martin Manuel Lopez · ORCID 0009-0002-7652-2385
  - Ph.D. Candidate, Dept. of Electrical and Computer Engineering, University of Arizona
  - Lead author on AITPG and TRACE, the two papers that informed DocGuard's quality evaluation, multi-perspective analysis, and standards-grounded generation patterns
The following papers directly influenced DocGuard's design:
[1] M. M. Lopez, M. W. U. Rahman, C. Farthing, J. Battle, K. Buckley, G. Altintarla, and S. Hariri, "AITPG: Agentic AI-Driven Test Plan Generator using Multi-Agent Debate and Retrieval-Augmented Generation," IEEE Transactions on Software Engineering, 2026. — Introduced the three-stage pipeline (generate → debate → calibrated evaluation), RAG-grounded standards alignment, and multi-agent role specialization (Positive/Negative/Edge + Critic) for documentation generation.
[2] M. M. Lopez, M. W. U. Rahman, C. Farthing, J. Battle, K. Buckley, G. Altintarla, and S. Hariri, "TRACE: Telecommunications Root Cause Analysis through Calibrated Explainability via Multi-Agent Debate," IEEE Transactions on Machine Learning in Communications and Networking, 2026. — Introduced Calibrated Judge Evaluation (CJE) with weighted multi-signal composite scoring, HIGH/MEDIUM/LOW quality labels, the "equalizer effect" for agent-aware prompt scaling, and adversarial debate (Advocate/Challenger/Mediator/Explainer) for robust quality assessment.
| DocGuard Feature | Research Origin | Paper |
|---|---|---|
| Quality labels (HIGH/MED/LOW) in `guard` output | CJE quality stratification | TRACE [2] |
| Standards citations in generated docs | RAG-grounded standards alignment | AITPG [1] |
| Multi-signal composite scoring in `score` | 5-signal weighted composite (Eq. 1) | TRACE [2] |
| Traceability matrix (`trace` command) | Requirements traceability | AITPG [1] |
| Multi-perspective `diagnose --debate` prompts | Multi-agent role specialization | AITPG [1], TRACE [2] |
| Agent-aware prompt complexity | CJE equalizer effect | TRACE [2] |
By contributing, you agree that your contributions will be licensed under the MIT License.