Stop writing docs. Let AI do it.
LazyDocs uses Groq's lightning-fast AI to generate professional documentation from your codebase. READMEs, PR descriptions, changelogs—all in seconds.
- Fast - Powered by Groq's LLM inference (seriously fast)
- Smart - Analyzes your actual code structure
- Easy - Interactive CLI that just works
- Free - Groq API is free to use
```bash
npm install -g @tfkedar/lazydocs
```

Get a free API key from console.groq.com, then:
```bash
lazydocs config set GROQ_API_KEY=your_key_here
```

For the guided experience, run:

```bash
lazydocs generate --interactive
```

Walks you through everything. Pick what to generate, choose your model, done.
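If you script LazyDocs in CI, you may want to bail out gracefully when no key is available. A minimal sketch (the wrapper function and reading the key from a `GROQ_API_KEY` environment variable are our convention here, not something LazyDocs itself requires):

```shell
#!/bin/sh
# Sketch: only generate docs when a Groq key is available in the environment.
generate_docs() {
  if [ -z "${GROQ_API_KEY:-}" ]; then
    # No key in the environment: skip instead of failing the pipeline.
    echo "GROQ_API_KEY not set; skipping doc generation"
    return 0
  fi
  # Store the key in LazyDocs config, then generate the README.
  lazydocs config set GROQ_API_KEY="$GROQ_API_KEY"
  lazydocs generate --type readme
}

generate_docs
```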
```bash
# Generate README
lazydocs generate --type readme

# Generate PR description
lazydocs generate --type pr

# Generate changelog from git history
lazydocs generate --type changelog
```

With all options spelled out:

```bash
lazydocs generate \
  --type readme \
  --input ./src \
  --model llama-3.1-8b-instant \
  --temperature 0.7 \
  --verbose
```

LazyDocs scans your code (JS, TS, JSX, TSX) and generates:
- READMEs - Project overview, installation, usage, API docs
- PR Descriptions - Summary of changes from git diff
- Changelogs - Categorized release notes from commits
Choose your speed vs quality:
- `llama-3.1-70b-versatile` - Best quality (default)
- `llama-3.1-8b-instant` - Fastest
- `mixtral-8x7b-32768` - Huge context window
- `gemma2-9b-it` - Good balance
```bash
lazydocs models   # See all available
```

For example, in a project:

```bash
cd my-awesome-project
lazydocs generate --type readme
```

Creates README.md with:
- Project overview
- Installation steps
- Usage examples
- API documentation
```bash
git add .
lazydocs generate --type pr --output PR.md
```

Analyzes your changes and writes a clear PR description.
```bash
lazydocs generate --type changelog
```

Reads your git history and creates a formatted changelog.
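To picture what "categorized release notes from commits" means, here is the idea shown with plain `grep`/`sed` over conventional-commit subjects. This is illustrative only, not LazyDocs's internal logic, and the sample commit messages are made up:

```shell
#!/bin/sh
# Illustrative: group commit subjects by their conventional-commit prefix,
# the way a changelog generator categorizes them.
commits="feat: add --interactive mode
fix: handle empty git diff
feat: support TSX files"

echo "### Features"
printf '%s\n' "$commits" | grep '^feat:' | sed 's/^feat: /- /'
echo "### Fixes"
printf '%s\n' "$commits" | grep '^fix:' | sed 's/^fix: /- /'
```

LazyDocs does this with an LLM over your real `git log`, so the categories come out readable even when your commit messages don't follow a convention.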
Config lives in ~/.lazydocs:
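The file format isn't documented here; presumably it is a simple key=value store mirroring the `config set` syntax (an assumption, so inspect your own `~/.lazydocs` to confirm):

```
GROQ_API_KEY=your_key
```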
```bash
# Set API key
lazydocs config set GROQ_API_KEY=your_key

# View all config
lazydocs config list

# Get specific value
lazydocs config get GROQ_API_KEY
```

All flags for the generate command:

```
lazydocs generate [options]

Options:
  -i, --input <dir>     Code directory (default: "./src")
  -o, --output <file>   Output file (auto-detected)
  -t, --type <type>     readme | pr | changelog (default: "readme")
  -m, --model <model>   AI model to use
  --temperature <n>     Creativity 0-1 (default: 0.7)
  --max-tokens <n>      Max response length (default: 2048)
  --interactive         Interactive mode
  --verbose             Show details
  -h, --help            Show help
```
- Use `--interactive` for the easiest experience
- Try different models; faster isn't always worse
- Use `--verbose` to see what's happening
- Works best with well-structured code
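One way to keep generated docs fresh is a git hook. A sketch of a `pre-push` hook (the hook workflow and the guard are our idea, not part of LazyDocs; it skips quietly when the CLI isn't installed):

```shell
#!/bin/sh
# Sketch: save as .git/hooks/pre-push (and chmod +x) to refresh the
# changelog before each push. Skips quietly if lazydocs is not on PATH.
if command -v lazydocs >/dev/null 2>&1; then
  lazydocs generate --type changelog --output CHANGELOG.md
else
  echo "lazydocs not installed; skipping changelog refresh"
fi
```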
- Node.js 18 or higher
- Free Groq API key
Found a bug? Want a feature? Open an issue or submit a PR.
See CONTRIBUTING.md for guidelines.
MIT © Kedar Sathe