This guide shows you how to create world-class developer documentation that drives adoption and reduces support burden. It distills best practices from top-tier documentation like Redis, Stripe, Auth0, and Vercel to help you create documentation that developers love to use.
❌ Don't write: "This guide covers Handit's optimization features" ✅ Do write: "This quickstart shows you how to set up your autonomous engineer in 10 minutes"
Why: Users care about what they can accomplish, not what your tool does. Lead with outcomes.
❌ Don't write: "Setting Up Your Database" ✅ Do write: "Set up your database"
Why: Action-oriented headers feel more approachable and task-focused. Users are here to do something, not read about doing something.
Always start with: "This guide shows you how to:"
- Specific outcome 1
- Specific outcome 2
- Specific outcome 3
Include time estimates and prerequisites upfront. Throughout each guide, also provide:
- Screenshots/videos for every major step
- Code examples that users can copy-paste
- Expected outputs so users know they're on track
- Visual confirmation at each step
# [Action-Oriented Title]
> **[Value proposition in one sentence]** [Explain what users will accomplish and why it matters]
## What you'll learn
This quickstart shows you how to:
1. **[Specific outcome 1]** - Brief description
2. **[Specific outcome 2]** - Brief description
3. **[Specific outcome 3]** - Brief description
**Time required:** X minutes
**Prerequisites:** [List requirements with links]
<Callout type="info">
**Already have [prerequisite]?** Skip to [relevant section](#anchor) or [alternative action].
</Callout>
## Set up [main thing]
### Step 1: [Clear action]
```bash
command-here
```
[Explanation of what this does and why]
[Video/screenshot of the process]
[Natural explanation of the process]
What happens:
- Specific thing 1
- Specific thing 2
- Specific thing 3
[Video/screenshot showing results]
✅ Check [location]: You should see:
- Specific indicator 1
- Specific indicator 2
- Specific indicator 3
✅ Confirm [other location]: Look for:
- Another specific indicator
- Expected behavior
[Natural explanation without bullet lists]
[Example or analogy that makes it concrete]
✅ [Primary achievement]
You should now see:
- Specific result 1
- Specific result 2
- Specific result 3
Within [timeframe]:
- Expected outcome 1
- Expected outcome 2
Ongoing:
- Long-term benefit 1
- Long-term benefit 2
- [Action 1]: [Link] - [Brief description]
- [Action 2]: [Link] - [Brief description]
- [Resource 1]: [Link] - [What they'll learn]
- [Resource 2]: [Link] - [What they'll learn]
- [Support channel 1]: [Link] - [When to use]
- [Support channel 2]: [Link] - [When to use]
- [Advanced topic 1] - [Link] - [Brief description]
- [Advanced topic 2] - [Link] - [Brief description]
[Common issue 1]:
- Solution step 1
- Solution step 2
[Common issue 2]:
- Solution step 1
- Solution step 2
### Overview Page Template
```markdown
# [Clear Product/Feature Name]
> **[Value proposition]** [Explain the core benefit and why it matters]
[Natural paragraph explaining the problem this solves]
## The [problem] problem
[Relatable scenario that users face]
**This is the reality for most teams.** [Expand on the pain points]
[What if scenario showing the solution]
<Callout type="info">
[How your solution addresses this problem]
</Callout>
## How [solution] works
[Natural explanation without bullet lists, using storytelling]
[Concrete example or analogy]
## Real-world impact
[Stories or examples of how this helps teams]
## Key capabilities
### [Capability 1]
[Natural explanation of what this does and why it matters]
### [Capability 2]
[Natural explanation focusing on user benefits]
### [Capability 3]
[Natural explanation with concrete examples]
## Getting started
[Clear call-to-action with specific next steps]
<Cards.Card title="[Action-oriented title]" href="/link" arrow />
[Additional resources if relevant]
```
❌ Avoid:
- "Handit.ai's optimization system provides comprehensive quality assessment"
- "The system automatically improves your AI while you focus on building your product"
- "Leverage powerful language models to assess quality"
✅ Use:
- "Your autonomous engineer needs to know what's broken to fix it"
- "Stop being your AI's on-call engineer"
- "Picture this: It's 2 AM and your phone buzzes with an AI failure alert"
Structure: Problem → Impact → Solution → Benefit
Example:
- Problem: "Manual AI evaluation doesn't scale"
- Impact: "You can only check 50 out of 5,000 interactions"
- Solution: "AI-powered evaluation assesses every interaction"
- Benefit: "Your autonomous engineer can detect issues before users complain"
❌ Instead of:
Benefits:
- Automated assessment
- Consistent standards
- Real-time feedback
- Focused insights
✅ Write: "Automated evaluation removes the inconsistency of human review while providing real-time feedback on every interaction. Instead of generic quality scores, you get focused insights about specific quality dimensions that enable targeted improvements."
❌ Avoid: "The system detects quality issues and generates improvements"
✅ Use: "If your customer service AI starts giving incomplete responses at 2 AM, your autonomous engineer detects this pattern, generates a better system prompt, tests it against real conversations, and creates a PR to replace the problematic prompt—all while you sleep."
Always include:
- Fast track: "Already have X? Skip to Y"
- Deep dive: "Want to understand how this works? Read Z"
- Alternative approaches: "Prefer manual setup? See advanced guide"
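In an MDX framework like Nextra, a fast-track escape hatch can be sketched as a callout (the anchor and guide paths here are hypothetical):

```mdx
<Callout type="info">
  **Already have a project?** Skip to [Connect your app](#connect-your-app).
  Prefer manual setup? See the [advanced guide](/docs/advanced-setup).
</Callout>
```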
<video
width="100%"
autoPlay
loop
muted
playsInline
style={{ borderRadius: '8px' }}
>
<source src="/assets/path/video.mp4" type="video/mp4" />
Your browser does not support the video tag.
</video>

- Info callouts: For context or helpful tips
- Success callouts: For completion confirmations
- Warning callouts: For critical information or common mistakes
- Tip callouts: For pro tips and optimizations
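With Nextra's `<Callout>` component, for instance, these map onto the `type` prop. Nextra ships `default`, `info`, `warning`, and `error`; treat any `success` or `tip` variant as a project-specific assumption:

```mdx
<Callout type="info">
  **Tip:** You can reuse an existing API key instead of creating a new one.
</Callout>

<Callout type="warning">
  **Before migrating:** Back up your data. This step cannot be undone.
</Callout>
```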
- Always include filename: `filename="example.py"`
- Use realistic examples: Actual code users would write
- Include comments: Explain non-obvious parts
- Show expected output: When relevant
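In Nextra, for example, the filename renders as a label above the block when you add a `filename` attribute to the fence; the code below is a hypothetical illustration:

````markdown
```python filename="example.py"
# Initialize the client with your API key (found under Settings → API)
client = Client(api_key="sk_test_...")

# Send a test event so you can verify it in the dashboard
client.track("setup_complete")
```
````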
- Value proposition - Why this matters
- Learning outcomes - What users will accomplish
- Prerequisites - What they need before starting
- Step-by-step setup - How to do it
- Verification - How to confirm success
- Understanding - How it works conceptually
- Next steps - Where to go from here
- Troubleshooting - Common issues and solutions
- Start simple: Basic setup first
- Add complexity gradually: Advanced features later
- Provide escape hatches: "Need custom setup? See advanced guide"
- Multiple paths: Different approaches for different needs
- Hub and spoke: Main quickstart → specialized guides
- Clear progression: Setup → Understanding → Advanced features
- Contextual links: Link to relevant sections when mentioned
- No dead ends: Every page should have clear next steps
❌ Don't: List features and capabilities ✅ Do: Explain problems solved and outcomes achieved
❌ Don't: Break everything into bullet points ✅ Do: Use natural paragraphs with occasional lists for clarity
❌ Don't: "LLM-as-Judge leverages sophisticated language models" ✅ Do: "Use AI to evaluate AI—advanced models assess your AI's quality with human-level understanding"
❌ Don't: "Setup complete! Your system is now configured" ✅ Do: "Setup complete! You should see real-time data in your dashboard at [specific URL]"
❌ Don't: Assume everyone starts from scratch ✅ Do: Provide shortcuts for users who already have partial setup
- Clear value proposition in the first sentence
- Specific learning outcomes listed upfront
- Action-oriented headers throughout
- Natural, conversational language instead of technical jargon
- Concrete examples instead of abstract descriptions
- Problem-solution narrative that creates emotional connection
- Clear prerequisites with links to dependencies
- Step-by-step instructions that are easy to follow
- Visual confirmation for each major step
- Verification steps so users know they succeeded
- Multiple learning paths for different user types
- Clear next steps that guide users forward
- All commands tested and work as documented
- Code examples are copy-pasteable and functional
- Links work and point to current content
- Screenshots/videos are current and accurate
- Expected outputs match actual results
- Messaging alignment with overall product positioning
- Terminology consistency across all pages
- Visual styling matches design system
- Cross-references are accurate and helpful
- Tone and voice consistent throughout
- Time to first success - How quickly users complete quickstart
- Completion rates - Percentage who finish setup successfully
- Drop-off points - Where users abandon the process
- Return visits - How often users come back to docs
- Support ticket reduction - Fewer questions about covered topics
- Common questions - What users still struggle with
- Feature adoption - How quickly new features get adopted
- User feedback - Direct feedback on documentation quality
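Completion rates and drop-off points can be computed directly from raw analytics events. A minimal sketch (the event schema and step names are assumptions, not a prescribed format):

```python
def funnel_report(events, steps):
    """Count how many users reached each documented step,
    then report the completion rate and the biggest drop-off."""
    step_index = {s: i for i, s in enumerate(steps)}
    furthest = {}  # furthest step index reached per user
    for user, step in events:
        if step in step_index:
            furthest[user] = max(furthest.get(user, -1), step_index[step])

    # Users who reached at least step i
    reached = [sum(1 for f in furthest.values() if f >= i) for i in range(len(steps))]
    completion_rate = reached[-1] / reached[0] if reached[0] else 0.0

    # The step transition that loses the most users
    drops = [reached[i] - reached[i + 1] for i in range(len(steps) - 1)]
    biggest = steps[drops.index(max(drops)) + 1] if drops else None
    return {
        "reached": dict(zip(steps, reached)),
        "completion_rate": completion_rate,
        "biggest_drop_before": biggest,
    }

events = [
    ("u1", "install"), ("u1", "configure"), ("u1", "verify"),
    ("u2", "install"), ("u2", "configure"),
    ("u3", "install"),
]
report = funnel_report(events, ["install", "configure", "verify"])
print(report)
```

Pointing this at your real event stream tells you which step in a quickstart is silently losing users.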
- Faster onboarding - Reduced time from signup to value
- Higher activation - More users successfully implementing features
- Better retention - Users who understand the product stick around
- Community growth - Good docs drive word-of-mouth adoption
- Monitor user behavior - Use analytics to identify problem areas
- Collect feedback - Regular surveys and user interviews
- Test with real users - Watch people use your docs
- Iterate based on data - Update content based on evidence
- Measure impact - Track improvements in key metrics
- Audit existing content using the quality checklist
- Identify top user journeys and ensure they're well-documented
- Standardize templates for consistency across pages
- Update main entry points (overview, quickstart) first
- Rewrite feature pages using problem-solution narratives
- Add verification steps to all setup guides
- Include visual confirmation for major steps
- Create clear cross-reference strategy
- Add videos/screenshots for visual learners
- Create alternative learning paths for different user types
- Enhance troubleshooting with specific solutions
- Add "more info" sections for advanced users
- Monitor user behavior and identify improvement opportunities
- Collect user feedback and iterate based on insights
- A/B test different approaches to see what works best
- Keep content current as features evolve
- Nextra - React-based documentation framework
- Figma - Design mockups and user flows
- Loom - Quick video creation for walkthroughs
- GitHub Issues - Track documentation improvements
- Google Analytics - Track user behavior
- Hotjar - See how users interact with docs
- Typeform - Collect structured feedback
- Discord/Slack - Community feedback channels
- Grammarly - Writing assistance
- Hemingway Editor - Readability improvement
- Notion - Content planning and collaboration
- Miro - Information architecture planning
What Redis does well:
- Clear learning outcomes: "You'll learn how to: 1. Create account 2. Connect to database"
- Context-aware guidance: "If you already have an account, see..."
- Visual confirmation: Screenshots for every step
- Multiple connection methods: Different paths for different needs
What Stripe does well:
- Immediate code examples that work
- Progressive complexity (simple → advanced)
- Real-world scenarios and use cases
- Excellent error handling documentation
What Auth0 does well:
- Clear user journey mapping
- Multiple framework examples
- Security best practices integrated throughout
- Excellent troubleshooting sections
What Vercel does well:
- Deployment-focused outcomes
- Framework-specific guidance
- Performance optimization tips
- Community examples and templates
- Feature laundry lists without context or benefits
- Technical jargon without explanation
- Vague success criteria like "setup complete"
- Missing verification steps
- No clear next steps after completing a guide
- Too many nested sections that confuse navigation
- Inconsistent terminology across pages
- Broken or outdated links
- Missing prerequisites or assumptions about user knowledge
- No alternative paths for different user scenarios
- Walls of text without visual breaks
- Code examples that don't work
- Missing context about when to use different approaches
- No troubleshooting for common issues
- Outdated screenshots or videos
- Check for broken links across all pages
- Review user feedback and support tickets
- Update any outdated information
- Monitor analytics for drop-off patterns
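A minimal sketch of the link-audit step: extract every markdown link target so it can be checked against your site map or an HTTP client (this regex covers inline `[text](url)` links only, not reference-style links):

```python
import re

# Inline markdown links: [link text](target)
LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)\s]+)\)")

def extract_links(markdown_text):
    """Return (text, target) pairs for inline markdown links."""
    return LINK_RE.findall(markdown_text)

doc = """
See the [quickstart](/docs/quickstart) first,
then the [API reference](https://example.com/api).
"""
links = extract_links(doc)
print(links)
```

Run this over every page, then verify internal targets against your routes and external targets with HEAD requests.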
- User journey testing - Have someone new try the docs
- Content freshness review - Update examples and screenshots
- Cross-reference validation - Ensure all links are accurate
- Competitive analysis - See what others are doing well
- Major content restructuring based on user behavior data
- Template updates to incorporate new best practices
- Video/screenshot refresh to keep visuals current
- User research to understand changing needs
- Page completion rates - Users finishing guides successfully
- Time to first success - Speed of initial value achievement
- Return usage patterns - Users coming back to reference docs
- Community engagement - Questions, contributions, discussions
- Support ticket reduction - Fewer questions about documented topics
- Feature adoption rates - How quickly new features get used
- User satisfaction scores - Direct feedback on documentation quality
- Business metrics - Faster onboarding, higher activation, better retention
Remember: Great documentation is never finished—it's continuously improved based on user feedback and changing needs. Start with these principles and iterate based on what you learn from your users.