This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Botdocs is an npm CLI tool that converts markdown documentation into static websites with optional client-side semantic search chatbots. The chatbot uses vector embeddings and cosine similarity search with Transformers.js running entirely in the browser.
## Development Commands

```bash
# Build TypeScript (CLI and builder code)
npm run build

# Build client-side code (browser bundle)
npm run build:client

# Watch mode for development
npm run dev

# Link package globally for testing
npm link

# Run the CLI
botdocs ./test-docs

# Run with verbose output for debugging
botdocs ./test-docs -v

# Disable chatbot
botdocs ./test-docs --no-chat

# Prepublish hook automatically runs npm run build
npm publish
```

## Build System

The project has two separate build configurations that must both be executed:
1. **Server/CLI Build** (`npm run build`)
   - Uses the TypeScript compiler (`tsc`)
   - Config: `tsconfig.json`
   - Input: `src/` (excludes `src/client/`)
   - Output: `dist/`
   - Builds the Node.js CLI and builder code

2. **Client Build** (`npm run build:client`)
   - Uses the Vite bundler
   - Config: `tsconfig.client.json` + `vite.config.ts`
   - Input: `src/client/`
   - Output: `dist-client/`
   - Creates a browser-ready `bundle.js` with Transformers.js
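For orientation, the two builds typically map onto `package.json` scripts along these lines (an illustrative sketch; the actual script definitions in this repo may differ):

```json
{
  "scripts": {
    "build": "tsc -p tsconfig.json",
    "build:client": "vite build",
    "dev": "tsc -p tsconfig.json --watch",
    "prepublishOnly": "npm run build"
  }
}
```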
## Project Structure

```
src/
├── cli/                      # CLI entry point (Commander.js)
├── builder/                  # Build pipeline (runs at build time)
│   ├── index.ts              # Build orchestrator
│   ├── markdown-processor.ts # Parses markdown → HTML
│   ├── chunker.ts            # Splits docs into chunks
│   ├── embedder.ts           # Generates embeddings (build time)
│   ├── vector-db-builder.ts  # Creates vector-db.json
│   ├── site-generator.ts     # Generates HTML pages
│   └── template-engine.ts    # HTML templating
├── client/                   # Browser code (runs at runtime)
│   ├── main.ts               # Entry point
│   ├── navigation.ts         # Site navigation
│   ├── theme/                # Dark mode, etc.
│   └── chat/                 # Semantic search chatbot
│       ├── embedder.ts       # Client-side query embeddings
│       ├── vector-search.ts  # Cosine similarity search
│       ├── rag-engine.ts     # Search orchestration and result formatting
│       └── chatbox.ts        # UI component
├── types/                    # TypeScript type definitions
├── templates/                # HTML templates
└── styles/                   # CSS files (bundled at build time)
```
## Build Pipeline (Build Time)

When a user runs `botdocs ./docs`, the build process executes:

1. **Scan & Parse**: `markdown-processor.ts` finds `.md` files, parses front matter, and converts them to HTML with markdown-it
2. **Chunk & Embed**: `chunker.ts` splits docs by headings, `embedder.ts` generates 384-dim embeddings, and `vector-db-builder.ts` creates `vector-db.json`
3. **Generate Site**: `site-generator.ts` applies templates, builds navigation, and writes HTML files
4. **Bundle Assets**: copies `dist-client/bundle.js` and CSS to `output/assets/`
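The chunking step can be sketched as a heading-based splitter (an illustrative sketch, not the actual `chunker.ts`, which also applies the configured `chunkSize`/`chunkOverlap` limits):

```typescript
// Split a markdown document into chunks, starting a new chunk at each heading.
function splitByHeadings(markdown: string): string[] {
  const lines = markdown.split("\n");
  const chunks: string[] = [];
  let current: string[] = [];
  for (const line of lines) {
    // An ATX heading (#, ##, ...) closes the previous chunk, if any.
    if (/^#{1,6}\s/.test(line) && current.length > 0) {
      chunks.push(current.join("\n").trim());
      current = [];
    }
    current.push(line);
  }
  if (current.length > 0) chunks.push(current.join("\n").trim());
  return chunks.filter((c) => c.length > 0);
}

const doc = "# Intro\nHello.\n\n## Usage\nRun it.";
console.log(splitByHeadings(doc).length); // 2
```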
## Runtime Flow (Chatbot)

When a user opens the generated site and uses the chatbot:

1. **Initialization**: `embedder.ts` loads the e5-small-v2 embedding model via Transformers.js (cached after first load)
2. The user asks a question → `embedder.ts` embeds the query into a 384-dim vector
3. `vector-search.ts` searches `vector-db.json` via cosine similarity → retrieves the top-K most relevant chunks
4. `rag-engine.ts` formats the results into a conversational response → `chatbox.ts` displays them with citations

Note: The system uses semantic search only, with no text generation. Results are presented in a natural format but are direct excerpts from the documentation.
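The retrieval step can be sketched as brute-force cosine similarity over the DB entries (illustrative; the entry shape and the internals of `vector-search.ts` are assumptions):

```typescript
// Assumed shape of one vector-db.json entry.
interface VectorEntry {
  text: string;
  embedding: number[]; // 384-dim in the real DB
}

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k entries most similar to the query embedding.
function topK(query: number[], db: VectorEntry[], k: number): VectorEntry[] {
  return [...db]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}

const db: VectorEntry[] = [
  { text: "install guide", embedding: [1, 0] },
  { text: "api reference", embedding: [0, 1] },
];
console.log(topK([0.9, 0.1], db, 1)[0].text); // "install guide"
```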
## Key Design Decisions

- **ES Modules**: The package uses `"type": "module"`; all imports require `.js` extensions, even for `.ts` files
- **Path Resolution**: Builder code uses `fileURLToPath(import.meta.url)` and `resolve(__dirname, '../../..')` to navigate from `dist/` back to the project root
- **No Backend**: Generated sites are fully static; the vector DB and embedding model run in the browser
- **Semantic Search Only**: No LLM text generation; the chatbot retrieves and formats relevant documentation chunks
- **Build Time vs. Runtime**: Embeddings are generated twice: once at build time (Node.js, for all chunks) and once at runtime (browser, for user queries only)
- **Two tsconfigs**: `tsconfig.json` for Node.js (no DOM), `tsconfig.client.json` for the browser (DOM, no emit)
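The path-resolution pattern, sketched for an ES module (illustrative; the exact number of `..` segments depends on where the compiled file sits under `dist/`):

```typescript
// In an ES module there is no built-in __dirname, so it is reconstructed
// from import.meta.url, then resolved back toward the project root.
import { fileURLToPath } from "node:url";
import { dirname, resolve, isAbsolute } from "node:path";

const selfDir = dirname(fileURLToPath(import.meta.url));

// e.g. from dist/builder/index.js, three levels up reaches the package root
const projectRoot = resolve(selfDir, "../../..");
console.log(isAbsolute(projectRoot)); // true
```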
## Configuration

Users can create a `botdocs.config.json` in their docs directory:

```json
{
  "title": "My Documentation",
  "chat": { "enabled": true },
  "build": { "chunkSize": 500, "chunkOverlap": 50, "topK": 5 }
}
```

CLI options (`--no-chat`, `-v`, `-o`) override config file settings.
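The precedence rule can be sketched as a merge where explicit CLI values win (the config shape follows the example above; the merge helper and CLI-option shape are hypothetical, not taken from the repo):

```typescript
// Config shape mirroring the botdocs.config.json example.
interface BotdocsConfig {
  title?: string;
  chat?: { enabled?: boolean };
  build?: { chunkSize?: number; chunkOverlap?: number; topK?: number };
}

// Values parsed from CLI flags; --no-chat would set chat to false.
interface CliOverrides {
  chat?: boolean;
  output?: string;
  verbose?: boolean;
}

function mergeConfig(file: BotdocsConfig, cli: CliOverrides): BotdocsConfig {
  return {
    ...file,
    chat: {
      ...file.chat,
      // A CLI flag, when explicitly provided, beats the file; default is true.
      enabled: cli.chat ?? file.chat?.enabled ?? true,
    },
  };
}

const merged = mergeConfig(
  { title: "My Documentation", chat: { enabled: true } },
  { chat: false }, // user passed --no-chat
);
console.log(merged.chat?.enabled); // false
```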
## Common Tasks

### Adding a CLI Option

1. Update `src/cli/options.ts` with the new option type
2. Add the option to the Commander program in `src/cli/index.ts`
3. Pass it through to `build()` in `src/builder/index.ts`
4. Update the `BuildOptions` type in `src/types/config.ts`
### Modifying the Build Pipeline

1. Edit the relevant builder in `src/builder/` (markdown-processor, chunker, embedder, etc.)
2. Run `npm run build` to recompile
3. Test with `botdocs ./test-docs -v` to see verbose output
4. Check the generated files in `output/`
### Changing Client-Side Code

1. Edit files in `src/client/`
2. Run `npm run build:client` to rebuild the browser bundle
3. Run a full build: `botdocs ./test-docs`
4. Open `output/index.html` in a browser to test
## Debugging

- Use the `-v` flag for verbose logging
- Check that both builds succeeded: `ls dist/` and `ls dist-client/`
- Inspect the generated `output/vector-db.json` to verify embeddings
- Use browser DevTools to debug client-side code (sourcemaps are enabled)
## Key Dependencies

Build time:

- `@xenova/transformers`: Embedding generation during build
- `markdown-it` + plugins: Markdown parsing, syntax highlighting (Shiki)
- `gray-matter`: Front matter parsing
- `commander`: CLI argument parsing
- `fs-extra`, `glob`: File operations

Client side:

- Transformers.js: Loads the e5-small-v2 embedding model (384-dim, 2.2x faster than all-MiniLM-L6-v2)
- Bundled into a single `bundle.js` via Vite
Note: @mlc-ai/web-llm is listed in dependencies but may not be actively used - check model-loader.ts for current implementation.