🔍 Semantic search across your Claude Code conversation history
Agent Genesis indexes and searches your Claude Code and Claude.ai/Desktop conversations, making it easy to find past discussions, decisions, and solutions.
🔒 Privacy First: Runs 100% locally. No API keys required. Your conversations never leave your machine.
Ask questions like:
- "Find my conversations about authentication"
- "When did I discuss database optimization?"
- "What was that solution for the API rate limiting issue?"
Agent Genesis finds relevant conversations using semantic search (meaning-based, not just keyword matching).
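Conceptually, semantic search ranks text by the similarity of embedding vectors rather than by shared keywords. A minimal illustration using cosine similarity over toy 3-dimensional vectors (the real system uses the bge-small embedding model, not these hand-made values):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy embeddings: "login" and "authentication" point in similar directions
# even though the words share no characters, so they match semantically.
login = [0.9, 0.1, 0.0]
auth = [0.8, 0.2, 0.1]
weather = [0.0, 0.1, 0.9]

assert cosine_similarity(login, auth) > cosine_similarity(login, weather)
```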
- Prerequisites
- Quick Start
- Stopping the Service
- Importing Claude.ai Data
- Searching Your Conversations
- MCP Integration
- Troubleshooting
- How It Compares
- Configuration Reference
- Contributing
Before you start, you need:
- Docker (version 20.10+) - Install Docker
- 6GB+ RAM available for Docker (required for the local embedding model)
- Claude Code conversations in your projects folder (created automatically when you use Claude Code)
Verify Docker is installed:
```bash
docker --version
# Should output: Docker version 20.10.x or higher
```

Find your Claude Code projects path:
| OS | Path |
|---|---|
| Linux | ~/.claude/projects/ |
| macOS | ~/.claude/projects/ |
| Windows | C:\Users\YourName\.claude\projects\ |
| Windows (WSL) | /home/youruser/.claude/projects/ |
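As the table shows, every platform keeps the projects folder under the home directory, so it can be located programmatically. A small illustrative snippet (not part of the project):

```python
from pathlib import Path

# Per the table above, all platforms keep projects under the home directory.
projects_path = Path.home() / ".claude" / "projects"

if projects_path.is_dir():
    # Folder names encode the project path, e.g. -home-user-project-myapp
    for entry in sorted(projects_path.iterdir()):
        print(entry.name)
else:
    print(f"Not found: {projects_path} - set CLAUDE_PROJECTS_PATH manually")
```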
```bash
git clone https://github.com/Platano78/agent-genesis.git
cd agent-genesis
cp .env.example .env
cp docker-compose.template.yml docker-compose.yml
```

Open `.env` in any text editor and set your Claude Code projects path.

Linux/macOS:

```bash
CLAUDE_PROJECTS_PATH=/home/youruser/.claude/projects
```

Windows (use forward slashes):

```bash
CLAUDE_PROJECTS_PATH=C:/Users/YourName/.claude/projects
```

Verify your path exists:
```bash
# Linux/macOS
ls ~/.claude/projects/

# Windows (PowerShell)
dir $env:USERPROFILE\.claude\projects
```

You should see folders with names like `-home-user-project-myapp`.

Start the service:

```bash
docker-compose up -d
```

Wait about 30 seconds for startup, then verify it's running:
```bash
curl http://localhost:8080/health
# Should return: {"status": "OK", ...}
```

Windows (PowerShell):

```powershell
Invoke-RestMethod http://localhost:8080/health
```

Next, index your conversations:

```bash
curl -X POST http://localhost:8080/index/trigger
```

This scans your Claude Code conversations and indexes them. The first run may take 1-5 minutes depending on history size.
```bash
curl -X POST http://localhost:8080/search \
  -H "Content-Type: application/json" \
  -d '{"query": "authentication", "n_results": 5}'
```

Example output:
```json
{
  "query": "authentication",
  "results_count": 3,
  "results": [
    {
      "content": "Let me help you implement JWT authentication...",
      "project": "my-webapp",
      "score": 0.89
    },
    {
      "content": "For OAuth2, you'll need to set up...",
      "project": "api-server",
      "score": 0.76
    }
  ]
}
```

🎉 Done! Your Claude Code conversations are now searchable.
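The same `/search` endpoint can be scripted. A minimal stdlib-only sketch, assuming the service is up on `localhost:8080`; the `build_payload` and `search` helpers are illustrative wrappers, not part of the project:

```python
import json
import urllib.request

API = "http://localhost:8080"

def build_payload(query: str, n_results: int = 5) -> dict:
    """Request body for POST /search, matching the curl example above."""
    return {"query": query, "n_results": n_results}

def search(query: str, n_results: int = 5) -> dict:
    """POST a search query and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{API}/search",
        data=json.dumps(build_payload(query, n_results)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for hit in search("authentication")["results"]:
        print(f'{hit["score"]:.2f}  {hit["project"]}: {hit["content"][:60]}')
```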
When you're done, stop the service to free up resources:
```bash
# Stop the containers
docker-compose down

# To restart later
docker-compose up -d
```

Note: Your indexed data is preserved; you don't need to re-index after restarting.
⚠️ Important: Claude.ai and Claude Desktop store conversations in the cloud, not on your computer. You must request a data export from Anthropic.
- Go to claude.ai
- Click your profile → Settings → Account
- Request a data export
- Wait for the email (can take 24-48 hours)
- Download the ZIP file
The indexer automatically detects data-*.zip files in the exports volume. Copy your ZIP into the container's exports directory, then trigger indexing:
```bash
# Copy the ZIP into the container's exports volume
docker cp /path/to/your/data-export.zip agent-genesis:/app/data/exports/

# Trigger indexing (picks up the new export automatically)
curl -X POST http://localhost:8080/index/trigger
```

Then check the stats:

```bash
curl http://localhost:8080/stats
```

You should see counts in both the alpha (Claude Code) and beta (Claude.ai) collections.
Search across everything:

```bash
curl -X POST http://localhost:8080/search \
  -H "Content-Type: application/json" \
  -d '{"query": "how to fix the login bug", "n_results": 10}'
```

Search only Claude Code conversations (alpha collection):

```bash
curl -X POST http://localhost:8080/search \
  -H "Content-Type: application/json" \
  -d '{"query": "database", "collections": ["alpha"]}'
```

Search only Claude.ai conversations (beta collection):

```bash
curl -X POST http://localhost:8080/search \
  -H "Content-Type: application/json" \
  -d '{"query": "project planning", "collections": ["beta"]}'
```

Filter by project:

```bash
curl -X POST http://localhost:8080/search \
  -H "Content-Type: application/json" \
  -d '{"query": "API design", "project_filter": "my-project"}'
```

Search your conversations directly from Claude Code using the MCP protocol.
How it works: The MCP server runs on your host machine and acts as a bridge to the Agent Genesis API running in Docker.
Run this on your host machine (not inside Docker):
```bash
pip install agent-genesis-mcp
```

Add to your `~/.claude.json`:
```json
{
  "mcpServers": {
    "agent-genesis": {
      "command": "agent-genesis-mcp",
      "args": []
    }
  }
}
```

Restart Claude Code to load the new MCP server.
Ask Claude Code:
"Search my conversation history for discussions about authentication"
Claude will use the search_conversations tool automatically.
| Tool | Description |
|---|---|
| `search_conversations` | Search your conversation history |
| `get_api_stats` | See indexed conversation counts |
| `check_api_health` | Check if the API is running |
| `index_conversations` | Trigger re-indexing |
| `manage_scheduler` | Set up automatic re-indexing |
```bash
# Check container status
docker-compose ps

# View logs for errors
docker-compose logs agent-genesis
```

- Wait 30 seconds after `docker-compose up -d`
- Check if the container is running: `docker-compose ps`
- Check the logs: `docker-compose logs agent-genesis`
Edit `docker-compose.yml` and change the port:

```yaml
ports:
  - "8081:8080"  # Change 8081 to any free port
```

Then restart: `docker-compose down && docker-compose up -d`
1. Check if data is indexed:

   ```bash
   curl http://localhost:8080/stats  # Should show count > 0
   ```

2. If the count is 0, trigger indexing:

   ```bash
   curl -X POST http://localhost:8080/index/trigger
   ```

3. Verify your `.env` path is correct:

   ```bash
   ls ~/.claude/projects/  # Should show folders
   ```
The service needs ~4-6GB RAM for the embedding model. Increase Docker memory:
1. Edit `docker-compose.yml`:

   ```yaml
   mem_limit: 8g
   ```

2. Restart: `docker-compose down && docker-compose up -d`
- Ensure you're using an official Anthropic data export
- The ZIP must contain `conversations.json`
- Check the import output for specific errors
- Verify the Docker container is running: `curl http://localhost:8080/health`
- Verify the MCP package is installed: `pip show agent-genesis-mcp`
- Check Claude Code logs for MCP errors
| Feature | Agent Genesis | episodic-memory | claude-mem |
|---|---|---|---|
| Deployment | Docker container | Claude Code plugin | Claude Code plugin |
| Claude Code | ✅ | ✅ | ✅ |
| Claude.ai/Desktop | ✅ ZIP import | ❌ | ❌ |
| REST API | ✅ | ❌ | ❌ |
| MCP Server | ✅ | ✅ | ✅ |
| 100% Local | ✅ | ✅ | ✅ |
Choose Agent Genesis if you:
- Need a REST API for external integrations
- Want to search Claude.ai/Desktop history (only option with ZIP import)
- Prefer a standalone service over a Claude Code plugin
| Variable | Description | Default |
|---|---|---|
| `CLAUDE_PROJECTS_PATH` | Path to Claude Code projects | Required |
| `API_PORT` | Port for the API | `8080` |
| `LOG_LEVEL` | Logging level | `INFO` |
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check (collection stats + disk usage) |
| `/health/deep` | GET | Deep health check with ChromaDB search validation |
| `/live` | GET | Liveness probe (no DB calls, always fast) |
| `/ready` | GET | Readiness probe (collection stats + disk usage) |
| `/stats` | GET | Conversation counts per collection |
| `/search` | POST | Search conversations |
| `/index/trigger` | POST | Trigger indexing |
| `/index/status` | GET | Poll current indexing job status (`no_job`, `running`, `complete`, `failed`) |
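After POSTing to `/index/trigger`, a script can poll `/index/status` until it reaches a terminal state. A stdlib-only sketch: the status names come from the table above, but the response field name (`status`) and the polling cadence are assumptions, not documented behavior:

```python
import json
import time
import urllib.request

API = "http://localhost:8080"
TERMINAL = {"no_job", "complete", "failed"}

def is_terminal(status: str) -> bool:
    """True once indexing has finished (or never started)."""
    return status in TERMINAL

def wait_for_indexing(poll_seconds: float = 5.0, timeout: float = 600.0) -> str:
    """Poll /index/status until a terminal state or timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with urllib.request.urlopen(f"{API}/index/status") as resp:
            # Assumes the JSON body carries a "status" field
            status = json.load(resp).get("status", "")
        if is_terminal(status):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("indexing did not finish in time")
```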
scripts/sync-and-index.sh rsyncs your Claude Code conversations to the remote host, triggers indexing, and optionally runs the Faulkner-DB relationship extractor after indexing completes. This chains together the sync + indexing + knowledge-graph update as a single nightly job.
Set these env vars (in a wrapper script, cron environment, or your scheduler's env section) to enable the extractor step:
| Variable | Purpose | Example |
|---|---|---|
| `GENESIS_REMOTE_HOST` | SSH target where Agent Genesis runs | `user@my-server` |
| `FAULKNER_REPO` | Local path to faulkner-db checkout (needed for extractor venv) | `~/project/faulkner-db` |
| `FAULKNER_LLM_ENDPOINT` | OpenAI-compatible base URL for relationship classification | `http://localhost:8081/v1` |
| `FALKORDB_HOST` / `FALKORDB_PORT` / `FALKORDB_PASSWORD` / `FALKORDB_GRAPH` | FalkorDB connection for extractor | See faulkner-db docs |
The extractor step is skipped gracefully if the faulkner-db venv or state file is missing, so the sync works fine without it. A 20-hour run-gate on reports/extraction_state.json mtime prevents re-running within the same day.
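The skip-and-gate behavior described above can be sketched with a file mtime check. This is an illustrative approximation, not the actual script logic; the helper name is hypothetical:

```python
import os
import time

def should_run_extractor(state_file: str, gate_hours: float = 20.0) -> bool:
    """Run only when the state file exists and was last touched
    more than gate_hours ago (mirrors the 20-hour run-gate)."""
    if not os.path.exists(state_file):
        return False  # skipped gracefully when faulkner-db state is missing
    age_seconds = time.time() - os.path.getmtime(state_file)
    return age_seconds >= gate_hours * 3600
```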
The script also rsyncs ~/.claude/docs/ to the remote host's QMD doc store (for session-handoff files and other persistent notes that should be searchable across sessions). Safe no-op if the directory doesn't exist.
┌─────────────────────────────────────────────────────────────┐
│ Your Computer (everything runs locally) │
│ │
│ ┌─────────────────┐ ┌───────────────────────────────┐ │
│ │ Claude Code │ │ Agent Genesis (Docker) │ │
│ │ │ │ │ │
│ │ ~/.claude/ │───▶│ ChromaDB + Local Embeddings │ │
│ │ projects/ │ │ (bge-small model) │ │
│ └─────────────────┘ │ │ │
│ │ ┌─────────────┐ ┌─────────────┐ │ │
│ ┌─────────────────┐ │ │ Alpha │ │ Beta │ │ │
│ │ MCP Server │───▶│ │ Claude Code │ │ Claude.ai │ │ │
│ │ (optional) │ │ └─────────────┘ └─────────────┘ │ │
│ └─────────────────┘ └───────────────────────────────┘ │
│ │
│ 🔒 No data leaves your machine. No API keys needed. │
└─────────────────────────────────────────────────────────────┘
```bash
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
python -m daemon.api_server
```

For the MCP server:

```bash
cd mcp-server
pip install -e ".[dev]"
pytest
```

Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests
- Submit a pull request
MIT License - see LICENSE
- MCP server updated to align package and runtime version metadata at `1.3.0`
- Added MCP resources and prompts for MCP 2025-11-25 compliance
- Added scheduler management and manual indexing MCP tools to the documented tool surface
- Fixed MCP validation scripts to verify the current release dynamically instead of hard-coding stale versions
- Improved daemon import fallback so missing optional dependencies report the real error cause
- Removed stale Claude Desktop LevelDB setup references from the public setup flow
- Generalized sync tooling so the tracked script no longer contains personal infrastructure hooks
- Complete documentation rewrite
- Fixed: Removed incorrect LevelDB claims (Claude.ai uses cloud storage)
- Added: Windows path instructions
- Added: Privacy statement (100% local)
- Added: Troubleshooting section
- Added: Stopping the service instructions
- Increased container memory to 6GB
- Added health monitoring scripts
- Added automated backups
- Initial release