- How It Works
- Session Lifecycle
- MCP Tools
- Progressive Disclosure
- Memory Hygiene
- Topic Key Workflow
- Project Structure
- CLI Reference
## How It Works

The agent proactively calls `mem_save` after significant work — structured, searchable, no noise. Engram trusts the agent to decide what's worth remembering — not a firehose of raw tool calls.

1. Agent completes significant work (bugfix, architecture decision, etc.)
2. Agent calls `mem_save` with a structured summary:
   - title: "Fixed N+1 query in user list"
   - type: "bugfix"
   - content: What/Why/Where/Learned format
3. Engram persists to SQLite with FTS5 indexing
4. Next session: the agent searches memory and gets relevant context
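The structured summary in step 2 can be pictured as a small payload. This Go sketch is illustrative: the field names follow the workflow above, but Engram's actual `mem_save` schema may differ.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Observation mirrors the fields named in the workflow above;
// Engram's real mem_save schema may differ.
type Observation struct {
	Title   string `json:"title"`
	Type    string `json:"type"`
	Content string `json:"content"`
}

// Marshal renders the observation as the JSON payload an agent
// would hand to the mem_save tool.
func (o Observation) Marshal() string {
	b, _ := json.Marshal(o)
	return string(b)
}

func main() {
	obs := Observation{
		Title:   "Fixed N+1 query in user list",
		Type:    "bugfix",
		Content: "What: batched lookups. Why: one query per row. Where: user list handler. Learned: watch ORM joins.",
	}
	fmt.Println(obs.Marshal())
}
```

The What/Why/Where/Learned content is free text; the structure lives in the title and type, which is what makes later full-text search and filtering cheap.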
## Session Lifecycle

```
Session starts → Agent works → Agent saves memories proactively
        ↓
Session ends → Agent writes session summary (Goal/Discoveries/Accomplished/Files)
        ↓
Next session starts → Previous session context is injected automatically
```
## MCP Tools

| Tool | Purpose |
|---|---|
| `mem_save` | Save a structured observation (decision, bugfix, pattern, etc.) |
| `mem_update` | Update an existing observation by ID |
| `mem_delete` | Delete an observation (soft-delete by default, hard-delete optional) |
| `mem_suggest_topic_key` | Suggest a stable topic_key for evolving topics before saving |
| `mem_search` | Full-text search across all memories |
| `mem_session_summary` | Save an end-of-session summary |
| `mem_context` | Get recent context from previous sessions |
| `mem_timeline` | Chronological context around a specific observation |
| `mem_get_observation` | Get the full content of a specific memory |
| `mem_save_prompt` | Save a user prompt for future context |
| `mem_stats` | Memory system statistics |
| `mem_session_start` | Register a session start |
| `mem_session_end` | Mark a session as completed |
| `mem_capture_passive` | Extract learnings from text output |
| `mem_merge_projects` | Merge project name variants into a canonical name (admin) |
## Progressive Disclosure

Token-efficient memory retrieval — don't dump everything, drill in:

1. `mem_search "auth middleware"` → compact results with IDs (~100 tokens each)
2. `mem_timeline observation_id=42` → what happened before/after in that session
3. `mem_get_observation id=42` → full, untruncated content
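The drill-in flow works because search results are compact while `mem_get_observation` returns the full text. A minimal sketch of that compaction, assuming a simple word cutoff (Engram's real truncation rules are not specified here):

```go
package main

import (
	"fmt"
	"strings"
)

// Compact truncates an observation's content to roughly n words,
// mimicking the compact search-result stage. The full text stays
// in the store and is fetched only when the agent drills in.
func Compact(content string, n int) string {
	words := strings.Fields(content)
	if len(words) <= n {
		return content
	}
	return strings.Join(words[:n], " ") + " …"
}

func main() {
	full := "Decided to move auth middleware into its own package so the HTTP server and MCP server can share token validation logic"
	fmt.Println(Compact(full, 8))
}
```

The payoff is budgetary: a dozen compact hits cost about as much context as one full observation, so the agent only pays for the memories it actually opens.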
## Memory Hygiene

- `mem_save` now supports `scope` (`project` by default, `personal` optional)
- `mem_save` also supports `topic_key`; with a topic key, saves become upserts (the same project + scope + topic updates the existing memory)
- Exact dedupe prevents repeated inserts in a rolling window (hash + project + scope + type + title)
- Duplicates update metadata (`duplicate_count`, `last_seen_at`, `updated_at`) instead of creating new rows
- Topic upserts increment `revision_count`, so evolving decisions stay in one memory
- `mem_delete` uses soft-delete by default (`deleted_at`), with an optional hard delete
- `mem_search`, `mem_context`, recent lists, and the timeline all ignore soft-deleted observations
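The exact-dedupe identity can be sketched as a digest over the fields listed above. This is a hypothetical Go sketch: the real store may hash different fields or use another digest.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// DedupeKey derives a stable identity from project + scope + type +
// title, joined with NUL separators so field boundaries can't blur
// (e.g. "ab"+"c" vs "a"+"bc"). Two saves with the same identity in
// the rolling window are treated as duplicates.
func DedupeKey(project, scope, typ, title string) string {
	h := sha256.Sum256([]byte(project + "\x00" + scope + "\x00" + typ + "\x00" + title))
	return fmt.Sprintf("%x", h[:8]) // short hex key for the window lookup
}

func main() {
	a := DedupeKey("engram", "project", "bugfix", "Fixed N+1 query in user list")
	b := DedupeKey("engram", "project", "bugfix", "Fixed N+1 query in user list")
	fmt.Println(a == b) // identical inputs collide on purpose → duplicate
}
```

On a hit, the store would bump `duplicate_count` and `last_seen_at` on the existing row rather than inserting a new one.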
## Topic Key Workflow

Use this when a topic evolves over time (architecture, long-running feature decisions, etc.):

1. `mem_suggest_topic_key(type="architecture", title="Auth architecture")`
2. `mem_save(..., topic_key="architecture-auth-architecture")`
3. A later change on the same topic → `mem_save(...)` with the same `topic_key` ⇒ the existing observation is updated (`revision_count` is incremented)

Different topics should use different keys (e.g. `architecture/auth-model` vs `bug/auth-nil-panic`) so they never overwrite each other.
`mem_suggest_topic_key` now applies a family heuristic for consistency across sessions:

- `architecture/*` for architecture, design, and ADR-like changes
- `bug/*` for fixes, regressions, errors, and panics
- `decision/*`, `pattern/*`, `config/*`, `discovery/*`, `learning/*` when detected
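The suggestion step can be pictured as a family prefix plus a slug of the title. The keyword rules and separator below are illustrative, not Engram's exact heuristic:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Slug lowercases a title and collapses runs of non-alphanumerics
// into single hyphens, giving a stable, filename-safe fragment.
func Slug(s string) string {
	s = strings.ToLower(s)
	s = regexp.MustCompile(`[^a-z0-9]+`).ReplaceAllString(s, "-")
	return strings.Trim(s, "-")
}

// SuggestTopicKey sketches the family heuristic: pick a family from
// keywords in the type and title, then append the title slug. The
// keyword list here is a stand-in for Engram's real rules.
func SuggestTopicKey(typ, title string) string {
	family := typ
	lower := strings.ToLower(typ + " " + title)
	switch {
	case strings.Contains(lower, "architecture") || strings.Contains(lower, "design"):
		family = "architecture"
	case strings.Contains(lower, "fix") || strings.Contains(lower, "panic") || strings.Contains(lower, "regression"):
		family = "bug"
	}
	return family + "-" + Slug(title)
}

func main() {
	fmt.Println(SuggestTopicKey("architecture", "Auth architecture"))
}
```

Because the key is derived deterministically from type and title, two sessions describing the same evolving topic converge on the same key and therefore the same upserted memory.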
## Project Structure

```
engram/
├── cmd/engram/main.go           # CLI entrypoint
├── internal/
│   ├── store/store.go           # Core: SQLite + FTS5 + all data ops
│   ├── server/server.go         # HTTP REST API (port 7437)
│   ├── mcp/mcp.go               # MCP stdio server (15 tools)
│   ├── setup/setup.go           # Agent plugin installer (go:embed)
│   ├── project/                 # Project name detection + similarity matching
│   │   └── project.go           # DetectProject, FindSimilar, Levenshtein
│   ├── sync/sync.go             # Git sync: manifest + compressed chunks
│   └── tui/                     # Bubbletea terminal UI
│       ├── model.go             # Screen constants, Model, Init()
│       ├── styles.go            # Lipgloss styles (Catppuccin Mocha)
│       ├── update.go            # Input handling, per-screen handlers
│       └── view.go              # Rendering, per-screen views
├── plugin/
│   ├── opencode/engram.ts       # OpenCode adapter plugin
│   └── claude-code/             # Claude Code plugin (hooks + skill)
│       ├── .claude-plugin/plugin.json
│       ├── .mcp.json
│       ├── hooks/hooks.json
│       ├── scripts/             # session-start, post-compaction, subagent-stop, session-stop
│       └── skills/memory/SKILL.md
├── skills/                      # Contributor AI skills (repo-wide standards + Engram-specific guardrails)
├── setup.sh                     # Links repo skills into .claude/.codex/.gemini (project-local)
├── assets/                      # Screenshots and media
├── DOCS.md                      # Full technical documentation
├── CONTRIBUTING.md              # Contribution workflow and standards
├── go.mod
└── go.sum
```
## CLI Reference

```
engram setup [agent]           Install/set up agent integration (opencode, claude-code, gemini-cli, codex)
engram serve [port]            Start HTTP API server (default: 7437)
engram mcp                     Start MCP server (stdio transport)
engram tui                     Launch interactive terminal UI
engram search <query>          Search memories
engram save <title> <msg>      Save a memory
engram timeline <obs_id>       Chronological context around an observation
engram context [project]       Recent context from previous sessions
engram stats                   Memory statistics
engram export [file]           Export all memories to JSON
engram import <file>           Import memories from JSON
engram sync                    Export new memories as a compressed chunk to .engram/
engram sync --all              Export ALL projects (ignore directory-based filter)
engram projects list           Show all projects with obs/session/prompt counts
engram projects consolidate    Interactive merge of similar project names [--all] [--dry-run]
engram projects prune          Remove projects with 0 observations [--dry-run]
engram obsidian-export         Export memories to an Obsidian vault (beta)
engram version                 Show version
```
- Agent Setup — connect your agent to Engram
- Plugins — what the OpenCode and Claude Code plugins add
- Obsidian Brain — visualize memories as a knowledge graph (beta)