167 changes: 167 additions & 0 deletions .env.example
@@ -0,0 +1,167 @@
# =============================================================================
# agentmemory configuration
# =============================================================================
#
# Copy this file to `~/.agentmemory/.env` (or to your project root if you
# prefer scoped config) and uncomment the lines you want to override.
#
# Every line is OFF by default — `agentmemory` runs out of the box with no
# LLM key, no embedding key, and no API auth. Set keys here only when you
# want to enable the corresponding feature.
#
# Run `npx @agentmemory/agentmemory init` to copy this file into place
# automatically. Run `npx @agentmemory/agentmemory doctor` to verify that
# the daemon reads the env you expect.
#
# Defaults are shown in comments. On the LLM detection path the first
# key present wins (see src/config.ts::detectProvider); the exact order
# is spelled out in section 1 below.

# -----------------------------------------------------------------------------
# 1. LLM provider — pick ONE
# -----------------------------------------------------------------------------
#
# Without a provider key, agentmemory runs in noop mode: observations are
# indexed via zero-LLM synthetic compression, hybrid search still works,
# but LLM-backed summarisation / reflection / consolidation are disabled.
# The detection order is OPENAI_API_KEY → MINIMAX_API_KEY → ANTHROPIC_API_KEY
# → GEMINI_API_KEY → OPENROUTER_API_KEY → noop.
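#
# Worked example (illustrative values): with both of these present,
#
#   OPENAI_API_KEY=sk-...
#   ANTHROPIC_API_KEY=sk-ant-...
#
# the detection path picks OpenAI, since the first key found wins.
# Anthropic would only be reached via FALLBACK_PROVIDERS=anthropic
# (see below) or by removing the OpenAI key.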

# OPENAI_API_KEY=sk-... # Used for OpenAI-compatible embeddings today. PR #307 will extend this to chat completions (DeepSeek, SiliconFlow, vLLM, LM Studio, Ollama via `/v1`).
# OPENAI_BASE_URL=https://api.openai.com # Override for OpenAI-compatible providers

# ANTHROPIC_API_KEY=sk-ant-...
# ANTHROPIC_MODEL=claude-sonnet-4-20250514 # Default Anthropic model
# ANTHROPIC_BASE_URL=https://api.anthropic.com # Override for Anthropic-compatible proxies / Azure AI Foundry

# GEMINI_API_KEY=... # Either env name works; GEMINI_API_KEY takes precedence
# GOOGLE_API_KEY=... # Alias for GEMINI_API_KEY when set alone (emits a one-time stderr hint)
# GEMINI_MODEL=gemini-2.5-flash # Default Gemini model (auto-detected GA model)

# OPENROUTER_API_KEY=sk-or-...
# OPENROUTER_MODEL=anthropic/claude-sonnet-4-20250514

# MINIMAX_API_KEY=...
# MINIMAX_MODEL=MiniMax-M2.7

# MAX_TOKENS=4096 # Cap LLM completion tokens for compression / summarise calls

# Opt-in Claude-subscription fallback (spawns @anthropic-ai/claude-agent-sdk
# child sessions). Off by default — the agent-sdk fallback can trigger
# Stop-hook recursion (#149 follow-up) when invoked from inside Claude Code.
# AGENTMEMORY_ALLOW_AGENT_SDK=true

# FALLBACK_PROVIDERS=anthropic,gemini # Comma-separated chain tried after the primary provider returns an error (e.g. rate limit)

# -----------------------------------------------------------------------------
# 2. Embedding provider — auto-detected, override via EMBEDDING_PROVIDER
# -----------------------------------------------------------------------------
#
# Without an embedding key, agentmemory runs in BM25-only mode for hybrid
# search. Detection order: EMBEDDING_PROVIDER override → GEMINI_API_KEY →
# OPENAI_API_KEY → VOYAGE_API_KEY → COHERE_API_KEY → OPENROUTER_API_KEY →
# local (Xenova/all-MiniLM-L6-v2, 384-dim).
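#
# Example (illustrative): a GEMINI_API_KEY set for the LLM above would
# also win embedding detection, so pin the embedding leg explicitly if
# you want Voyage vectors:
#
#   EMBEDDING_PROVIDER=voyage
#   VOYAGE_API_KEY=pa-...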

# EMBEDDING_PROVIDER=local # local | openai | voyage | cohere | gemini | openrouter

# VOYAGE_API_KEY=pa-... # Optimised for code embeddings

# COHERE_API_KEY=... # General-purpose embeddings

# Reuses OPENAI_API_KEY / OPENAI_BASE_URL above when EMBEDDING_PROVIDER=openai.
# OPENAI_EMBEDDING_MODEL=text-embedding-3-small # Embedding model when EMBEDDING_PROVIDER=openai
# OPENAI_EMBEDDING_DIMENSIONS=1536 # Required when the model is not in the known-models table
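#
# Example (illustrative; assumes an OpenAI-compatible local server such
# as Ollama on its `/v1` endpoint; the model and its 768 dimensions are
# placeholders, not tested defaults):
#
#   EMBEDDING_PROVIDER=openai
#   OPENAI_API_KEY=ollama            # dummy value; local servers usually ignore it
#   OPENAI_BASE_URL=http://localhost:11434/v1
#   OPENAI_EMBEDDING_MODEL=nomic-embed-text
#   OPENAI_EMBEDDING_DIMENSIONS=768  # required: model not in the known-models table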

# OPENROUTER_EMBEDDING_MODEL=openai/text-embedding-3-small # When EMBEDDING_PROVIDER=openrouter

# -----------------------------------------------------------------------------
# 3. Auth & security
# -----------------------------------------------------------------------------
#
# Bearer-token auth for the REST API + viewer + all integration plugins.
# Without a secret, REST endpoints are open on loopback. Set this when
# you expose the daemon beyond loopback or run behind a reverse proxy.

# AGENTMEMORY_SECRET=your-secret-here
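#
# Example (illustrative, run from a shell): mint a secret and send it
# as a bearer token:
#
#   AGENTMEMORY_SECRET=$(openssl rand -hex 32)
#   curl -H "Authorization: Bearer $AGENTMEMORY_SECRET" \
#     "http://localhost:3111/agentmemory/memories?latest=true"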

# -----------------------------------------------------------------------------
# 4. Search tuning
# -----------------------------------------------------------------------------

# BM25_WEIGHT=0.4 # Hybrid search weight for BM25 leg
# VECTOR_WEIGHT=0.6 # Hybrid search weight for vector leg
# AGENTMEMORY_GRAPH_WEIGHT=0.2 # Graph traversal bonus on smart-search ranking
# TOKEN_BUDGET=2000 # Max tokens injected via mem::context per session
# MAX_OBS_PER_SESSION=500 # Per-session observation cap before consolidation kicks in

# -----------------------------------------------------------------------------
# 5. Behaviour flags
# -----------------------------------------------------------------------------

# AGENTMEMORY_AUTO_COMPRESS=true # Run LLM compression on every observation batch (requires a provider key). Default off — synthetic compression handles most cases.
# AGENTMEMORY_INJECT_CONTEXT=true # Inject recalled memories back into agent prompts (#143). Default off — hooks capture observations but do not modify the conversation.
# CONSOLIDATION_ENABLED=true # Run the 4-tier consolidation pipeline (memories → semantic → procedural). Default off — opt in once you've measured the LLM cost.
# CONSOLIDATION_DECAY_DAYS=30 # Age (days) after which non-reinforced memories decay during consolidation
# GRAPH_EXTRACTION_ENABLED=true # Extract concept-graph edges on remember; powers the graph-traversal recall path
# GRAPH_EXTRACTION_BATCH_SIZE=8 # Memories per graph-extraction batch
# AGENTMEMORY_REFLECT=true # Periodically auto-synthesize lessons from memories
# AGENTMEMORY_DROP_STALE_INDEX=true # Drop on-disk BM25 / vector index on startup if dim guard fires (#248). Recovery toggle for stuck-state debugging.
# AGENTMEMORY_IMAGE_EMBEDDINGS=true # Enable image embeddings when an image provider is present (experimental).

# -----------------------------------------------------------------------------
# 6. CLI / runtime knobs
# -----------------------------------------------------------------------------

# AGENTMEMORY_TOOLS=all # core (7 tools, default) | all (51 tools) — surface exposed to MCP clients
# AGENTMEMORY_SLOTS=memory # Comma-separated plugin slot names the CLI should claim
# AGENTMEMORY_DEBUG=1 # Trace MCP shim probe + standalone fallback decisions to stderr
# AGENTMEMORY_FORCE_PROXY=1 # Skip the MCP shim livez probe and trust AGENTMEMORY_URL (for sandboxed MCP clients that can't reach localhost)
# AGENTMEMORY_PROBE_TIMEOUT_MS=2000 # MCP shim livez probe timeout
# AGENTMEMORY_URL=http://localhost:3111 # REST base URL — honored by status, doctor, MCP shim
# AGENTMEMORY_VIEWER_URL=http://localhost:3113 # Override the viewer URL printed by `agentmemory status`
# AGENTMEMORY_EXPORT_ROOT=~/agentmemory-backup # Default destination for `agentmemory export`

# STANDALONE_MCP=1 # MCP shim only — bypass the worker and run @agentmemory/mcp in-process
# STANDALONE_PERSIST_PATH=~/.agentmemory/local.db # Path used by the standalone MCP shim's local fallback store

# Snapshot exporter — periodic snapshots of state_store + stream_store.
# SNAPSHOT_ENABLED=true
# SNAPSHOT_DIR=~/.agentmemory/snapshots
# SNAPSHOT_INTERVAL=3600 # Seconds between snapshots

# Team sharing — when set, memories are scoped to (TEAM_ID, USER_ID) tuples.
# TEAM_MODE=shared
# TEAM_ID=acme
# USER_ID=rohit

# -----------------------------------------------------------------------------
# 7. Ports
# -----------------------------------------------------------------------------

# III_REST_PORT=3111 # REST API port (also affects viewer at +2)
# III_STREAMS_PORT=3112 # Streams API port
# III_ENGINE_URL=ws://localhost:49134 # iii-engine WebSocket URL (used by the worker)

# -----------------------------------------------------------------------------
# 8. iii engine pin
# -----------------------------------------------------------------------------
#
# agentmemory currently pins iii-engine to v0.11.2 — v0.11.6 introduces a
# new sandbox-everything-via-`iii worker add` model that agentmemory
# hasn't been refactored for yet. Override with AGENTMEMORY_III_VERSION
# only after migrating to the sandbox model manually.

# AGENTMEMORY_III_VERSION=0.11.2

# -----------------------------------------------------------------------------
# 9. Claude Code bridge (opt-in)
# -----------------------------------------------------------------------------

# CLAUDE_MEMORY_BRIDGE=true # Mirror compressed memories into Claude Code's CLAUDE.md
# CLAUDE_PROJECT_PATH=/path/to/your/project # Required when CLAUDE_MEMORY_BRIDGE=true
# CLAUDE_MEMORY_LINE_BUDGET=200 # Lines of memory CLAUDE.md should hold

# -----------------------------------------------------------------------------
# 10. Obsidian export (opt-in)
# -----------------------------------------------------------------------------

# OBSIDIAN_AUTO_EXPORT=true # Auto-export memories to an Obsidian vault on every consolidation
36 changes: 33 additions & 3 deletions CHANGELOG.md
@@ -6,13 +6,43 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),

## [Unreleased]

## [0.9.13] — 2026-05-15

Six PRs landed since v0.9.12 — `.env.example` discovery shipped (#372), CJK BM25 tokenizer landed (#344 / PR #362), `benchmark/load-100k.ts` load harness landed (#346 / PR #363), one-click deploy templates for fly.io / Railway / Render / Coolify added (#343 / PR #361), Gemini provider defaults moved to current GA models (#246 + #368 / PR #370), and the in-tree Python ecosystem story switched from a duplicate REST client to a one-page `iii-sdk` example (#342 / PR #364). Plus 14 Dependabot security advisories closed via Next.js + PostCSS bumps.

### Added

- **`.env.example` at repo root + bundled in the npm tarball** ([#372](https://github.com/rohitg00/agentmemory/issues/372), closes [#47](https://github.com/rohitg00/agentmemory/issues/47), [#293](https://github.com/rohitg00/agentmemory/issues/293), partial [#233](https://github.com/rohitg00/agentmemory/issues/233)). Every env var actually read by `src/` is now documented in one place, grouped by surface (LLM provider, embedding provider, auth, search tuning, behaviour flags, CLI runtime, ports, iii engine pin, Claude Code bridge, Obsidian export). Every line is commented out by default so the file ships as a config template, not a config. The npm package now lists `.env.example` in its `files` field so `npm i -g @agentmemory/agentmemory` carries it.

- **`agentmemory init` command**. Copies the bundled `.env.example` to `~/.agentmemory/.env` if that file doesn't already exist; refuses to overwrite an existing config and prints a diff command pointing at the latest template. Wired into the CLI help block alongside `status` / `doctor` / `demo` / `upgrade` / `mcp` / `import-jsonl`.

- **CI sync-checker for `.env.example`** (`scripts/check-env-example.mjs`). Walks every `.ts` / `.mts` / `.mjs` / `.js` file under `src/`, extracts `process.env["KEY"]` / `env["KEY"]` / `getMergedEnv()["KEY"]` / `getEnvVar("KEY")` references, and fails CI when `src/` reads an env var the template doesn't document (or vice versa). Plugged into `.github/workflows/ci.yml` after `npm test`. Initial bootstrap: 60 keys in sync. A sketch of the extraction pass follows this list.

- **CJK tokenizer for BM25 search** ([#344](https://github.com/rohitg00/agentmemory/issues/344), PR [#362](https://github.com/rohitg00/agentmemory/pull/362)). New `src/state/cjk-segmenter.ts` detects CJK input by Unicode block and routes to `@node-rs/jieba` (Chinese, native, no model download), `tiny-segmenter` (Japanese, pure JS, ~25 KB), or rule-based syllable-block split (Korean). Both segmenters declared in `optionalDependencies` so the base install stays lean; soft-fail with a one-time stderr hint when the dep is missing. Order-preserving single-pass tokenization across mixed CJK + non-CJK runs (regression test for `"abc 메모리 def 项目 ghi"` returns `["abc","메모리","def","项目","ghi"]`). A detection sketch follows this list.

- **`benchmark/load-100k.ts` load harness** ([#346](https://github.com/rohitg00/agentmemory/issues/346), PR [#363](https://github.com/rohitg00/agentmemory/pull/363)). Hand-rolled, dependency-free harness that seeds N synthetic memories against a local daemon at `http://localhost:3111` and records p50 / p90 / p99 latency + throughput for `POST /agentmemory/remember`, `POST /agentmemory/smart-search`, and `GET /agentmemory/memories?latest=true` across the matrix N ∈ {1k, 10k, 100k} × concurrency C ∈ {1, 10, 100}. Content drawn from a seedable `mulberry32` PRNG so re-running against the same build produces the same seed corpus (a PRNG sketch follows this list). Results land in `benchmark/results/load-100k-<short-git-sha>.json` (schema-versioned). Wired as `npm run bench:load`. See `benchmark/README.md` for the matrix and env knobs.

- **One-click deploy templates** for fly.io, Railway, Render, and Coolify ([#343](https://github.com/rohitg00/agentmemory/issues/343), PR [#361](https://github.com/rohitg00/agentmemory/pull/361)). Each template under `deploy/<platform>/` ships a multi-stage Dockerfile that `COPY --from=iiidev/iii:0.11.2`s the engine binary into a `node:22-slim` runtime, npm-installs `@agentmemory/agentmemory` under `/opt/agentmemory` with `iii-sdk` pinned via `package.json` overrides (avoids the caret-resolves-to-0.11.6 drift), and runs an entrypoint that rewrites the bundled `iii-config.yaml` to bind `0.0.0.0` + use absolute `/data` paths, chowns the platform-mounted volume to `node:node` via `gosu`, generates a first-boot HMAC secret, and exec's the agentmemory CLI as the unprivileged `node` user under `tini` (with `TINI_SUBREAPER=1`). Verified end-to-end on fly.io (machine in `iad`, 1 GB volume, healthcheck passing).

- **`examples/python/`** quickstart + observation/recall flow showing `iii-sdk` (Python) calling `mem::remember` / `mem::smart-search` / `mem::context` directly over `ws://localhost:49134` ([#342](https://github.com/rohitg00/agentmemory/issues/342), PR [#364](https://github.com/rohitg00/agentmemory/pull/364)). Replaces a duplicate-transport Python REST client (initial PR #360, closed) with a single-SDK story — the same `iii-sdk` install (`pip install iii-sdk` / `cargo add iii-sdk` / `npm install iii-sdk`) talks to every agentmemory function from any language.
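
The extraction step in the `scripts/check-env-example.mjs` entry above is essentially a regex scan per source file. A minimal sketch under the same assumptions (patterns simplified; the real checker also matches `getMergedEnv()["KEY"]` and diffs the collected set against the template):

```ts
import { readFileSync } from "node:fs";

// Collect env keys referenced as process.env["KEY"], env["KEY"], or
// getEnvVar("KEY") in one source file. Simplified versus the real script.
function extractEnvKeys(path: string): Set<string> {
  const src = readFileSync(path, "utf8");
  const keys = new Set<string>();
  const patterns = [
    /process\.env\[["']([A-Z0-9_]+)["']\]/g,
    /\benv\[["']([A-Z0-9_]+)["']\]/g,
    /getEnvVar\(["']([A-Z0-9_]+)["']\)/g,
  ];
  for (const re of patterns) {
    for (const match of src.matchAll(re)) keys.add(match[1]);
  }
  return keys;
}
```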
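
The CJK tokenizer entry above routes by Unicode block before any segmenter loads. A detection-only sketch (the ranges are the standard Unicode blocks; the route names are illustrative, and the real `src/state/cjk-segmenter.ts` additionally handles segmenter fallbacks and mixed runs):

```ts
type CjkRoute = "chinese" | "japanese" | "korean" | "non-cjk";

// Kana is checked first: Japanese text mixes kanji with hiragana or
// katakana, so any kana in the run signals the Japanese segmenter.
function detectCjkRoute(run: string): CjkRoute {
  if (/[\u3040-\u30ff]/.test(run)) return "japanese"; // Hiragana + Katakana
  if (/[\uac00-\ud7af]/.test(run)) return "korean";   // Hangul syllables
  if (/[\u4e00-\u9fff]/.test(run)) return "chinese";  // CJK Unified Ideographs
  return "non-cjk";
}

// e.g. "메모리" → "korean", "项目" → "chinese", "abc" → "non-cjk"
```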
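
The reproducible corpus in the load-harness entry above comes from `mulberry32`, a well-known 32-bit seedable PRNG. For reference, the standard algorithm in TypeScript (the wiring into the harness is illustrative):

```ts
// mulberry32: same seed, same sequence, so re-running the benchmark
// against the same build regenerates an identical synthetic corpus.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

// Illustrative use: deterministic synthetic memory content.
const rand = mulberry32(1234);
const content = `synthetic memory ${Math.floor(rand() * 1e6)}`;
```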

### Changed

- **Gemini provider defaults bumped to current GA models** (PR [#370](https://github.com/rohitg00/agentmemory/pull/370), closes [#368](https://github.com/rohitg00/agentmemory/pull/368), [#246](https://github.com/rohitg00/agentmemory/pull/246)). LLM default `gemini-2.0-flash` → `gemini-2.5-flash` (the moving `gemini-flash-latest` alias was rejected — release behaviour should be deterministic). Embedding default `text-embedding-004` → `gemini-embedding-001` (the previous default is deprecated and shuts down 2026-01-14 per `ai.google.dev/gemini-api/docs/deprecations`). Three implementation details ride along: (1) URL path `:batchEmbedContent` → `:batchEmbedContents`, (2) every request now sends `outputDimensionality: 768` so the returned vectors match `GeminiEmbeddingProvider.dimensions = 768` and the index-restore dim guard from #248 — no reindex needed, (3) returned vectors are L2-normalized before the result-array push because `gemini-embedding-001` does **not** normalize by default, unlike `text-embedding-004`, and without this the downstream cosine-similarity math silently collapses recall. `l2Normalize` warns once on a zero-norm embedding so operators can correlate index-quality dips with upstream regressions. A sketch of the normalization step follows below.
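
A minimal sketch of that normalization step (the `l2Normalize` name comes from the entry above; the warn-once plumbing is illustrative):

```ts
let warnedZeroNorm = false;

// Scale an embedding to unit length so cosine similarity reduces to a
// dot product. gemini-embedding-001 does not normalize its output,
// unlike the old text-embedding-004 default.
function l2Normalize(vec: number[]): number[] {
  const norm = Math.hypot(...vec);
  if (norm === 0) {
    if (!warnedZeroNorm) {
      console.warn("l2Normalize: zero-norm embedding; possible upstream regression");
      warnedZeroNorm = true;
    }
    return vec;
  }
  return vec.map((x) => x / norm);
}
```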

### Security

- **14 open Dependabot advisories closed via Next.js + PostCSS bumps** (PR [#348](https://github.com/rohitg00/agentmemory/pull/348)). Closed: 13 Next.js advisories (middleware/proxy bypass + SSRF on WebSocket upgrades + DoS via connection exhaustion + CSP-nonce XSS + image-opt DoS + RSC cache poisoning + beforeInteractive XSS + segment-prefetch routes) by bumping the website's Next.js to `^16.2.6`. Plus the PostCSS XSS-via-unescaped-`</style>` advisory closed by pinning to `^8.5.10` via `overrides` in `website/package.json`. Verified `npm audit --omit=dev` returns 0 and `npm run build` clean on Next 16.2.6. Dependabot now runs weekly against six update streams (npm × 5 paths + github-actions) per the new `.github/dependabot.yml`.

### Contributors

External contributors landed this release:

- [@fatinghenji](https://github.com/fatinghenji) — pre-cleanup work on the OpenAI-compatible LLM provider (PR #240 / PR #307); the universal-adapter shape will land in the next minor once branch maintenance catches up.
- [@AmmarSaleh50](https://github.com/AmmarSaleh50) — Gemini embedding migration with L2-norm + 768-dim plumbing (PR #246, folded into #370).
- [@yut304](https://github.com/yut304) — Gemini LLM default deprecation fix (PR #368, folded into #370).

Thanks also to the issue reporters whose precise repros drove the search-quality + viewer + config-template work this cycle.

### Performance

- Placeholder for per-release p50 / p90 / p99 numbers from `benchmark/load-100k.ts`. Each release should land a `benchmark/results/load-100k-<sha>.json` and reference the headline p99 here. Format suggestion: one bullet per (N, C) cell that materially regressed or improved versus the previous release. p99 is the capacity-planning number; p50 + throughput are context. See [`benchmark/README.md`](benchmark/README.md) for how to reproduce.

## [0.9.12] — 2026-05-13
