Store → Retrieve → Forget, just like humans do.
Quick Start · How It Works · API · OpenClaw · Docker
AI agents forget everything between sessions. MemVault gives them a real memory system:
- **Ebbinghaus Decay** – Memories fade over time. Old, unused memories weaken; frequently accessed ones stay strong.
- **Strength-Weighted Retrieval** – Results ranked by similarity × strength. Important memories surface first.
- **100% Local** – Ollama for LLM, sentence-transformers for embeddings, PostgreSQL for storage. No API keys needed.
- **Multi-Language** – Optionally translate memory summaries to your language via a local LLM.
- **Built-in Analytics** – Track memory distribution, access patterns, and health.
Built on memU as the extraction engine. MemVault adds: HTTP API, memory decay, weighted retrieval, multi-agent tracking, and one-command setup.
Running 3+ weeks with 12,000+ memories:
```
Total: 12,599 memories
Strong (≥0.8): 2,142 | Medium: 7,273 | Weak: 3,150 | Fading: 34
Avg strength: 0.51 | Max access count: 305
```
If you use OpenClaw:

```bash
clawhub install memvault
cd ~/.openclaw/workspace/skills/memvault
bash scripts/install.sh
```

Done. The `memvault` CLI is ready. Add the snippet from SKILL.md to your TOOLS.md.
```bash
git clone https://github.com/wjy9902/memvault.git
cd memvault
docker compose up -d
```

That's it. PostgreSQL, the embedding server, and MemVault are all running.
Note: You still need Ollama running on your host for LLM extraction:

```bash
# Install: https://ollama.com
ollama pull qwen2.5:3b
ollama serve
```

Or set `MEMVAULT_LLM_BASE_URL` to any OpenAI-compatible endpoint (OpenAI, Groq, etc.).
⚠️ Requires Python 3.13+ (a memU dependency). Check with `python3 --version`. If you have an older Python, use Docker (Option A) instead.
```bash
git clone https://github.com/wjy9902/memvault.git
cd memvault
bash scripts/setup.sh   # PostgreSQL + Ollama + Python deps
bash scripts/start.sh   # Start all services
```

```bash
# Store a memory
curl -X POST http://localhost:8002/memorize \
  -H "Content-Type: application/json" \
  -d '{"conversation": [{"role": "user", "content": "I love Python and dark mode"}], "user_id": "alice"}'

# Retrieve
curl -X POST http://localhost:8002/retrieve \
  -H "Content-Type: application/json" \
  -d '{"query": "what does the user prefer?", "user_id": "alice"}'

# Or use the CLI
memvault memorize-text alice "I love Python and dark mode" "Noted!"
memvault retrieve alice "user preferences"
memvault stats alice
```

```
Conversation → Extract (LLM) → Embed → Store (pgvector) → Retrieve (weighted) → Decay (daily)
                                          ↑                       │
                                          └───── access_count ────┘
```
Every memory has a strength (0.01–1.0) that decays over time:
```
strength = exp(-rate × days / (1 + damping × ln(1 + access_count)))
```
- New memories start at strength 1.0
- Unused memories fade over weeks
- Frequently accessed memories decay much slower
- Fading memories (strength < 0.1) are excluded from retrieval
- Run `/decay` daily via cron
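The curve above can be sketched in a few lines of Python. The defaults for `rate` and `damping` follow the `/decay` query example later in this README (`decay_rate=0.1`, `damping=0.5`), and clamping to the documented 0.01–1.0 range is an assumption about how the server bounds the value:

```python
import math

def memory_strength(days_since_access: float, access_count: int,
                    rate: float = 0.1, damping: float = 0.5) -> float:
    """Ebbinghaus-style decay: frequently accessed memories decay slower."""
    reinforcement = 1 + damping * math.log(1 + access_count)
    strength = math.exp(-rate * days_since_access / reinforcement)
    return max(0.01, min(1.0, strength))  # clamp to the documented 0.01-1.0 range

print(memory_strength(0, 0))    # 1.0 - new memories start at full strength
print(memory_strength(30, 0))   # ~0.05 - fading, excluded from retrieval
print(memory_strength(30, 50))  # decays much slower thanks to heavy access
```

Note how 30 unused days push a memory below the 0.1 fading threshold, while 50 accesses keep the same memory comfortably above it.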
```
Standard: rank = cosine_similarity(query, memory)
MemVault: rank = cosine_similarity(query, memory) × strength
```
A highly relevant but ancient memory scores lower than a moderately relevant recent one, matching how human recall works.
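A minimal sketch of the weighted ranking; the memory dict shape here is illustrative, not MemVault's actual schema:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank_memories(query_vec, memories):
    """Rank by similarity x strength; fading memories (strength < 0.1) are excluded."""
    scored = [
        (cosine_similarity(query_vec, m["embedding"]) * m["strength"], m)
        for m in memories
        if m["strength"] >= 0.1
    ]
    return [m["summary"] for _, m in sorted(scored, key=lambda t: -t[0])]

memories = [
    {"summary": "ancient but exact match", "embedding": [1.0, 0.0], "strength": 0.15},
    {"summary": "recent, moderately relevant", "embedding": [0.8, 0.6], "strength": 0.9},
    {"summary": "fading", "embedding": [1.0, 0.0], "strength": 0.05},
]
print(rank_memories([1.0, 0.0], memories))
# → ['recent, moderately relevant', 'ancient but exact match']
```

The exact match scores 1.0 × 0.15 = 0.15, the recent memory 0.8 × 0.9 = 0.72, so the recent one wins and the fading one never appears.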
| Method | Endpoint | Description |
|---|---|---|
| POST | `/memorize` | Store a conversation |
| POST | `/retrieve` | Search memories (strength-weighted) |
| POST | `/decay` | Run Ebbinghaus forgetting curve |
| GET | `/stats` | Memory statistics |
| GET | `/health` | Health check |
```json
{
  "conversation": [
    {"role": "user", "content": "I just finished reading Project Hail Mary"},
    {"role": "assistant", "content": "Great book! What did you think?"}
  ],
  "user_id": "alice"
}
```

```json
{"query": "what books has the user read?", "user_id": "alice", "limit": 5}
```

The response includes `summary`, `strength`, `score`, `access_count`, and `source_agent` for each memory.

```
POST /decay?user_id=alice&decay_rate=0.1&damping=0.5
```
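The same calls can be made from Python with only the standard library. This is a sketch against the endpoints above; the helper names, the base URL (the default `MEMVAULT_PORT`), and the response field access are assumptions, not MemVault's official client:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8002"  # default MEMVAULT_PORT

def build_memorize_payload(user_id: str, text: str) -> dict:
    """Build a /memorize body matching the request shape shown above."""
    return {"conversation": [{"role": "user", "content": text}], "user_id": user_id}

def post_json(path: str, payload: dict) -> dict:
    """POST a JSON body to the MemVault API and decode the JSON response."""
    req = urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# With a server running (e.g. docker compose up -d):
#   post_json("/memorize", build_memorize_payload("alice", "I love Python and dark mode"))
#   post_json("/retrieve", {"query": "user preferences", "user_id": "alice", "limit": 5})
```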
Tag memories with a `source_agent` via a metadata role:

```json
{
  "conversation": [
    {"role": "metadata", "content": "{\"source_agent\": \"research-bot\"}"},
    {"role": "user", "content": "Found 3 new papers on transformers"}
  ],
  "user_id": "team"
}
```

All configuration is via environment variables or a `.env` file:
| Variable | Default | Description |
|---|---|---|
| `MEMVAULT_DB_DSN` | `postgresql://...localhost:5432/memvault` | PostgreSQL connection |
| `MEMVAULT_EMBEDDING_URL` | `http://127.0.0.1:8001` | Embedding server |
| `MEMVAULT_LLM_BASE_URL` | `http://127.0.0.1:11434/v1` | LLM endpoint |
| `MEMVAULT_LLM_MODEL` | `qwen2.5:3b` | LLM model |
| `MEMVAULT_PORT` | `8002` | Server port |
| `MEMVAULT_TRANSLATION` | `false` | Enable translation |
| `MEMVAULT_TRANSLATION_LANG` | `Chinese` | Target language |
| `MEMVAULT_MEMORY_TYPES` | `event,knowledge` | Types to extract |
```bash
# OpenAI
MEMVAULT_LLM_BASE_URL=https://api.openai.com/v1
MEMVAULT_LLM_API_KEY=sk-...
MEMVAULT_LLM_MODEL=gpt-4o-mini

# Groq (fast & free)
MEMVAULT_LLM_BASE_URL=https://api.groq.com/openai/v1
MEMVAULT_LLM_API_KEY=gsk_...
MEMVAULT_LLM_MODEL=llama-3.1-8b-instant

# Local Ollama (default)
MEMVAULT_LLM_BASE_URL=http://127.0.0.1:11434/v1
MEMVAULT_LLM_MODEL=qwen2.5:3b
```

MemVault was built for OpenClaw multi-agent systems.
## MemVault

```bash
memvault memorize-text "<user_id>" "<content>" "<context>"
memvault retrieve "<user_id>" "<query>"
```

- API: 127.0.0.1:8002 | Embedding: 127.0.0.1:8001

Schedule daily decay via cron:

```
0 3 * * * curl -s -X POST 'http://127.0.0.1:8002/decay?user_id=YOUR_USER'
```
```bash
# Start everything
docker compose up -d

# Check health
curl http://localhost:8002/health

# View logs
docker compose logs -f memvault

# Stop
docker compose down

# Stop and remove data
docker compose down -v
```

```bash
# Use OpenAI instead of local Ollama
MEMVAULT_LLM_BASE_URL=https://api.openai.com/v1 \
MEMVAULT_LLM_API_KEY=sk-... \
MEMVAULT_LLM_MODEL=gpt-4o-mini \
docker compose up -d
```

```
memvault/
├── memvault_server.py    # Main API server (FastAPI)
├── embedding_server.py   # Local embedding server
├── memvault              # CLI tool
├── Dockerfile            # Multi-stage build
├── docker-compose.yml    # One-command deployment
├── scripts/
│   ├── setup.sh          # Native installation
│   └── start.sh          # Start services
├── examples/
│   └── basic_usage.py    # Python examples
├── docs/
│   └── ARCHITECTURE.md   # Technical architecture
├── .env.example          # Config template
└── requirements.txt
```
PRs welcome! Ideas:
- SQLite backend (simpler deployments)
- Memory export/import (JSON/Markdown)
- Web dashboard for visualization
- Alternative decay strategies
- Memory consolidation (merge similar)
- LangChain / AutoGen integration examples
See CONTRIBUTING.md.
Apache 2.0 β See LICENSE.
Built on memU by NevaMind AI.
If MemVault helps your agents remember, give it a ⭐