
Memoria

Secure · Auditable · Programmable Memory for AI Agents


Quick Start · Steering Rules · API Reference · For AI Agents


Overview

Memoria is a persistent memory layer for AI agents with Git-level version control. Every memory change is tracked, auditable, and reversible — snapshots, branches, merges, and time-travel rollback, all powered by MatrixOne's native Copy-on-Write engine.

Core Capabilities:

  • Cross-conversation memory — preferences, facts, and decisions persist across sessions
  • Semantic search — retrieves memories by meaning, not just keywords
  • Git for Data — zero-copy branching, instant snapshots, point-in-time rollback
  • Audit trail — every memory mutation has a snapshot + provenance chain
  • Self-maintaining — built-in governance detects contradictions, quarantines low-confidence memories
  • Private by default — local embedding model option, no data leaves your machine
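
The semantic-search capability above can be sketched with cosine similarity over embedding vectors. This is a toy illustration only: the vectors are hand-made stand-ins, where Memoria would use a real embedding model.

```python
import math

# Toy illustration of semantic retrieval: memories and queries are compared
# as vectors, so a query about "testing preferences" can match "prefers
# pytest" without sharing any keywords. Hand-made vectors, not real embeddings.
MEMORIES = {
    "User prefers pytest over unittest": [0.9, 0.1, 0.0],
    "Project deploys with kubectl":      [0.0, 0.2, 0.9],
    "API is written in Go 1.22":         [0.1, 0.9, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, top_k=1):
    ranked = sorted(MEMORIES, key=lambda m: cosine(query_vec, MEMORIES[m]),
                    reverse=True)
    return ranked[:top_k]

# A query vector near the "testing preferences" region of the space:
print(retrieve([0.85, 0.15, 0.05]))  # ['User prefers pytest over unittest']
```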

Supported Agents: Kiro · Cursor · Claude Code · OpenClaw · Any MCP-compatible agent

Storage Backend: MatrixOne — Distributed database with native vector indexing


Why Memoria?

| Capability | Memoria | Letta / Mem0 / Traditional RAG |
|---|---|---|
| Git-level version control | Native zero-copy snapshots & branches | File-level or none |
| Isolated experimentation | One-click branch, merge after validation | Manual data duplication |
| Audit trail | Full snapshot + provenance on every mutation | Limited logging |
| Semantic retrieval | Vector + full-text hybrid search | Vector only |
| Self-governance | Automatic contradiction detection & quarantine | Manual cleanup |

Quick Start

1. Start MatrixOne

git clone https://github.com/matrixorigin/Memoria.git
cd Memoria
docker compose up -d

Or use MatrixOne Cloud (free tier, no Docker needed).

2. Install Memoria

curl -sSL https://raw.githubusercontent.com/matrixorigin/Memoria/main/scripts/install.sh | bash

Or download from GitHub Releases.

3. Configure your AI tool

cd your-project
memoria init -i   # Interactive wizard (recommended)

This creates MCP config + steering rules for your AI tool (Kiro, Cursor, or Claude).

🦞 OpenClaw Plugin

Already using OpenClaw? Install the native plugin instead (see the OpenClaw Plugin Setup guide):

openclaw plugins install @matrixorigin/memory-memoria
openclaw plugins enable memory-memoria
openclaw memoria install

4. Restart & verify

Restart your AI tool, then ask: "Do you have memory tools available?"

For detailed setup, see Setup Skill.


Steering Rules

Steering rules teach your AI agent when and how to use memory tools. Without them, the agent has tools but no guidance — like having a database without knowing the schema.

What They Do

| Rule | Purpose |
|---|---|
| memory | Core memory tools — when to store, retrieve, correct, purge |
| session-lifecycle | Bootstrap at conversation start, cleanup at end |
| memory-hygiene | Proactive governance, contradiction resolution, snapshot cleanup |
| memory-branching-patterns | Isolated experiments with branches |
| goal-driven-evolution | Track goals, plans, progress across conversations |

Example: Conversation Lifecycle

┌─────────────────────────────────────────────────────────────────────────────┐
│  CONVERSATION START                                                         │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │ 1. memory_retrieve(query="<user's question>")  ← load context       │   │
│  │ 2. memory_search(query="GOAL ACTIVE")          ← check active goals │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
├─────────────────────────────────────────────────────────────────────────────┤
│  MID-CONVERSATION                                                           │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │ • User states preference → memory_store(type="profile")             │   │
│  │ • User corrects a fact   → memory_correct(query="...", new="...")   │   │
│  │ • Topic shifts           → memory_retrieve(query="<new topic>")     │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
├─────────────────────────────────────────────────────────────────────────────┤
│  CONVERSATION END                                                           │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │ 1. memory_purge(topic="<task>")  ← clean up working memories        │   │
│  │ 2. memory_store(type="episodic") ← save session summary             │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────────────────┘
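
The lifecycle above can be sketched against stubbed tools. The stubs below only mimic the names and shapes of the real MCP tools; real retrieval is semantic and real storage lives in MatrixOne.

```python
# Toy stand-ins for the MCP tools in the diagram above; the real tools talk
# to the Memoria server, these just keep memories in a list.
STORE = []

def memory_store(content, type):
    STORE.append({"content": content, "type": type})

def memory_retrieve(query):
    # Real retrieval is semantic; the stub does substring matching.
    return [m for m in STORE if query.lower() in m["content"].lower()]

def memory_purge(topic):
    STORE[:] = [m for m in STORE if topic.lower() not in m["content"].lower()]

# CONVERSATION START: load context for the user's question
context = memory_retrieve("auth")

# MID-CONVERSATION: user states a preference, task context accumulates
memory_store("Prefers pytest over unittest", type="profile")
memory_store("auth task: debugging token refresh", type="working")

# CONVERSATION END: purge working memories, save a session summary
memory_purge(topic="auth task")
memory_store("Session: investigated token refresh bug", type="episodic")

print([m["type"] for m in STORE])  # ['profile', 'episodic']
```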

Example: Goal-Driven Evolution

You: "I want to add OAuth support to the API"

AI:  → memory_search(query="GOAL OAuth")           ← check for existing goal
     → memory_store(content="🎯 GOAL: Add OAuth support\nStatus: ACTIVE", type="procedural")
     
     ... works on implementation, stores progress as working memories ...
     
     → memory_store(content="✅ STEP 1/3: Added OAuth routes", type="working")
     → memory_store(content="❌ STEP 2/3: Token refresh failed — need to fix expiry logic", type="working")

... next conversation ...

AI:  → memory_search(query="GOAL ACTIVE")          ← finds OAuth goal
     → memory_search(query="STEP for GOAL OAuth")  ← loads progress
     "Last time we were working on OAuth. Step 2 failed on token refresh. Want to continue?"

... goal completed ...

AI:  → memory_correct(query="GOAL OAuth", new_content="🎯 GOAL: OAuth — ✅ ACHIEVED")
     → memory_store(content="💡 LESSON: Token refresh needs 5min buffer before expiry", type="procedural")
     → memory_purge(topic="STEP for GOAL OAuth")   ← clean up working memories
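
The convention above — goals and steps stored as ordinary memories carrying searchable markers like "GOAL", "STEP", and "ACTIVE" — can be sketched with a toy store. The functions below are stand-ins for the real tools, which search semantically rather than by keyword.

```python
# Toy version of the goal convention: goals and steps are ordinary memories
# whose content carries searchable markers, so plain search can resume
# work across conversations.
STORE = []

def memory_store(content):
    STORE.append(content)

def memory_search(query):
    # Stub: every query word must appear in the memory (real search is semantic).
    words = query.lower().split()
    return [m for m in STORE if all(w in m.lower() for w in words)]

memory_store("GOAL: Add OAuth support | Status: ACTIVE")
memory_store("STEP 1/3 for GOAL OAuth: Added OAuth routes")
memory_store("STEP 2/3 for GOAL OAuth: Token refresh failed")

# Next conversation: find the active goal, then load its progress.
print(memory_search("GOAL ACTIVE"))
print(len(memory_search("STEP for GOAL OAuth")))  # 2
```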

Example: Branch for Risky Experiments

You: "Let's try switching from PostgreSQL to SQLite"

AI:  → memory_branch(name="eval_sqlite")
     → memory_checkout(name="eval_sqlite")
     
     ... experiments on branch, stores findings ...
     
     → memory_diff(source="eval_sqlite")     ← preview changes
     → memory_checkout(name="main")
     → memory_merge(source="eval_sqlite")    ← or delete if failed
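
A minimal copy-on-write sketch shows why branching like this is "zero-copy". This is conceptual only, not how MatrixOne's engine actually works: a branch starts as an empty overlay on its parent, so creating it duplicates nothing; reads fall through to the parent, writes land in the overlay, and a merge applies the overlay back.

```python
# Conceptual copy-on-write branching (not MatrixOne's actual engine).
class MemoryBranch:
    def __init__(self, parent=None):
        self.parent = parent
        self.overlay = {}          # only this branch's changes live here

    def get(self, key):
        if key in self.overlay:
            return self.overlay[key]
        return self.parent.get(key) if self.parent else None

    def set(self, key, value):
        self.overlay[key] = value

    def merge_into(self, target):
        target.overlay.update(self.overlay)

main = MemoryBranch()
main.set("db", "PostgreSQL decision: keep pgbouncer")

experiment = MemoryBranch(parent=main)   # "zero-copy": nothing duplicated
experiment.set("db", "SQLite decision: WAL mode works")

print(main.get("db"))        # main is untouched by the experiment
print(experiment.get("db"))  # branch sees its own overlay

experiment.merge_into(main)  # validated, so merge back
print(main.get("db"))
```

If the experiment fails, the branch object is simply discarded and `main` never changes.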

File Locations

  • Kiro: .kiro/steering/*.md
  • Cursor: .cursor/rules/*.mdc
  • Claude: .claude/rules/*.md

Update Rules

After upgrading Memoria:

memoria rules --force

API Reference

Core Tools

| Tool | Description |
|---|---|
| memory_store | Store a new memory |
| memory_retrieve | Retrieve relevant memories (call at conversation start) |
| memory_correct | Update an existing memory |
| memory_purge | Delete by ID or topic keyword |
| memory_search | Semantic search across all memories |
| memory_profile | Get user's memory-derived profile |

Snapshots & Branches

| Tool | Description |
|---|---|
| memory_snapshot | Create named snapshot |
| memory_rollback | Restore to snapshot |
| memory_branch | Create isolated branch |
| memory_checkout | Switch branch |
| memory_merge | Merge branch back |
| memory_diff | Preview merge changes |

Maintenance

| Tool | Description |
|---|---|
| memory_governance | Quarantine low-confidence memories (1h cooldown) |
| memory_consolidate | Detect contradictions (30min cooldown) |
| memory_reflect | Synthesize insights (2h cooldown) |
| memory_extract_entities | Build entity graph |
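
A client-side guard for those cooldowns might look like the sketch below. Memoria enforces the cooldowns itself; this hypothetical helper just avoids wasted calls by tracking when each maintenance tool last ran.

```python
import time

# Hypothetical client-side cooldown guard matching the table above;
# the server enforces cooldowns anyway, this only skips futile calls.
COOLDOWNS = {                 # seconds
    "memory_governance": 3600,    # 1h
    "memory_consolidate": 1800,   # 30min
    "memory_reflect": 7200,       # 2h
}
_last_run = {}

def run_if_ready(tool, action, now=None):
    now = time.time() if now is None else now
    last = _last_run.get(tool)
    if last is not None and now - last < COOLDOWNS[tool]:
        return None               # still cooling down, skip
    _last_run[tool] = now
    return action()

print(run_if_ready("memory_reflect", lambda: "ran", now=0))      # ran
print(run_if_ready("memory_reflect", lambda: "ran", now=3600))   # None: 2h not elapsed
print(run_if_ready("memory_reflect", lambda: "ran", now=7300))   # ran
```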

Full API details: API Reference Skill


Memory Types

| Type | Use for | Example |
|---|---|---|
| semantic | Project facts, decisions | "Uses Go 1.22 with modules" |
| profile | User preferences | "Prefers pytest over unittest" |
| procedural | Workflows, how-to | "Deploy: make build && kubectl apply" |
| working | Temporary task context | "Currently debugging auth module" |
| episodic | Session summaries | "Session: optimized DB, added indexes" |
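
As a toy illustration of the table, type selection could be keyword-driven. Real agents choose the type from conversation context via the steering rules, so treat this heuristic as illustrative only.

```python
# Toy heuristic mapping content to a memory type from the table above;
# the cue words here are illustrative assumptions, not Memoria's logic.
def choose_type(content):
    lowered = content.lower()
    if lowered.startswith("session:"):
        return "episodic"      # session summaries
    if "currently" in lowered:
        return "working"       # temporary task context
    if any(w in lowered for w in ("prefers", "likes", "dislikes")):
        return "profile"       # user preferences
    if any(w in lowered for w in ("deploy:", "how to", "steps:")):
        return "procedural"    # workflows, how-to
    return "semantic"          # default: durable project facts

print(choose_type("Prefers pytest over unittest"))         # profile
print(choose_type("Deploy: make build && kubectl apply"))  # procedural
print(choose_type("Uses Go 1.22 with modules"))            # semantic
```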

Commands

| Command | Description |
|---|---|
| memoria init -i | Interactive setup wizard |
| memoria status | Show config and rule versions |
| memoria rules | Update steering rules (auto-detect, --tool, or -i) |
| memoria mcp | Start MCP server |
| memoria serve | Start REST API server |
| memoria benchmark | Run benchmark suite |

For AI Agents

If you're an AI agent helping a user set up Memoria:

  1. Load the Setup Skill — it has step-by-step instructions
  2. Ask before acting:
    • Which AI tool? (Kiro / Cursor / Claude)
    • MatrixOne database? (Docker / Cloud / existing)
    • Embedding service? (OpenAI / SiliconFlow / local)
  3. Run memoria init -i in the user's project directory
  4. Tell user to restart their AI tool
  5. Verify with memory_retrieve("test")

⚠️ Configure the embedding service BEFORE the first MCP server start — the embedding dimension is locked into the database schema once it is created.
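
A hypothetical pre-flight check makes that constraint concrete. The 1536 dimension and the `embed` stub below are assumptions for illustration; the real check would call the configured embedding service once and compare against the schema's vector column.

```python
# Hypothetical pre-flight check: the vector column's dimension is fixed when
# the schema is first created, so verify the configured embedding model still
# produces vectors of that size before starting the MCP server.
SCHEMA_DIM = 1536                       # assumed dimension locked at first start

def embed(text):
    # Stand-in for the real embedding call (OpenAI / SiliconFlow / local).
    return [0.0] * 1536

def check_embedding_dim():
    actual = len(embed("dimension probe"))
    if actual != SCHEMA_DIM:
        raise RuntimeError(
            f"Embedding dim {actual} != schema dim {SCHEMA_DIM}; "
            "recreate the schema or switch back to the original model."
        )
    return actual

print(check_embedding_dim())  # 1536
```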


Architecture

┌─────────────┐     MCP (stdio)     ┌──────────────────────────────────────┐     SQL      ┌────────────┐
│  AI Agent   │ ◄─────────────────► │  Memoria MCP Server                  │ ◄──────────► │ MatrixOne  │
│             │   store / retrieve  │  ├── Canonical Storage               │  vector +    │  Database  │
│             │                     │  ├── Retrieval (vector / semantic)   │  fulltext    │            │
│             │                     │  └── Git-for-Data (snap/branch/merge)│              │            │
└─────────────┘                     └──────────────────────────────────────┘              └────────────┘

For codebase details, see Architecture Skill.


Development

make up              # Start MatrixOne + API
make test            # Run all tests
make release VERSION=0.2.0   # Bump, tag, push

Developer documentation (for contributing to Memoria):

| Skill | Description |
|---|---|
| Architecture | Codebase layout, traits, tables |
| API Reference | REST endpoints, request/response |
| Deployment | Docker, K8s, multi-instance |
| Plugin Development | Governance plugins |
| Release | Version bump, CI/CD |
| Local Embedding | Offline embedding build |

License

Apache-2.0 © MatrixOrigin
