This document explains DevAll's memory system: memory list config, built-in store implementations, how agent nodes attach memories, and troubleshooting tips. Core code lives in `entity/configs/memory.py` and `node/agent/memory/*.py`.
- Memory Store – Declared under `memory[]` in YAML with `name`, `type`, and `config`. Types are registered via `register_memory_store()` and point to concrete implementations.
- Memory Attachment – Referenced inside agent nodes via `AgentConfig.memories`. Each `MemoryAttachmentConfig` defines read/write strategy and retrieval stages.
- MemoryManager – Builds store instances at runtime based on attachments and orchestrates `load()`, `retrieve()`, `update()`, `save()`.
- Embedding – `SimpleMemoryConfig` and `FileMemoryConfig` embed `EmbeddingConfig`, and `EmbeddingFactory` instantiates OpenAI or local vector models.
```yaml
memory:
  - name: convo_cache
    type: simple
    config:
      memory_path: WareHouse/shared/simple.json
      embedding:
        provider: openai
        model: text-embedding-3-small
        api_key: ${API_KEY}
  - name: project_docs
    type: file
    config:
      index_path: WareHouse/index/project_docs.json
      file_sources:
        - path: docs/
          file_types: [".md", ".mdx"]
          recursive: true
      embedding:
        provider: openai
        model: text-embedding-3-small
```

```yaml
memory:
  - name: agent_memory
    type: mem0
    config:
      api_key: ${MEM0_API_KEY}
      agent_id: my-agent
```

| Type | Path | Highlights | Best for |
|---|---|---|---|
| `simple` | `node/agent/memory/simple_memory.py` | Optional disk persistence (JSON) after runs; FAISS + semantic rerank; read/write capable. | Small conversation history, prototypes. |
| `file` | `node/agent/memory/file_memory.py` | Chunks files/dirs into a vector index; read-only; auto-rebuilds when files change. | Knowledge bases, doc QA. |
| `blackboard` | `node/agent/memory/blackboard_memory.py` | Lightweight append-only log trimmed by time/count; no vector search. | Broadcast boards, pipeline debugging. |
| `mem0` | `node/agent/memory/mem0_memory.py` | Cloud-managed by Mem0; semantic search + graph relationships; no local embeddings or persistence needed. Requires the `mem0ai` package. | Production memory, cross-session persistence, multi-agent memory sharing. |
All stores register through `register_memory_store()`, so their summaries show up in the UI via `MemoryStoreConfig.field_specs()`.
| Field | Description |
|---|---|
| `name` | Target Memory Store name (must be unique inside `stores[]`). |
| `retrieve_stage` | Optional list limiting retrieval to certain `AgentExecFlowStage` values (`pre`, `plan`, `gen`, `critique`, etc.). Empty means all stages. |
| `top_k` | Number of items per retrieval (default 3). |
| `similarity_threshold` | Minimum similarity cutoff (-1 disables filtering). |
| `read` / `write` | Whether this node can read from / write back to the store. |
Agent node example:
```yaml
nodes:
  - id: answer
    type: agent
    config:
      provider: openai
      model: gpt-4o-mini
      prompt_template: answer_user
      memories:
        - name: convo_cache
          retrieve_stage: ["gen"]
          top_k: 5
          read: true
          write: true
        - name: project_docs
          read: true
          write: false
```

Execution order:
- When the node enters `gen`, `MemoryManager` iterates over the attachments.
- Attachments matching the stage with `read=true` call `retrieve()` on their store.
- Retrieved items are formatted under a "===== Related Memories =====" block in the agent context.
- After completion, attachments with `write=true` call `update()` and optionally `save()`.
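The retrieval half of this loop can be sketched as plain Python. This is an illustrative simplification, not the actual `MemoryManager` implementation: the attachment/store shapes and the `gather_memories` helper are assumptions for the example.

```python
# Hypothetical sketch of the per-stage attachment loop described above.
# Attachment dicts and store objects are simplified stand-ins.

def gather_memories(attachments, stores, stage, query):
    """Collect memory items from readable attachments active at this stage."""
    sections = []
    for att in attachments:
        # Skip write-only attachments.
        if not att.get("read"):
            continue
        # An empty retrieve_stage list means "active at every stage".
        stages = att.get("retrieve_stage") or []
        if stages and stage not in stages:
            continue
        store = stores[att["name"]]
        sections.extend(store.retrieve(query, top_k=att.get("top_k", 3)))
    if not sections:
        return ""
    # Formatted block injected into the agent context.
    return "===== Related Memories =====\n" + "\n".join(sections)
```

When no attachment matches the current stage, the helper returns an empty string so nothing is injected into the prompt.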
All memory stores persist a unified `MemoryItem` structure containing:
- `content_summary` – trimmed text used for embedding search.
- `input_snapshot` / `output_snapshot` – serialized message blocks (with base64 attachments) preserving multimodal context.
- `metadata` – store-specific telemetry (role, previews, attachment IDs, etc.).

This schema lets multimodal outputs flow into Memory/Thinking modules without extra plumbing.
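As a rough mental model, the fields above map onto a dataclass like the following. The real schema in the codebase may differ in field names and types; this is only illustrative.

```python
# Illustrative dataclass mirroring the unified MemoryItem fields described
# above; not the actual class from the codebase.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class MemoryItem:
    content_summary: str                                       # trimmed text used for embedding search
    input_snapshot: list[dict] = field(default_factory=list)   # serialized message blocks
    output_snapshot: list[dict] = field(default_factory=list)  # may carry base64 attachments
    metadata: dict[str, Any] = field(default_factory=dict)     # role, previews, attachment IDs

item = MemoryItem(content_summary="user asked about pricing",
                  metadata={"role": "user"})
```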
- Path – `SimpleMemoryConfig.memory_path` (or `auto`). Defaults to in-memory.
- Retrieval – Build a query from the prompt, trim it, embed, query the FAISS `IndexFlatIP`, then apply a semantic rerank (Jaccard/LCS).
- Write – `update()` builds a `MemoryContentSnapshot` (text + blocks) for both input and output, deduplicates via a hashed summary, embeds the summary, and stores the snapshots/attachment metadata.
- Tips – Tune `max_content_length`, `top_k`, and `similarity_threshold` to avoid irrelevant context.
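The semantic-rerank step can be illustrated in isolation. The sketch below shows only the Jaccard half (the real store also mixes in an LCS score), re-scoring candidates that the FAISS inner-product search already returned:

```python
# Minimal sketch of a token-set Jaccard rerank over FAISS candidates.
# The actual SimpleMemory rerank also uses LCS; this shows the idea only.

def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two strings, in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def rerank(query: str, candidates: list[str], top_k: int = 3) -> list[str]:
    """Order candidates by lexical similarity to the query, keep top_k."""
    scored = sorted(candidates, key=lambda c: jaccard(query, c), reverse=True)
    return scored[:top_k]
```

A lexical rerank like this is cheap and catches cases where embedding similarity is high but the actual wording overlap is poor.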
- Config – Requires at least one `file_sources` entry (paths, suffix filters, recursion, encoding). `index_path` is mandatory for incremental updates.
- Indexing – Scan files → chunk (default 500 chars, 50 overlap) → embed → persist JSON with `file_metadata`.
- Retrieval – Uses FAISS cosine similarity. Read-only; `update()` is unsupported.
- Maintenance – `load()` checks file hashes and rebuilds if needed. Store `index_path` on persistent storage.
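The chunking step above (500 chars with 50-char overlap) amounts to a sliding window. A minimal sketch, assuming the indexer does plain fixed-size splits:

```python
# Fixed-size chunking with overlap, matching the defaults described above
# (500 chars, 50 overlap). The real indexer may additionally respect
# sentence or paragraph boundaries.

def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # slide forward, keeping `overlap` chars of context
    return chunks
```

The overlap ensures a sentence straddling a chunk boundary still appears whole in at least one chunk.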
- Config – `memory_path` (or `auto`) plus `max_items`. Creates the file in the session directory if missing.
- Retrieval – Returns the latest `top_k` entries ordered by time.
- Write – `update()` appends the latest snapshot (input/output blocks, attachments, previews). No embeddings are generated, so retrieval is purely recency-based.
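The append-and-trim behavior can be captured in a few lines. This is a toy sketch, not the actual `blackboard_memory.py` code, and it shows only count-based trimming (the real store can also trim by time):

```python
# Toy blackboard: append-only log with count-based trimming and
# recency-only retrieval (no embeddings, per the description above).
from collections import deque

class Blackboard:
    def __init__(self, max_items: int = 100):
        self._items = deque(maxlen=max_items)  # oldest entries drop automatically

    def update(self, entry: dict) -> None:
        self._items.append(entry)

    def retrieve(self, top_k: int = 3) -> list[dict]:
        return list(self._items)[-top_k:][::-1]  # newest first
```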
- Config – Requires `api_key` (from app.mem0.ai). Optional `user_id`, `agent_id`, `org_id`, `project_id` for scoping.
- Entity scoping – `user_id` and `agent_id` are independent dimensions; both can be included simultaneously in `add()` and `search()` calls. When both are configured, retrieval uses an OR filter (`{"OR": [{"user_id": ...}, {"agent_id": ...}]}`) to search across both scopes. Writes include both IDs when available.
- Retrieval – Uses Mem0's server-side semantic search. Supports `top_k` and `similarity_threshold` via `MemoryAttachmentConfig`.
- Write – `update()` sends only user input to Mem0 via the SDK (as `role: "user"` messages). Assistant output is excluded so that noisy LLM responses are not extracted as facts.
- Persistence – Fully cloud-managed. `load()` and `save()` are no-ops. Memories persist across runs and sessions automatically.
- Dependencies – Requires the `mem0ai` package (`pip install mem0ai`).
- Fields – `provider`, `model`, `api_key`, `base_url`, `params`.
- `provider=openai` uses the official client; override `base_url` for compatibility layers. `params` can include `use_chunking`, `chunk_strategy`, `max_length`, etc.
- `provider=local` expects `params.model_path` and depends on `sentence-transformers`.
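For the local provider, a config fragment could look like the following. The `model_path` value is a placeholder, not a path shipped with the project:

```yaml
embedding:
  provider: local
  params:
    model_path: /models/my-embedding-model   # placeholder: any sentence-transformers model dir
```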
- Duplicate names – The memory list enforces unique `memory[]` names. Duplicates raise `ConfigError`.
- Missing embeddings – `SimpleMemory` without embeddings downgrades to append-only; `FileMemory` errors out. Provide an embedding config whenever semantic search is required.
- Permissions – Ensure the directories for `memory_path`/`index_path` are writable. Mount volumes when running inside containers.
- Performance – Pre-build large `FileMemory` indexes offline, use `retrieve_stage` to limit retrieval frequency, and tune `top_k`/`similarity_threshold` to balance recall vs. token cost.
- Implement a Config + Store pair (subclass `MemoryBase`).
- Register via `register_memory_store("my_store", config_cls=..., factory=..., summary="...")` in `node/agent/memory/registry.py`.
- Add `FIELD_SPECS`, then run `python -m tools.export_design_template ...` so the frontend picks up the enum.
- Update this guide or ship a README detailing configuration knobs and boundaries.
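A skeleton of such a custom store might look like this. Everything here is hypothetical except the method names (`load`/`retrieve`/`update`/`save`) and the registration call shape, which come from the guide; the real `MemoryBase` interface may differ:

```python
# Hypothetical custom store skeleton. EchoMemoryConfig / EchoMemory are
# invented for illustration; a real store subclasses MemoryBase and a
# real config subclasses MemoryStoreConfig.

class EchoMemoryConfig:
    def __init__(self, prefix: str = ">> "):
        self.prefix = prefix

class EchoMemory:
    def __init__(self, config: EchoMemoryConfig):
        self.config = config
        self._items: list[str] = []

    def load(self) -> None:      # no persistence in this toy store
        pass

    def save(self) -> None:
        pass

    def update(self, text: str) -> None:
        self._items.append(self.config.prefix + text)

    def retrieve(self, query: str, top_k: int = 3) -> list[str]:
        return self._items[-top_k:]  # recency-only for the demo

# Registration would then look like (per the guide):
# register_memory_store("echo", config_cls=EchoMemoryConfig,
#                       factory=EchoMemory, summary="Echoing demo store")
```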