Working with auto memory (the ~/.claude/projects/<encoded>/memory/*.md files Claude Code v2.1.59+ writes) is the one workflow CCO doesn't quite cover yet. Two specific gaps:
1. No cross-project aggregation
I have 63 auto-memory entries across 9 projects. The scanner discovers them perfectly — scanMemories() correctly resolves scope.claudeProjectDir + "/memory". But the dashboard surfaces them per-project in the sidebar, so reviewing "all my feedback memories" or "all my Postgres notes" means clicking through 9 scopes one by one.
For Skills/MCP/Plans this is fine — they live in 1–2 scopes. Memories are different. They accumulate everywhere, so the natural unit to browse is "my whole memory corpus", not "what's in this project".
2. Search doesn't match memory body
Search currently matches only name, fileName, and description (frontmatter fields). But auto-memory files are short notes where the body is the whole point.
Concrete: I want to find every memory mentioning subprocess.run across all projects. Today I fall back to grep -r ~/.claude/projects/*/memory/. The dashboard could do this with one extra read pass.
Proposal
Two related changes — small enough for one PR, or split if you'd rather review separately.
Part A — "All Memories" cross-project view
A virtual aggregator that flattens memory items across project scopes. Three possible UX shapes, any of which works:
- Global-scope category — Make the existing "Memories" pill under Global show the union of every project's memory items (currently it scans ~/.claude/memory/, which is empty for most users).
- Top-level "All Memories" tab — A fourth scope-tree entry alongside Global / Workspace / Project, purely a virtual aggregator.
- Dedicated dashboard panel — Separate icon that opens a full-bleed memory browser (like Backup Center).
The first option (global-scope category) is the smallest diff: scanMemories({id: "global"}) already returns [] because ~/.claude/memory/ doesn't exist for most users. We could surface "global" memory as the union of all project memories, with a sourceScopeId on each item so the UI can show the origin project.
Part B — Memory content search
Extend search to match memory body. Two options:
- Always-on — scanner reads the body for every memory (capped at e.g. 8KB), exposes a body field, frontend does a substring match.
- Opt-in via prefix — type body:authentication (or a 🔎 toggle) to search bodies. Skips the body-read pass otherwise.
For 63 memories at ~500 bytes, that's ~30KB extra /api/scan payload. Probably fine. For users with thousands of memories the opt-in route is safer.
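If the opt-in route wins, the query parsing is only a few lines. A minimal sketch (parseQuery is a hypothetical helper, not existing CCO code):

```javascript
// Hypothetical helper: split a raw search query into a body-search
// flag and the remaining text. "body:subprocess.run" → body search.
function parseQuery(raw) {
  const trimmed = raw.trim();
  if (trimmed.startsWith("body:")) {
    return { text: trimmed.slice("body:".length), searchBody: true };
  }
  return { text: trimmed, searchBody: false };
}
```

The scanner could then skip the body-read pass entirely unless at least one connected client has asked for body search.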
Implementation sketch
Patch surface is small. In src/scanner.mjs → scanMemories():
```diff
 items.push({
   category: "memory",
   scopeId: scope.id,
   name: fm.name || f.replace(".md", ""),
   // …existing fields…
   path: fullPath,
+  body: content.slice(fm.bodyStart, fm.bodyStart + 8000),
 });
```
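The fm.bodyStart above assumes the frontmatter parser reports where the closing --- ends; if it doesn't, a tiny helper can compute it (sketch, assuming standard ---‑delimited YAML frontmatter; bodyStart is a hypothetical name):

```javascript
// Hypothetical helper: offset of the first byte after the closing
// "---" line of YAML frontmatter, or 0 if the file has none.
function bodyStart(content) {
  if (!content.startsWith("---\n")) return 0;
  const close = content.indexOf("\n---", 4);        // closing delimiter
  if (close === -1) return 0;                       // unterminated → treat all as body
  const lineEnd = content.indexOf("\n", close + 4); // end of the "---" line
  return lineEnd === -1 ? content.length : lineEnd + 1;
}
```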
Then a small aggregation step in scanAll():

```js
const aggregated = items
  .filter(i => i.category === "memory" && i.scopeId !== "global")
  .map(i => ({ ...i, scopeId: "global", sourceScopeId: i.scopeId, isAggregated: true }));
items.push(...aggregated);
```
Frontend (src/ui/app.js) needs:
- A "from project: X" badge on aggregated items so they don't look like duplicates
- Search predicate also testing body (or the body: prefix)
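The predicate change is roughly this (matchesItem is a hypothetical name, not the real function in app.js):

```javascript
// Hypothetical search predicate: match the frontmatter fields as
// today, plus the new body field when the scanner has populated it.
function matchesItem(item, query) {
  const q = query.toLowerCase();
  const fields = [item.name, item.fileName, item.description, item.body];
  return fields.some(f => typeof f === "string" && f.toLowerCase().includes(q));
}
```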
Open questions for you:
- Aggregation logic in scanner or frontend?
- Body search always-on, or behind toggle/prefix?
- Any concern with the /api/scan payload growing by ~30–100KB for heavy memory users?
Environment
- OS: Windows 11
- CCO: 0.18.2
- Node: v20+
- Auto memory: 63 across 9 projects
- Claude Code: v2.1.59+
Want to PR?
Happy to put up a PR for either or both parts once you say which UX shape you prefer — I'll match the existing style (zero deps, pure ESM, E2E tests).