[Refactor] Consolidate MCP Memory Tools to Reduce Cognitive Load and Token Overhead #90

@ayder

Description

📋 Pre-flight Checks

  • I have searched existing issues and this is not a duplicate
  • I understand this issue needs status:approved before a PR can be opened

🔍 Problem Description

The current implementation of the Memory MCP Server exposes 13 distinct tools. While functionally complete, this granularity introduces significant "Cognitive Overhead" for the LLM, leading to increased token usage in the system prompt and occasional "Decision Paralysis" when choosing among similar save and retrieval methods.

The Problem: "Tool Bloat"

  1. Context Window Pressure: 13 tool definitions (schemas + descriptions) consume a large portion of the initial context window.
  2. Semantic Overlap: Tools like mem_save, mem_update, and mem_save_prompt overlap in intent, which sometimes causes the model to hallucinate parameters or pick the wrong "save" method.
  3. Planning Complexity: The LLM must navigate a complex decision tree just to manage a simple session. Small quantized LLMs struggle to follow it.

💡 Proposed Solution

I propose a Hybrid Consolidation strategy: maintain the Progressive Disclosure pattern (the 3-layer retrieval) while collapsing administrative and lifecycle tools into unified schemas.

The Proposed Solution: The "Clean 7" Suite

Reorganize the 13 tools into 7 high-level tools grouped by functional intent. Logic will be shifted from the tool name to tool parameters.

1. Write & State Management (mem_manage)

*Consolidates: mem_save, mem_update, mem_delete, mem_save_prompt*

  • Why: Uses the existing topic_key logic to handle upserts/updates automatically.
  • Action Parameter: upsert, delete (soft), hard_delete.
  • Type Parameter: observation, prompt.

2. Lifecycle Management (mem_session)

*Consolidates: mem_session_start, mem_session_end, mem_session_summary*

  • Why: Simplifies the session state machine. The LLM only needs to remember one tool for all session-related events.

3. Contextual Intelligence (mem_context)

*Consolidates: mem_context, mem_timeline, mem_stats*

  • Why: These are all "meta-retrieval" tools.
  • Mode Parameter: recent (last N items), timeline (surrounding a specific ID), or stats (system health).

4. The "Progressive Disclosure" Tier (Keep Separate)

To maintain the Token-Efficient Retrieval pattern described in the docs, these remain distinct to force the LLM to "drill in" rather than dump data:

  1. mem_search: Fast, compact full-text search (ID + Title only).
  2. mem_get: Full, untruncated content for a specific ID.
  3. mem_suggest_topic_key: A "Pre-flight" tool to ensure consistent naming conventions before saving.

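The drill-in flow above can be sketched as two tiny functions: search returns only compact hits (ID + title), and only mem_get pulls full content into context. The store layout here is a hypothetical example:

```python
# Illustrative two-step progressive disclosure: compact search, then full get.
def mem_search(store: dict, query: str, limit: int = 5) -> list[dict]:
    # Stand-in for full-text search: returns only ID + title, keeping
    # intermediate token cost low.
    hits = [{"id": k, "title": v["title"]}
            for k, v in store.items() if query.lower() in v["title"].lower()]
    return hits[:limit]

def mem_get(store: dict, id: str) -> dict:
    return store[id]    # full, untruncated record for one ID
```
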
Proposed Mapping Table

| New Consolidated Tool | Replaces Original Tools | Key Parameters |
| --- | --- | --- |
| mem_manage | save, update, delete, save_prompt | action, topic_key, scope, content |
| mem_session | session_start, session_end, session_summary | event (start/end), summary_text |
| mem_context | context, timeline, stats | mode, target_id (optional) |
| mem_search | search | query, limit |
| mem_get | get_observation | id |
| mem_suggest_topic | suggest_topic_key | type, title |

Expected Benefits

  • ~40% Reduction in tool-definition tokens.
  • Improved Accuracy: Reduced ambiguity between "save" vs "update" logic.
  • Better Hygiene: Centralizing mem_manage ensures that deduplication and topic-key heuristics are applied consistently across all write operations.

📦 Affected Area

MCP Server (tools, transport)

🔄 Alternatives Considered

No response

📎 Additional Context

No response
