Comprehensive reference for the mnemonic Python library modules.
- Overview
- Core Modules
  - paths - Path resolution and filesystem layout
  - config - Configuration management
  - memory_reader - Memory file parsing
  - migrate_filenames - Memory file migration utilities
  - search - Memory search and ranking
  - relationships - Relationship types and linking
  - ontology - Custom ontology support
- Installation
- Quick Start
- API Reference
The lib/ directory contains shared Python utilities used across mnemonic's hooks, commands, and tools. All modules are designed for:
- Testability: Dependency injection with optional resolvers
- Type Safety: Full Python type hints (3.8+)
- Pure Functions: Minimal side effects, clear inputs/outputs
- Documentation: Comprehensive docstrings
File: lib/paths.py
Centralized path resolution for all mnemonic memory operations. Provides single source of truth for memory directories, search paths, and blackboard locations.
Key Classes:
- `PathResolver` - Main path computation engine
- `PathContext` - Environment context (`org`, `project`, `home_dir`, `project_dir`, `memory_root`, `scheme`)
- `PathScheme` - Path scheme enum (LEGACY, V2)
- `Scope` - Memory scope enum (USER, PROJECT, ORG)
Quick Example:
```python
from lib.paths import PathResolver, Scope

resolver = PathResolver()
memory_dir = resolver.get_memory_dir("_semantic/decisions", Scope.PROJECT)
memory_dir.mkdir(parents=True, exist_ok=True)
```

See Also: ADR-009: Unified Path Resolution
File: lib/config.py
Configuration management for mnemonic memory store location. Reads/writes ~/.config/mnemonic/config.json (XDG-compliant).
Key Functions:
```python
from lib.config import get_memory_root, MnemonicConfig

# Get configured memory root (most common usage)
root = get_memory_root()  # Returns Path

# Advanced: Load and modify config
config = MnemonicConfig.load()
config.memory_store_path      # Returns expanded Path
config.memory_store_path_raw  # Returns raw string

# Save new config
new_config = MnemonicConfig(memory_store_path="~/custom/path")
new_config.save()
```

Config Schema:
```json
{
  "version": "1.0",
  "memory_store_path": "~/.claude/mnemonic"
}
```

Default Values:
- Config file: `~/.config/mnemonic/config.json`
- Default memory root: `~/.claude/mnemonic`
- Config version: `1.0`
Design Notes:
- Fixed config location (XDG-compliant, not configurable)
- Tilde expansion handled automatically
- Returns default config if file missing or invalid
- Thread-safe read operations
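The missing-or-invalid fallback described in the notes can be sketched as follows. This is an illustrative reimplementation, not the module's actual code; `load_memory_root` is a hypothetical name.

```python
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".config" / "mnemonic" / "config.json"
DEFAULT_ROOT = "~/.claude/mnemonic"

def load_memory_root() -> Path:
    """Return the configured memory root, falling back to the
    default when the config file is missing or invalid JSON."""
    try:
        data = json.loads(CONFIG_PATH.read_text(encoding="utf-8"))
        raw = data.get("memory_store_path", DEFAULT_ROOT)
    except (OSError, json.JSONDecodeError):
        raw = DEFAULT_ROOT
    return Path(raw).expanduser()  # tilde expansion handled here
```

Because every failure path returns the default, callers never need to handle exceptions.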
File: lib/memory_reader.py
Memory file parsing for extracting frontmatter metadata and generating summaries. Used by hooks to inject memory context.
Key Functions:
```python
from lib.memory_reader import get_memory_metadata, get_memory_summary

# Get full frontmatter metadata
metadata = get_memory_metadata("memory.memory.md")
# Returns: {'id': '...', 'title': '...', 'namespace': '...', 'tags': [...], 'relationships': [...], 'summary': '...', 'path': '...'}

# Get memory summary (title and first paragraph)
summary_info = get_memory_summary("memory.memory.md", max_summary=300)
# Returns: {"title": "My Decision", "summary": "First paragraph of content..."}
```

Parsing Strategy:
- PyYAML (if available): Full YAML parsing with proper type handling
- Regex fallback: Lightweight parsing when PyYAML unavailable
- Extracts: `id`, `title`, `type`, `namespace`, `tags`
- Handles inline lists: `tags: [a, b, c]`
- Handles multi-line lists: `tags:` followed by `- a`, `- b` on separate lines
- Parses relationship blocks
API Reference:

`get_memory_metadata()` - Extract frontmatter as dictionary.

Returns:
- `id` (str): UUID
- `title` (str): Memory title
- `namespace` (str): Namespace path
- `tags` (list): Tag list
- `relationships` (list): Relationship objects
- `summary` (str): First paragraph summary
- `path` (str): Absolute path to memory file

Returns None if file doesn't exist or can't be read.

`get_memory_summary()` - Extract title and summary from memory file.

Returns: Dictionary with keys:
- `title` (str): Memory title from frontmatter or filename
- `summary` (str): First paragraph after frontmatter (truncated to `max_summary`)
Design Notes:
- Graceful degradation (YAML → regex fallback)
- No exceptions raised on parse errors
- Returns empty dict/string on failure
- Used by hooks for `additionalContext` injection
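The YAML-to-regex degradation can be sketched like this. `parse_frontmatter` is a hypothetical helper for illustration, not the module's API.

```python
import re

def parse_frontmatter(text: str) -> dict:
    """Parse the frontmatter block delimited by --- lines, preferring
    PyYAML and degrading to a simple regex parse without it."""
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return {}  # no frontmatter: empty dict, never an exception
    block = match.group(1)
    try:
        import yaml  # full YAML parsing when PyYAML is installed
        return yaml.safe_load(block) or {}
    except Exception:
        # lightweight fallback: only simple "key: value" lines
        meta = {}
        for line in block.splitlines():
            m = re.match(r"^(\w+):\s*(.+)$", line)
            if m:
                meta[m.group(1)] = m.group(2)
        return meta
```

Either branch yields a plain dict, which is why callers can treat the two parsing strategies interchangeably.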
File: lib/migrate_filenames.py
Migration utilities for transitioning memory files from {uuid}-{slug}.memory.md to {slug}.memory.md naming convention. Handles collision detection, content merging, and idempotent execution.
Key Functions:
```python
from lib.migrate_filenames import migrate_all, is_migration_complete, migrate_file
from pathlib import Path

# Check if migration already completed
memory_root = Path.home() / ".claude" / "mnemonic"
if not is_migration_complete(memory_root):
    # Perform dry run first
    results = migrate_all(memory_root, dry_run=True)
    print(f"Would migrate {len(results)} files")

    # Execute migration
    results = migrate_all(memory_root, dry_run=False)
    for result in results:
        print(f"{result.action}: {result.original.name} → {result.target.name}")
```

Migration Behavior:
- Rename: `{uuid}-decision.memory.md` → `decision.memory.md` (preserves git history)
- Merge: If `decision.memory.md` already exists, merges content from the UUID-prefixed file
- Idempotent: Marks completion with a `.migration_slug_only_complete` marker file
API Reference:

`migrate_all()` - Migrate all UUID-prefixed memory files in a directory tree.

Parameters:
- `mnemonic_root`: Root directory to scan recursively
- `dry_run`: If True, return results without modifying files

Returns: List of MigrationResult objects with fields:
- `original` (Path): Source file path
- `target` (Path): Destination file path
- `action` (str): One of "renamed", "merged", "skipped"
- `merged_with` (Optional[Path]): Target file if merged

Behavior:
- Skips if `.migration_slug_only_complete` marker exists
- Uses `git mv` when possible to preserve history
- Merges content on collision (appends under a "Merged Content" heading)
- Creates marker file on completion
`migrate_file()` - Migrate a single memory file.

Parameters:
- `file_path`: Path to memory file
- `dry_run`: If True, return result without modifying files

Returns: MigrationResult object
Collision Handling: When target file exists, merges content by:
- Keeping existing frontmatter (preserves original ID, dates)
- Appending incoming body under separator
- Adding provenance note with incoming UUID
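A minimal sketch of that merge strategy. The helper name, separator heading, and provenance-note format are assumptions; the library's actual output may differ.

```python
def merge_memory_files(existing_text: str, incoming_text: str,
                       incoming_uuid: str) -> str:
    """Keep the existing file's frontmatter and body; append the
    incoming file's body under a separator with a provenance note."""
    # strip the incoming file's frontmatter, keeping only its body
    parts = incoming_text.split("---", 2)
    incoming_body = parts[2].strip() if len(parts) == 3 else incoming_text.strip()
    return (
        existing_text.rstrip()
        + "\n\n## Merged Content\n"
        + f"<!-- merged from {incoming_uuid} -->\n\n"
        + incoming_body
        + "\n"
    )
```

Keeping the existing frontmatter means the surviving file retains its original ID and dates, as the list above requires.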
`is_migration_complete()` - Check if migration has been run.

Returns: True if the `.migration_slug_only_complete` marker exists
Create marker file to prevent re-running migration.
Use Cases:
- Batch Migration: Run once during setup to clean up legacy filenames
- Continuous Integration: Check `is_migration_complete()` before operations
- Manual Recovery: Use `migrate_file()` for selective migration after conflicts
Example - Bulk Migration:
```python
from lib.migrate_filenames import migrate_all, migration_summary
from pathlib import Path

memory_root = Path("~/.claude/mnemonic").expanduser()

# Execute with logging
results = migrate_all(memory_root, dry_run=False)
summary = migration_summary(results)
print("Migration complete:")
print(f"  Renamed: {summary['renamed']} files")
print(f"  Merged: {summary['merged']} files")
print(f"  Skipped: {summary['skipped']} files")
```

Design Notes:
- Atomic writes via `.tmp` files to prevent corruption
- Git-aware (uses `git mv` when possible)
- Collision-safe (merges rather than overwrites)
- Idempotent by default (marker file prevents re-runs)
- CLI support: `python -m lib.migrate_filenames --dry-run`
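The `.tmp` atomic-write pattern from the first design note can be sketched as:

```python
import os
from pathlib import Path

def atomic_write(path: Path, content: str) -> None:
    """Write to a .tmp sibling first, then os.replace() it into
    place, so readers never observe a half-written file."""
    tmp = path.with_suffix(path.suffix + ".tmp")
    tmp.write_text(content, encoding="utf-8")
    os.replace(tmp, path)  # atomic rename on POSIX and Windows
```

If the process dies mid-write, only the `.tmp` file is corrupted; the destination is either absent or fully written.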
File: lib/search.py
Centralized memory search logic with scoring, ranking, and context detection. Consolidates search functions from all hooks.
Search Functions:
```python
from lib.search import (
    search_memories,
    find_related_memories_scored,
    find_memories_for_context,
    detect_file_context,
    extract_topic
)

# 1. Basic keyword search
results = search_memories("authentication", max_results=5)
# Returns: list of Path objects

# 2. Scored semantic search
scored = find_related_memories_scored(
    title="API Authentication Design",
    tags=["security", "api"],
    namespace="_semantic/decisions",
    content_keywords=["JWT", "OAuth"],
    max_results=10
)
# Returns: list of (Path, score) tuples, sorted by relevance

# 3. File context detection
context = detect_file_context("src/auth/login.py")
# Returns: dict with namespace, tags, keywords

# 4. Topic extraction from prompt
topic = extract_topic("How do we handle user authentication?")
# Returns: "user authentication"
```

Scoring Algorithm:
Memories are scored based on multiple signals:
| Signal | Weight | Description |
|---|---|---|
| Same cognitive type | 30 | Both in _semantic/*, _episodic/*, etc. |
| Exact namespace match | 20 | Same sub-namespace (e.g., decisions) |
| Tag overlap | 20/tag | Shared tags |
| Title keyword match | 15/keyword | Keywords in title |
| Content keyword match | 5/keyword | Keywords in body |
Minimum threshold: 15 points (configurable via `SCORE_MIN_THRESHOLD`)
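As a sketch, the weights in the table above could combine like this. It is illustrative only; the field names (`namespace`, `tags`, `title`, `body`, `keywords`) are assumptions, not the module's internals.

```python
SCORE_MIN_THRESHOLD = 15

def score_memory(ref: dict, candidate: dict) -> int:
    """Combine the weighted signals from the table into one score."""
    score = 0
    # same cognitive type (_semantic/*, _episodic/*, ...): +30
    if ref["namespace"].split("/")[0] == candidate["namespace"].split("/")[0]:
        score += 30
    # exact namespace match: +20
    if ref["namespace"] == candidate["namespace"]:
        score += 20
    # shared tags: +20 each
    score += 20 * len(set(ref["tags"]) & set(candidate["tags"]))
    # keywords found in the title: +15 each; in the body: +5 each
    title_words = set(candidate["title"].lower().split())
    body = candidate["body"].lower()
    for kw in ref["keywords"]:
        if kw.lower() in title_words:
            score += 15
        if kw.lower() in body:
            score += 5
    return score
```

Candidates scoring below `SCORE_MIN_THRESHOLD` would be dropped from the results.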
API Reference:

`search_memories()` - Basic ripgrep search across all memory roots.

Args:
- `topic`: Search keywords
- `max_results`: Maximum results to return

Returns: List of absolute paths to matching .memory.md files

`find_related_memories_scored()` - Semantic search with relevance scoring.

Args:
- `title`: Reference memory title
- `tags`: Reference tags list
- `namespace`: Reference namespace
- `content_keywords`: Additional keywords
- `max_results`: Maximum results

Returns: List of (path, score) tuples, sorted by score descending
Example:
```python
results = find_related_memories_scored(
    title="PostgreSQL migration",
    tags=["database", "migration"],
    namespace="_procedural/migrations",
    content_keywords=["schema", "version"],
    max_results=5
)
for path, score in results:
    print(f"{score:3d} - {path.name}")
# Output:
#  65 - postgres-upgrade-v14.memory.md
#  50 - database-schema-changes.memory.md
#  35 - migration-rollback-procedure.memory.md
```

`detect_file_context()` - Infer namespace and tags from file path.
Pattern Detection:
- Authentication: `auth`, `login`, `jwt`, `oauth`
- API: `api`, `endpoint`, `route`, `controller`
- Database: `db`, `model`, `schema`, `migration`
- Testing: `test`, `spec`, `__tests__`
- Security: `security`, `crypto`, `hash`
- Configuration: `config`, `settings`, `.env`
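A sketch of how such a pattern table can drive context detection. The helper name, the suggested namespaces, and the tags below are invented for illustration; only the patterns come from the list above.

```python
import re

# first match wins; (regex, suggested namespace, suggested tags)
_PATTERNS = [
    (r"auth|login|jwt|oauth", "_semantic/knowledge/security", ["authentication"]),
    (r"api|endpoint|route|controller", "_semantic/knowledge/api", ["api"]),
    (r"db|model|schema|migration", "_semantic/knowledge/database", ["database"]),
    (r"test|spec|__tests__", "_procedural/testing", ["testing"]),
]

def detect_context(file_path: str) -> dict:
    """Return namespace/tags for the first pattern matching the path."""
    lowered = file_path.lower()
    for pattern, namespace, tags in _PATTERNS:
        if re.search(pattern, lowered):
            return {"namespace": namespace, "tags": tags,
                    "keywords": re.findall(r"[a-z]+", lowered)}
    return {"namespace": "_semantic/knowledge", "tags": [], "keywords": []}
```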
Returns:

```python
{
    'namespace': str,  # Inferred namespace
    'tags': list,      # Suggested tags
    'keywords': list   # Extracted keywords
}
```

`extract_topic()` - Extract searchable topic from user prompt.
Processing:
- Remove stopwords (the, a, is, etc.)
- Extract keywords
- Normalize spacing
- Limit to reasonable length
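Those processing steps can be sketched as follows. This is a simplified stand-in with a small stopword list, not the module's implementation, so it is named `extract_topic_sketch` to avoid confusion with the real API.

```python
import re

_STOPWORDS = {"the", "a", "an", "is", "are", "our", "we", "do", "does",
              "how", "what", "why", "to", "in", "of", "for", "on"}

def extract_topic_sketch(prompt: str, max_words: int = 6) -> str:
    """Lowercase, tokenize, drop stopwords, cap the length."""
    words = re.findall(r"[a-z0-9_]+", prompt.lower())
    keywords = [w for w in words if w not in _STOPWORDS]
    return " ".join(keywords[:max_words])
```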
Example:

```python
extract_topic("What is our authentication strategy?")
# Returns: "authentication strategy"

extract_topic("How do we deploy to production?")
# Returns: "deploy production"
```

Design Notes:
- All search uses `ripgrep` for speed
- Searches across all memory roots (user + project)
- Case-insensitive matching
- Graceful fallback if ripgrep unavailable
- No exceptions on search failures
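A sketch of the ripgrep-with-fallback approach described in these notes. The helper name is hypothetical, and the module's actual flags and result ordering may differ.

```python
import shutil
import subprocess
from pathlib import Path

def rg_search(topic: str, root: Path, max_results: int = 5) -> list:
    """Case-insensitive search over *.memory.md files, preferring
    ripgrep and degrading to a pure-Python scan without it."""
    if shutil.which("rg"):
        proc = subprocess.run(
            ["rg", "-il", topic, "-g", "*.memory.md", str(root)],
            capture_output=True, text=True,
        )
        paths = [Path(line) for line in proc.stdout.splitlines()]
    else:
        # fallback: naive full scan when ripgrep is unavailable
        paths = [p for p in sorted(root.rglob("*.memory.md"))
                 if topic.lower() in p.read_text(errors="ignore").lower()]
    return paths[:max_results]
```

`rg -i` gives the case-insensitive matching, `-l` lists matching files only, and `-g` restricts the scan to `.memory.md` files.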
File: lib/relationships.py
MIF-compliant relationship type registry and bidirectional linking. Implements MIF Section 8.2 relationship semantics.
Relationship Types (MIF Section 8.2):
| PascalCase | Inverse | Symmetric | Use Case |
|---|---|---|---|
| `RelatesTo` | `RelatesTo` | ✓ | General association |
| `DerivedFrom` | `Derives` | ✗ | Conclusions from evidence |
| `Supersedes` | `SupersededBy` | ✗ | Decision evolution |
| `ConflictsWith` | `ConflictsWith` | ✓ | Contradictory information |
| `PartOf` | `Contains` | ✗ | Hierarchical structure |
| `Implements` | `ImplementedBy` | ✗ | Spec to implementation |
| `Uses` | `UsedBy` | ✗ | Dependency tracking |
| `Created` | `CreatedBy` | ✗ | Provenance |
| `MentionedIn` | `Mentions` | ✗ | Cross-references |
Quick Example:
```python
from lib.relationships import add_bidirectional_relationship

# Add bidirectional link
add_bidirectional_relationship(
    source_path="decision-a.memory.md",
    target_path="decision-b.memory.md",
    rel_type="supersedes",  # snake_case or PascalCase
    label="Replaced by new approach"
)
# Creates:
# - decision-a.memory.md: supersedes -> decision-b
# - decision-b.memory.md: superseded_by -> decision-a
```

API Reference:
Add single relationship to frontmatter.
Args:
- `memory_path` (str): Memory file to modify
- `rel_type` (str): Relationship type (snake_case or PascalCase)
- `target_id` (str): Target memory UUID
- `label` (str, optional): Human-readable description

Validation:
- Checks type is valid via `is_valid_type()`
- For asymmetric types, rejects inverse direction
- Prevents duplicate relationships
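A sketch of those validation checks against a tiny stand-in registry. The real module's registry covers every MIF 8.2 type in the table above; the asymmetric inverse-direction check is omitted here for brevity, and `can_add_relationship` is a hypothetical name.

```python
# Stand-in registry for illustration only
_TYPES = {"RelatesTo", "Supersedes", "SupersededBy", "DerivedFrom", "Derives"}

def can_add_relationship(rel_type: str, target_id: str,
                         existing: list) -> bool:
    """Mirror two of the checks above: valid type, no duplicates."""
    if rel_type not in _TYPES:  # is_valid_type()-style check
        return False
    for rel in existing:  # duplicate prevention
        if rel.get("type") == rel_type and rel.get("target") == target_id:
            return False
    return True
```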
`add_bidirectional_relationship()` - Add bidirectional relationship pair.

Args:
- `source_path` (str): Source memory file path
- `target_path` (str): Target memory file path
- `rel_type` (str): Relationship type (snake_case or PascalCase)
- `label` (str, optional): Human-readable description

Returns: Tuple of (forward_success, inverse_success) booleans.

Automatic Inverse:
- `supersedes` → creates `superseded_by` back-reference
- `relates_to` → creates `relates_to` (symmetric)
- `derived_from` → creates `derives` back-reference
Example:
```python
# Link new decision to old one
add_bidirectional_relationship(
    source_path="new-approach.memory.md",
    target_path="old-approach.memory.md",
    rel_type="supersedes"
)
# Frontmatter updates:
# new-approach.memory.md:
#   relationships:
#     - type: Supersedes
#       target: <old-uuid>
#
# old-approach.memory.md:
#   relationships:
#     - type: SupersededBy
#       target: <new-uuid>
```

Helper functions:

```python
from lib.relationships import (
    to_pascal,
    to_snake,
    get_inverse,
    is_valid_type,
    is_symmetric,
    get_all_valid_types
)

# Convert naming
to_pascal("supersedes")      # → "Supersedes"
to_snake("SupersededBy")     # → "superseded_by"

# Get inverse
get_inverse("Supersedes")    # → "SupersededBy"
get_inverse("RelatesTo")     # → "RelatesTo" (symmetric)

# Validate
is_valid_type("Supersedes")  # → True
is_valid_type("Invalid")     # → False

# Check symmetry
is_symmetric("RelatesTo")    # → True
is_symmetric("Supersedes")   # → False

# Get all types
get_all_valid_types()
# → {'RelatesTo', 'Supersedes', 'SupersededBy', ...}
```

Design Notes:
- MIF Section 8.2 compliant
- PascalCase in frontmatter (MIF standard)
- snake_case for backward compatibility
- Automatic bidirectional linking
- Validation prevents invalid relationships
- Used by capture workflow, gc compression, custodian
File: lib/ontology.py
Custom ontology loading and validation. Supports namespace hierarchies, entity types, and discovery patterns.
Key Functions:
```python
from lib.ontology import (
    load_ontology_data,
    load_ontology_namespaces,
    validate_memory_against_ontology,
    get_ontology_file
)

# Load ontology YAML
ontology = load_ontology_data()
# Returns: dict with namespaces, entities, relationships, discovery

# Get namespace hierarchy
namespaces = load_ontology_namespaces()
# Returns: dict mapping namespace -> metadata

# Validate memory
is_valid, errors = validate_memory_against_ontology(
    memory_type="semantic",
    namespace="_semantic/decisions"
)

# Get ontology file path
ontology_path = get_ontology_file()
# Currently resolves to the built-in fallback ontology file:
# skills/ontology/fallback/ontologies/mif-base.ontology.yaml
```

Ontology Resolution:
Currently, `get_ontology_file()` resolves only the built-in fallback ontology:

- Fallback: `skills/ontology/fallback/ontologies/mif-base.ontology.yaml`

Project-level (`.claude/mnemonic/ontology.yaml`) and user-level (`~/.claude/mnemonic/ontology.yaml`) ontology files are not currently used in the resolution process; they may be supported in a future version.
Example Ontology:
```yaml
namespaces:
  - id: semantic/decisions
    type: semantic
    description: Architectural choices and rationale
    retention_days: 365

  - id: semantic/knowledge/security
    type: semantic
    description: Security policies and practices
    retention_days: 180

entities:
  - type: technology
    schema:
      - field: name
        required: true
      - field: version
        required: false
    namespaces: [semantic/entities]

relationships:
  - type: depends_on
    description: Technical dependency

discovery:
  file_patterns:
    - pattern: "auth|login"
      suggest_namespace: semantic/knowledge/security
      suggest_tags: [authentication]
```

API Reference:
`load_ontology_data()` - Load the ontology YAML (currently the built-in fallback; see Ontology Resolution above).

Returns: Complete ontology dict, or a default if not found

`validate_memory_against_ontology()` - Validate that a memory conforms to the loaded ontology.

Returns: (is_valid, error_messages)
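A sketch of what such a namespace check might look like. `validate_namespace` and the leading-underscore stripping are assumptions for illustration, not the module's behavior.

```python
from typing import Dict, List, Tuple

def validate_namespace(namespace: str, memory_type: str,
                       ontology_namespaces: Dict[str, dict]
                       ) -> Tuple[bool, List[str]]:
    """The namespace must exist in the loaded ontology and its
    declared type must match the memory's type."""
    errors = []
    # assume ontology ids omit the leading underscore ("semantic/decisions")
    meta = ontology_namespaces.get(namespace.lstrip("_"))
    if meta is None:
        errors.append(f"unknown namespace: {namespace}")
    elif meta.get("type") != memory_type:
        errors.append(f"type mismatch for {namespace}: "
                      f"expected {meta.get('type')!r}")
    return (not errors, errors)
```

Returning the `(is_valid, errors)` pair lets callers surface all problems at once instead of failing on the first.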
Installation
The library is included with mnemonic and requires no separate installation.
Dependencies:
- Python 3.8+
- PyYAML (optional but recommended for memory_reader)
```shell
# Install optional dependencies
pip install pyyaml

# Or use project requirements
pip install -r requirements.txt  # if available
```

```python
# Recommended: import specific functions
from lib.paths import PathResolver, Scope
from lib.search import search_memories
from lib.memory_reader import get_memory_summary

# Alternative: import lib module
import lib
resolver = lib.PathResolver()
```

Quick Start

```python
from pathlib import Path
from lib.paths import PathResolver, Scope
from lib.search import find_related_memories_scored
from lib.memory_reader import get_memory_summary

# Initialize
resolver = PathResolver()

# Find related memories
results = find_related_memories_scored(
    title="API Authentication",
    tags=["security", "api"],
    namespace="_semantic/decisions"
)

# Read summaries
for path, score in results[:3]:
    summary = get_memory_summary(path)
    print(f"[{score}] {summary['title']}: {summary['summary']}\n")
```

Testing with an isolated resolver:

```python
import pytest
from pathlib import Path
from lib.paths import PathResolver, PathContext, PathScheme, Scope

@pytest.fixture
def test_resolver(tmp_path):
    """Isolated resolver for testing."""
    context = PathContext(
        org="testorg",
        project="testproject",
        home_dir=tmp_path / "home",
        project_dir=tmp_path / "project",
        memory_root=tmp_path / "mnemonic",
        scheme=PathScheme.LEGACY
    )
    return PathResolver(context)

def test_memory_operations(test_resolver):
    memory_dir = test_resolver.get_memory_dir("_semantic/decisions", Scope.PROJECT)
    assert memory_dir.parent.name == "mnemonic"
```

| Module | Primary Use Case | Key Functions |
|---|---|---|
| `paths.py` | Path resolution | `get_memory_dir()`, `get_search_paths()` |
| `config.py` | Configuration | `get_memory_root()`, `MnemonicConfig.load()` |
| `memory_reader.py` | Parsing | `get_memory_metadata()`, `get_memory_summary()` |
| `migrate_filenames.py` | File migration | `migrate_all()`, `migrate_file()`, `is_migration_complete()` |
| `search.py` | Search/ranking | `search_memories()`, `find_related_memories_scored()` |
| `relationships.py` | Linking | `add_bidirectional_relationship()`, `get_inverse()` |
| `ontology.py` | Validation | `load_ontology_data()`, `validate_memory_against_ontology()` |
```python
# Paths
from lib.paths import PathResolver, Scope, get_memory_dir

# Config
from lib.config import get_memory_root

# Memory reading
from lib.memory_reader import get_memory_metadata, get_memory_summary

# Migration
from lib.migrate_filenames import migrate_all, is_migration_complete

# Search
from lib.search import search_memories, find_related_memories_scored

# Relationships
from lib.relationships import add_bidirectional_relationship, get_inverse

# Ontology
from lib.ontology import load_ontology_data, validate_memory_against_ontology
```

See Also:
- Architecture - System design overview
- CLI Usage - Command-line operations
- Validation - MIF schema validation
- Ontologies - Custom ontology guide
- ADRs - Architectural decisions
When adding new library modules:
- Add comprehensive docstrings (Google style)
- Include type hints for all public functions
- Write unit tests in `tests/unit/test_<module>.py`
- Update `lib/__init__.py` with exports
- Document in this reference guide
- Add examples to README or ADR
MIT - Same as mnemonic project