# Releases: JasonDocton/lucid-memory
## v0.6.5 — Fix Cross-Platform Native Binaries
All `.node` binaries shipped since January 30th were macOS ARM64 Mach-O files regardless of the target platform, causing immediate segfaults on Linux and Windows. Only macOS Apple Silicon users were unaffected.
### Root Cause
The CI workflow had two interacting bugs:
- Upload step (`path: *.node`) uploaded all checked-out `.node` files per artifact, not just the one built for that platform
- Rename step picked the first `.node` file alphabetically from checkout (the pre-existing `darwin-arm64` binary) and renamed it to the target platform name, overwriting the correctly built binary
### Fixes
- Clean stale binaries before building so only the fresh build output exists
- Upload only the specific target file (`lucid-native.{platform}.node`) instead of `*.node`
- Add binary format validation (the `file` command checks for ELF/Mach-O/PE32+) in both publish and release jobs
- Add format validation to the `install.sh` download fallback; wrong-format downloads are now rejected and deleted
- Fix release job permissions for asset uploads
- Add Linux ARM64 pre-built binary for lucid-native (new platform — previously required building from source)
### Defense in Depth
Six layers now prevent this from recurring:
1. `rm -f *.node` before build (eliminates checkout contamination)
2. Verify step confirms the expected filename exists after build
3. Upload only the specific target file
4. Publish job validates binary format before committing
5. Release job validates binary format before attaching
6. `install.sh` validates that the downloaded binary matches the host OS
### Platform Support
| Platform | lucid-native | lucid-perception |
|---|---|---|
| macOS ARM64 (Apple Silicon) | pre-built | pre-built |
| macOS x64 (Intel) | build from source | pre-built |
| Linux x64 | pre-built | pre-built |
| Linux ARM64 | pre-built (new!) | build from source |
| Windows x64 | pre-built | pre-built |
Thanks to @loster-nimmer for reporting the binary issue in #7!
## v0.6.4 — Fix Silent Embedding Failures
### What's Changed
When embedding providers fail, users previously got vague messages like "No embedding provider configured" with zero indication of WHY. This release makes every failure visible.
### Provider Diagnostics
`detectProvider()` now returns `ProviderDiagnostics`, which records the outcome of each provider attempt instead of silently falling through:
```
Embeddings:
✗ No embedding provider configured
Tried:
  Native: model failed to load (ORT error: unsupported operator)
  OpenAI: no OPENAI_API_KEY set
  Ollama: connection refused (not running)
```
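A minimal sketch of what diagnostics-recording detection can look like, assuming a probe-per-provider structure. Only the names `detectProvider`, `ProviderDiagnostics`, and `DetectProviderResult` come from the release notes; everything else here is illustrative:

```typescript
interface ProviderAttempt {
  provider: "native" | "openai" | "ollama";
  ok: boolean;
  error?: string; // why this provider was skipped or failed
}

interface ProviderDiagnostics {
  attempts: ProviderAttempt[];
}

type DetectProviderResult = {
  config: { provider: string } | null;
  diagnostics: ProviderDiagnostics;
};

// Try providers in priority order, recording the outcome of each attempt
// instead of silently falling through to the next one. Each probe returns
// null on success or a human-readable failure reason.
function detectProvider(
  probes: Array<{ name: ProviderAttempt["provider"]; probe: () => string | null }>
): DetectProviderResult {
  const attempts: ProviderAttempt[] = [];
  for (const { name, probe } of probes) {
    const error = probe();
    attempts.push({ provider: name, ok: error === null, ...(error ? { error } : {}) });
    if (error === null) {
      return { config: { provider: name }, diagnostics: { attempts } };
    }
  }
  // No provider worked; the caller can render `attempts` as the "Tried:" list.
  return { config: null, diagnostics: { attempts } };
}
```

The key property is that the failure path carries the same information as the success path, so the CLI can always explain *why* a provider was rejected.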
### `lucid status` tells the truth
Previously, `lucid status` checked file existence, not whether the model actually loads. It now uses the real diagnostics from provider detection; if the model is corrupt or incompatible, you'll see the specific error.
### Query results surface retrieval method
`memory_query` responses now include `method` and `warning` fields so callers can tell when results are degraded:
```json
{
  "count": 3,
  "method": "recency",
  "warning": "Embedding failed — results ranked by recency only, not relevance",
  "memories": [...]
}
```

### Files Changed
- `embeddings.ts`: `ProviderDiagnostics` interface; `detectProvider()` returns `{ config, diagnostics }`; `loadNativeEmbeddingModel()` returns `{ success, error? }`
- `retrieval.ts`: `RetrievalResult` interface wrapping candidates + method + warning
- `cli.ts`: Status command uses diagnostics, shows per-provider failure reasons
- `server.ts`: `initializeEmbeddings()` logs diagnostics; `memory_query` surfaces method/warning
- `index.ts`: Exports new types (`RetrievalResult`, `ProviderDiagnostics`, `DetectProviderResult`)
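The degraded-result path can be sketched as follows. The `RetrievalResult` name and the `method`/`warning` fields come from the release notes; the function names and fallback logic are illustrative assumptions:

```typescript
interface RetrievalResult {
  method: "semantic" | "recency";
  warning?: string;
  memories: Array<{ ts: number }>;
}

// When embeddings are unavailable, fall back to recency ordering but say so
// explicitly instead of returning silently degraded results.
function retrieve(embedOk: boolean, memories: Array<{ ts: number }>): RetrievalResult {
  if (!embedOk) {
    return {
      method: "recency",
      warning: "Embedding failed — results ranked by recency only, not relevance",
      memories: [...memories].sort((a, b) => b.ts - a.ts), // newest first
    };
  }
  return { method: "semantic", memories };
}

// Shape the MCP tool response; `warning` is only present when set.
function buildQueryResponse(result: RetrievalResult) {
  return {
    count: result.memories.length,
    method: result.method,
    ...(result.warning ? { warning: result.warning } : {}),
    memories: result.memories,
  };
}
```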
### Verification
- 215 tests pass, 0 failures
- tsc, biome, clippy all clean
- No Rust changes — TypeScript only
## v0.6.3
### Fix: ONNX Embedding Model Compatibility
Switches the bundled embedding model from `model_fp16.onnx` to `model_quantized.onnx`, fixing the "No embedding provider configured" error that affected all users.
Thanks to @io41 for finding and reporting this issue!
### What happened
The FP16 model from `Xenova/bge-base-en-v1.5` contains ORT graph optimizations (`SimplifiedLayerNormFusion`, `InsertedPrecisionFreeCast`) that are incompatible with the `ort` 2.0.0-rc.11 runtime bundled in the native binary. The model file would download successfully but silently fail to load.
### What changed
- Switched to `model_quantized.onnx` (INT8, 110MB vs 218MB), fully compatible with ort 2.x
- Installers now clean up the old FP16 model on re-install
- Updated CI to build and ship the quantized model
### Upgrade
Re-run the installer to download the correct model:
```sh
curl -fsSL https://raw.githubusercontent.com/JasonDocton/lucid-memory/main/install.sh | bash
```

## v0.6.2
Minor fixes: removed dead NAPI bindings, orphaned fields, and resolved lint issues.
## v0.6.1
### Wire 5 Unused Storage Methods — Gate 2 Fix
The 0.6.0 five-gate audit identified 5 storage methods as unused externally and made them private. Biome then flagged them as unused private members, and they were deleted to clear lint — introducing real breakage. This release restores each method and wires it into its correct call site.
### New MCP Tool
- `visual_list`: Browse/filter visual memories by media type, `sharedBy`, and significance without requiring semantic search (complements `visual_search`)
### Wiring Fixes
| Method | Wired Into | Purpose |
|---|---|---|
| `queryVisualMemories()` | `visual_list` MCP tool | Filter-based visual browsing |
| `getVisualEmbedding()` | Consolidation visual pruning | Skip visuals still being processed |
| `getEpisode()` | Public API | Single-episode lookup (was only batch) |
| `updateProjectContext()` | `store()` flow | Accumulates `memoryCount`, `typesSeen`, `allTags` per project |
| `hasEmbedding()` | Public API | Per-memory embedding existence check |
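As a rough sketch of the per-project accumulation described for `updateProjectContext()`: the field names `memoryCount`, `typesSeen`, and `allTags` come from the release notes, while the in-memory map stands in for the real storage layer:

```typescript
interface ProjectContext {
  memoryCount: number;
  typesSeen: Set<string>;
  allTags: Set<string>;
}

// Stand-in for persistent storage; the real implementation writes to SQLite.
const projects = new Map<string, ProjectContext>();

// Called from the store() flow: fold each new memory into its project's context.
function updateProjectContext(project: string, type: string, tags: string[]): void {
  const ctx = projects.get(project) ?? {
    memoryCount: 0,
    typesSeen: new Set<string>(),
    allTags: new Set<string>(),
  };
  ctx.memoryCount += 1;
  ctx.typesSeen.add(type);
  for (const t of tags) ctx.allTags.add(t);
  projects.set(project, ctx);
}
```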
### Enriched Diagnostics
`memory_consolidation_status` now reports:
- Recent episodes with event counts and boundary types
- Pending embedding count (memories awaiting background processing)
### Verification
- 0 tsc errors
- 215 tests pass
- 0 new biome warnings
## 0.5.2 OpenCode Support
- OpenCode plugin with 4 hooks (system prompt injection, continuous encoding, compaction, env isolation)
- Multi-client installer selection (any combo of Claude Code, Codex, OpenCode)
- Full install/uninstall on macOS/Linux + Windows
- Security fix: `execFileSync`/`execFile` instead of `execSync` (no command injection)
- Bug fixes: conditional Claude restart, profile name sanitization
## v0.5.1: Native BGE Embeddings & Installer Hardening
### What's New
### Native In-Process Embeddings
Lucid Memory no longer requires Ollama. Embeddings now run in-process via ONNX Runtime using the BGE-base-en-v1.5 model (768-dim, FP16). This means:
- Zero external dependencies for semantic search
- Faster startup — no waiting for Ollama to be ready
- ~218MB model downloaded once during install
Provider priority: native → OpenAI → Ollama (legacy). Existing Ollama/OpenAI users are automatically migrated.
### Automatic Embedding Migration
Users upgrading from Ollama or OpenAI embeddings get a seamless migration:
- Stale embeddings (wrong model) are detected and deleted on startup
- Background processor re-embeds at ~10 memories per 5 seconds
- Both text and visual embeddings are migrated
- `lucid status` shows real-time migration progress with time estimates
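Given the stated re-embed rate of ~10 memories per 5 seconds, the time estimate shown by `lucid status` can be derived as in this hypothetical sketch (function names and output format are assumptions):

```typescript
// Documented background re-embed rate: ~10 memories per 5-second batch.
const BATCH_SIZE = 10;
const BATCH_INTERVAL_S = 5;

// Remaining time is one interval per (partial) batch of pending memories.
function migrationEtaSeconds(pending: number): number {
  return Math.ceil(pending / BATCH_SIZE) * BATCH_INTERVAL_S;
}

// Render a human-friendly estimate for the status output.
function formatEta(pending: number): string {
  const s = migrationEtaSeconds(pending);
  return s >= 60 ? `~${Math.ceil(s / 60)} min remaining` : `~${s}s remaining`;
}
```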
### Installer Hardening
- Fixed broken downloads: Switched from GitHub release URLs (0 assets) to HuggingFace CDN
- Partial download protection: Download-to-temp-then-rename pattern — Ctrl+C can't leave corrupt files
- Size validation: Model files verified >100MB to catch CDN error pages
- PowerShell performance: `$ProgressPreference = 'SilentlyContinue'` fixes extremely slow downloads on PS 5.1
- Removed Ollama references from all installers and uninstallers
### Storage & Reliability
- Added indexes on `embeddings.model` and `visual_embeddings.model` for fast migration queries
- `PRAGMA busy_timeout = 5000` prevents SQLite deadlock between CLI and server
- Minimized Mutex lock scope in Rust ONNX inference (pooling runs outside the lock)
- Added `MutexPoisoned` error variant for better diagnostics
### Testing
- 27 new embedding migration tests covering full Ollama→native and OpenAI→native lifecycle
- Installer lint script (`scripts/lint-installers.sh`) validates URLs, patterns, and consistency
- 190 tests pass across 7 test files, 509 assertions
Published to crates.io: lucid-core v0.5.1
## v0.5.0 - Episodic Memory
Memories now form temporal episodes with forward/backward temporal links, enabling "what was I working on before X?" queries. Built on the Temporal Context Model (Howard & Kahana, 2002).
### Highlights
- Episode lifecycle: Auto-creates episodes on `store()` with boundary detection (time gap >5min, >50 events, project switch)
- Temporal spreading: Top 5 seeds spread activation through episode links during `retrieve()`, with native Rust + TypeScript fallback
- `memory_narrative` MCP tool: New tool for temporal context queries ("what was I doing before/after X?")
- `retrieveTemporalNeighbors()`: Programmatic API for before/after temporal queries
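The boundary rules above can be sketched as a small predicate; the function name, event shape, and boundary labels here are assumptions, with only the thresholds (gap > 5 min, > 50 events, project switch) taken from the release notes:

```typescript
// Thresholds stated in the release notes.
const MAX_GAP_MS = 5 * 60 * 1000; // > 5 minutes of silence ends an episode
const MAX_EVENTS = 50;            // episodes are capped at 50 events

interface EpisodeState {
  project: string;
  lastEventTs: number;
  eventCount: number;
}

type Boundary = "project-switch" | "time-gap" | "capacity" | null;

// Decide whether the incoming event starts a new episode, and why.
function boundaryFor(state: EpisodeState, now: number, project: string): Boundary {
  if (project !== state.project) return "project-switch";
  if (now - state.lastEventTs > MAX_GAP_MS) return "time-gap";
  if (state.eventCount >= MAX_EVENTS) return "capacity";
  return null; // keep appending to the current episode
}
```

Returning the boundary *type* rather than a boolean is what lets `memory_consolidation_status` report boundary types per episode.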
### Benchmark Impact
| Metric | Before | After |
|---|---|---|
| Cognitive Avg | 79.3% | 87.3% |
| Delta vs RAG | +8% | +16% |
| Episode Retrieval | 0% | 80% |
### Schema Additions
- `episodes` table: temporal sequence containers
- `episode_events` table: links memories to episodes with position
- `episode_temporal_links` table: forward/backward links between events
### Testing
- 16 new episodic memory tests (163 total)
- Episodic e2e benchmark: 5/5 scenarios passing (96% avg)
## 0.4.5
Auto-association feature:
- New memories automatically link to recent similar memories (last 30 min, similarity > 0.4)
- Creates up to 5 associations per memory
- Enables spreading activation to find related context that pure semantic search would miss (e.g., finding Button.module.css when querying "Button component")
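A minimal sketch of this selection rule, using the stated parameters (30-minute window, similarity > 0.4, at most 5 links); the candidate shape and function name are hypothetical:

```typescript
// Parameters stated in the release notes.
const WINDOW_MS = 30 * 60 * 1000; // only consider memories from the last 30 min
const MIN_SIMILARITY = 0.4;       // similarity threshold for auto-linking
const MAX_LINKS = 5;              // cap on associations per new memory

interface Candidate {
  id: string;
  ts: number;         // when the candidate memory was stored
  similarity: number; // cosine similarity to the new memory
}

// Pick which recent memories the new memory should auto-associate with.
function selectAssociations(candidates: Candidate[], now: number): string[] {
  return candidates
    .filter((c) => now - c.ts <= WINDOW_MS && c.similarity > MIN_SIMILARITY)
    .sort((a, b) => b.similarity - a.similarity) // strongest links first
    .slice(0, MAX_LINKS)
    .map((c) => c.id);
}
```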
## 0.4.4
### Fixed Recency vs Similarity Imbalance
**Problem:** The additive formula allowed recent irrelevant files to beat older relevant files.

**Solution:** A multiplicative formula where similarity is primary and recency acts as a boost:
```
recency_boost = clamp((base_level + 10) / 10, 0, 1)
total = probe * (1 + recency_boost) + spreading
```
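Transcribed into runnable TypeScript (names follow the formula's variables; the `clamp` helper is as defined in the formula):

```typescript
function clamp(x: number, lo: number, hi: number): number {
  return Math.min(hi, Math.max(lo, x));
}

// Multiplicative scoring: similarity (probe) is primary, and the recency
// boost can at most double it; spreading activation is added on top.
function totalActivation(probe: number, baseLevel: number, spreading: number): number {
  const recencyBoost = clamp((baseLevel + 10) / 10, 0, 1);
  return probe * (1 + recencyBoost) + spreading;
}
```

Because the boost multiplies `probe` instead of being added to it, a recent but irrelevant memory (low `probe`) can no longer outrank an older, highly relevant one.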