# Local Second Mind (LSM)

Local-first RAG for personal knowledge management.
LSM ingests local documents, builds embeddings, retrieves relevant context, and produces cited answers with configurable LLM providers.
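The retrieve-and-cite loop described above can be illustrated with a toy sketch. This is generic and illustrative only, not LSM's implementation: real retrieval uses the configured embedding model, not bag-of-words counts, and the document paths here are made up.

```python
# Toy retrieval step: score documents against a query by cosine similarity
# of bag-of-words vectors, then cite the best-matching source.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical corpus; in LSM these would be chunks of ingested local files.
docs = {
    "notes/rag.md": "retrieval augmented generation grounds answers in documents",
    "notes/todo.md": "buy groceries and water the plants",
}
query = Counter("how does retrieval ground answers".split())
best = max(docs, key=lambda path: cosine(Counter(docs[path].split()), query))
print(f"answer context from [{best}]")
```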
**Version:** 0.10.0
This project is maintained for personal use first.
- Pull requests are not accepted. See CONTRIBUTING.md for how to help.
- Bugs and feature requests are welcome as GitHub issues.
- Until v1.0.0, breaking changes can happen between releases, especially in the configuration schema and interfaces.
- Pin versions and review `docs/CHANGELOG.md` before upgrading.
- Ruff linting and formatting enforced across the entire codebase.
- mypy gradual type checking with zero errors across 335 source files.
- Bandit security scan clean at medium+ severity; pre-commit hooks wired.
- Config files are discovered from platform-native locations via `platformdirs`.
- `.env` is loaded from the config directory first, then the CWD.
- `lsm config init` creates a starter config in the platform config directory.
**Breaking release:** `lsm` now starts a FastAPI web server instead of the TUI. Use `lsm cli` for the TUI.
- Web UI: Full-featured FastAPI web interface with query chat, ingest operations, agent management, settings, admin tools, and help/docs screens
- LAN auth: Session-based authentication with CSRF protection for multi-device access
- Chat persistence: DB-backed conversations with retry/variant/branch support, full-text search, and export
- Agent coordination: DB-tracked agent runs with scheduling, interaction inbox, live log streaming, and start/stop/pause controls
- Compaction: Shared conversation compaction primitive for managing long chat contexts
- Schema heal: `lsm db heal` repairs current-version databases with missing tables/indexes without replaying migrations
- TUI simplification: Reduced from 5 tabs to 2 screens (Command REPL + Settings)
- Remote chains to DB: Remote provider chains moved from config to DB-backed storage
- Notes removed: Standalone notes system replaced by conversation export
- Historical regression: Golden archive test matrix validates upgrade paths across releases
- Install: `pip install -e .`
- Copy config: `cp example_config.json config.json`
- Add API keys to `.env` (see `.env.example`).
- Build embeddings: `lsm ingest build`
- Start the web server: `lsm`
- Or use the TUI: `lsm cli`

Example config:

```json
{
  "global": {
    "global_folder": "C:/Users/You/Local Second Mind",
    "embed_model": "sentence-transformers/all-MiniLM-L6-v2",
    "device": "cpu",
    "batch_size": 32
  },
  "ingest": {
    "roots": ["C:/Users/You/Documents"]
  },
  "llms": {
    "providers": [{ "provider_name": "openai" }],
    "services": {
      "default": { "provider": "openai", "model": "gpt-5.2" }
    }
  }
}
```

CLI commands:

- `lsm` - start web server (default)
- `lsm cli` - launch TUI
- `lsm ingest build [--dry-run] [--force] [--skip-errors]`
- `lsm ingest tag [--max N]`
- `lsm ingest wipe --confirm`
- `lsm db init|check|upgrade|heal|migrate|sync|golden-create`
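Assuming the schema shown in the example config, a minimal sketch of reading it and resolving which provider and model back the `default` LLM service:

```python
# Load a config with the example's "llms" shape and resolve the default
# service. The cross-check that a service references a configured provider
# is an assumed invariant for illustration, not a documented LSM rule.
import json

raw = """
{
  "llms": {
    "providers": [{ "provider_name": "openai" }],
    "services": {
      "default": { "provider": "openai", "model": "gpt-5.2" }
    }
  }
}
"""
config = json.loads(raw)
service = config["llms"]["services"]["default"]
providers = {p["provider_name"] for p in config["llms"]["providers"]}
assert service["provider"] in providers
print(service["provider"], service["model"])
```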
Global flags:
- `--config path/to/config.json`
- `--verbose`
- `--log-level DEBUG|INFO|WARNING|ERROR|CRITICAL`
- `--log-file path/to/lsm.log`
- `--lan` - expose to LAN
- `--host` / `--hostname` / `--port` - server overrides
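A hypothetical sketch of how a `--log-level` flag like the one above typically maps onto Python's logging levels; this illustrates the convention, not LSM's actual CLI code.

```python
# Parse a --log-level choice and translate it to a logging constant.
import argparse
import logging

parser = argparse.ArgumentParser(prog="lsm")
parser.add_argument("--log-level", default="INFO",
                    choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"])

args = parser.parse_args(["--log-level", "DEBUG"])
level = getattr(logging, args.log_level)  # the level name doubles as a constant name
logging.basicConfig(level=level)
```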
Documentation:

- docs/README.md
- docs/user-guide/GETTING_STARTED.md
- docs/user-guide/CONFIGURATION.md
- docs/user-guide/QUERY_MODES.md
- docs/user-guide/QUERY_PIPELINE.md
- docs/user-guide/QUERY_PIPELINE.md
- docs/user-guide/REMOTE_SOURCES.md
- docs/user-guide/UPGRADING_TO_0_8.md
- docs/user-guide/UPGRADING_TO_0_8_1.md
- docs/user-guide/UPGRADING_TO_0_9.md
- .agents/docs/INDEX.md
- docs/CHANGELOG.md
License: MIT (see `LICENSE`).