Auto-tag notes · Summarise your week · Extract action items · Chat with your vault
# 1️⃣ Build the plugin
npm install && npm run build
# 2️⃣ Install into your vault
mkdir -p <your-vault>/.obsidian/plugins/memex
cp main.js manifest.json <your-vault>/.obsidian/plugins/memex/
# 3️⃣ Enable in Obsidian
# Settings → Community Plugins → Reload → Toggle "Memex" ON
# Then set your AI provider (Local or Gemini) in Settings → Memex

Using Gemini? Grab a free API key from Google AI Studio — no local server needed.
Using a local LLM? Start LM Studio or Ollama and load a chat + embedding model.
- Streaming responses — tokens appear in real-time as the LLM generates them, with automatic fallback to non-streaming if the server doesn't support it.
- Multiple conversations — create, rename, and delete chats; sidebar lists them sorted by last activity.
- Resizable sidebar — drag the divider or toggle the sidebar open/closed.
- Message actions (per-message ⋯ menu):
  - Copy any message to clipboard.
  - Edit & re-submit a user message (trims history and re-generates).
  - Regenerate an assistant response.
  - Delete a single message.
  - Export to Note — saves a message as a new Markdown file in your vault.
- Per-conversation settings — override temperature, max tokens, system prompt, RAG top-K, and similarity threshold on a per-chat basis via the ⚙️ button.
- Personas — switch the assistant's personality (e.g., Zettelkasten Guide, Daily Reflector, Concise Summarizer). Fully customisable in settings.
- PDF export — right-click a chat in the sidebar → Export to PDF. Renders full Markdown with styled headings, code blocks, and lists into an A4 PDF saved to `Memex/PDFs/`.
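The streaming behaviour above — live tokens with an automatic non-streaming fallback — can be sketched like this. The `ChatProvider` shape and the `generate` function are illustrative assumptions, not Memex's actual API:

```typescript
// Hypothetical provider shape: streaming support is optional.
interface ChatProvider {
  complete(prompt: string): Promise<string>;
  stream?(prompt: string, onToken: (t: string) => void): Promise<string>;
}

// Try streaming first; fall back to a single non-streaming call if the
// provider has no stream method or the server rejects the request.
async function generate(
  provider: ChatProvider,
  prompt: string,
  onToken: (t: string) => void
): Promise<string> {
  if (provider.stream) {
    try {
      return await provider.stream(prompt, onToken);
    } catch {
      // Streaming advertised but failed: fall through to the plain call.
    }
  }
  const full = await provider.complete(prompt);
  onToken(full); // Deliver the whole response as one chunk so the UI still updates.
  return full;
}
```

The UI only ever calls `generate`, so the fallback is invisible to callers.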
- Intelligent query rewriting — follow-up questions are automatically rewritten into standalone search queries using the LLM, so context isn't lost across turns.
- Content-hash indexing — only re-embeds notes whose content has actually changed; hashes are persisted to disk across reloads, eliminating redundant API calls.
- Idle-based auto-indexing — dirty files are queued and re-indexed when you navigate away from a note (not on every keystroke).
- Optimised vector store:
  - `Float32Array` embeddings with pre-computed norms for ~2–3× faster cosine similarity.
  - Min-heap top-K search — avoids sorting the entire index.
  - Path index for O(1) document lookups and deletions.
  - Compact JSON serialisation (~40% smaller on disk).
- Excluded folders — keep `Templates`, `.obsidian`, or any other folders out of the index.
- Manual & automatic — index the full vault on demand, or let the watcher handle it.
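The vector-store search described above can be sketched as follows. The `Doc` shape and `topK` function are hypothetical names for illustration; the point is pre-computed norms (one dot product per document at query time) plus a size-K min-heap, so only K items ever get sorted:

```typescript
// Hypothetical index entry: embedding plus its pre-computed L2 norm.
interface Doc { path: string; vec: Float32Array; norm: number; }

function norm(v: Float32Array): number {
  let s = 0;
  for (let i = 0; i < v.length; i++) s += v[i] * v[i];
  return Math.sqrt(s);
}

// Cosine similarity with pre-computed norms: only the dot product
// is paid per document at query time.
function cosine(q: Float32Array, qNorm: number, d: Doc): number {
  let dot = 0;
  for (let i = 0; i < q.length; i++) dot += q[i] * d.vec[i];
  return dot / (qNorm * d.norm || 1);
}

function topK(docs: Doc[], query: Float32Array, k: number) {
  const qNorm = norm(query);
  // Min-heap of the k best scores seen so far; the root is the worst of them.
  const heap: { path: string; score: number }[] = [];
  const swap = (i: number, j: number) => { const t = heap[i]; heap[i] = heap[j]; heap[j] = t; };
  const siftUp = (i: number) => {
    while (i > 0) {
      const p = (i - 1) >> 1;
      if (heap[p].score <= heap[i].score) break;
      swap(i, p); i = p;
    }
  };
  const siftDown = (i: number) => {
    for (;;) {
      const l = 2 * i + 1, r = l + 1;
      let m = i;
      if (l < heap.length && heap[l].score < heap[m].score) m = l;
      if (r < heap.length && heap[r].score < heap[m].score) m = r;
      if (m === i) break;
      swap(i, m); i = m;
    }
  };

  for (const d of docs) {
    const score = cosine(query, qNorm, d);
    if (heap.length < k) {
      heap.push({ path: d.path, score });
      siftUp(heap.length - 1);
    } else if (score > heap[0].score) {
      heap[0] = { path: d.path, score }; // Evict the current worst hit.
      siftDown(0);
    }
  }
  // Only k elements to sort, regardless of index size.
  return heap.sort((a, b) => b.score - a.score);
}
```

This keeps a full-vault search at O(n log k) instead of O(n log n).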
- Auto Tag Current Note — LLM suggests 3–5 relevant tags and prepends them.
- Extract Action Items — finds TODOs/tasks and appends them as a checklist.
- Generate Weekly Summary — summarises all notes modified in the last 7 days and saves the result to `Weekly Summaries/{Year}/`.
Swap between providers at any time — no restart required:
| | Local (LM Studio / Ollama) | Google Gemini |
|---|---|---|
| Chat model | `qwen/qwen3-vl-4b` (default) | `gemini-2.5-flash` (default) |
| Embedding model | `text-embedding-nomic-embed-text-v1.5` | `gemini-embedding-001` |
| Server | Your local machine | Google Cloud (API key) |
Both providers implement the same `ILLMProvider` / `IEmbeddingProvider` interfaces using OpenAI-compatible endpoints, so any model that speaks that protocol works.
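A rough sketch of what that shared contract could look like. Only the interface names come from the source; the method names, signatures, and the `EchoProvider` stand-in are assumptions for illustration:

```typescript
// Illustrative provider contract; Memex's actual method names may differ.
interface ILLMProvider {
  complete(messages: { role: string; content: string }[]): Promise<string>;
}

interface IEmbeddingProvider {
  embed(texts: string[]): Promise<Float32Array[]>;
}

// Because both local and Gemini backends satisfy the same interfaces,
// callers never branch on the provider. A trivial stand-in:
class EchoProvider implements ILLMProvider {
  async complete(messages: { role: string; content: string }[]): Promise<string> {
    // A real provider would POST to an OpenAI-compatible endpoint here.
    return messages[messages.length - 1].content;
  }
}
```

Swapping providers then reduces to constructing a different implementation behind the same interface.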
Open the Command Palette (Cmd/Ctrl + P) and search for Memex:
| Command | Description |
|---|---|
| Open Chat with Journal | Opens the chat sidebar |
| Auto Tag Current Note | Analyses the note and prepends tags |
| Extract Action Items | Finds TODOs and appends a checklist |
| Generate Weekly Summary | Summarises the last 7 days of notes |
| Index Vault for RAG | Full vault embedding index (with progress) |
| Clear RAG Index | Wipes the index for a fresh rebuild |
| View RAG Index Statistics | Shows total indexed document chunks |
| Debug RAG Retrieval | Select text → retrieves matching chunks (logged to console) |
- Click the 💬 ribbon icon or run the Open Chat with Journal command.
- Type a message and press Enter (or Shift+Enter for a new line).
- Use the ⚙️ button next to Send to adjust per-chat settings.
- Right-click a conversation in the sidebar for rename / export / delete options.
Go to Settings → Memex.
- Provider: Local (LM Studio / Ollama) or Google Gemini.
- LLM Endpoint — URL of your local server (default `http://localhost:1234`).
- Chat Model — model identifier for chat completions.
- Embedding Model — model identifier for embeddings.
- API Key — your Gemini API key.
- Chat Model — Gemini model (default `gemini-2.5-flash`).
- Embedding Model — Gemini embedding model (default `gemini-embedding-001`).
- Weekly Summary Path — folder for weekly summaries.
- Default Temperature — controls randomness (0.0–1.0).
- Default Max Tokens — maximum response length.
- Enable RAG — toggle retrieval features on/off.
- Chunk Size — words per chunk (default 200).
- Chunk Overlap — overlapping words between chunks (default 30).
- Top K Results — number of chunks to retrieve (default 6).
- Similarity Threshold — minimum relevance score (default 0.4).
- Auto-Index on Change — re-index notes on create/modify/delete.
- Excluded Folders — comma-separated list of folders to skip.
- Personas JSON — edit the array of `{name, prompt}` objects to customise assistant behaviour.
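The Chunk Size and Chunk Overlap settings above describe word-based sliding-window chunking. A minimal sketch, with the defaults from the settings (200 words per chunk, 30 words of overlap); the function name `chunkWords` is hypothetical:

```typescript
// Split note text into overlapping word windows. Each window is `size`
// words long and the next window starts `size - overlap` words later,
// so neighbouring chunks share `overlap` words of context.
function chunkWords(text: string, size = 200, overlap = 30): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  const step = Math.max(1, size - overlap); // Guard against overlap >= size.
  for (let i = 0; i < words.length; i += step) {
    chunks.push(words.slice(i, i + size).join(" "));
    if (i + size >= words.length) break; // Final window already reaches the end.
  }
  return chunks;
}
```

The overlap is what lets a retrieved chunk carry a sentence that straddles a chunk boundary, at the cost of slightly more embeddings per note.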
main.ts → Plugin entry point, settings, commands
├── providers.ts → LLM & Embedding provider interfaces + Local/Gemini implementations
├── llm_service.ts → LLM service (completion + streaming)
├── embedding_service.ts → Chunking + batch embedding generation
├── vector_store.ts → JSON-backed vector store with Float32Array + min-heap search
├── rag_service.ts → RAG orchestration (indexing, retrieval, query rewriting)
├── processor.ts → Note processing (tags, action items, weekly summary)
├── conversation_manager.ts → Conversation CRUD (JSON files in .memex/)
└── chat_view.ts → Chat UI (sidebar, messages, streaming, PDF export)
- Multi-modal notes — image and PDF understanding via vision models
- Graph-aware RAG — leverage Obsidian's link graph to boost retrieval relevance
- Ollama auto-detect — automatically discover running models, no manual config
- Mobile support — optimise the chat UI and indexing for Obsidian Mobile
- Semantic search command — vault-wide natural language search from the command palette
- Note generation — create new notes from chat responses with backlinks
- Scheduled summaries — automatic daily/weekly/monthly summaries on a cron
- Plugin marketplace — submit to the Obsidian Community Plugins directory
Have an idea? Open an issue — PRs welcome!
Contributions are welcome — whether it's a bug fix, new feature, or documentation improvement.
- Fork the repo and create a new branch:
git checkout -b feature/my-feature
- Make your changes — follow the existing code style and add comments where needed.
- Build & test to make sure everything compiles:
npm run build
- Submit a Pull Request with a clear description of what you changed and why.
git clone https://github.com/manas-33/memex.git
cd memex
npm install
npm run dev # Watch mode — rebuilds on file changes

Then symlink or copy the built files into your vault's `.obsidian/plugins/memex/` folder and reload Obsidian.
MIT — see LICENSE for details.
