
Getting Started

This guide is written for someone who just wants Memory Layer working with as little setup friction as possible.


Prerequisites

Before you install or run the wizard, have these ready:

  • a PostgreSQL connection string
  • optional: an OpenAI-compatible API key if you want memory scan
  • PostgreSQL with pgvector installed if you want semantic retrieval
  • go on PATH if you plan to use the repo-local Memory Layer skills through go run

You do not need to invent a Memory Layer service token yourself for normal installs. Setup generates a machine-local token automatically in memory-layer.env, and local write-capable tools use that token to authenticate to mem-service.
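The prerequisite checks above can be sketched as a quick shell script. This is illustrative only: it checks for the tools on PATH, and the commented `psql` line assumes you have a `DATABASE_URL` connection string on hand.

```shell
# Quick prerequisite check (illustrative sketch).
missing=""
command -v psql >/dev/null 2>&1 || missing="$missing psql"
command -v go   >/dev/null 2>&1 || missing="$missing go"
if [ -n "$missing" ]; then
  echo "missing:$missing"
else
  echo "all prerequisites found"
fi
# With a reachable database, pgvector availability can be confirmed with:
#   psql "$DATABASE_URL" -c "SELECT 1 FROM pg_available_extensions WHERE name = 'vector';"
```

Remember that go is only required for the repo-local skill bundle, and psql is only a convenient way to verify your connection string before handing it to the wizard.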

Fast Install: Debian

  1. Download the latest .deb package from the GitHub Releases page.
  2. Install it:
sudo dpkg -i memory-layer_<version>_amd64.deb
  3. Configure the shared/global settings once on this machine:
memory wizard --global

This is where you set the shared database URL. The shared service API token is provisioned automatically if it is missing or still using the development placeholder. A writer ID is optional; if you do not set one, Memory Layer derives a stable writer identity automatically.

  4. Go to the project you want to use:
cd /path/to/your-project
  5. Run the repo-local setup wizard:
memory wizard

The repo-local skill bundle that memory wizard installs uses a shared Go helper under .agents/skills/memory-layer/scripts/, so agent-driven skill usage in that repository requires go to be available on PATH.

Most mutating memory commands also support --dry-run, so you can preview setup, write, indexing, bundle, and checkpoint operations before they touch local files, services, or backend state.

  6. Start the backend service:
sudo systemctl enable --now memory-layer.service
  7. Open the UI you prefer:
memory tui

Fast Install: macOS

  1. Tap this repository and install the formula:
brew tap 3vilM33pl3/memory https://github.com/3vilM33pl3/memory
brew install --HEAD 3vilM33pl3/memory/memory-layer
  2. Configure the shared/global settings once on this machine:
memory wizard --global
  3. Go to the project you want to use:
cd /path/to/your-project
  4. Run the repo-local setup wizard:
memory wizard

The repo-local skill bundle that memory wizard installs uses a shared Go helper under .agents/skills/memory-layer/scripts/, so agent-driven skill usage in that repository requires go to be available on PATH.

  5. Start the backend LaunchAgent:
memory service enable
  6. Open the TUI:
memory tui

or in a browser:

http://127.0.0.1:4040/

What The Wizard Will Ask For

The wizard can set up:

  • shared/global settings when that scope is enabled:
    • the PostgreSQL database URL
    • the shared service API token override, if you want to replace the auto-generated one
    • an optional shared writer.id
  • optional LLM settings for scan
  • repo-local .mem/ files
  • optional watcher setup
  • the repo-local memory skill bundle, which uses a shared Go helper under .agents/skills/memory-layer/scripts/

Important detail:

  • inside a repository, memory wizard is local-first by default
  • use memory wizard --global when you want to edit the shared/global config
  • or enable shared/global setup in the first wizard step

File Locations

Shared configuration

Debian install:

  • /etc/memory-layer/memory-layer.toml
  • /etc/memory-layer/memory-layer.env

macOS install:

  • ~/Library/Application Support/memory-layer/memory-layer.toml
  • ~/Library/Application Support/memory-layer/memory-layer.env

Local install:

  • ~/.config/memory-layer/memory-layer.toml
  • ~/.config/memory-layer/memory-layer.env

Per-project configuration

Inside each project:

  • .mem/config.toml
  • .mem/project.toml
  • .mem/memory-layer.env
  • .mem/runtime/
  • .agents/memory-layer.toml

What To Put Where

Shared/global config

Use this for values shared by many repos:

  • database.url
  • service.api_token
  • [cluster] settings for backend relay discovery on a local network
  • [llm] settings

The shared service API token normally lives in the adjacent memory-layer.env file and is provisioned automatically during setup.
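Putting the keys above together, a minimal shared config might look like the following sketch. All values are placeholders, and the model id under [llm] is just an example of an OpenAI-compatible model name; the service.api_token is omitted because it normally lives in the adjacent memory-layer.env file.

```toml
# /etc/memory-layer/memory-layer.toml (Debian path) -- placeholder values

[database]
url = "postgres://memory:password@db-host:5432/memory"

[cluster]
enabled = true   # only if you want relay discovery on the local network

[llm]
model = "gpt-4o-mini"   # placeholder; any OpenAI-compatible model id
```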

Repo-local config

Use this for project-specific overrides:

  • watcher settings
  • local backend ports
  • project-specific DB override if needed
  • repo-specific writer.id override if one project should write under a different custom writer identity

Project memory behavior

Use .agents/memory-layer.toml for project-owned behavior that should be easy to adapt without digging through service config:

  • include and ignore path hints for repository scans
  • enabled analyzers
  • curation replacement policy for memory updates
  • future graph and plugin controls

Example:

[curation]
replacement_policy = "balanced"

Available policies are conservative, balanced, and aggressive. balanced is the default.
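The other project-owned knobs listed above could sit alongside [curation] in the same file. Note that every key name below other than replacement_policy is a hypothetical illustration of the idea, not a documented schema:

```toml
[curation]
replacement_policy = "balanced"   # conservative | balanced | aggressive

# Hypothetical keys sketching the scan path hints and analyzer toggles
# described above -- check the shipped template for the real names:
[scan]
include = ["src/", "docs/"]
ignore = ["target/", "node_modules/"]

[analyzers]
enabled = ["readme", "commits"]
```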

Env files

Use these for secrets such as:

MEMORY_LAYER__SERVICE__API_TOKEN=auto-generated-or-manually-overridden
OPENAI_API_KEY=your-api-key-here
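The MEMORY_LAYER__… names appear to mirror the TOML key path, with a double underscore as the section separator. The sketch below shows that apparent mapping; treat the convention as an assumption rather than a documented guarantee.

```shell
# Sketch: map an env-var name back to its likely TOML key path.
# Assumes MEMORY_LAYER__A__B mirrors key "b" in TOML table [a].
key="MEMORY_LAYER__SERVICE__API_TOKEN"
path=$(printf '%s' "${key#MEMORY_LAYER__}" | tr '[:upper:]' '[:lower:]' | sed 's/__/./g')
echo "$path"   # prints: service.api_token
```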

Writer ID

Each coding agent or tool that writes memory gets a writer ID.

If you do nothing, Memory Layer derives one automatically from:

  • the writing tool
  • the local user
  • the local host name

That gives stable defaults such as:

  • memory-olivier-monolith
  • memory-watcher-olivier-monolith

For most setups, that automatic writer identity is enough.
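The derived defaults above follow a simple tool-user-host pattern. Roughly, as a sketch of the apparent scheme (not the exact implementation):

```shell
# Sketch of the apparent derivation: <tool>-<user>-<host>.
tool="memory"            # or "memory-watcher" for the watcher
user="olivier"           # local user (placeholder)
host="monolith"          # local host name (placeholder)
writer_id="${tool}-${user}-${host}"
echo "$writer_id"   # prints: memory-olivier-monolith
```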

You can configure it in TOML:

[writer]
id = "codex-cli-main"
name = "Codex CLI"

or with an environment variable:

export MEMORY_LAYER_WRITER_ID=codex-cli-main

Use an explicit writer ID only when you want a custom stable label shared across tools or machines.

Primary And Relay Services

If a machine can reach PostgreSQL, mem-service runs as a primary.

If a machine cannot reach PostgreSQL but can see another Memory Layer service on the local network, mem-service can run as a relay. In relay mode it discovers a primary over UDP multicast and forwards the normal HTTP API and browser WebSocket traffic to it.

Relay discovery is controlled from shared config:

[cluster]
enabled = true

The wizard exposes this as a shared setup option, and memory service enable can offer to turn it on after a database-connect failure.

Daily Use

Open the TUI:

memory tui

For a visual walkthrough of each tab, use the TUI Guide.

Open the web UI:

http://127.0.0.1:4040/

Check health:

memory service status
memory health
memory doctor

Save a useful project fact:

memory remember --project my-project --note "Deployment uses a systemd service."

Search project memory:

memory query --project my-project --question "How is deployment handled here?"

Export a shareable memory bundle:

memory bundle export --project my-project --out my-project.mlbundle.zip

For deeper documentation:

  • semantic-search maintenance commands such as memory embeddings reindex, memory embeddings reembed, and memory embeddings prune: see Embedding Operations
  • project memory backup and restore: see Memory Bundles
  • watcher health states, restart behavior, and recovery signals in the TUI: see Watcher Health
  • the direct write command: see Remember Command
  • service management and setup diagnostics: see Service Commands and Doctor Diagnostics
  • bootstrap behavior: see Wizard And Bootstrap

For getting back into flow after an interruption, see Resume Briefings.

Optional Background Watcher

If you want Memory Layer to capture useful work in the background:

memory watcher enable --project my-project

When the backend service restarts, service-managed watchers will restart too so they reconnect cleanly to the new backend instance.

Check it:

memory watcher status --project my-project

In the TUI:

  • the Watchers tab shows each watcher's health, restart attempts, and last heartbeat
  • the Activity tab shows watcher-health transitions such as stale, restarting, failed, and recovery back to healthy
  • recovery events now show what state the watcher recovered from and, when relevant, how many restart attempts happened before recovery

Disable it later:

memory watcher disable --project my-project

Upgrading An Existing Install

If you already use Memory Layer and are upgrading to a newer release:

  1. Install the new .deb.
  2. Make sure PostgreSQL has pgvector installed for your server version.
  3. Enable the extension in your target database:
CREATE EXTENSION IF NOT EXISTS vector;
  4. Restart the backend service:
sudo systemctl restart memory-layer.service
  5. Verify the setup:
memory doctor
  6. Rebuild embeddings for existing project memories:
memory embeddings reindex --project my-project

If you later switch the embedding model, Memory Layer keeps the old embedding space instead of overwriting it. Use:

memory embeddings reembed --project my-project

to materialize vectors for the newly active space, and:

memory embeddings prune --project my-project

only when you want to delete non-active embedding spaces explicitly.

For the command-level explanation of when to use each of those operations, see Embedding Operations.

If memory doctor reports that pgvector is missing, install the PostgreSQL package first and rerun the check.

On Debian, upgrades should preserve local edits to:

  • /etc/memory-layer/memory-layer.env
  • /etc/memory-layer/memory-layer.toml

Those files are treated as package-managed configuration files rather than being overwritten with package defaults on every upgrade.

Using scan

scan reads a repository, sends a structured summary to the configured LLM, and writes useful durable memories back into Memory Layer.


Try it safely first:

memory scan --project my-project --dry-run

Then write the results:

memory scan --project my-project

If scan fails, the two most common causes are:

  • missing [llm].model in config
  • missing OPENAI_API_KEY
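Both fixes are config-side. A minimal sketch, with the model id as a placeholder for whichever OpenAI-compatible model you use:

```toml
# In the shared memory-layer.toml:
[llm]
model = "gpt-4o-mini"   # placeholder model id

# And in the adjacent env file:
#   OPENAI_API_KEY=your-api-key-here
```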

Web UI Notes

The browser UI is served by mem-service itself. In a normal install it should work automatically once the service is running.

If you build from source, build the frontend first:

npm --prefix web ci
npm --prefix web run build

Then start the backend:

cargo run --bin memory -- service run

Importing Commit History

Memory Layer can also store git commits as project evidence without turning every commit into canonical memory.

Import recent or full history:

memory commits sync --project my-project

Browse imported commits:

memory commits list --project my-project
memory commits show --project my-project <commit-hash>

If memory doctor reports that no commit history has been imported yet, the fix is:

memory commits sync --project my-project

Running From Source

If you are developing Memory Layer itself:

cargo run --bin memory -- wizard
cargo run --bin memory -- service run
cargo run --bin memory -- tui --project memory

Optional watcher:

cargo run --bin memory -- watcher run --project memory

Related Docs