+
+One-click templates for managed hosts. Each one ships a self-contained
+Dockerfile that pulls `@agentmemory/agentmemory` from npm and copies
+the iii engine binary in from the official `iiidev/iii` Docker Hub
+image — no pre-built agentmemory image required. Persistent storage
+mounts at `/data`; the first-boot entrypoint overwrites the
+npm-bundled iii config (which binds `127.0.0.1`) with a deploy-tuned
+one that binds `0.0.0.0` and uses absolute `/data` paths, generates
+the HMAC secret, then drops privileges from `root` to `node` via
+`gosu` before exec'ing the agentmemory CLI.
+
+Render's one-click deploy button requires a `render.yaml` at the repository root, and we deliberately keep the root clean. Instead, use the Render Blueprint flow documented in [`deploy/render/`](./deploy/render/README.md) to point Render at the in-repo blueprint manually.
+
+Full setup details (HMAC capture, viewer SSH tunnel, rotation, backup,
+cost floors) live in [`deploy/`](./deploy/README.md):
+
+- [`deploy/fly`](./deploy/fly/README.md) — single machine with
+ `auto_stop_machines = "stop"`; cheapest idle.
+- [`deploy/railway`](./deploy/railway/README.md) — Hobby plan flat fee,
+ volume in the dashboard.
+- [`deploy/render`](./deploy/render/README.md) — Blueprint flow,
+ automatic disk snapshots on paid plans.
+- [`deploy/coolify`](./deploy/coolify/README.md) — self-hosted on your
+ own VPS via [Coolify](https://coolify.io/self-hosted); same Docker
+ Compose stack, you own the host and the data.
+
+Only port `3111` is published. The viewer on `3113` stays bound to
+loopback inside the container — every template's README documents the
+SSH-tunnel pattern for reaching it.
+
+---
+
Every coding agent forgets everything when the session ends. You waste the first 5 minutes of every session re-explaining your stack. agentmemory runs in the background and eliminates that entirely.
diff --git a/deploy/README.md b/deploy/README.md
new file mode 100644
index 00000000..91aa199e
--- /dev/null
+++ b/deploy/README.md
@@ -0,0 +1,100 @@
+# One-click deploy templates
+
+Stand up agentmemory on managed infrastructure without rolling your own
+Docker host. Each template ships a self-contained Dockerfile that pulls
+`@agentmemory/agentmemory` from npm at build time and copies the iii
+engine binary in from the official `iiidev/iii` image — no pre-built
+agentmemory image required. Storage mounts at `/data`; an HMAC secret
+is generated by the first-boot entrypoint and persisted to the volume.
+The entrypoint overwrites the npm-bundled iii config with a
+deploy-tuned one that binds `0.0.0.0` and uses absolute `/data` paths,
+then drops privileges from `root` to `node` via `gosu` before
+exec'ing the agentmemory CLI.
+
+| Platform | Pitch | Cost floor |
+|----------|-------|------------|
+| [fly.io](./fly/README.md) | Single machine with auto-stop. Cheapest idle cost on a managed host; cold-start on first request after sleep. | ~$0.15/month at full idle |
+| [Railway](./railway/README.md) | Push from GitHub, volume in the dashboard. Easiest managed dashboard flow. | $5/month (Hobby plan flat fee) |
+| [Render](./render/README.md) | Blueprint-driven; persistent disk attaches automatically. Most "set it and forget it." | $7.25/month (Starter web + 1 GB disk) |
+| [Coolify](./coolify/README.md) | Self-hosted on your own VPS. Same Docker Compose stack, you own the host and the data. | VPS cost only (Hetzner CX22 ~€3.79/month) |
+
+## What every template guarantees
+
+- **Volume mounted at `/data`.** Matches the path the engine has used
+ since v0.9.10.
+- **HMAC secret generated on first boot** via `openssl rand -hex 32`,
+ written to `/data/.hmac` with `chmod 600`, and printed to stdout
+ exactly once so the operator can capture it from the deploy logs.
+ Subsequent boots load the secret from the file. The secret is never
+ committed to a config file or set as a platform env var.
+- **Only port 3111 is exposed publicly.** The viewer on port 3113
+ stays bound to the container's localhost. Reach it via SSH tunnel
+ (see each platform's README).
+- **TLS upstream of the container.** Every managed platform terminates
+ TLS at its edge proxy; the templates publish a single internal port
+ (`3111`) to that proxy, never to the host. Integration plugins
+ configured with `AGENTMEMORY_REQUIRE_HTTPS=1` will refuse to send the
+ bearer over plaintext HTTP to a non-loopback host, so a
+ misconfigured TLS layer fails loud instead of silently leaking the
+ secret.
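
Under the hood, the generate-once guarantee is a few lines of POSIX shell. Here is a minimal sketch; the templates' real `entrypoint.sh` scripts additionally `chown` the file and print the capture-once banner, and they use `/data/.hmac` rather than this demo path:

```shell
# ensure_hmac FILE -- generate a 64-hex-char secret once, reuse it after.
ensure_hmac() {
  file="$1"
  if [ ! -s "$file" ]; then
    umask 077                       # secret file lands mode 0600
    openssl rand -hex 32 > "$file"  # 32 random bytes -> 64 hex chars
  fi
  cat "$file"
}

SECRET="$(ensure_hmac ./demo.hmac)"   # templates call this with /data/.hmac
```

Deleting the file and restarting is exactly the rotation flow each platform README documents.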
+
+## Pick a platform
+
+- Pick **fly.io** if you want the lowest idle cost and don't mind a
+ cold-start latency hit on the first request after sleep.
+- Pick **Railway** if you want a clicky dashboard flow and a flat
+ monthly bill.
+- Pick **Render** if you want the most "set it and forget it"
+ Blueprint flow with automatic disk snapshots on paid plans.
+- Pick **Coolify** if you already run a VPS and want a self-hosted
+ control plane — same Docker Compose stack, no third-party host has
+ your memories.
+
+All four give you the same agentmemory API at the same port (3111)
+with the same auth model. Migrating between them later is a `tar` of
+`/data` and a re-import — see each platform's README for the exact
+commands.
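
The migration itself has no moving parts beyond `tar`. A runnable sketch with stand-in paths (on a real migration the archive source is the old host's `/data` and the extract target is the new host's volume root):

```shell
# Old host: archive the volume contents.
mkdir -p demo-data && echo "bm25-index" > demo-data/index.db
tar czf agentmemory-backup.tar.gz demo-data

# New host: restore before the service starts writing.
rm -rf demo-data
tar xzf agentmemory-backup.tar.gz
cat demo-data/index.db
```

Each platform README wraps the same two `tar` invocations in that platform's SSH transport.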
+
+## Optional: LLM + embedding provider keys
+
+Every template runs out of the box without any LLM or embedding key —
+search falls back to BM25-only mode and synthetic (zero-LLM)
+compression keeps memories indexable. To unlock LLM-powered
+compression and hybrid (BM25 + vector) recall, add one of the
+following to your platform's environment variables (Fly:
+`flyctl secrets set`; Railway / Render / Coolify: dashboard
+*Variables / Environment* tab):
+
+| Variable | Purpose |
+|---------------------------|----------------------------------------------------------|
+| `ANTHROPIC_API_KEY` | LLM-backed compression + summarization |
+| `GEMINI_API_KEY` | LLM provider alternative |
+| `OPENROUTER_API_KEY` | LLM provider alternative |
+| `OPENAI_API_KEY` | Embedding provider (text-embedding-3-small by default) |
+| `VOYAGE_API_KEY` | Embedding provider alternative |
+| `AGENTMEMORY_AUTO_COMPRESS=true` | Run LLM compression on every observation batch |
+| `AGENTMEMORY_INJECT_CONTEXT=true` | Inject recalled memories back into agent prompts |
+
+The defaults are intentionally conservative: provider keys default to
+absent (no third-party calls), `AGENTMEMORY_AUTO_COMPRESS` is off,
+and `AGENTMEMORY_INJECT_CONTEXT` is off. Opt in only after you've
+confirmed your provider quota can absorb the workload.
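
The fallback decision is simple enough to sketch. The function below mirrors the documented behavior (hybrid recall only when an embedding key is present); it is an illustration, not the actual agentmemory source:

```shell
# recall_mode EMBED_KEY -- "hybrid" when an embedding key is set,
# "bm25-only" otherwise, matching the fallback described above.
recall_mode() {
  if [ -n "${1:-}" ]; then
    echo "hybrid"
  else
    echo "bm25-only"
  fi
}

recall_mode ""          # no key: BM25-only search
recall_mode "sk-demo"   # key present: BM25 + vector recall
```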
+
+## Cold-start budget
+
+Measured against fly.io's `iad` region with a 1 GB volume:
+
+```
+machine image prepared : 5.1 s
+volume mount + format : 2.5 s
+firecracker boot : 1.0 s
+entrypoint + chown : 0.5 s
+iii-engine ready : 3.0 s
+agentmemory worker reg : 2.0 s
+─────────────────────────────────
+healthcheck passes     : ~9-10 s (excludes one-time image prep)
+```
+
+Every template's health-check `grace_period` (or compose
+`start_period`) is set to 30 s for a 3x safety margin. Tune lower
+once you've measured your own platform's image-pull characteristics.
diff --git a/deploy/coolify/Dockerfile b/deploy/coolify/Dockerfile
new file mode 100644
index 00000000..3f6c433f
--- /dev/null
+++ b/deploy/coolify/Dockerfile
@@ -0,0 +1,32 @@
+ARG III_VERSION=0.11.2
+
+FROM iiidev/iii:${III_VERSION} AS iii-image
+
+FROM node:22-slim
+
+ARG AGENTMEMORY_VERSION=0.9.12
+ARG III_VERSION=0.11.2
+ARG III_SDK_VERSION=0.11.2
+
+RUN apt-get update \
+ && apt-get install -y --no-install-recommends openssl ca-certificates tini gosu curl \
+ && rm -rf /var/lib/apt/lists/*
+
+COPY --from=iii-image /app/iii /usr/local/bin/iii
+
+WORKDIR /opt/agentmemory
+RUN printf '{"name":"agentmemory-deploy","version":"1.0.0","private":true,"overrides":{"iii-sdk":"%s"}}\n' "${III_SDK_VERSION}" > package.json \
+ && npm install "@agentmemory/agentmemory@${AGENTMEMORY_VERSION}" --omit=optional --no-fund --no-audit \
+ && ln -s /opt/agentmemory/node_modules/.bin/agentmemory /usr/local/bin/agentmemory
+
+ENV AGENTMEMORY_III_VERSION=${III_VERSION} \
+ TINI_SUBREAPER=1
+
+COPY --chmod=0755 entrypoint.sh /usr/local/bin/agentmemory-entrypoint.sh
+
+EXPOSE 3111
+
+HEALTHCHECK --interval=30s --timeout=5s --start-period=30s --retries=3 \
+ CMD curl -fsS http://127.0.0.1:3111/agentmemory/livez || exit 1
+
+ENTRYPOINT ["/usr/bin/tini", "--", "/usr/local/bin/agentmemory-entrypoint.sh"]
diff --git a/deploy/coolify/README.md b/deploy/coolify/README.md
new file mode 100644
index 00000000..feec4757
--- /dev/null
+++ b/deploy/coolify/README.md
@@ -0,0 +1,132 @@
+# Deploy agentmemory on Coolify
+
+[Coolify](https://coolify.io/self-hosted) is an open-source, self-hosted
+Heroku/Render alternative that you run on your own VPS. This template
+deploys agentmemory as a Coolify *Application* backed by a Docker
+Compose stack — Coolify handles TLS termination, persistent volume
+provisioning, log aggregation, and the deploy webhook for you.
+
+## What you get
+
+- A public HTTPS endpoint serving the agentmemory REST API behind
+ Coolify's built-in Traefik/Caddy proxy. The container port (`3111`)
+ is exposed to the proxy network only — never bound to the host — so
+ TLS termination and domain routing stay under proxy control.
+- A persistent Docker volume backing `/data` for memories, BM25 index,
+ and stream backlog. Coolify auto-prefixes the volume name with the
+ application's UUID so the data survives redeploys.
+- An HTTP health-check at `/agentmemory/livez` declared in the
+ Dockerfile (`HEALTHCHECK` directive). Coolify reuses it for
+ rolling-deploy decisions.
+
+## One-time setup
+
+1. **Open your Coolify dashboard** and click **+ New → Application**.
+2. **Source**: pick *Public Repository*. Paste:
+ ```
+ https://github.com/rohitg00/agentmemory
+ ```
+ Branch: `main`.
+3. **Build Pack**: select *Docker Compose*.
+4. **Base Directory**: `deploy/coolify`
+5. **Compose Path**: `docker-compose.yml`
+6. Click **Save**, then on the application settings screen set a
+   **Domain** in the form `https://<your-domain>:3111` (the `:3111`
+ suffix tells Coolify's proxy which container port to forward to;
+ it still serves over 443/80 publicly).
+7. Click **Deploy**.
+
+That's it. Coolify clones the repo, builds the Dockerfile under
+`deploy/coolify/`, provisions the `agentmemory-data` named volume on
+the host, attaches Traefik (or Caddy) for the public domain, and starts
+the service. The container is reachable only through the proxy — there
+is no published host port.
+
+## Capture the HMAC secret
+
+Once the deploy logs show the service is up, open the application's
+**Logs** tab in Coolify and search for `AGENTMEMORY_SECRET=`. You will
+see exactly one line of the form `AGENTMEMORY_SECRET=<64 hex chars>`.
+Copy it into your client environment (`~/.bashrc`, Claude Desktop
+config, etc.). The secret is never printed again on subsequent boots.
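
If you prefer the host shell to the dashboard, the same banner line can be filtered out of `docker logs`. The container name is whatever Coolify assigned; the pipeline below demonstrates the filter on stand-in log text:

```shell
# Stand-in for: docker logs <container-name> 2>&1 | grep '^AGENTMEMORY_SECRET='
logs='booting
AGENTMEMORY_SECRET=0123abcd
ready'
secret_line="$(printf '%s\n' "$logs" | grep '^AGENTMEMORY_SECRET=')"
echo "$secret_line"
```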
+
+## Verify the deployment
+
+```bash
+curl "https://<your-domain>/agentmemory/livez"
+# {"status":"ok"}
+```
+
+For an authenticated call, your client must send
+`Authorization: Bearer <secret>`.
+
+## Viewer access (port 3113 stays internal)
+
+The viewer port is not exposed by the compose file on purpose — it
+holds the unauthenticated admin surface in older releases and the
+proxied surface in current ones, neither of which belongs on the open
+internet. Two paths to reach it:
+
+**Option A — SSH tunnel from the Coolify host.** Coolify gives you SSH
+access to the underlying VPS. From your laptop:
+
+```bash
+ssh -L 3113:127.0.0.1:3113 <user>@<coolify-host>
+# inside the SSH session, find the container:
+docker ps --filter name=agentmemory --format "{{.Names}}"
+# probe the viewer from inside the container:
+docker exec -it <container-name> sh -c "curl http://localhost:3113"
+```
+
+Cleaner version: bind the container's 3113 to the host's loopback by
+adding `- "127.0.0.1:3113:3113"` to the `ports:` block in
+`docker-compose.yml`, redeploy, then `ssh -L 3113:127.0.0.1:3113
+<user>@<coolify-host>` is enough.
+
+**Option B — expose 3113 as a second Coolify domain protected by HTTP
+basic auth.** Coolify's per-service routing supports adding a second
+public endpoint with basic-auth middleware. Useful if you want to
+share the viewer with a teammate without giving them SSH.
+
+## Rotate the HMAC secret
+
+```bash
+ssh <user>@<coolify-host>
+docker exec -it <container-name> sh -c "rm /data/.hmac"
+exit
+```
+
+Then click **Redeploy** in the Coolify dashboard. The next boot prints
+a fresh secret to the logs.
+
+## Back up `/data`
+
+Coolify exposes the named volume on the host filesystem under
+`/var/lib/docker/volumes/<app-uuid>_agentmemory-data/_data`. Back it
+up with your existing host-level snapshot tooling (Restic, Borg,
+`rsync`, BTRFS snapshots, etc.) or via Coolify's built-in *Backups*
+feature for Docker volumes.
+
+## Cost floor and resources
+
+- **Hardware**: the agentmemory container idles at ~150 MB RSS, climbs
+ to ~400 MB under steady traffic. The bundled iii engine adds another
+ ~80 MB. A 1 vCPU / 1 GB VPS is comfortably enough for a personal
+ install.
+- **VPS providers commonly paired with Coolify**: Hetzner CX22
+ (~€3.79/month), DigitalOcean Basic Droplet ($6/month), Vultr Cloud
+ Compute ($6/month). Coolify itself is free.
+- **Volume storage**: tied to whatever block storage the VPS provides;
+ typically pennies per GB-month.
+
+## Known caveats
+
+- The Dockerfile builds on the Coolify host on every deploy. First
+ deploy takes ~2 minutes; cached layers shrink subsequent rebuilds to
+ under 30 seconds. Pin `AGENTMEMORY_VERSION` and `III_VERSION` in
+ `docker-compose.yml`'s `build.args` block to lock a specific release.
+- Coolify's *Persistent Storage* tab will show `agentmemory-data` as a
+ managed volume — do not delete it from the dashboard if you want
+ your memories to survive a redeploy.
+- arm64 hosts work — the iii binary is copied in from the
+  `iiidev/iii` base image, which Docker resolves to the build host's
+  architecture.
diff --git a/deploy/coolify/docker-compose.yml b/deploy/coolify/docker-compose.yml
new file mode 100644
index 00000000..1bd48648
--- /dev/null
+++ b/deploy/coolify/docker-compose.yml
@@ -0,0 +1,30 @@
+services:
+ agentmemory:
+ build:
+ context: .
+ dockerfile: Dockerfile
+ args:
+ AGENTMEMORY_VERSION: "0.9.12"
+ III_VERSION: "0.11.2"
+ III_SDK_VERSION: "0.11.2"
+ restart: unless-stopped
+ environment:
+ - SERVICE_FQDN_AGENTMEMORY_3111
+ expose:
+ - "3111"
+ volumes:
+ - agentmemory-data:/data
+ healthcheck:
+ test: ["CMD-SHELL", "curl -fsS http://127.0.0.1:3111/agentmemory/livez || exit 1"]
+ interval: 30s
+ timeout: 5s
+ start_period: 30s
+ retries: 3
+ logging:
+ driver: json-file
+ options:
+ max-size: "10m"
+ max-file: "3"
+
+volumes:
+ agentmemory-data:
diff --git a/deploy/coolify/entrypoint.sh b/deploy/coolify/entrypoint.sh
new file mode 100755
index 00000000..ffdd6333
--- /dev/null
+++ b/deploy/coolify/entrypoint.sh
@@ -0,0 +1,98 @@
+#!/bin/sh
+# agentmemory first-boot entrypoint.
+#
+# Runs as root so it can:
+# 1. Overwrite the npm-bundled iii-config.yaml (which binds 127.0.0.1
+# and uses relative ./data paths) with a deploy-tuned version that
+# binds 0.0.0.0 and uses absolute /data paths.
+# 2. chown the platform-mounted /data volume to the runtime user
+# (managed platforms mount volumes root-owned 755 by default).
+# 3. Generate the HMAC secret on first boot and persist it to
+# /data/.hmac (chmod 600) so the secret survives restarts.
+#
+# Then it execs the agentmemory CLI under gosu as the unprivileged
+# `node` user.
+
+set -eu
+
+DATA_DIR="${AGENTMEMORY_DATA_DIR:-/data}"
+HMAC_FILE="${AGENTMEMORY_HMAC_FILE:-/data/.hmac}"
+RUN_AS="node:node"
+III_CONFIG="/opt/agentmemory/node_modules/@agentmemory/agentmemory/dist/iii-config.yaml"
+
+mkdir -p "$DATA_DIR"
+chown -R "$RUN_AS" "$DATA_DIR"
+
+cat > "$III_CONFIG" <<'EOF'
+workers:
+ - name: iii-http
+ config:
+ port: 3111
+ host: 0.0.0.0
+ default_timeout: 180000
+ cors:
+ allowed_origins:
+ - "http://localhost:3111"
+ - "http://localhost:3113"
+ - "http://127.0.0.1:3111"
+ - "http://127.0.0.1:3113"
+ allowed_methods: [GET, POST, PUT, DELETE, OPTIONS]
+ - name: iii-state
+ config:
+ adapter:
+ name: kv
+ config:
+ store_method: file_based
+ file_path: /data/state_store.db
+ - name: iii-queue
+ config:
+ adapter:
+ name: builtin
+ - name: iii-pubsub
+ config:
+ adapter:
+ name: local
+ - name: iii-cron
+ config:
+ adapter:
+ name: kv
+ - name: iii-stream
+ config:
+ port: 3112
+ host: 0.0.0.0
+ adapter:
+ name: kv
+ config:
+ store_method: file_based
+ file_path: /data/stream_store
+ - name: iii-observability
+ config:
+ enabled: true
+ service_name: agentmemory
+ exporter: memory
+ sampling_ratio: 1.0
+ metrics_enabled: true
+ logs_enabled: true
+ logs_console_output: true
+EOF
+chown "$RUN_AS" "$III_CONFIG"
+
+if [ ! -s "$HMAC_FILE" ]; then
+ SECRET="$(openssl rand -hex 32)"
+ umask 077
+ printf '%s\n' "$SECRET" > "$HMAC_FILE"
+ chmod 600 "$HMAC_FILE"
+ chown "$RUN_AS" "$HMAC_FILE"
+ echo "================================================================"
+ echo "agentmemory: generated HMAC secret on first boot"
+ echo "AGENTMEMORY_SECRET=$SECRET"
+ echo "Copy this value now. It will not be printed again."
+ echo "Stored at: $HMAC_FILE (chmod 600)"
+ echo "To rotate: delete $HMAC_FILE on the persistent volume and restart."
+ echo "================================================================"
+fi
+
+AGENTMEMORY_SECRET="$(cat "$HMAC_FILE")"
+export AGENTMEMORY_SECRET
+
+exec gosu "$RUN_AS" agentmemory "$@"
diff --git a/deploy/fly/Dockerfile b/deploy/fly/Dockerfile
new file mode 100644
index 00000000..89257f93
--- /dev/null
+++ b/deploy/fly/Dockerfile
@@ -0,0 +1,35 @@
+ARG III_VERSION=0.11.2
+
+FROM iiidev/iii:${III_VERSION} AS iii-image
+
+FROM node:22-slim
+
+ARG AGENTMEMORY_VERSION=0.9.12
+ARG III_VERSION=0.11.2
+ARG III_SDK_VERSION=0.11.2
+
+RUN apt-get update \
+ && apt-get install -y --no-install-recommends openssl ca-certificates tini gosu curl \
+ && rm -rf /var/lib/apt/lists/*
+
+COPY --from=iii-image /app/iii /usr/local/bin/iii
+
+# Install agentmemory into a dedicated prefix so the local package.json's
+# `overrides` field pins iii-sdk down to match the engine (agentmemory's
+# caret range `^0.11.2` otherwise resolves to 0.11.6, the version that
+# requires the new sandbox-everything worker model the agentmemory CLI
+# is not refactored for yet). `npm install -g` ignores overrides, hence
+# the local prefix.
+WORKDIR /opt/agentmemory
+RUN printf '{"name":"agentmemory-deploy","version":"1.0.0","private":true,"overrides":{"iii-sdk":"%s"}}\n' "${III_SDK_VERSION}" > package.json \
+ && npm install "@agentmemory/agentmemory@${AGENTMEMORY_VERSION}" --omit=optional --no-fund --no-audit \
+ && ln -s /opt/agentmemory/node_modules/.bin/agentmemory /usr/local/bin/agentmemory
+
+ENV AGENTMEMORY_III_VERSION=${III_VERSION} \
+ TINI_SUBREAPER=1
+
+COPY --chmod=0755 entrypoint.sh /usr/local/bin/agentmemory-entrypoint.sh
+
+EXPOSE 3111
+
+ENTRYPOINT ["/usr/bin/tini", "--", "/usr/local/bin/agentmemory-entrypoint.sh"]
diff --git a/deploy/fly/README.md b/deploy/fly/README.md
new file mode 100644
index 00000000..020d9751
--- /dev/null
+++ b/deploy/fly/README.md
@@ -0,0 +1,132 @@
+# Deploy agentmemory on fly.io
+
+This template runs agentmemory on a single fly.io machine with a 1 GB
+persistent volume mounted at `/data`. The HMAC secret is generated on
+first boot and persisted to the volume — you capture it from the deploy
+logs exactly once.
+
+## What you get
+
+- A public HTTPS endpoint serving the agentmemory REST API on port 3111
+- A 1 GB Fly Volume at `/data` for memories, BM25 index, and stream backlog
+- `auto_stop_machines = "stop"` and `min_machines_running = 0` — the
+ machine sleeps when idle, so cost floor approaches $0 for low traffic
+- HTTP healthcheck at `/agentmemory/livez` every 30 s
+- The HMAC bearer secret is generated on first boot inside the
+ container and persisted to `/data/.hmac` (chmod 600); the operator
+ copies it from the deploy logs once.
+
+## One-time setup
+
+Pick a unique Fly app name first — `agentmemory` itself is likely taken.
+Every command below references `$APP`, so set it once and the rest of the
+flow stays consistent:
+
+```bash
+# 1. Install flyctl: https://fly.io/docs/flyctl/install/
+# 2. Pick your unique app name (and matching volume name):
+export APP="agentmemory-$(whoami)" # or any other globally-unique name
+export VOLUME="${APP//-/_}_data" # Fly volume names can't contain '-'
+
+# 3. From this directory:
+fly launch --copy-config --no-deploy --name "$APP"
+
+# 4. Create the volume in the same region as the app:
+fly volumes create "$VOLUME" --region iad --size 1
+
+# 5. Deploy:
+fly deploy --app "$APP"
+```
+
+If `fly launch` reports the name is taken, pick another value for `$APP`,
+re-export, and re-run.
+
+## Capture the HMAC secret
+
+Right after the first deploy succeeds:
+
+```bash
+fly logs --app "$APP" | grep -A1 AGENTMEMORY_SECRET=
+```
+
+You will see exactly one line of the form `AGENTMEMORY_SECRET=<64 hex chars>`.
+Copy it into your client environment (`~/.bashrc`, Claude Desktop config,
+etc.). The secret is never printed again on subsequent boots.
+
+## Verify the deployment
+
+```bash
+curl "https://$APP.fly.dev/agentmemory/livez"
+# {"status":"ok"}
+```
+
+For an authenticated call, your client must send `Authorization: Bearer <secret>`.
+
+## Viewer access (port 3113 stays internal)
+
+The viewer port is intentionally not exposed publicly. Tunnel to it:
+
+```bash
+fly proxy 3113:3113 --app "$APP"
+# then open http://localhost:3113
+```
+
+`fly proxy` opens an mTLS WireGuard channel to the machine, so the
+viewer's bearer token still has to ride a loopback connection on your
+laptop — the v0.9.12 plaintext-bearer guard stays satisfied.
+
+## Rotate the HMAC secret
+
+```bash
+fly ssh console --app "$APP"
+rm /data/.hmac
+exit
+fly apps restart "$APP"
+fly logs --app "$APP" | grep AGENTMEMORY_SECRET=
+```
+
+Update every client with the new secret. Old tokens stop working
+immediately.
+
+## Back up `/data`
+
+```bash
+fly ssh console --app "$APP" -C "tar czf - /data" > "$APP-$(date +%Y%m%d).tar.gz"
+```
+
+To restore on a fresh machine:
+
+```bash
+cat "$APP-YYYYMMDD.tar.gz" | fly ssh console --app "$APP" -C "tar xzf - -C /"
+fly apps restart "$APP"
+```
+
+## Cost floor and egress
+
+- Idle (machine stopped): the volume costs ~$0.15/GB/month. A 1 GB
+ volume is roughly $0.15/month.
+- Active (machine running on `shared-cpu-1x` with 512 MB): about
+ $1.94/month if it ran 24/7; in practice `auto_stop_machines` keeps
+ that well under $1.
+- Outbound bandwidth: 100 GB/month free on the Hobby plan, then $0.02/GB
+ in North America / Europe.
+
+See fly.io's pricing docs for the up-to-date rate card.
+
+## Known caveats
+
+- The volume lives in one region. To survive a region outage, create a
+ second volume in another region and update `primary_region` after the
+ failover, or take snapshots with `fly volumes snapshots create`.
+- The Dockerfile builds in the Fly Builder on every deploy — first
+ build is ~30 seconds; cached layers shrink rebuilds to under 10
+ seconds. Image is ~114 MB.
+- First deploy lands on a **shared IPv4 + dedicated IPv6** by default
+ (free). If you need a dedicated IPv4 for legacy clients without SNI,
+ run `fly ips allocate-v4 --app "$APP"` — costs $2/month.
+- Cold-start (from machine launch to passing `/agentmemory/livez`) is
+ ~9 seconds measured. `grace_period = "30s"` on the health check
+ gives a 3x safety margin.
+- Bump `AGENTMEMORY_VERSION` or `III_VERSION` in the Dockerfile to
+  upgrade. `fly deploy --build-arg AGENTMEMORY_VERSION=<version>` also works
+ for a one-off without editing the file.
diff --git a/deploy/fly/entrypoint.sh b/deploy/fly/entrypoint.sh
new file mode 100755
index 00000000..ffdd6333
--- /dev/null
+++ b/deploy/fly/entrypoint.sh
@@ -0,0 +1,98 @@
+#!/bin/sh
+# agentmemory first-boot entrypoint.
+#
+# Runs as root so it can:
+# 1. Overwrite the npm-bundled iii-config.yaml (which binds 127.0.0.1
+# and uses relative ./data paths) with a deploy-tuned version that
+# binds 0.0.0.0 and uses absolute /data paths.
+# 2. chown the platform-mounted /data volume to the runtime user
+# (managed platforms mount volumes root-owned 755 by default).
+# 3. Generate the HMAC secret on first boot and persist it to
+# /data/.hmac (chmod 600) so the secret survives restarts.
+#
+# Then it execs the agentmemory CLI under gosu as the unprivileged
+# `node` user.
+
+set -eu
+
+DATA_DIR="${AGENTMEMORY_DATA_DIR:-/data}"
+HMAC_FILE="${AGENTMEMORY_HMAC_FILE:-/data/.hmac}"
+RUN_AS="node:node"
+III_CONFIG="/opt/agentmemory/node_modules/@agentmemory/agentmemory/dist/iii-config.yaml"
+
+mkdir -p "$DATA_DIR"
+chown -R "$RUN_AS" "$DATA_DIR"
+
+cat > "$III_CONFIG" <<'EOF'
+workers:
+ - name: iii-http
+ config:
+ port: 3111
+ host: 0.0.0.0
+ default_timeout: 180000
+ cors:
+ allowed_origins:
+ - "http://localhost:3111"
+ - "http://localhost:3113"
+ - "http://127.0.0.1:3111"
+ - "http://127.0.0.1:3113"
+ allowed_methods: [GET, POST, PUT, DELETE, OPTIONS]
+ - name: iii-state
+ config:
+ adapter:
+ name: kv
+ config:
+ store_method: file_based
+ file_path: /data/state_store.db
+ - name: iii-queue
+ config:
+ adapter:
+ name: builtin
+ - name: iii-pubsub
+ config:
+ adapter:
+ name: local
+ - name: iii-cron
+ config:
+ adapter:
+ name: kv
+ - name: iii-stream
+ config:
+ port: 3112
+ host: 0.0.0.0
+ adapter:
+ name: kv
+ config:
+ store_method: file_based
+ file_path: /data/stream_store
+ - name: iii-observability
+ config:
+ enabled: true
+ service_name: agentmemory
+ exporter: memory
+ sampling_ratio: 1.0
+ metrics_enabled: true
+ logs_enabled: true
+ logs_console_output: true
+EOF
+chown "$RUN_AS" "$III_CONFIG"
+
+if [ ! -s "$HMAC_FILE" ]; then
+ SECRET="$(openssl rand -hex 32)"
+ umask 077
+ printf '%s\n' "$SECRET" > "$HMAC_FILE"
+ chmod 600 "$HMAC_FILE"
+ chown "$RUN_AS" "$HMAC_FILE"
+ echo "================================================================"
+ echo "agentmemory: generated HMAC secret on first boot"
+ echo "AGENTMEMORY_SECRET=$SECRET"
+ echo "Copy this value now. It will not be printed again."
+ echo "Stored at: $HMAC_FILE (chmod 600)"
+ echo "To rotate: delete $HMAC_FILE on the persistent volume and restart."
+ echo "================================================================"
+fi
+
+AGENTMEMORY_SECRET="$(cat "$HMAC_FILE")"
+export AGENTMEMORY_SECRET
+
+exec gosu "$RUN_AS" agentmemory "$@"
diff --git a/deploy/fly/fly.toml b/deploy/fly/fly.toml
new file mode 100644
index 00000000..03f58bc9
--- /dev/null
+++ b/deploy/fly/fly.toml
@@ -0,0 +1,44 @@
+# fly.io deployment for agentmemory.
+#
+# The HMAC secret is generated by entrypoint.sh on first boot and persisted
+# to the mounted volume at /data/.hmac. Operator copies it once from
+# `fly logs` then never sees it again. To rotate: `fly ssh console` and
+# `rm /data/.hmac`, then `fly machine restart`.
+#
+# Only port 3111 (REST API) is exposed publicly. Viewer 3113 stays bound
+# to localhost inside the machine; reach it via `fly proxy 3113:3113`.
+
+app = "agentmemory"
+primary_region = "iad"
+
+[build]
+ dockerfile = "Dockerfile"
+
+[[mounts]]
+ source = "agentmemory_data"
+ destination = "/data"
+ initial_size = "1gb"
+
+[http_service]
+ internal_port = 3111
+ force_https = true
+ auto_stop_machines = "stop"
+ auto_start_machines = true
+ min_machines_running = 0
+ processes = ["app"]
+
+ [http_service.concurrency]
+ type = "requests"
+ soft_limit = 200
+ hard_limit = 250
+
+ [[http_service.checks]]
+ interval = "30s"
+ timeout = "5s"
+ grace_period = "30s"
+ method = "GET"
+ path = "/agentmemory/livez"
+
+[[vm]]
+ size = "shared-cpu-1x"
+ memory = "512mb"
diff --git a/deploy/railway/Dockerfile b/deploy/railway/Dockerfile
new file mode 100644
index 00000000..89257f93
--- /dev/null
+++ b/deploy/railway/Dockerfile
@@ -0,0 +1,35 @@
+ARG III_VERSION=0.11.2
+
+FROM iiidev/iii:${III_VERSION} AS iii-image
+
+FROM node:22-slim
+
+ARG AGENTMEMORY_VERSION=0.9.12
+ARG III_VERSION=0.11.2
+ARG III_SDK_VERSION=0.11.2
+
+RUN apt-get update \
+ && apt-get install -y --no-install-recommends openssl ca-certificates tini gosu curl \
+ && rm -rf /var/lib/apt/lists/*
+
+COPY --from=iii-image /app/iii /usr/local/bin/iii
+
+# Install agentmemory into a dedicated prefix so the local package.json's
+# `overrides` field pins iii-sdk down to match the engine (agentmemory's
+# caret range `^0.11.2` otherwise resolves to 0.11.6, the version that
+# requires the new sandbox-everything worker model the agentmemory CLI
+# is not refactored for yet). `npm install -g` ignores overrides, hence
+# the local prefix.
+WORKDIR /opt/agentmemory
+RUN printf '{"name":"agentmemory-deploy","version":"1.0.0","private":true,"overrides":{"iii-sdk":"%s"}}\n' "${III_SDK_VERSION}" > package.json \
+ && npm install "@agentmemory/agentmemory@${AGENTMEMORY_VERSION}" --omit=optional --no-fund --no-audit \
+ && ln -s /opt/agentmemory/node_modules/.bin/agentmemory /usr/local/bin/agentmemory
+
+ENV AGENTMEMORY_III_VERSION=${III_VERSION} \
+ TINI_SUBREAPER=1
+
+COPY --chmod=0755 entrypoint.sh /usr/local/bin/agentmemory-entrypoint.sh
+
+EXPOSE 3111
+
+ENTRYPOINT ["/usr/bin/tini", "--", "/usr/local/bin/agentmemory-entrypoint.sh"]
diff --git a/deploy/railway/README.md b/deploy/railway/README.md
new file mode 100644
index 00000000..9aad4fb3
--- /dev/null
+++ b/deploy/railway/README.md
@@ -0,0 +1,136 @@
+# Deploy agentmemory on Railway
+
+This template runs agentmemory on a single Railway service with a
+persistent volume mounted at `/data`. The HMAC secret is generated on
+first boot and persisted to the volume — you read it once from the
+deploy logs and copy it into your client.
+
+## What you get
+
+- A public HTTPS endpoint serving the agentmemory REST API on port 3111
+- A persistent Railway Volume at `/data` for memories, BM25 index, and
+ stream backlog
+- Railway healthcheck against `/agentmemory/livez`
+- The HMAC bearer secret is generated on first boot inside the
+ container and persisted to `/data/.hmac` (chmod 600); the operator
+ copies it from the deploy logs once.
+- The deploy uses `requiredMountPath: /data` so Railway refuses to
+ start the service if no volume is attached at that path — first
+ deploy must create the volume from the dashboard.
+
+## Deploy via Railway dashboard
+
+1. Click **Deploy from GitHub** in the Railway dashboard and pick the
+ `rohitg00/agentmemory` repo.
+2. Set the **Config-as-Code Path** under the service Settings to
+ `deploy/railway/railway.json`. Railway picks up the Dockerfile path
+ from there.
+3. Open the service's **Volumes** tab and add a volume mounted at
+ `/data` (Railway volumes are configured in the dashboard or via
+ `railway volume add`, not in `railway.json`).
+4. Click **Deploy**.
+
+## Deploy via Railway CLI
+
+```bash
+# Install: https://docs.railway.com/guides/cli
+railway login
+railway init # link a new project
+railway up --service agentmemory # builds + deploys
+railway volume add --service agentmemory --mount /data # attach persistent volume
+railway redeploy # restart with the volume
+```
+
+## Capture the HMAC secret
+
+After the first deploy succeeds, open the service's **Deploy Logs**:
+
+```bash
+railway logs --service agentmemory | grep AGENTMEMORY_SECRET=
+```
+
+You will see exactly one line of the form `AGENTMEMORY_SECRET=<64 hex chars>`.
+Copy it into your client environment. The secret is never printed again
+on subsequent boots.
+
+## Verify the deployment
+
+```bash
+curl https://<your-service>.up.railway.app/agentmemory/livez
+# {"status":"ok"}
+```
+
+For an authenticated call, your client must send `Authorization: Bearer <secret>`.
+
+## Viewer access (port 3113 stays internal)
+
+Railway only exposes the single public port from your service's
+`PORT` env var (which we map to 3111). The viewer stays bound to
+localhost inside the container. `railway ssh` is an interactive shell
+only — it does not support `-L`-style port forwarding, so reach the
+viewer with one of the following.
+
+**Quick in-container check:**
+
+```bash
+railway ssh --service agentmemory
+# inside the container:
+curl http://localhost:3113
+```
+
+**Browser session — option A (TCP Proxy, recommended):** in the Railway
+dashboard, open the service's *Settings → Networking* tab and add a
+**TCP Proxy** for container port `3113`. Railway returns a public
+host/port pair you can hit directly from your browser. Pair it with the
+HMAC bearer-auth header so the viewer is not anonymously reachable.
+
+**Browser session — option B (in-container sshd):** add an `openssh-server`
+process to the image and start it from `entrypoint.sh` on a fixed port,
+expose that port through a second Railway TCP Proxy, then use a native
+`ssh -L 3113:localhost:3113 -p <proxy-port> <proxy-host>` from your laptop.
+This is the heavier path; option A is what most users will want.
+
+## Rotate the HMAC secret
+
+```bash
+railway ssh --service agentmemory
+rm /data/.hmac
+exit
+railway redeploy --service agentmemory
+railway logs --service agentmemory | grep AGENTMEMORY_SECRET=
+```
+
+Update every client with the new secret. Old tokens stop working
+immediately.
+
+## Back up `/data`
+
+```bash
+railway ssh --service agentmemory -- "tar czf - /data" > agentmemory-$(date +%Y%m%d).tar.gz
+```
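+
+Before relying on the tarball, it's worth a local sanity check. Note
+that `tar` strips the leading `/` on create, so entries appear under
+`data/`:
+
+```bash
+BACKUP="agentmemory-$(date +%Y%m%d).tar.gz"
+# A healthy backup is a readable gzip tarball with entries under data/.
+if tar tzf "$BACKUP" | grep -q '^data/'; then
+  echo "backup OK ($(tar tzf "$BACKUP" | wc -l) entries)"
+else
+  echo "backup looks broken: $BACKUP" >&2
+fi
+```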
+
+To restore on a fresh volume:
+
+```bash
+cat agentmemory-YYYYMMDD.tar.gz | railway ssh --service agentmemory -- "tar xzf - -C /"
+railway redeploy --service agentmemory
+```
+
+## Cost floor and egress
+
+- Hobby plan: $5/month flat, includes $5 of usage.
+- agentmemory at idle plus a 1 GB volume typically uses $3–$6 of usage
+ per month on the smallest instance, so most users stay near the $5
+ floor.
+- Egress: $0.10/GB after the bundled allowance.
+
+See Railway's pricing page for the current rate card.
+
+## Known caveats
+
+- Railway volumes do not auto-snapshot. Take your own backups (above)
+ or use the dashboard's manual snapshot feature.
+- The Dockerfile builds on Railway's builder on every deploy. First
+ deploy is ~2 minutes; cached layers make subsequent rebuilds quick.
+ Pin `AGENTMEMORY_VERSION` / `III_VERSION` build args in the
+ service's *Variables* tab to lock a specific release.
diff --git a/deploy/railway/entrypoint.sh b/deploy/railway/entrypoint.sh
new file mode 100755
index 00000000..ffdd6333
--- /dev/null
+++ b/deploy/railway/entrypoint.sh
@@ -0,0 +1,98 @@
+#!/bin/sh
+# agentmemory first-boot entrypoint.
+#
+# Runs as root so it can:
+# 1. Overwrite the npm-bundled iii-config.yaml (which binds 127.0.0.1
+# and uses relative ./data paths) with a deploy-tuned version that
+# binds 0.0.0.0 and uses absolute /data paths.
+# 2. chown the platform-mounted /data volume to the runtime user
+# (managed platforms mount volumes root-owned 755 by default).
+# 3. Generate the HMAC secret on first boot and persist it to
+# /data/.hmac (chmod 600) so the secret survives restarts.
+#
+# Then it execs the agentmemory CLI under gosu as the unprivileged
+# `node` user.
+
+set -eu
+
+DATA_DIR="${AGENTMEMORY_DATA_DIR:-/data}"
+HMAC_FILE="${AGENTMEMORY_HMAC_FILE:-/data/.hmac}"
+RUN_AS="node:node"
+III_CONFIG="/opt/agentmemory/node_modules/@agentmemory/agentmemory/dist/iii-config.yaml"
+
+mkdir -p "$DATA_DIR"
+chown -R "$RUN_AS" "$DATA_DIR"
+
+cat > "$III_CONFIG" <<'EOF'
+workers:
+ - name: iii-http
+ config:
+ port: 3111
+ host: 0.0.0.0
+ default_timeout: 180000
+ cors:
+ allowed_origins:
+ - "http://localhost:3111"
+ - "http://localhost:3113"
+ - "http://127.0.0.1:3111"
+ - "http://127.0.0.1:3113"
+ allowed_methods: [GET, POST, PUT, DELETE, OPTIONS]
+ - name: iii-state
+ config:
+ adapter:
+ name: kv
+ config:
+ store_method: file_based
+ file_path: /data/state_store.db
+ - name: iii-queue
+ config:
+ adapter:
+ name: builtin
+ - name: iii-pubsub
+ config:
+ adapter:
+ name: local
+ - name: iii-cron
+ config:
+ adapter:
+ name: kv
+ - name: iii-stream
+ config:
+ port: 3112
+ host: 0.0.0.0
+ adapter:
+ name: kv
+ config:
+ store_method: file_based
+ file_path: /data/stream_store
+ - name: iii-observability
+ config:
+ enabled: true
+ service_name: agentmemory
+ exporter: memory
+ sampling_ratio: 1.0
+ metrics_enabled: true
+ logs_enabled: true
+ logs_console_output: true
+EOF
+chown "$RUN_AS" "$III_CONFIG"
+
+if [ ! -s "$HMAC_FILE" ]; then
+ SECRET="$(openssl rand -hex 32)"
+ umask 077
+ printf '%s\n' "$SECRET" > "$HMAC_FILE"
+ chmod 600 "$HMAC_FILE"
+ chown "$RUN_AS" "$HMAC_FILE"
+ echo "================================================================"
+ echo "agentmemory: generated HMAC secret on first boot"
+ echo "AGENTMEMORY_SECRET=$SECRET"
+ echo "Copy this value now. It will not be printed again."
+ echo "Stored at: $HMAC_FILE (chmod 600)"
+ echo "To rotate: delete $HMAC_FILE on the persistent volume and restart."
+ echo "================================================================"
+fi
+
+AGENTMEMORY_SECRET="$(cat "$HMAC_FILE")"
+export AGENTMEMORY_SECRET
+
+exec gosu "$RUN_AS" agentmemory "$@"
diff --git a/deploy/railway/railway.json b/deploy/railway/railway.json
new file mode 100644
index 00000000..43f52173
--- /dev/null
+++ b/deploy/railway/railway.json
@@ -0,0 +1,15 @@
+{
+ "$schema": "https://railway.com/railway.schema.json",
+ "build": {
+ "builder": "DOCKERFILE",
+ "dockerfilePath": "deploy/railway/Dockerfile"
+ },
+ "deploy": {
+ "numReplicas": 1,
+ "healthcheckPath": "/agentmemory/livez",
+ "healthcheckTimeout": 30,
+ "restartPolicyType": "ON_FAILURE",
+ "restartPolicyMaxRetries": 10,
+ "requiredMountPath": "/data"
+ }
+}
diff --git a/deploy/render/Dockerfile b/deploy/render/Dockerfile
new file mode 100644
index 00000000..89257f93
--- /dev/null
+++ b/deploy/render/Dockerfile
@@ -0,0 +1,35 @@
+ARG III_VERSION=0.11.2
+
+FROM iiidev/iii:${III_VERSION} AS iii-image
+
+FROM node:22-slim
+
+ARG AGENTMEMORY_VERSION=0.9.12
+ARG III_VERSION=0.11.2
+ARG III_SDK_VERSION=0.11.2
+
+RUN apt-get update \
+ && apt-get install -y --no-install-recommends openssl ca-certificates tini gosu curl \
+ && rm -rf /var/lib/apt/lists/*
+
+COPY --from=iii-image /app/iii /usr/local/bin/iii
+
+# Install agentmemory into a dedicated prefix so the local package.json's
+# `overrides` field pins iii-sdk down to match the engine (agentmemory's
+# caret range `^0.11.2` otherwise resolves to 0.11.6, the version that
+# requires the new sandbox-everything worker model the agentmemory CLI
+# is not refactored for yet). `npm install -g` ignores overrides, hence
+# the local prefix.
+WORKDIR /opt/agentmemory
+RUN printf '{"name":"agentmemory-deploy","version":"1.0.0","private":true,"overrides":{"iii-sdk":"%s"}}\n' "${III_SDK_VERSION}" > package.json \
+ && npm install "@agentmemory/agentmemory@${AGENTMEMORY_VERSION}" --omit=optional --no-fund --no-audit \
+ && ln -s /opt/agentmemory/node_modules/.bin/agentmemory /usr/local/bin/agentmemory
+
+ENV AGENTMEMORY_III_VERSION=${III_VERSION} \
+ TINI_SUBREAPER=1
+
+COPY --chmod=0755 entrypoint.sh /usr/local/bin/agentmemory-entrypoint.sh
+
+EXPOSE 3111
+
+ENTRYPOINT ["/usr/bin/tini", "--", "/usr/local/bin/agentmemory-entrypoint.sh"]
diff --git a/deploy/render/README.md b/deploy/render/README.md
new file mode 100644
index 00000000..3b73f7ee
--- /dev/null
+++ b/deploy/render/README.md
@@ -0,0 +1,113 @@
+# Deploy agentmemory on Render
+
+This template runs agentmemory on a single Render Web Service with a
+persistent disk mounted at `/data`. The HMAC secret is generated on
+first boot and persisted to the disk — you capture it from the deploy
+logs exactly once.
+
+## What you get
+
+- A public HTTPS endpoint serving the agentmemory REST API on port 3111
+ (Render injects `PORT` defaulting to 10000; we override it to 3111
+ via `envVars` so the published port matches the container's bind)
+- A 1 GB persistent disk at `/data` for memories, BM25 index, and
+ stream backlog
+- Render healthcheck against `/agentmemory/livez`
+- The HMAC bearer secret is generated on first boot inside the
+ container and persisted to `/data/.hmac` (chmod 600); the operator
+ copies it from the deploy logs once.
+
+## Deploy via Render Blueprint
+
+Render's one-click deploy button only auto-detects `render.yaml` at the
+repository root, which the agentmemory repo keeps clean. Use the
+dashboard's manual Blueprint flow instead:
+
+1. Push the `deploy/render/` directory to a Git provider Render can
+ reach (a fork of `rohitg00/agentmemory` works).
+2. In the Render dashboard, click **New +** → **Blueprint**.
+3. Point Render at the repo and the path `deploy/render/render.yaml`.
+4. Render reads the Blueprint, provisions the disk, builds the
+ Dockerfile, and starts the service. The whole flow takes 3–5
+ minutes on the first run.
+
+## Deploy via Render Deploy Hook (one-click)
+
+Once the Blueprint exists in your account, generate a Deploy Hook URL
+in the service settings. Future deploys are a single curl call:
+
+```bash
+curl "https://api.render.com/deploy/srv-XXYYZZ?key=AABBCC"
+```
+
+To pin a specific `@agentmemory/agentmemory` release, set the
+`AGENTMEMORY_VERSION` build arg in the service's *Environment* tab
+before the next deploy. Same for `III_VERSION`.
+
+## Capture the HMAC secret
+
+After the first deploy succeeds, open the service's **Logs** tab and
+search for `AGENTMEMORY_SECRET=`. You will see exactly one line of the
+form `AGENTMEMORY_SECRET=<64 hex chars>`. Copy it into your client
+environment. The secret is never printed again on subsequent boots.
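+
+Render's log search gives you the line; if you paste or download the
+log text into a local file, the value can be extracted mechanically
+(the `sed` pattern is an assumption based on the
+`AGENTMEMORY_SECRET=<64 hex chars>` format above):
+
+```bash
+# render-logs.txt: log output saved from the service's Logs tab
+sed -n 's/.*AGENTMEMORY_SECRET=\([0-9a-f]\{64\}\).*/\1/p' render-logs.txt \
+  | head -n1 > .agentmemory-secret
+chmod 600 .agentmemory-secret
+```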
+
+## Verify the deployment
+
+```bash
+curl https://agentmemory.onrender.com/agentmemory/livez
+# {"status":"ok"}
+```
+
+For an authenticated call, your client must send `Authorization: Bearer <secret>`.
+
+## Viewer access (port 3113 stays internal)
+
+Render only exposes one public port per service, and we use it for
+3111. The viewer port stays bound to localhost inside the container.
+Reach it via Render's SSH:
+
+```bash
+# Settings → SSH → enable for your service, copy the connection command
+ssh srv-XXYYZZ@ssh.<region>.render.com -L 3113:localhost:3113
+# now http://localhost:3113 in your browser hits the in-container viewer
+```
+
+## Rotate the HMAC secret
+
+```bash
+ssh srv-XXYYZZ@ssh.<region>.render.com
+rm /data/.hmac
+exit
+# trigger a redeploy from the Render dashboard or via the Deploy Hook
+```
+
+After the redeploy, grab the new secret from the logs and update every
+client. Old tokens stop working immediately.
+
+## Back up `/data`
+
+```bash
+ssh srv-XXYYZZ@ssh.<region>.render.com "tar czf - /data" > agentmemory-$(date +%Y%m%d).tar.gz
+```
+
+Render also takes daily snapshots of persistent disks automatically on
+paid plans — the SSH tarball is a belt-and-braces option you can ship
+off-platform.
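+
+To restore that tarball onto a fresh disk, reverse the pipe (a sketch;
+assumes SSH is enabled as above and the disk is mounted at `/data`):
+
+```bash
+# Unpack the backup at / so data/... lands back under /data, then
+# trigger a redeploy so agentmemory re-reads the restored state.
+ssh srv-XXYYZZ@ssh.<region>.render.com "tar xzf - -C /" < agentmemory-YYYYMMDD.tar.gz
+```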
+
+## Cost floor and egress
+
+- Starter plan web service: $7/month (0.5 CPU, 512 MB RAM).
+- 1 GB persistent disk: $0.25/GB/month, so $0.25/month for the default.
+- Bandwidth: 100 GB outbound included, then $0.10/GB.
+
+See Render's pricing page for the current rate card.
+
+## Known caveats
+
+- Render Free tier does not support persistent disks. The Starter plan
+ ($7/month) is the minimum.
+- Render restarts the service on every deploy. The HMAC secret survives
+ because it lives on the disk, but expect a 10–30 s gap of 502s
+ during rollouts.
+- Render runs amd64 only for web services. The Dockerfile copies the
+  iii binary from the multi-arch `iiidev/iii` image, so Docker resolves
+  the matching architecture at build time.
diff --git a/deploy/render/entrypoint.sh b/deploy/render/entrypoint.sh
new file mode 100755
index 00000000..ffdd6333
--- /dev/null
+++ b/deploy/render/entrypoint.sh
@@ -0,0 +1,98 @@
+#!/bin/sh
+# agentmemory first-boot entrypoint.
+#
+# Runs as root so it can:
+# 1. Overwrite the npm-bundled iii-config.yaml (which binds 127.0.0.1
+# and uses relative ./data paths) with a deploy-tuned version that
+# binds 0.0.0.0 and uses absolute /data paths.
+# 2. chown the platform-mounted /data volume to the runtime user
+# (managed platforms mount volumes root-owned 755 by default).
+# 3. Generate the HMAC secret on first boot and persist it to
+# /data/.hmac (chmod 600) so the secret survives restarts.
+#
+# Then it execs the agentmemory CLI under gosu as the unprivileged
+# `node` user.
+
+set -eu
+
+DATA_DIR="${AGENTMEMORY_DATA_DIR:-/data}"
+HMAC_FILE="${AGENTMEMORY_HMAC_FILE:-/data/.hmac}"
+RUN_AS="node:node"
+III_CONFIG="/opt/agentmemory/node_modules/@agentmemory/agentmemory/dist/iii-config.yaml"
+
+mkdir -p "$DATA_DIR"
+chown -R "$RUN_AS" "$DATA_DIR"
+
+cat > "$III_CONFIG" <<'EOF'
+workers:
+ - name: iii-http
+ config:
+ port: 3111
+ host: 0.0.0.0
+ default_timeout: 180000
+ cors:
+ allowed_origins:
+ - "http://localhost:3111"
+ - "http://localhost:3113"
+ - "http://127.0.0.1:3111"
+ - "http://127.0.0.1:3113"
+ allowed_methods: [GET, POST, PUT, DELETE, OPTIONS]
+ - name: iii-state
+ config:
+ adapter:
+ name: kv
+ config:
+ store_method: file_based
+ file_path: /data/state_store.db
+ - name: iii-queue
+ config:
+ adapter:
+ name: builtin
+ - name: iii-pubsub
+ config:
+ adapter:
+ name: local
+ - name: iii-cron
+ config:
+ adapter:
+ name: kv
+ - name: iii-stream
+ config:
+ port: 3112
+ host: 0.0.0.0
+ adapter:
+ name: kv
+ config:
+ store_method: file_based
+ file_path: /data/stream_store
+ - name: iii-observability
+ config:
+ enabled: true
+ service_name: agentmemory
+ exporter: memory
+ sampling_ratio: 1.0
+ metrics_enabled: true
+ logs_enabled: true
+ logs_console_output: true
+EOF
+chown "$RUN_AS" "$III_CONFIG"
+
+if [ ! -s "$HMAC_FILE" ]; then
+ SECRET="$(openssl rand -hex 32)"
+ umask 077
+ printf '%s\n' "$SECRET" > "$HMAC_FILE"
+ chmod 600 "$HMAC_FILE"
+ chown "$RUN_AS" "$HMAC_FILE"
+ echo "================================================================"
+ echo "agentmemory: generated HMAC secret on first boot"
+ echo "AGENTMEMORY_SECRET=$SECRET"
+ echo "Copy this value now. It will not be printed again."
+ echo "Stored at: $HMAC_FILE (chmod 600)"
+ echo "To rotate: delete $HMAC_FILE on the persistent volume and restart."
+ echo "================================================================"
+fi
+
+AGENTMEMORY_SECRET="$(cat "$HMAC_FILE")"
+export AGENTMEMORY_SECRET
+
+exec gosu "$RUN_AS" agentmemory "$@"
diff --git a/deploy/render/render.yaml b/deploy/render/render.yaml
new file mode 100644
index 00000000..c96516d0
--- /dev/null
+++ b/deploy/render/render.yaml
@@ -0,0 +1,22 @@
+services:
+ - type: web
+ name: agentmemory
+ runtime: docker
+ plan: starter
+ dockerfilePath: ./deploy/render/Dockerfile
+ dockerContext: ./deploy/render
+ healthCheckPath: /agentmemory/livez
+ autoDeploy: false
+ disk:
+ name: data
+ mountPath: /data
+ sizeGB: 1
+ envVars:
+ - key: PORT
+ value: "3111"
+ - key: AGENTMEMORY_VERSION
+ value: "0.9.12"
+ - key: III_VERSION
+ value: "0.11.2"
+ - key: III_SDK_VERSION
+ value: "0.11.2"