# 🧠 cursor-memory-system

Give your AI a persistent memory — with nothing but a Git repository and Markdown files.

Built for Cursor + Claude, but works with any AI coding assistant that reads files.


## The Problem

Every new Cursor session starts from zero. You explain your project structure, your server IPs, your tech stack — again and again. The AI has no memory between sessions.

This wastes time. More importantly, it produces worse results: the AI makes assumptions instead of knowing.

## The Solution

A structured Git repository that your AI reads at the start of every session. One small routing file tells the AI exactly which context file to load for which task. State files contain everything the AI needs to know — updated automatically by a sync script.

```
.cursorrules          ← Always loaded. Routing table: task → file.
├── project-a/
│   └── STATE.md      ← Loaded only when working on project-a
├── project-b/
│   └── STATE.md      ← Loaded only when working on project-b
└── scripts/
    └── sync-state.sh ← Optional: auto-updates STATE.md files
```

Result: The AI knows your infrastructure, credentials, current project status, and last development state — without you repeating yourself.


## How It Works

### 1. The Routing Table (.cursorrules)

The core concept. A small file in your workspace root that maps tasks to context files:

```markdown
## Routing Table
| Task is about...         | Load this file            |
|--------------------------|---------------------------|
| WordPress / Blog         | blog/STATE.md             |
| API Backend              | backend/STATE.md          |
| Infrastructure / Servers | infrastructure/STATE.md   |
| Database                 | database/STATE.md         |
| All Projects / Overview  | MASTER_INDEX.md           |

## Golden Rules
1. Never load more than 1 STATE.md at a time
2. STATE.md files are auto-generated — always current
3. When in doubt: read MASTER_INDEX.md, don't guess
```

The AI loads .cursorrules first (it's small, ~30 lines), then fetches only the relevant STATE.md.
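The routing step is simple enough to sketch mechanically. The hypothetical helper below is not part of this repo — it just illustrates the task → file mapping the AI performs, using the same filenames as the table above:

```shell
#!/usr/bin/env sh
# Illustrative only: resolve a task keyword to the single context file
# the AI should load. Keywords and patterns are assumptions; the real
# routing happens in the AI's reading of .cursorrules.
route() {
  case "$1" in
    wordpress|blog)       echo "blog/STATE.md" ;;
    api|backend)          echo "backend/STATE.md" ;;
    infra*|server*)       echo "infrastructure/STATE.md" ;;
    database|db)          echo "database/STATE.md" ;;
    *)                    echo "MASTER_INDEX.md" ;;   # Golden Rule 3: don't guess
  esac
}

route blog      # → blog/STATE.md
route unknown   # → MASTER_INDEX.md
```

The fallback branch encodes Golden Rule 3: anything unrecognized goes to the overview file rather than to a guess.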

### 2. State Files (STATE.md)

Each project gets a STATE.md with everything the AI needs:

```markdown
# STATE: My Blog
**Last updated: 2026-02-28**

## Status
✅ Running on production server

## Access
| What        | Value                        |
|-------------|------------------------------|
| URL         | https://myblog.example.com   |
| Admin       | https://myblog.example.com/wp-admin |
| Login       | admin / [your-password]      |
| Server      | 192.168.1.10                 |

## Stack
- WordPress 6.x + MySQL 8
- Cloudflare Tunnel (no open ports)
- Docker Compose

## Current Status
- RSS auto-poster: running, posts 3x/day
- Last article: "My homelab journey" (Feb 28)
- Next planned: ESP32 temperature project

## Key Paths
- App: /opt/myblog/
- Config: /opt/myblog/.env
- Logs: /opt/myblog/logs/
```

### 3. Auto-Sync (Optional but Powerful)

A cron job that updates STATE.md files with live data every 15 minutes:

```
*/15 * * * * /path/to/scripts/sync-state.sh
```

The script queries your services, writes live status to STATE.md, and commits + pushes to your Git remote. When you open Cursor in the morning, the AI already knows what happened overnight.
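As a rough sketch, such a script boils down to three steps: gather live facts, rewrite STATE.md, commit and push on change. The version below is a hypothetical minimal example, not the repo's actual scripts/sync-state.sh — the `REPO` default, the `myproject` path, and the uptime check are placeholder assumptions to adapt to your services:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of a state-sync script. Point REPO at your real
# memory repo (e.g. /opt/my-brain); it defaults to a temp dir so the
# sketch runs anywhere without touching your system.
REPO="${REPO:-$(mktemp -d)}"
STATE="$REPO/myproject/STATE.md"

# 1. Gather live facts (replace with real checks for your services)
NOW=$(date -u '+%Y-%m-%d %H:%M UTC')
UPTIME=$(uptime -p 2>/dev/null || echo "unknown")

# 2. Rewrite the state file
mkdir -p "$(dirname "$STATE")"
cat > "$STATE" <<EOF
# STATE: My Project
**Last updated: $NOW**

## Live Status
- Host uptime: $UPTIME
EOF

# 3. Commit + push, but only inside a real checkout and only on change
if [ -d "$REPO/.git" ]; then
  cd "$REPO"
  git add "$STATE"
  if ! git diff --cached --quiet; then
    git commit -m "sync: state update $NOW"
    git push
  fi
fi
```

The `git diff --cached --quiet` guard keeps the Git history clean: no commit is created when nothing actually changed since the last run.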


## Quick Start

### Minimal Setup (10 minutes)

```bash
# 1. Clone this repo or use it as a template
git clone https://github.com/Orbitalo/cursor-memory-system.git
cd cursor-memory-system

# 2. Copy .cursorrules to your workspace root
cp templates/cursorrules-template .cursorrules

# 3. Create your first STATE.md
cp templates/STATE-template.md myproject/STATE.md

# 4. Edit both files with your project details
# 5. Restart Cursor
```

### With Auto-Sync

```bash
# Configure the sync script
cp scripts/sync-state.sh /opt/my-brain/scripts/
chmod +x /opt/my-brain/scripts/sync-state.sh
# Edit the script: set REPO path and your services

# Add to crontab (on your server)
*/15 * * * * /opt/my-brain/scripts/sync-state.sh
```

See SETUP.md for detailed instructions.


## File Structure

```
cursor-memory-system/
├── README.md                    ← You are here
├── SETUP.md                     ← Detailed setup guide
├── templates/
│   ├── cursorrules-template     ← Copy to .cursorrules in your workspace
│   ├── STATE-template.md        ← Template for project state files
│   └── MASTER_INDEX-template.md ← Template for overview file
├── scripts/
│   └── sync-state.sh            ← Auto-sync script (Linux/cron)
└── examples/
    ├── homelab-example/         ← Full homelab example
    └── simple-example/          ← Minimal single-project example
```

## Why Not Just Use a Big Context File?

Common mistake: dump everything into one huge file and load it always.

Problems:

- Context windows are limited
- More irrelevant context = worse answers
- AI spends "attention" on things that don't matter

This system's approach:

- .cursorrules is always tiny (~30 lines)
- STATE.md files are loaded on-demand, per task
- Each file contains only what's relevant right now

| Approach                    | Context used | Quality    |
|-----------------------------|--------------|------------|
| One big file, always loaded | ~5000 tokens | ⚠️ Diluted |
| Routing + lazy loading      | ~500 tokens  | ✅ Focused |

## Real-World Results

This system runs a homelab with:

- 7 active projects
- 12 containers on 2 Proxmox servers
- Auto-sync every 15 minutes for live service states

The AI assistant never asks "what server is this on?" or "what's the admin password?" again.

Article (German, coming soon): *Wie ich meiner KI ein Gedächtnis gebaut habe* ("How I built my AI a memory")


## Contributing

PRs welcome. Especially:

- Sync scripts for other platforms (Windows, macOS)
- Templates for common setups (single server, cloud, k8s)
- Examples for different AI tools (Windsurf, Aider, etc.)

## License

MIT — use freely, attribution appreciated.


Built by Orbitalo · Star ⭐ if useful
