Commit d4919b1

feat: add distribution content for all channels
Add ready-to-post content for Show HN, Reddit (6 subreddits), X/Twitter (5 threads), LinkedIn (3 posts), Dev.to (2 tutorials), Cursor/Claude community posts, Substack issue #1, Product Hunt listing, and anchor content (manifesto, tutorial, narrative). Made-with: Cursor
1 parent f473ec0 commit d4919b1

13 files changed: 984 additions & 0 deletions

content/claude-community-post.md

Lines changed: 29 additions & 0 deletions
# Give Claude persistent memory with Reflect Memory MCP

Claude is stateless by default. Every conversation starts fresh. Reflect Memory adds a persistent memory layer via the Model Context Protocol, so Claude can recall context, preferences, and decisions across sessions.

**Setup:** Add Reflect Memory to your Claude Desktop MCP config (or Cursor, or any MCP client):

```json
{
  "mcpServers": {
    "reflect-memory": {
      "command": "npx",
      "args": ["reflect-memory-mcp"],
      "env": {
        "RM_API_KEY": "your-api-key",
        "RM_MCP_USER_ID": "your-user-id"
      }
    }
  }
}
```

Get your API key at [reflectmemory.com](https://reflectmemory.com). The MCP server exposes five tools: `read_memories`, `get_memory_by_id`, `browse_memories`, `write_memory`, and `query`. Claude can read your stored memories, write new ones, and ask natural-language questions that are answered with memory context.

**What it enables:**

- Claude remembers your preferences, project context, and past decisions across conversations.
- Memories are structured (title, content, tags) and fully editable. You control what Claude sees.
- The same memory store works with ChatGPT, Cursor, Gemini, and n8n. One API, many agents.

TypeScript SDK: `npm install reflect-memory-sdk`. REST API: `https://api.reflectmemory.com`. Open source: [github.com/van-reflect/Reflect-Memory](https://github.com/van-reflect/Reflect-Memory).
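The bullet list notes that memories are structured as title, content, and tags. As an illustration of that shape (the field names follow this post's description; the actual API schema may differ), a client-side guard before writing a memory might look like:

```typescript
// Illustrative only: fields mirror the (title, content, tags)
// structure described above, not a confirmed API schema.
interface Memory {
  title: string;
  content: string;
  tags: string[];
}

// Narrowing check an integration might run before calling write_memory.
function isValidMemory(value: unknown): value is Memory {
  if (typeof value !== "object" || value === null) return false;
  const m = value as Record<string, unknown>;
  return (
    typeof m.title === "string" &&
    typeof m.content === "string" &&
    Array.isArray(m.tags) &&
    m.tags.every((t) => typeof t === "string")
  );
}
```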

content/cursor-community-post.md

Lines changed: 29 additions & 0 deletions
# Add persistent cross-vendor memory to Cursor in 60 seconds

Give Cursor a memory layer that persists across sessions and works with ChatGPT, Claude, Gemini, and more. One API, one MCP server.

**Setup:** Add Reflect Memory to your MCP config:

```json
{
  "mcpServers": {
    "reflect-memory": {
      "command": "npx",
      "args": ["reflect-memory-mcp"],
      "env": {
        "RM_API_KEY": "your-api-key",
        "RM_MCP_USER_ID": "your-user-id"
      }
    }
  }
}
```

Get your API key at [reflectmemory.com](https://reflectmemory.com). The MCP server exposes tools for reading, writing, browsing, and querying memories. Cursor can now recall project context, preferences, and decisions across conversations.

**Use cases:**

- **Project context:** Store architecture decisions, tech stack choices, and conventions. Cursor pulls them in when you start a new session.
- **Preferences:** Remember your coding style, naming conventions, and tool preferences. No more repeating yourself.
- **Cross-vendor continuity:** What you tell ChatGPT can be available to Cursor. Same memory store, different agents.

TypeScript SDK: `npm install reflect-memory-sdk` (zero deps, Node 18+). API: `https://api.reflectmemory.com`. Open source: [github.com/van-reflect/Reflect-Memory](https://github.com/van-reflect/Reflect-Memory).
Lines changed: 48 additions & 0 deletions
---
title: Why I Chose MCP Over Custom APIs for AI Tool Integration
published: false
description: The Model Context Protocol (MCP) enables one integration that works across ChatGPT, Claude, Cursor, and more. Here's why that beats building custom APIs per vendor.
tags:
- mcp
- ai
- api
- integration
---

# Why I Chose MCP Over Custom APIs for AI Tool Integration

When I built Reflect Memory, a shared memory layer for AI agents, I had a choice: build a custom integration for each vendor (ChatGPT Actions, Claude Extensions, Cursor rules, Gemini Functions) or bet on a single protocol. I chose MCP, the Model Context Protocol. Here's why.

## What is MCP?

The Model Context Protocol is an open standard for how AI applications discover and invoke tools. Think of it like USB for AI: one plug, many devices. An MCP server exposes tools (functions with schemas). An MCP client (Cursor, Claude Desktop, etc.) discovers those tools and lets the model call them during a conversation.

The key insight: the client handles the transport. I don't have to implement OAuth for ChatGPT, a different auth flow for Claude, or Cursor's custom config format. I implement one server. The clients already know how to talk to it.

## The Alternative: Custom Integrations Per Vendor

Without MCP, you build N integrations. ChatGPT has Actions (OpenAPI spec, OAuth or API key). Claude has Extensions (different schema, different auth). Cursor has MCP support, but also custom rules and workflows. Gemini has Function Calling. n8n has its own node format.

Each integration means a new auth story, a new schema format, a new deployment path, and a new surface for bugs. When you add a feature (e.g., a new memory type), you ship it N times. When a vendor changes their API, you fix N integrations.

## One Server, Many Clients

With MCP, I built one server that exposes five tools: `read_memories`, `get_memory_by_id`, `browse_memories`, `write_memory`, and `query`. Cursor users add it via `npx reflect-memory-mcp` in their MCP config. Claude Desktop users do the same. Any client that speaks MCP gets the full tool set.

The server runs as a standalone process. It uses Streamable HTTP transport, so it works over the network. No stdio hacks, no localhost-only limits. Auth is a Bearer token validated against our API. Same token works for the REST API and the MCP server.
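The Bearer check at the edge reduces to a small pure helper. A sketch (illustrative only; the real server then validates the extracted token against the Reflect Memory API, which this omits):

```typescript
// Extract the token from an "Authorization: Bearer <token>" header.
// Returns null when the header is missing or malformed.
// Sketch only -- a production server would validate the token
// against the backing API before serving any tools.
function bearerToken(authHeader: string | undefined): string | null {
  if (!authHeader) return null;
  const match = /^Bearer\s+(\S+)$/.exec(authHeader);
  return match ? match[1] : null;
}
```

Because the same header format is used by the REST API and the MCP transport, one helper covers both paths.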
## Interop Without Lock-In

MCP is vendor-neutral. Anthropic, Google, and others have adopted it. New clients will support it. If I had built a ChatGPT-only integration, I'd be locked into their release cycle and their design choices. With MCP, the protocol is the contract. I can add tools, deprecate old ones, and evolve the schema without rewriting per-vendor glue code.

## The Tradeoff

MCP is still evolving. Not every AI product supports it yet. Some vendors have their own extension systems and may never add MCP. For those, we still have the REST API and SDK. But for the clients that do support MCP (Cursor, Claude Desktop, and growing), one integration covers them all.

The protocol is maintained by Anthropic and adopted by others. Tool schemas use JSON Schema. Zod works well for validation on the server side. The Streamable HTTP transport means you can run the server remotely, not just as a local subprocess. That matters for multi-user or hosted setups.

## Recommendation

If you're building AI tooling, consider MCP first. One protocol, many clients, less code to maintain. You can always add vendor-specific integrations later for clients that don't support MCP. But starting with a single protocol reduces complexity and keeps your options open.

Reflect Memory: [reflectmemory.com](https://reflectmemory.com) | [github.com/van-reflect/Reflect-Memory](https://github.com/van-reflect/Reflect-Memory)
Lines changed: 109 additions & 0 deletions
---
title: Building a Vendor-Neutral Memory Layer in TypeScript
published: false
description: "A technical deep-dive into Reflect Memory's architecture: TypeScript, Fastify, SQLite with WAL mode, and MCP transport for AI agent memory."
tags:
- typescript
- ai
- mcp
- sqlite
- fastify
---

# Building a Vendor-Neutral Memory Layer in TypeScript

AI agents are stateless by default. Every conversation starts from zero. That works for one-off tasks, but breaks down when you want ChatGPT to remember your preferences, Claude to recall project context, and Cursor to know your coding style. The solution is a shared memory layer that any agent can read and write, regardless of vendor.

Reflect Memory is an open-source memory substrate built in TypeScript. Here's why we chose each piece of the stack and how they fit together.

## Why TypeScript?

TypeScript gives us a single language across the entire system: the REST API, the MCP server, the SDK, and the n8n node. No context switching. The SDK uses native `fetch` and has zero runtime dependencies, so it runs anywhere Node 18+ runs. The type system catches schema mismatches at compile time, which matters when you're passing memory structures between services.

```typescript
// SDK usage -- zero deps, native fetch
import { ReflectMemory } from "reflect-memory-sdk";

const rm = new ReflectMemory({ apiKey: process.env.REFLECT_API_KEY! });
const latest = await rm.getLatest();
```
## Fastify for the HTTP Layer

We use Fastify instead of Express for the main API. Fastify's schema-based validation (via JSON Schema) enforces request shapes before handlers run. That's critical for security: we reject malformed bodies and unknown fields at the edge. Rate limiting and CORS are first-class plugins. The server stays thin: it authenticates, validates, and delegates to pure service functions.

```typescript
// Server setup -- schema validation, rate limit, CORS
import Fastify from "fastify";
import cors from "@fastify/cors";
import rateLimit from "@fastify/rate-limit";

const app = Fastify();
await app.register(cors, { origin: true });
await app.register(rateLimit, { max: 100, timeWindow: "1 minute" });
```

## SQLite with WAL Mode

We store memories in SQLite with WAL (Write-Ahead Logging) mode. WAL gives us concurrent reads while a single writer commits. The process exits at startup if WAL activation fails, so we never silently fall back to rollback journal mode.

```typescript
// Enforced at startup
const journalMode = db.pragma("journal_mode = WAL") as Array<{ journal_mode: string }>;
if (journalMode[0]?.journal_mode !== "wal") {
  console.error(`WAL mode not active. Got: ${journalMode[0]?.journal_mode}`);
  process.exit(1);
}
```

The schema uses `STRICT` tables and `json_type()` CHECK constraints on JSON columns. Foreign keys are enforced via `PRAGMA foreign_keys = ON`. No connection pooling needed: better-sqlite3 is synchronous and single-process.
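Concretely, a table under those constraints might be declared like this (a sketch only: the column names are illustrative, not the actual production schema):

```sql
-- Illustrative DDL: STRICT table, JSON CHECK on the tags column,
-- and an enforced foreign key. Not the real Reflect Memory schema.
PRAGMA foreign_keys = ON;

CREATE TABLE memories (
  id         TEXT NOT NULL PRIMARY KEY,
  user_id    TEXT NOT NULL REFERENCES users(id),
  title      TEXT NOT NULL,
  content    TEXT NOT NULL,
  tags       TEXT NOT NULL DEFAULT '[]'
             CHECK (json_type(tags) = 'array'),
  created_at TEXT NOT NULL
) STRICT;
```

With `STRICT`, SQLite rejects values that don't match the declared column types, and the `CHECK` guarantees `tags` is always a JSON array, so inserts fail loudly instead of storing malformed data.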
## Pure Context Builder

The context builder is a pure function. No I/O, no database, no side effects. It takes memories and a query, returns a prompt string. Same inputs, same output, every time. That makes it testable and auditable. The model never decides which memories to include; that decision is made upstream based on the user's explicit filter.

```typescript
// Pure function -- no I/O
export function buildPrompt(
  memories: MemoryEntry[],
  userQuery: string,
  systemPrompt: string,
  charBudget?: number,
): PromptResult {
  const systemSection = systemPrompt.length > 0 ? `[System]\n${systemPrompt}` : "";
  const querySection = `[User Query]\n${userQuery}`;
  // ... assembles prompt, respects charBudget
}
```
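The budget step elided above could be applied like this (a sketch under assumptions: the real `buildPrompt` may trim differently, e.g. by summarizing rather than dropping blocks):

```typescript
// Hypothetical helper: keep whole memory blocks in the given order
// until the character budget is exhausted. Pure, like the builder
// it would live in -- no I/O, deterministic output.
function fitToBudget(blocks: string[], charBudget: number): string[] {
  const kept: string[] = [];
  let used = 0;
  for (const block of blocks) {
    if (used + block.length > charBudget) break; // never split a block
    kept.push(block);
    used += block.length;
  }
  return kept;
}
```

Keeping whole blocks (rather than truncating mid-memory) preserves the auditability property: every memory in the prompt appears exactly as stored.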
## MCP Transport

The Model Context Protocol (MCP) is how Cursor, Claude Desktop, and other clients discover and call tools. We run a standalone MCP server that exposes `read_memories`, `get_memory_by_id`, `browse_memories`, `write_memory`, and `query`. Each tool is a Zod-validated function. The transport is Streamable HTTP, so it works over the network without stdio.

```typescript
// MCP tool registration
mcp.tool(
  "read_memories",
  "Get the most recent memories. Returns full content.",
  { limit: z.number().min(1).max(50).default(10) },
  { title: "Read Memories", readOnlyHint: true },
  async ({ limit }) => {
    const memories = listMemories(db, userId, { by: "all" }, vendor, { limit });
    return { content: [{ type: "text", text: JSON.stringify(memories, null, 2) }] };
  },
);
```

## One API, Many Vendors

User keys get full CRUD. Agent keys (per-vendor) can only write via `POST /agent/memories` and query via `POST /query`. Agents see only memories where `allowed_vendors` includes their vendor or `"*"`. The `origin` field is set server-side from the key, never from the request body. That prevents agents from impersonating each other.

The result: one memory store, one API, and one MCP server. ChatGPT, Claude, Cursor, Gemini, and n8n all talk to the same layer. No per-vendor integrations to maintain.
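The visibility rule stated above reduces to a one-line predicate. Sketched here (the `allowed_vendors` field name comes from the description; the surrounding query code is assumed):

```typescript
// An agent key's vendor may read a memory only when the memory's
// allowed_vendors list names that vendor or contains the "*" wildcard.
function vendorCanRead(allowedVendors: string[], vendor: string): boolean {
  return allowedVendors.includes("*") || allowedVendors.includes(vendor);
}
```

Because the check runs server-side against the key's vendor (never a vendor claimed in the request body), an agent cannot widen its own visibility.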
## Hard Invariants

We enforce a few invariants that keep the system predictable:

- **Explicit intent:** every request declares exactly what it wants. No inferred behavior.
- **Hard deletion:** delete means delete. One row, gone. No soft deletes or archives.
- **Pure context builder:** the prompt assembly has no I/O. Same inputs, same output.
- **No AI write path:** the model cannot create, modify, or delete memories. One-directional data flow.
- **Deterministic visibility:** every query response includes a full receipt with memories used, prompt sent, and vendor filter applied.

## Getting Started

Try it: `npm install reflect-memory-sdk`, or `npx reflect-memory-mcp` for the MCP server. The SDK works with the hosted API at api.reflectmemory.com, or you can self-host. Docs and source: [reflectmemory.com](https://reflectmemory.com), [github.com/van-reflect/Reflect-Memory](https://github.com/van-reflect/Reflect-Memory).

content/hn-show-post.md

Lines changed: 15 additions & 0 deletions
# Show HN: I built a shared memory layer for AI agents (ChatGPT, Claude, Cursor, Gemini)

---

**Top-level comment (paste with submission):**

Every AI tool forgets everything between sessions. ChatGPT Memory stays in OpenAI. Claude Projects stay in Anthropic. Nothing talks to each other, so you repeat yourself across every tool.

Reflect Memory is a vendor-neutral memory substrate. One API, every vendor. Write a memory from Cursor, retrieve it from Claude or ChatGPT. You control which vendors can see what via `allowed_vendors`. No AI in the write path, just deterministic persistence.

Architecture: https://github.com/van-reflect/Reflect-Memory/blob/main/ARCHITECTURE.md

Quick start: `npm install reflect-memory-sdk`, then hit the API. MCP server for Cursor/Claude, Custom GPT for ChatGPT, n8n community node. Open spec, TypeScript backend.

I'm the solo founder, happy to answer questions.

content/linkedin-posts.md

Lines changed: 78 additions & 0 deletions
# Reflect Memory - LinkedIn Posts

## Post 1: Career Pivot

**Theme:** "I left my design career at Google/Apple/Sony/TikTok to build AI infrastructure. Here's why."

---

I left my design career at Google, Apple, Sony, and TikTok to build AI infrastructure.

Not because I stopped loving design. Because I saw a gap that only someone who lived in both worlds could fill.

For years I designed products that shipped to billions of users. I learned how systems scale, how users behave, and how to ship. But I also watched AI tools arrive one by one, each with its own memory, each forgetting everything the moment you switched tabs.

ChatGPT doesn't remember what you told Claude. Cursor forgets what you did in n8n. Every vendor builds a silo. Users repeat themselves. Workflows can't span tools. The AI stack is fragmented, and memory is the glue that's missing.

I couldn't unsee it. So I started building Reflect Memory: a vendor-neutral memory substrate for AI agents. One API, every vendor. Write a memory from ChatGPT, retrieve it from Claude, use it in Cursor or n8n. You control which vendors can see what.

It's early. I'm a solo founder. But the architecture is shipped: TypeScript SDK, MCP server, n8n community node, ChatGPT Custom GPT. All talking to the same store.

If you've ever felt the frustration of AI amnesia, I'd love to hear what you'd want from a shared memory layer.

reflectmemory.com | github.com/van-reflect/Reflect-Memory

---
## Post 2: Architecture Thought Leadership

**Theme:** "The AI memory problem is a billion-dollar infrastructure opportunity. Here's the architecture I shipped."

---

The AI memory problem is a billion-dollar infrastructure opportunity.

Right now, every AI vendor builds its own memory. ChatGPT has one. Claude has another. Cursor, Gemini, n8n, and every new tool adds another silo. Users repeat themselves. Workflows can't span vendors. The cost of context is paid over and over, in every product, by every user.

I think that's backwards. Memory should be a layer. One substrate. Every agent.

Here's the architecture I shipped:

**Deterministic persistence.** Memories are stored as structured types, not black-box embeddings. You can inspect, edit, and delete exactly what you wrote. No fuzzy retrieval, no mystery vectors.

**MCP-native design.** The Model Context Protocol is becoming the standard for AI tool integration. I built the memory layer to speak it natively. One integration surface. Every vendor that supports MCP can plug in.

**User-controlled visibility.** You decide which vendors can read which memories. Your preferences might be visible to ChatGPT and Claude. Your project context might only go to Cursor. No vendor gets everything by default.

**Vendor-neutral API.** The store doesn't care if the client is OpenAI, Anthropic, or a custom agent. Same API. Same semantics. No lock-in.

The result: TypeScript SDK, MCP server, n8n node, Custom GPT. All talking to the same memory store. Write once, retrieve everywhere.

If you're thinking about AI infrastructure, what would you add to this stack?

reflectmemory.com | github.com/van-reflect/Reflect-Memory

---
## Post 3: Demo with Results

**Theme:** "What happens when you give 6 AI tools shared memory"

---

What happens when you give 6 AI tools shared memory?

I built Reflect Memory to find out. One API. One store. ChatGPT, Claude, Cursor, Gemini, n8n, and any MCP-compatible tool can read and write to the same memory layer.

Here's what I saw:

I told ChatGPT to remember that I'm building an AI memory startup called Reflect Memory. Then I opened Claude and asked: "What am I building?" Claude answered correctly. No prompt. No copy-paste. It pulled the memory from the shared store.

I switched to Cursor. I said: "Use my preferred variable naming convention." Cursor had it. I never defined it in that session. It was stored from a previous chat with Claude.

The pattern held across tools. One write, many reads. No per-vendor configuration. No sync jobs. Just one memory layer and every tool that speaks MCP.

The implications are big. Workflows can span vendors. Users stop repeating themselves. Context follows the user, not the product. And because the architecture is vendor-neutral, you control which tools see which memories.

I'm a solo founder: a designer from Google, Apple, Sony, and TikTok who pivoted to AI infrastructure. This is the first version. I'd love feedback from anyone building with AI agents.

What would you do with shared memory across your tools?

reflectmemory.com | github.com/van-reflect/Reflect-Memory
