[Crush](https://github.com/charmbracelet/crush) (and its CLI `rush`) supports both **Model Context Protocol (MCP)** and standard **OpenAI-compatible APIs**, giving you two flexible ways to integrate with tinyMem.

## Option A: MCP Mode (Recommended)

This mode allows Rush to natively use tinyMem's tools (like `memory_query`, `memory_write`, `memory_health`) to actively manage context during your chat.

### 1. Configuration

Crush looks for a configuration file at `.crush.json` (project local) or `~/.config/crush/crush.json` (global).

**Create/Edit `.crush.json`:**

```json
{
  "mcp": {
    "tinymem": {
      "type": "stdio",
      "command": "tinymem",
      "args": ["mcp"],
      "timeout": 120,
      "env": {
        "TINYMEM_METRICS_ENABLED": "true"
      }
    }
  }
}
```

*If `tinymem` is not in PATH, replace `"command": "tinymem"` with `"command": "/absolute/path/to/tinymem"`.*

### 2. Usage (MCP)

Start Rush:

```bash
rush
```

Rush will now have access to tinyMem tools:

- **Querying:** "What decisions did we make about the API structure?" (Rush calls `memory_query`)
- **Writing:** "Remember that we decided to use Postgres for production." (Rush calls `memory_write`)
- **Health:** "Check if the memory system is working." (Rush calls `memory_health`)
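If you want to see the tool wiring outside of Rush, you can probe the server directly: MCP's stdio transport is newline-delimited JSON-RPC 2.0, so a `tools/list` request after the standard `initialize` handshake should enumerate the memory tools. A rough sketch (the handshake shape follows the MCP spec; treating `tinymem mcp` as a conforming server, and the exact output, are assumptions):

```python
import json
import subprocess

def frame(method, msg_id=None, params=None):
    """Build one newline-delimited JSON-RPC 2.0 message (MCP stdio framing)."""
    msg = {"jsonrpc": "2.0", "method": method}
    if msg_id is not None:
        msg["id"] = msg_id
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

# MCP handshake: initialize -> initialized notification -> tools/list.
handshake = (
    frame("initialize", 1, {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "probe", "version": "0.0.0"},
    })
    + frame("notifications/initialized")
    + frame("tools/list", 2)
)

try:
    proc = subprocess.run(["tinymem", "mcp"], input=handshake,
                          capture_output=True, text=True, timeout=15)
    # Expect JSON responses whose tools list includes memory_query, memory_write, ...
    print(proc.stdout)
except FileNotFoundError:
    print("tinymem not on PATH")
```

This is the same conversation Rush has with the server on startup, which makes it a useful check when tools fail to appear.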

### 3. Advanced MCP Config

You can tune the MCP integration in `.crush.json`:

```json
{
  "mcp": {
    "tinymem": {
      "type": "stdio",
      "command": "tinymem",
      "args": ["mcp"],
      "env": {
        "TINYMEM_LOG_LEVEL": "debug",
        "TINYMEM_RECALL_MAX_ITEMS": "20"
      },
      "disabled_tools": []
    }
  },
  "system_prompt_suffix": "\n\nUse tinyMem to check context before answering code questions."
}
```

---

## Option B: Proxy Mode

Use this mode if you want tinyMem to act as a transparent "middle-man," automatically injecting relevant memory context into every prompt before it reaches your LLM.

### 1. Configure tinyMem

Create a `config.toml` file (e.g., in your project root or home directory) to tell tinyMem where your actual LLM is running (e.g., Ollama, LM Studio, vLLM).

```toml
[agent]
contract = 'small'

[proxy]
port = 8080
base_url = "http://127.0.0.1:1234" # Your actual LLM backend (e.g., LM Studio, Ollama)

[llm]
model = "unsloth/rnj-1-instruct" # The model name your backend expects
timeout = 5000

[recall]
max_items = 10

[cove]
enabled = true
confidence_threshold = 0.6

[logging]
level = "info"
file = "tinymem.log"
```

### 2. Start tinyMem Proxy

Open a terminal and run:

```bash
tinymem proxy
```

*You should see logs indicating the proxy is listening on port 8080.*
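Before pointing Crush at the proxy, you can verify it end-to-end with a plain chat request; since tinyMem exposes an OpenAI-compatible API, any HTTP client works. A minimal sketch (the endpoint path and payload follow the OpenAI chat-completions convention; the model name comes from the `config.toml` above):

```python
import json
import urllib.request

def build_chat_request(model, content):
    """Build a minimal OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }

payload = build_chat_request("unsloth/rnj-1-instruct",
                             "What do you remember about this project?")
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # tinyMem proxy, per [proxy] above
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.loads(resp.read())
        print(body["choices"][0]["message"]["content"])
except OSError:
    print("proxy not reachable -- is `tinymem proxy` running?")
```

If this round-trips, the proxy and your LLM backend are wired up correctly, and any remaining issues are on the Crush side.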
+
110
+
### 3. Configure Crush
111
+
112
+
Edit your `.crush.json` to treat tinyMem as an OpenAI-compatible provider.
113
+
114
+
```json
115
+
{
116
+
"providers": {
117
+
"Ollama": {
118
+
"name": "Ollama",
119
+
"base_url": "http://localhost:8080/v1",
120
+
"type": "openai-compat",
121
+
"models": [
122
+
{
123
+
"name": "rnj",
124
+
"id": "unsloth/rnj-1-instruct",
125
+
"context_window": 256000,
126
+
"default_max_tokens": 20000
127
+
}
128
+
]
129
+
}
130
+
}
131
+
}
132
+
```

* **`base_url`**: Must point to tinyMem (`http://localhost:8080/v1`).
* **`id`**: Should match the model name configured in `config.toml` (or be compatible with your backend).

### 4. Usage (Proxy)

Start Rush:

```bash
rush
```

In this mode, interaction is implicit:

- **Context Injection:** When you ask a question, tinyMem automatically searches its database and prepends relevant memories to your prompt.
- **Transparency:** Crush thinks it's talking directly to the LLM, but tinyMem is enriching the conversation in the background.
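Conceptually, the injection step works something like the sketch below. This is an illustrative simplification of what a memory proxy does to the message list, not tinyMem's actual implementation:

```python
def inject_context(messages, memories):
    """Prepend recalled memories as a system message (simplified sketch)."""
    if not memories:
        return messages
    context = "Relevant project memory:\n" + "\n".join(f"- {m}" for m in memories)
    return [{"role": "system", "content": context}] + messages

msgs = [{"role": "user", "content": "Which database do we use in production?"}]
enriched = inject_context(msgs, ["Decision: use Postgres for production."])
print(enriched[0]["content"])  # system message carrying the recalled memory
```

The LLM backend only ever sees the enriched message list, which is why Crush needs no memory-specific configuration in this mode.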

---
## Configuration Reference

For full configuration options, see [Configuration.md](Configuration.md).

## Troubleshooting

- **Tools Not Available (MCP):** Check `rush --version` to ensure you have a version with MCP support.
- **Connection Refused (Proxy):** Ensure `tinymem proxy` is running and the port matches your `.crush.json` configuration.
- **Logs:** Check `.tinyMem/logs/` or the terminal where you ran `tinymem proxy` to see if requests are being processed.