-
Notifications
You must be signed in to change notification settings - Fork 0
Expand file tree
/
Copy path.env.example
More file actions
318 lines (256 loc) · 16.9 KB
/
.env.example
File metadata and controls
318 lines (256 loc) · 16.9 KB
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
# ═══════════════════════════════════════════════════════════════════════════════
# CLAUDE CODE++ ENVIRONMENT CONFIGURATION
# Author: Jeremiah Kroesche | Halfservers LLC
# Updated: January 2026
# ═══════════════════════════════════════════════════════════════════════════════
# ─────────────────────────────────────────────────────────────────────────────
# API KEYS
# ─────────────────────────────────────────────────────────────────────────────
# Required - Primary LLM provider
ANTHROPIC_API_KEY=sk-ant-...
# Optional - For model routing fallbacks and alternatives
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=...
# Optional - For premium embeddings (if not using local)
VOYAGE_API_KEY=...
# ─────────────────────────────────────────────────────────────────────────────
# MODEL CONFIGURATION
# ─────────────────────────────────────────────────────────────────────────────
#
# LATEST ANTHROPIC MODELS (January 2026):
# claude-opus-4-5-20251101 - Most capable, highest cost
# claude-sonnet-4-5-20250929 - Best balance of capability/cost
# claude-haiku-4-5-20251001 - Fastest, most affordable
#
# LATEST OPENAI MODELS:
# gpt-4o-2024-11-20 - Flagship multimodal
# gpt-4o-mini-2024-07-18 - Fast and cheap
# o1-2024-12-17 - Reasoning model
# o3-mini-2025-01-31 - Latest reasoning, efficient
#
# LATEST GOOGLE MODELS:
# gemini-2.0-flash - Fast, multimodal
# gemini-2.0-flash-thinking - Reasoning variant
#
# ─────────────────────────────────────────────────────────────────────────────
# Primary model for complex reasoning, architecture decisions
PRIMARY_MODEL=claude-sonnet-4-5-20250929
# Fast model for simple completions, explanations, formatting
FAST_MODEL=claude-haiku-4-5-20251001
# Fallback if primary unavailable
FALLBACK_MODEL=gpt-4o-2024-11-20
# Entity extraction for Graphiti (needs to be good at structured output)
GRAPHITI_LLM_MODEL=claude-haiku-4-5-20251001
# ─────────────────────────────────────────────────────────────────────────────
# LOCAL MODELS (Ollama) - RECOMMENDED FOR COST EFFICIENCY
# ─────────────────────────────────────────────────────────────────────────────
#
# Running models locally eliminates API costs for routine tasks.
# Requires: Ollama installed (https://ollama.ai)
#
# RECOMMENDED LOCAL MODELS:
# codellama:13b - Code completion, explanation (~8GB VRAM)
# deepseek-coder:6.7b - Excellent code model, efficient (~4GB VRAM)
# mistral:7b - General purpose, fast (~4GB VRAM)
# llama3.2:3b - Ultra-fast for simple tasks (~2GB VRAM)
# qwen2.5-coder:7b - Strong coding, good efficiency (~4GB VRAM)
#
# ─────────────────────────────────────────────────────────────────────────────
OLLAMA_HOST=http://localhost:11434
# Local model for simple tasks (code explanation, formatting, simple Q&A)
LOCAL_SIMPLE_MODEL=llama3.2:3b
# Local model for code tasks (completion, refactoring)
LOCAL_CODE_MODEL=qwen2.5-coder:7b
# ─────────────────────────────────────────────────────────────────────────────
# EMBEDDING MODELS
# ─────────────────────────────────────────────────────────────────────────────
#
# LOCAL (Recommended - Zero cost after setup):
# nomic-embed-text - Best open-source general (768d, ~275MB)
# mxbai-embed-large - Strong alternative (1024d, ~670MB)
# snowflake-arctic-embed:335m - Good balance (1024d, ~670MB)
#
# API-BASED (Higher quality, ongoing cost):
# voyage-code-3 - SOTA for code ($0.06/1M tokens)
# voyage-3-large - SOTA general ($0.06/1M tokens)
# text-embedding-3-large - OpenAI option ($0.13/1M tokens)
#
# ─────────────────────────────────────────────────────────────────────────────
# Primary embedding model - LOCAL RECOMMENDED
EMBEDDING_MODEL=nomic-embed-text
EMBEDDING_PROVIDER=ollama
# Alternative: Use Voyage for higher quality (costs money)
# EMBEDDING_MODEL=voyage-code-3
# EMBEDDING_PROVIDER=voyage
# Embedding dimensions (for Nomic)
EMBEDDING_DIMENSIONS=768
# ─────────────────────────────────────────────────────────────────────────────
# REDIS (Hot Tier)
# ─────────────────────────────────────────────────────────────────────────────
REDIS_URL=redis://localhost:6379
REDIS_DB=0
REDIS_MAX_MEMORY=256mb
REDIS_EVICTION_POLICY=allkeys-lru
# ─────────────────────────────────────────────────────────────────────────────
# NEO4J / GRAPHITI (Warm Tier - Knowledge Graph)
# ─────────────────────────────────────────────────────────────────────────────
#
# Backend options:
# neo4j - Full featured, best for production
# falkordb - Lighter weight, Redis-compatible
# kuzu - Embedded, simplest deployment
#
# ─────────────────────────────────────────────────────────────────────────────
GRAPHITI_BACKEND=neo4j
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=CHANGE_ME_BEFORE_USING
# Entity extraction settings
GRAPHITI_EXTRACT_ON_MESSAGE=true
GRAPHITI_BATCH_SIZE=10
# ─────────────────────────────────────────────────────────────────────────────
# LIVEGREP (Cold Tier + Global Search)
# ─────────────────────────────────────────────────────────────────────────────
LIVEGREP_INDEX_PATH=${CLAUDE_CODE_PP_HOME}/memory/user.idx
LIVEGREP_BIND_ADDRESS=127.0.0.1:8910
LIVEGREP_MAX_MATCHES=1000
LIVEGREP_CONTEXT_LINES=3
# Indexing settings
LIVEGREP_BUILD_PARALLELISM=4
LIVEGREP_INDEX_ON_SESSION_END=true
# ─────────────────────────────────────────────────────────────────────────────
# OBSIDIAN VAULT (Archive - Human Readable)
# ─────────────────────────────────────────────────────────────────────────────
VAULT_PATH=${CLAUDE_CODE_PP_HOME}/memory/vault
VAULT_SYNC_ENABLED=true
VAULT_SYNC_INTERVAL=300
VAULT_DAILY_NOTES=true
# ─────────────────────────────────────────────────────────────────────────────
# LOGGING
# ─────────────────────────────────────────────────────────────────────────────
LOG_LEVEL=INFO
LOG_PATH=${CLAUDE_CODE_PP_HOME}/logs
LOG_RETENTION_DAYS=365
LOG_FORMAT=json
# Comprehensive logging ("down to the bones")
LOG_CONVERSATIONS=true
LOG_TOOL_CALLS=true
LOG_FILE_OPERATIONS=true
LOG_MODEL_CALLS=true
# ─────────────────────────────────────────────────────────────────────────────
# SYSTEM PATHS
# ─────────────────────────────────────────────────────────────────────────────
CLAUDE_CODE_PP_HOME=~/.claude-code-pp
CLAUDE_CODE_PP_CACHE=${CLAUDE_CODE_PP_HOME}/cache
CLAUDE_CODE_PP_BIN=${CLAUDE_CODE_PP_HOME}/bin
# ─────────────────────────────────────────────────────────────────────────────
# DOCKER MCP GATEWAY / VOLUME CONFIGURATION
# ─────────────────────────────────────────────────────────────────────────────
#
# These settings are used when running with Docker MCP Gateway mode.
# Run: id -u && id -g to get your user/group IDs.
#
# ─────────────────────────────────────────────────────────────────────────────
# User ID/Group ID for container file permissions (match your host user)
USER_ID=1000
GROUP_ID=1000
# Obsidian Vault Path - BIND MOUNTED into containers
# Container writes appear in this host directory, enabling cloud sync.
#
# Default (no cloud sync):
OBSIDIAN_VAULT_PATH=~/.claude-code-pp/memory/vault
# iCloud (macOS):
# OBSIDIAN_VAULT_PATH=~/Library/Mobile Documents/iCloud~md~obsidian/Documents/Claude-Memory
# Dropbox:
# OBSIDIAN_VAULT_PATH=~/Dropbox/Obsidian/Claude-Memory
# OneDrive:
# OBSIDIAN_VAULT_PATH=~/OneDrive/Obsidian/Claude-Memory
# Google Drive:
# OBSIDIAN_VAULT_PATH=~/Google Drive/Obsidian/Claude-Memory
# Memory MCP log level
MEMORY_MCP_LOG_LEVEL=INFO
# ─────────────────────────────────────────────────────────────────────────────
# PERMISSIONS
# ─────────────────────────────────────────────────────────────────────────────
#
# Levels:
# sandboxed - Read only, no execution
# standard - Read/write project, safe commands (DEFAULT)
# elevated - Access home directory, install packages
# unrestricted - Full user permissions
#
# ─────────────────────────────────────────────────────────────────────────────
DEFAULT_PERMISSION_LEVEL=standard
# ─────────────────────────────────────────────────────────────────────────────
# OPENCLAW INTEGRATION (Multi-Channel AI Gateway)
# ─────────────────────────────────────────────────────────────────────────────
#
# OpenClaw provides AI gateway access via WhatsApp, Telegram, Discord, Slack,
# Signal, iMessage, and more. Memory is shared with Claude Code++ via the
# memory-mcp-bridge extension.
#
# Install: npm install -g openclaw@latest && openclaw onboard --install-daemon
# Docs: https://docs.openclaw.ai
#
# ─────────────────────────────────────────────────────────────────────────────
# OpenClaw Gateway Settings
OPENCLAW_GATEWAY_PORT=18789
OPENCLAW_GATEWAY_BIND=loopback
# Channel Tokens (Optional - Configure via 'openclaw config set')
# TELEGRAM_BOT_TOKEN=...
# DISCORD_BOT_TOKEN=...
# SLACK_BOT_TOKEN=...
# SLACK_APP_TOKEN=...
# TWILIO_ACCOUNT_SID=...
# TWILIO_AUTH_TOKEN=...
# TWILIO_WHATSAPP_FROM=whatsapp:+1...
# OpenClaw Memory Bridge (auto-configured by install.sh)
OPENCLAW_PLUGIN_MEMORY_MCP_ENABLED=true
OPENCLAW_PLUGIN_MEMORY_MCP_COMMAND=${CLAUDE_CODE_PP_BIN}/memory-mcp
# ─────────────────────────────────────────────────────────────────────────────
# EFFICIENCY PRESETS
# ─────────────────────────────────────────────────────────────────────────────
#
# Uncomment ONE preset to override individual settings above:
#
# ─── PRESET: Maximum Efficiency (Local-first, minimal API costs) ─────────────
# EFFICIENCY_PRESET=local_max
# PRIMARY_MODEL=qwen2.5-coder:7b
# FAST_MODEL=llama3.2:3b
# EMBEDDING_MODEL=nomic-embed-text
# EMBEDDING_PROVIDER=ollama
#
# ─── PRESET: Balanced (Local simple, API for complex) ────────────────────────
# EFFICIENCY_PRESET=balanced
# PRIMARY_MODEL=claude-sonnet-4-5-20250929
# FAST_MODEL=llama3.2:3b
# EMBEDDING_MODEL=nomic-embed-text
#
# ─── PRESET: Quality First (Best models, higher cost) ────────────────────────
# EFFICIENCY_PRESET=quality
# PRIMARY_MODEL=claude-opus-4-5-20251101
# FAST_MODEL=claude-haiku-4-5-20251001
# EMBEDDING_MODEL=voyage-code-3
# EMBEDDING_PROVIDER=voyage
#
# ─────────────────────────────────────────────────────────────────────────────
# ═══════════════════════════════════════════════════════════════════════════════
# COST ESTIMATES (January 2026 pricing)
# ═══════════════════════════════════════════════════════════════════════════════
#
# LOCAL-FIRST PRESET (~$0-5/month):
# - Local models handle 80-90% of requests
# - API only for complex reasoning
# - Requires decent GPU (8GB+ VRAM recommended)
#
# BALANCED PRESET (~$20-50/month):
# - Local for simple tasks
# - Sonnet for complex work
# - Good quality/cost balance
#
# QUALITY PRESET (~$100-300/month):
# - Opus for primary work
# - Premium embeddings
# - Best results, highest cost
#
# ═══════════════════════════════════════════════════════════════════════════════