
Commit c0d1c9e

Add enterprise deployment docs (upstash#2033)
* Add on-premise deployment docs: hidden page covering what's included, setup steps, configuration reference, MCP client connection, and architecture diagram.
* Update on-premise docs: switch from tar.gz to GHCR registry pull. Replace curl + docker load approach with standard docker login + docker pull using a license-gated registry token from context7.com.
* Fix on-premise docs: separate auth step, clarify docker compose pull
  - Split into distinct "Authenticate" and "Configure and start" steps
  - Remove redundant docker pull (docker compose up -d handles it)
  - Add driver: local to volumes declaration
  - Clarify that docker login credentials persist for docker compose
* docs: mention docker pull as alternative to docker compose pull
* docs: use -p flag instead of --password-stdin for docker login
* Move on-premise docs to enterprise/, add configuration details
  - Move on-premise.mdx from docs root to docs/enterprise/
  - Add setup wizard documentation
  - Add AI provider settings with OpenRouter and local model examples
  - Add embedding settings with incompatibility warning
  - Add access control section with permission toggles
  - Add volume persistence warning
  - Remove env vars for AI/git config (now UI-configured)
* Docs: clarify git tokens, add health check example, group operations
* Docs: extract LICENSE_KEY variable for easier configuration
1 parent dfe9863 commit c0d1c9e

File tree

3 files changed: +317, -1 lines changed


docs/docs.json

Lines changed: 2 additions & 1 deletion

@@ -28,7 +28,8 @@
       "api-guide",
       "skills",
       "tips",
-      "enterprise"
+      "enterprise",
+      "enterprise/on-premise"
    ]
  },
  {

docs/enterprise/on-premise.mdx

Lines changed: 315 additions & 0 deletions
---
title: "On-Premise Deployment"
sidebarTitle: "On-Premise"
hidden: true
---

Context7 On-Premise lets you run the full Context7 stack inside your own infrastructure. Your code, documentation, and embeddings never leave your environment.

## What's Included

- Full Context7 parsing and indexing pipeline
- Local vector storage (no external vector DB required)
- Built-in MCP server — works with any MCP-compatible AI client
- Web UI for managing indexed libraries and configuration
- REST API compatible with the public Context7 API
- Private GitHub and GitLab repository ingestion

<Frame>
  ![On-Premise Architecture](/images/on-premise-architecture.png)
</Frame>

## Setup

<Steps>

<Step title="Request a trial">

Go to [context7.com/plans](https://context7.com/plans) and click **On-Premise Trial**. Fill out the request form — no credit card required. You'll receive a 30-day full-featured license key via email once approved.

</Step>
<Step title="Authenticate with the registry">

Use your license key to get a registry token and log in:

```bash
LICENSE_KEY="<your-license-key>"

TOKEN=$(curl -s -H "Authorization: Bearer $LICENSE_KEY" \
  https://context7.com/api/v1/license/registry-token | jq -r '.token')

docker login ghcr.io -u x-access-token -p "$TOKEN"
```
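If the license key is wrong or expired, `jq -r '.token'` prints the literal string `null` (or nothing at all), and `docker login` then fails with a confusing error. A small guard, sketched here with a helper name of our choosing, fails fast instead:

```bash
# valid_registry_token: succeed only if the token looks usable.
# jq -r prints the literal string "null" when the field is absent.
valid_registry_token() {
  [ -n "$1" ] && [ "$1" != "null" ]
}

# Usage after fetching $TOKEN:
#   valid_registry_token "$TOKEN" || { echo "token fetch failed; check LICENSE_KEY" >&2; exit 1; }
```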

Docker stores these credentials locally — `docker compose` will use them automatically when pulling the image. You can also pull manually:

```bash
docker pull ghcr.io/context7/enterprise:latest
```

</Step>

<Step title="Configure and start">

Create a `docker-compose.yml`:

```yaml
services:
  context7:
    image: ghcr.io/context7/enterprise:latest
    container_name: context7
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - context7-data:/data
    environment:
      - LICENSE_KEY=${LICENSE_KEY}

volumes:
  context7-data:
    driver: local
```
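If you want Docker to restart the service when it stops responding, you can also add a `healthcheck` to the service definition. This is a sketch, not part of the documented configuration: it assumes `curl` is available inside the image (verify before relying on it) and uses the `/api/health` endpoint documented under Operations:

```yaml
# Sketch: optional healthcheck for the context7 service.
# Assumes curl exists in the image; adjust or drop if it does not.
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://localhost:3000/api/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 30s
```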

<Warning>
The `context7-data` volume is critical. It stores your SQLite database (configuration, credentials, indexed libraries) and all vector embeddings. Without a persistent volume, all data is lost when the container is recreated or removed. Never run without a volume mount in production.
</Warning>

Create a `.env` file in the same directory:

```bash
LICENSE_KEY=ctx7sk-...
```

Start the service:

```bash
docker compose up -d
```

</Step>

<Step title="Complete the setup wizard">

Open `http://localhost:3000` in your browser. On first launch, the setup wizard guides you through configuring:

1. **AI Provider** — Choose OpenAI, Anthropic, Gemini, or a custom OpenAI-compatible endpoint. Enter your API key and model name.
2. **Embedding Provider** — Use the same provider as your LLM, or configure a separate one for embeddings.
3. **Git Tokens** — Add a GitHub and/or GitLab token for the platforms you use.

All configuration is stored locally in the embedded database and can be updated later from the Settings page.

</Step>

<Step title="Ingest your first repository">

From the dashboard, click **Add Repository** and enter a GitHub or GitLab URL. Once ingestion completes, your private docs are ready to query.

You can also add libraries via the REST API:

```bash
curl -X POST http://localhost:3000/api/parse \
  -H "Content-Type: application/json" \
  -d '{"url": "https://github.com/your-org/your-repo"}'
```
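To queue several repositories, you can loop over the same endpoint. Building the JSON body with `jq` keeps URLs correctly quoted and escaped; the loop below is a sketch with placeholder repository URLs:

```bash
# parse_payload: build a JSON body for the /api/parse endpoint from a repo URL.
parse_payload() {
  jq -cn --arg url "$1" '{url: $url}'
}

# Sketch: queue multiple repositories (placeholder URLs).
# for repo in https://github.com/your-org/repo-a https://github.com/your-org/repo-b; do
#   curl -X POST http://localhost:3000/api/parse \
#     -H "Content-Type: application/json" \
#     -d "$(parse_payload "$repo")"
# done
```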

</Step>

</Steps>

## Connecting Your AI Client

Point your MCP client at your deployment URL. Replace `https://context7.internal.yourcompany.com` with your actual host.

### Claude Code

```bash
claude mcp add --scope user --transport http context7 https://context7.internal.yourcompany.com/mcp
```

### Cursor

Add to `~/.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "context7": {
      "url": "https://context7.internal.yourcompany.com/mcp"
    }
  }
}
```

### Opencode

```json
{
  "mcp": {
    "context7": {
      "type": "remote",
      "url": "https://context7.internal.yourcompany.com/mcp",
      "enabled": true
    }
  }
}
```

For other clients, see [All Clients](/resources/all-clients).

## Configuration

### Environment Variables

These are set in your `docker-compose.yml` or `.env` file before starting the container.

| Variable | Required | Description |
|---|---|---|
| `LICENSE_KEY` | Yes | License key issued by Upstash |
| `PORT` | No | HTTP port (default: `3000`) |
| `DATA_DIR` | No | Data directory inside the container (default: `./data`) |

<Note>
AI provider keys, model settings, and git tokens are **not** set via environment variables. They are configured through the setup wizard and can be updated anytime from the Settings page in the web UI.
</Note>
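As an example, a deployment that serves on port 8080 instead of the default could set `PORT` and adjust the port mapping to match. This is a sketch based on the table above; the container side of the `ports` mapping must agree with the `PORT` value:

```yaml
# Sketch: run on port 8080 instead of 3000.
services:
  context7:
    ports:
      - "8080:8080"
    environment:
      - LICENSE_KEY=${LICENSE_KEY}
      - PORT=8080
```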

### AI Provider Settings

Configured via the **Settings** page in the web UI.

| Setting | Description |
|---|---|
| LLM Provider | `openai`, `anthropic`, `gemini`, or custom |
| LLM API Key | API key for your chosen provider |
| LLM Model | Model name (e.g. `gpt-4o`, `claude-sonnet-4-5`, `gemini-2.5-flash`) |
| LLM Base URL | Custom OpenAI-compatible endpoint (for local models or proxies) |

#### Examples

<Tabs>
<Tab title="OpenRouter">
```
Provider: custom
Base URL: https://openrouter.ai/api/v1
Model: openai/gpt-4o
API Key: sk-or-v1-...
```
</Tab>
<Tab title="Local Model (Ollama, vLLM)">
```
Provider: custom
Base URL: http://host.docker.internal:11434/v1
Model: llama3.2
API Key: ollama
```
</Tab>
</Tabs>
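Before saving a custom Base URL, it can be worth checking that the endpoint actually speaks the OpenAI-compatible API; listing models via `GET <base-url>/models` is part of that surface, and both Ollama and vLLM serve it. The helper name below is ours:

```bash
# openai_models_url: append /models to an OpenAI-compatible base URL,
# tolerating a trailing slash.
openai_models_url() {
  printf '%s/models\n' "${1%/}"
}

# Usage (note: host.docker.internal resolves from inside the container;
# use localhost when testing from the host itself):
#   curl -s "$(openai_models_url http://host.docker.internal:11434/v1)"
```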

### Embedding Settings

By default, Context7 uses the same provider as your LLM for generating embeddings. You can configure a separate embedding provider if needed.

| Setting | Description |
|---|---|
| Embedding Provider | `openai` or `gemini` |
| Embedding API Key | Separate API key for embeddings (falls back to the LLM API key) |
| Embedding Model | Embedding model name (e.g. `text-embedding-3-small`) |
| Embedding Base URL | Custom embedding endpoint |

### Git Access Tokens

Configured via the **Settings** page in the web UI.

| Setting | Description |
|---|---|
| GitHub Token | GitHub Personal Access Token — required for GitHub repositories |
| GitLab Token | GitLab token — required for GitLab repositories |

You only need tokens for the platforms you use. If you only parse GitLab repos, you don't need a GitHub token, and vice versa. Create tokens with the `repo` scope (GitHub) or `read_repository` scope (GitLab) for private repository access.

## Access Control

Admin credentials default to `admin` / `admin` on first login. Change them immediately after setup via **Settings > Change Credentials**.

The Settings page lets you control which operations are available without authentication.

| Permission | Default | Description |
|---|---|---|
| Allow anonymous parse | Off | Allow unauthenticated users to trigger parsing |
| Allow anonymous refresh | Off | Allow unauthenticated users to refresh libraries |
| Allow anonymous delete | Off | Allow unauthenticated users to delete libraries |
| Allow anonymous support bundle | Off | Allow unauthenticated support bundle downloads |

When a permission is off, the operation requires admin login. The MCP endpoint and search API are always publicly accessible.

## Web UI

Open your deployment URL in a browser to access the dashboard. From here you can:

- Add and remove libraries
- Trigger re-indexing
- Monitor parsing status and logs
- Update AI provider settings, git tokens, and permissions
- Test MCP connectivity
- Change admin credentials

## Operations

### Updating the Image

If your registry login has expired, re-authenticate first:

```bash
LICENSE_KEY="<your-license-key>"

TOKEN=$(curl -s -H "Authorization: Bearer $LICENSE_KEY" \
  https://context7.com/api/v1/license/registry-token | jq -r '.token')

docker login ghcr.io -u x-access-token -p "$TOKEN"
```

Then pull the latest image and restart the container:

```bash
docker compose pull
docker compose up -d
```

Data persists in the named Docker volume across updates.

### Health Check

```bash
curl http://localhost:3000/api/health
```

Example response:

```json
{
  "status": "healthy",
  "version": "1.0.0",
  "setup": "complete",
  "license": "configured",
  "licenseInfo": {
    "valid": true,
    "teamSize": 10,
    "expiresAt": "2026-06-01T00:00:00.000Z"
  },
  "repos_parsed": 5,
  "uptime": 3600,
  "connectivity": {
    "llm": "configured",
    "llm_provider": "openai",
    "embedding": "configured",
    "embedding_provider": "openai",
    "github": "configured",
    "gitlab": "not configured"
  }
}
```
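For monitoring, the response is easy to gate on with `jq -e`, which sets its exit code from the filter's result. A sketch (the function name is ours) that succeeds only when the deployment reports healthy and the license is valid:

```bash
# check_health: read a health JSON document on stdin; exit 0 only if
# status is "healthy" and licenseInfo.valid is true.
check_health() {
  jq -e '.status == "healthy" and .licenseInfo.valid == true' >/dev/null
}

# Usage:
#   curl -s http://localhost:3000/api/health | check_health || echo "unhealthy" >&2
```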

## Support

For license issues, upgrade requests, or deployment questions, contact [context7@upstash.com](mailto:context7@upstash.com).