opencode-bridge by Musab Kara
Claude Code plugin for integrating with OpenCode -- providing access to both Zen (free cloud models) and Ollama (local models).
```bash
bash <(curl -fsSL https://raw.githubusercontent.com/SkyWalker2506/claude-marketplace/main/install.sh) opencode-bridge
```

Or via the Claude Code native marketplace:

```bash
claude plugin install opencode-bridge@musabkara-claude-marketplace
```

When Claude Code quota runs out or you need a free/local alternative, OpenCode bridges the gap:
- **Zen (cloud):** free models such as `gpt-5-nano`, `minimax-m2.1-free`, `glm-4.7-free`, `kimi-k2.5-free`, and `big-pickle` -- no GPU required; runs via OpenCode's Zen cloud service
- **Ollama (local):** fully local models such as `qwen2.5-coder:7b` and `gemma3:4b` -- no API key, no internet needed; runs on your machine
Both providers are configured in a single `~/.config/opencode/opencode.json` file and can be switched on the fly.
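Switching on the fly amounts to rewriting the `model` key in that file. A minimal sketch, assuming `python3` is available; by default it edits a throwaway temp file so it is safe to try -- point `CONFIG` at the real path to change your actual setup:

```shell
# Sketch: flip the default model in opencode.json.
# CONFIG defaults to a throwaway temp file for this demo; set it to
# ~/.config/opencode/opencode.json to change the real configuration.
CONFIG="${CONFIG:-$(mktemp)}"
[ -s "$CONFIG" ] || printf '{"model": "opencode/gpt-5-nano"}\n' > "$CONFIG"

NEW_MODEL="${1:-ollama/qwen2.5-coder:7b}"

# python3 keeps the JSON edit portable (no jq dependency assumed)
python3 - "$CONFIG" "$NEW_MODEL" <<'PYEOF'
import json, sys

path, model = sys.argv[1], sys.argv[2]
with open(path) as f:
    cfg = json.load(f)
cfg["model"] = model  # switch the default model in place
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
PYEOF

grep '"model"' "$CONFIG"
```

The same approach works for `small_model` or any other key; only the JSON edit itself is shown here.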
| Command | Description |
|---|---|
| `/opencode` | Show current OpenCode setup status |
| `/opencode zen` | Connect to Zen cloud and configure free models |
| `/opencode local` | Switch to an Ollama local model |
| `/opencode models` | List all available models (cloud + local) |
| `/opencode pull <model>` | Pull an Ollama model |
| `/opencode config` | Show or edit the OpenCode configuration |
| `/opencode install` | Install or update the OpenCode CLI |
```bash
# Install the OpenCode CLI
npm install -g opencode-ai
```

```bash
# macOS
brew install ollama

# Then pull a model
ollama pull qwen2.5-coder:7b
ollama pull gemma3:4b
```

```bash
# Option A: use the TUI
opencode
# Then: /connect -> select "opencode" -> paste API key from https://opencode.ai/auth

# Option B: set an environment variable
export OPENCODE_ZEN_API_KEY='sk-...'  # Add to ~/.zshrc
```

The plugin uses `~/.config/opencode/opencode.json`:
```json
{
  "enabled_providers": ["opencode", "ollama"],
  "model": "opencode/gpt-5-nano",
  "small_model": "opencode/gpt-5-nano",
  "provider": {
    "opencode": {
      "options": { "apiKey": "{env:OPENCODE_ZEN_API_KEY}" },
      "models": {
        "gpt-5-nano": { "name": "Zen -- GPT-5 Nano (free tier)" },
        "minimax-m2.1-free": { "name": "Zen -- MiniMax M2.1 (free, limited)" },
        "glm-4.7-free": { "name": "Zen -- GLM 4.7 (free, limited)" }
      }
    },
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": { "baseURL": "http://localhost:11434/v1" },
      "models": {
        "qwen2.5-coder:7b": { "name": "Qwen 2.5 Coder 7B (local)" }
      }
    }
  }
}
```

Run `install.sh` to add these aliases directly to your shell rc (no claude-config required):
| Alias | What it does |
|---|---|
| `claude-free` | Opens OpenCode with the Zen model (`opencode/gpt-5-nano`) |
| `claude-local` | Opens OpenCode with the Ollama model (`ollama/qwen2.5-coder:7b`) |
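The aliases are roughly equivalent to the lines below. This is an assumption about what `install.sh` writes, and the model-selection flag in particular should be checked against `opencode --help` on your version:

```shell
# Rough sketch of the installed aliases (assumed; install.sh may differ).
# The --model flag is an assumption -- verify it with `opencode --help`.
alias claude-free='opencode --model opencode/gpt-5-nano'
alias claude-local='opencode --model ollama/qwen2.5-coder:7b'
```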
```bash
bash install.sh
source ~/.zshrc  # or ~/.bashrc
```

| Model | Notes |
|---|---|
| `opencode/gpt-5-nano` | Free tier (default) |
| `opencode/minimax-m2.1-free` | Free, limited time |
| `opencode/glm-4.7-free` | Free, limited time |
| `opencode/kimi-k2.5-free` | Free, limited time |
| `opencode/big-pickle` | Free, limited time |
| Model | Size | Notes |
|---|---|---|
| `qwen2.5-coder:7b` | ~4.7 GB | Best for coding tasks |
| `gemma3:4b` | ~3.3 GB | Lightweight, general purpose |
| `qwen2.5-coder:14b` | ~9.0 GB | Higher quality, needs more RAM |
| `gemma3:12b` | ~8.1 GB | Larger Gemma, better quality |
Pull any model with `ollama pull <model>`.
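Before pulling, it can help to confirm the Ollama server is actually listening on the port the config's `baseURL` points at. A small check, assuming the default port 11434 (`/api/tags` is Ollama's endpoint for listing locally pulled models):

```shell
# Sketch: probe the local Ollama server used by the ollama provider.
check_ollama() {
  if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
    echo "Ollama is running; locally pulled models:"
    curl -fsS http://localhost:11434/api/tags
  else
    echo "Ollama not reachable on :11434 -- start it with 'ollama serve'"
  fi
}

check_ollama
```

Either branch prints a diagnostic, so the check is safe to run whether or not Ollama is installed.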
MIT
- claude-config — Multi-Agent OS for Claude Code (134 agents, local-first routing)
- Plugin Marketplace — Browse & install all 18 plugins
- ClaudeHQ — Claude ecosystem HQ