💬 Let's talk about the project on Discord
1. Create a free API key (NVIDIA, OpenRouter, Hugging Face, etc.)
2. npm i -g free-coding-models
3. free-coding-models
Find the fastest coding LLMs in seconds
Ping free coding models from 17 providers in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant
Features • Requirements • Installation • Usage • Models • OpenCode • OpenClaw • How it works
- 🎯 Coding-focused — Only models optimized for code generation, not chat or vision
- 🌐 Multi-provider — 134 models from NVIDIA NIM, Groq, Cerebras, SambaNova, OpenRouter, Hugging Face Inference, Replicate, DeepInfra, Fireworks AI, Codestral, Hyperbolic, Scaleway, Google AI, SiliconFlow, Together AI, Cloudflare Workers AI, and Perplexity API
- ⚙️ Settings screen — Press `P` to manage provider API keys, enable/disable providers, test keys live, and manually check/install updates
- 🚀 Parallel pings — All models tested simultaneously via native `fetch`
- 📊 Real-time animation — Watch latency appear live in the alternate screen buffer
- 🏆 Smart ranking — Top 3 fastest models highlighted with medals 🥇🥈🥉
- ⏱ Continuous monitoring — Pings all models every 2 seconds forever, never stops
- 📈 Rolling averages — Avg calculated from ALL successful pings since start
- 📊 Uptime tracking — Percentage of successful pings shown in real-time
- 🔄 Auto-retry — Timeout models keep getting retried, nothing is ever "given up on"
- 🎮 Interactive selection — Navigate with arrow keys directly in the table, press Enter to act
- 🔀 Startup mode menu — Choose between OpenCode and OpenClaw before the TUI launches
- 💻 OpenCode integration — Auto-detects NIM setup, sets model as default, launches OpenCode
- 🦞 OpenClaw integration — Sets the selected model as the default provider in `~/.openclaw/openclaw.json`
- 🎨 Clean output — Zero scrollback pollution, interface stays open until Ctrl+C
- 📶 Status indicators — UP ✅ · No Key 🔑 · Timeout ⏳ · Overloaded 🔥 · Not Found 🚫
- 🔍 Keyless latency — Models are pinged even without an API key; a `🔑 NO KEY` status confirms the server is reachable, with real latency shown, so you can compare providers before committing to a key
- 🏷 Tier filtering — Filter models by tier letter (S, A, B, C) with the `--tier` flag or dynamically with the `T` key
- ⭐ Persistent favorites — Press `F` on a selected row to pin/unpin it; favorites stay at the top with a dark orange background and a star before the model name
- 📊 Privacy-first analytics (optional) — Anonymous PostHog events with explicit consent and opt-out
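Two of the columns above are easy to model: Avg folds in every successful ping since startup, and Up% is successes over attempts. Here is a minimal illustrative sketch of that bookkeeping (not the package's actual code):

```javascript
// Sketch of per-model stats as described in the feature list: every
// successful ping since startup feeds the average; uptime is
// successes over attempts. Illustrative only.
function makeStats() {
  return { attempts: 0, successes: 0, totalMs: 0, latest: null };
}

function recordPing(stats, ok, latencyMs) {
  stats.attempts += 1;
  if (ok) {
    stats.successes += 1;
    stats.totalMs += latencyMs;
    stats.latest = latencyMs;
  }
  return stats;
}

function average(stats) {
  return stats.successes === 0 ? null : stats.totalMs / stats.successes;
}

function uptimePercent(stats) {
  return stats.attempts === 0 ? 0 : (100 * stats.successes) / stats.attempts;
}
```

Because the totals are never reset, the Avg column converges on true long-term latency instead of jumping with each ping.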
Before using free-coding-models, make sure you have:
- Node.js 18+ — Required for the native `fetch` API
- At least one free API key — pick any or all of:
- NVIDIA NIM — build.nvidia.com → Profile → API Keys → Generate
- Groq — console.groq.com/keys → Create API Key
- Cerebras — cloud.cerebras.ai → API Keys → Create
- SambaNova — sambanova.ai/developers → Developers portal → API key (dev tier generous)
- OpenRouter — openrouter.ai/keys → Create key (50 req/day, 20/min on `:free`)
- Hugging Face Inference — huggingface.co/settings/tokens → Access Tokens (free monthly credits)
- Replicate — replicate.com/account/api-tokens → Create token (dev quota)
- DeepInfra — deepinfra.com/login → Login → API key (free dev tier)
- Fireworks AI — fireworks.ai → Settings → Access Tokens ($1 free credits)
- Mistral Codestral — codestral.mistral.ai → API Keys (30 req/min, 2000/day — phone required)
- Hyperbolic — app.hyperbolic.ai/settings → API Keys ($1 free trial)
- Scaleway — console.scaleway.com/iam/api-keys → IAM → API Keys (1M free tokens)
- Google AI Studio — aistudio.google.com/apikey → Get API key (free Gemma models, 14.4K req/day)
- SiliconFlow — cloud.siliconflow.cn/account/ak → API Keys (free-model quotas vary by model)
- Together AI — api.together.ai/settings/api-keys → API Keys (credits/promotions vary)
- Cloudflare Workers AI — dash.cloudflare.com → Create API token + set `CLOUDFLARE_ACCOUNT_ID` (Free: 10k neurons/day)
- Perplexity API — perplexity.ai/settings/api → API Key (tiered limits by spend)
- OpenCode (optional) — Install OpenCode to use the OpenCode integration
- OpenClaw (optional) — Install OpenClaw to use the OpenClaw integration
💡 Tip: You don't need all seventeen providers. One key is enough to get started. Add more later via the Settings screen (`P` key). Models without a key still show real latency (🔑 NO KEY) so you can evaluate providers before signing up.
# npm (global install — recommended)
npm install -g free-coding-models
# pnpm
pnpm add -g free-coding-models
# bun
bun add -g free-coding-models
# Or use directly with npx/pnpx/bunx
npx free-coding-models YOUR_API_KEY
pnpx free-coding-models YOUR_API_KEY
bunx free-coding-models YOUR_API_KEY

# Just run it — shows a startup menu to pick OpenCode or OpenClaw, prompts for API key if not set
free-coding-models
# Explicitly target OpenCode CLI (TUI + Enter launches OpenCode CLI)
free-coding-models --opencode
# Explicitly target OpenCode Desktop (TUI + Enter sets model & opens Desktop app)
free-coding-models --opencode-desktop
# Explicitly target OpenClaw (TUI + Enter sets model as default in OpenClaw)
free-coding-models --openclaw
# Show only top-tier models (A+, S, S+)
free-coding-models --best
# Analyze for 10 seconds and output the most reliable model
free-coding-models --fiable
# Disable anonymous analytics for this run
free-coding-models --no-telemetry
# Filter models by tier letter
free-coding-models --tier S # S+ and S only
free-coding-models --tier A # A+, A, A- only
free-coding-models --tier B # B+, B only
free-coding-models --tier C # C only
# Combine flags freely
free-coding-models --openclaw --tier S
free-coding-models --opencode --best

When you run free-coding-models without --opencode or --openclaw, you get an interactive startup menu:
⚡ Free Coding Models — Choose your tool
❯ 💻 OpenCode CLI
Press Enter on a model → launch OpenCode CLI with it as default
🖥 OpenCode Desktop
Press Enter on a model → set model & open OpenCode Desktop app
🦞 OpenClaw
Press Enter on a model → set it as default in OpenClaw config
↑↓ Navigate • Enter Select • Ctrl+C Exit
Use ↑↓ arrows to select, Enter to confirm. Then the TUI launches with your chosen mode shown in the header badge.
How it works:
- Ping phase — All enabled models are pinged in parallel (up to 134 across 17 providers)
- Continuous monitoring — Models are re-pinged every 2 seconds forever
- Real-time updates — Watch "Latest", "Avg", and "Up%" columns update live
- Select anytime — Use ↑↓ arrows to navigate, press Enter on a model to act
- Smart detection — Automatically detects if NVIDIA NIM is configured in OpenCode or OpenClaw
Setup wizard (first run — walks through all 17 providers):
🔑 First-time setup — API keys
Enter keys for any provider you want to use. Press Enter to skip one.
● NVIDIA NIM
Free key at: https://build.nvidia.com
Profile → API Keys → Generate
Enter key (or Enter to skip): nvapi-xxxx
● Groq
Free key at: https://console.groq.com/keys
API Keys → Create API Key
Enter key (or Enter to skip): gsk_xxxx
● Cerebras
Free key at: https://cloud.cerebras.ai
API Keys → Create
Enter key (or Enter to skip):
● SambaNova
Free key at: https://cloud.sambanova.ai/apis
API Keys → Create ($5 free trial, 3 months)
Enter key (or Enter to skip):
✅ 2 key(s) saved to ~/.free-coding-models.json
You can add or change keys anytime with the P key in the TUI.
You don't need all seventeen — skip any provider by pressing Enter. At least one key is required.
Press P to open the Settings screen at any time:
⚙ Settings
Providers
❯ [ ✅ ] NVIDIA NIM nvapi-••••••••••••3f9a [Test ✅] Free tier (provider quota by model)
[ ✅ ] OpenRouter (no key set) [Test —] 50 req/day, 20/min (:free shared quota)
[ ✅ ] Hugging Face Inference (no key set) [Test —] Free monthly credits (~$0.10)
Setup Instructions — NVIDIA NIM
1) Create a NVIDIA NIM account: https://build.nvidia.com
2) Profile → API Keys → Generate
3) Press T to test your key
↑↓ Navigate • Enter Edit key / Check-or-Install update • Space Toggle enabled • T Test key • U Check updates • Esc Close
- ↑↓ — navigate providers
- Enter — enter inline key edit mode (type your key, Enter to save, Esc to cancel)
- Space — toggle provider enabled/disabled
- T — fire a real test ping to verify the key works (shows ✅/❌)
- U — manually check npm for a newer version
- Esc — close settings and reload models list
Keys are saved to ~/.free-coding-models.json (permissions 0600).
Analytics toggle is in the same Settings screen (P) as a dedicated row (toggle with Enter or Space).
Manual update is in the same Settings screen (P) under Maintenance (Enter to check, Enter again to install when an update is available).
Favorites are also persisted in the same config file and survive restarts.
Env vars always take priority over the config file:
NVIDIA_API_KEY=nvapi-xxx free-coding-models
GROQ_API_KEY=gsk_xxx free-coding-models
CEREBRAS_API_KEY=csk_xxx free-coding-models
OPENROUTER_API_KEY=sk-or-xxx free-coding-models
HUGGINGFACE_API_KEY=hf_xxx free-coding-models
REPLICATE_API_TOKEN=r8_xxx free-coding-models
DEEPINFRA_API_KEY=di_xxx free-coding-models
FIREWORKS_API_KEY=fw_xxx free-coding-models
SILICONFLOW_API_KEY=sk_xxx free-coding-models
TOGETHER_API_KEY=together_xxx free-coding-models
CLOUDFLARE_API_TOKEN=cf_xxx CLOUDFLARE_ACCOUNT_ID=your_account_id free-coding-models
PERPLEXITY_API_KEY=pplx_xxx free-coding-models
FREE_CODING_MODELS_TELEMETRY=0 free-coding-models

Telemetry env vars:
- `FREE_CODING_MODELS_TELEMETRY=0|1` — force disable/enable analytics
- `FREE_CODING_MODELS_POSTHOG_KEY` — PostHog project API key (required to send events)
- `FREE_CODING_MODELS_POSTHOG_HOST` — optional ingest host (defaults to https://eu.i.posthog.com)
- `FREE_CODING_MODELS_TELEMETRY_DEBUG=1` — optional stderr debug logs for telemetry troubleshooting
On first run (or when the consent policy changes), the CLI asks you to accept or decline anonymous analytics.
When enabled, telemetry events include: event name, app version, selected mode, system (macOS/Windows/Linux), and terminal family (Terminal.app, iTerm2, kitty, Warp, WezTerm, etc., with generic fallback from TERM_PROGRAM/TERM).
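The precedence implied above (the env var always wins over saved consent) can be sketched as a tiny helper. The function below is an assumption for illustration, not the real implementation:

```javascript
// Hypothetical sketch of telemetry precedence: the documented env var
// forces the decision; otherwise the consent stored in the config file
// (telemetry.enabled) applies. Not the package's actual code.
function telemetryEnabled(env, config) {
  if (env.FREE_CODING_MODELS_TELEMETRY === "0") return false;
  if (env.FREE_CODING_MODELS_TELEMETRY === "1") return true;
  return Boolean(config?.telemetry?.enabled);
}
```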
NVIDIA NIM (44 models, S+ → C tier):
- Sign up at build.nvidia.com
- Go to Profile → API Keys → Generate API Key
- Name it (e.g. "free-coding-models"), set expiry to "Never"
- Copy — shown only once!
Groq (6 models, fast inference):
- Sign up at console.groq.com
- Go to API Keys → Create API Key
Cerebras (3 models, ultra-fast silicon):
- Sign up at cloud.cerebras.ai
- Go to API Keys → Create
OpenRouter (:free models):
- Sign up at openrouter.ai/keys
- Create API key (`sk-or-...`)
Hugging Face Inference:
- Sign up at huggingface.co/settings/tokens
- Create Access Token (`hf_...`)
Replicate:
- Sign up at replicate.com/account/api-tokens
- Create API token (`r8_...`)
DeepInfra:
- Sign up at deepinfra.com/login
- Create API key from your account dashboard
Fireworks AI:
- Sign up at fireworks.ai
- Open Settings → Access Tokens and create a token
Mistral Codestral:
- Sign up at codestral.mistral.ai
- Go to API Keys → Create
Hyperbolic:
- Sign up at app.hyperbolic.ai/settings
- Create an API key in Settings
Scaleway:
- Sign up at console.scaleway.com/iam/api-keys
- Go to IAM → API Keys
Google AI Studio:
- Sign up at aistudio.google.com/apikey
- Create an API key for Gemini/Gemma endpoints
SiliconFlow:
- Sign up at cloud.siliconflow.cn/account/ak
- Create API key in Account → API Keys
Together AI:
- Sign up at api.together.ai/settings/api-keys
- Create an API key in Settings
Cloudflare Workers AI:
- Sign up at dash.cloudflare.com
- Create an API token with Workers AI permissions
- Export both `CLOUDFLARE_API_TOKEN` and `CLOUDFLARE_ACCOUNT_ID`
Perplexity API:
- Sign up at perplexity.ai/settings/api
- Create API key (`PERPLEXITY_API_KEY`)
💡 Free tiers — each provider exposes a dev/free tier with its own quotas.
134 coding models across 17 providers and 8 tiers, ranked by SWE-bench Verified — the industry-standard benchmark measuring real GitHub issue resolution. Scores are self-reported by providers unless noted.
| Tier | SWE-bench | Models |
|---|---|---|
| S+ | ≥70% | GLM 5 (77.8%), Kimi K2.5 (76.8%), Step 3.5 Flash (74.4%), MiniMax M2.1 (74.0%), GLM 4.7 (73.8%), DeepSeek V3.2 (73.1%), Devstral 2 (72.2%), Kimi K2 Thinking (71.3%), Qwen3 Coder 480B (70.6%), Qwen3 235B (70.0%) |
| S | 60–70% | MiniMax M2 (69.4%), DeepSeek V3.1 Terminus (68.4%), Qwen3 80B Thinking (68.0%), Qwen3.5 400B (68.0%), Kimi K2 Instruct (65.8%), Qwen3 80B Instruct (65.0%), DeepSeek V3.1 (62.0%), Llama 4 Maverick (62.0%), GPT OSS 120B (60.0%) |
| A+ | 50–60% | Mistral Large 675B (58.0%), Nemotron Ultra 253B (56.0%), Colosseum 355B (52.0%), QwQ 32B (50.0%) |
| A | 40–50% | Nemotron Super 49B (49.0%), Mistral Medium 3 (48.0%), Qwen2.5 Coder 32B (46.0%), Magistral Small (45.0%), Llama 4 Scout (44.0%), Llama 3.1 405B (44.0%), Nemotron Nano 30B (43.0%), R1 Distill 32B (43.9%), GPT OSS 20B (42.0%) |
| A- | 35–40% | Llama 3.3 70B (39.5%), Seed OSS 36B (38.0%), R1 Distill 14B (37.7%), Stockmark 100B (36.0%) |
| B+ | 30–35% | Ministral 14B (34.0%), Mixtral 8x22B (32.0%), Granite 34B Code (30.0%) |
| B | 20–30% | R1 Distill 8B (28.2%), R1 Distill 7B (22.6%) |
| C | <20% | Gemma 2 9B (18.0%), Phi 4 Mini (14.0%), Phi 3.5 Mini (12.0%) |
| Tier | SWE-bench | Models |
|---|---|---|
| S | 60–70% | Kimi K2 Instruct (65.8%), Llama 4 Maverick (62.0%) |
| A+ | 50–60% | QwQ 32B (50.0%) |
| A | 40–50% | Llama 4 Scout (44.0%), R1 Distill 70B (43.9%) |
| A- | 35–40% | Llama 3.3 70B (39.5%) |
| Tier | SWE-bench | Models |
|---|---|---|
| A+ | 50–60% | Qwen3 32B (50.0%) |
| A | 40–50% | Llama 4 Scout (44.0%) |
| A- | 35–40% | Llama 3.3 70B (39.5%) |
- S+/S — Elite frontier coders (≥60% SWE-bench), best for complex real-world tasks and refactors
- A+/A — Great alternatives, strong at most coding tasks
- A-/B+ — Solid performers, good for targeted programming tasks
- B/C — Lightweight or older models, good for code completion on constrained infra
Use `--tier` to focus on a specific capability band:
free-coding-models --tier S # Only S+ and S (frontier models)
free-coding-models --tier A # Only A+, A, A- (solid performers)
free-coding-models --tier B # Only B+, B (lightweight options)
free-coding-models --tier C # Only C (edge/minimal models)

During runtime, use the E and D keys to dynamically adjust the tier filter:
- E (Elevate) — Show fewer, higher-tier models (cycle: All → S → A → B → C → All)
- D (Descend) — Show more, lower-tier models (cycle: All → C → B → A → S → All)
Current tier filter is shown in the header badge (e.g., [Tier S])
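The E/D cycle above can be modeled as a circular list. This sketch is purely illustrative (the real key handling lives inside the TUI):

```javascript
// Illustrative model of the documented E/D tier-filter cycle:
// E steps All -> S -> A -> B -> C -> All, and D steps the reverse.
const CYCLE = ["All", "S", "A", "B", "C"];

function elevate(current) { // E key: next filter in the cycle
  const i = CYCLE.indexOf(current);
  return CYCLE[(i + 1) % CYCLE.length];
}

function descend(current) { // D key: previous filter in the cycle
  const i = CYCLE.indexOf(current);
  return CYCLE[(i - 1 + CYCLE.length) % CYCLE.length];
}
```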
The easiest way — let free-coding-models do everything:
- Run `free-coding-models --opencode` (or choose OpenCode from the startup menu)
- Wait for models to be pinged (green ✅ status)
- Navigate with ↑↓ arrows to your preferred model
- Press Enter — the tool automatically:
  - Detects if NVIDIA NIM is configured in OpenCode
  - Sets your selected model as default in `~/.config/opencode/opencode.json`
  - Launches OpenCode with the model ready to use
When launched from an existing tmux session, free-coding-models auto-adds an OpenCode `--port` argument so OpenCode/oh-my-opencode can spawn sub-agents in panes.
- Priority 1: reuse `OPENCODE_PORT` if it is valid and free
- Priority 2: auto-pick the first free port in 4096–5095
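Assuming the two priorities above, the port choice can be sketched like this. `resolveOpencodePort` and the injected `isFree` predicate are hypothetical names, not the tool's internals:

```javascript
// Sketch of the documented two-step port selection. isFree is injected
// (a real version would probe the port with net.createServer).
function resolveOpencodePort(envPort, isFree, lo = 4096, hi = 5095) {
  const n = Number.parseInt(envPort ?? "", 10);
  // Priority 1: reuse OPENCODE_PORT when it parses as a valid, free port
  if (Number.isInteger(n) && n > 0 && n < 65536 && isFree(n)) return n;
  // Priority 2: first free port in the 4096-5095 range
  for (let p = lo; p <= hi; p++) {
    if (isFree(p)) return p;
  }
  return null; // no free port found
}
```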
You can force a specific port:
OPENCODE_PORT=4098 free-coding-models --opencode

Create or edit ~/.config/opencode/opencode.json:
{
"provider": {
"nvidia": {
"npm": "@ai-sdk/openai-compatible",
"name": "NVIDIA NIM",
"options": {
"baseURL": "https://integrate.api.nvidia.com/v1",
"apiKey": "{env:NVIDIA_API_KEY}"
}
}
},
"model": "nvidia/deepseek-ai/deepseek-v3.2"
}

Then set the environment variable:
export NVIDIA_API_KEY=nvapi-xxxx-your-key-here
# Add to ~/.bashrc or ~/.zshrc for persistence

Run /models in OpenCode and select the NVIDIA NIM provider and your chosen model.
⚠️ Note: Free models have usage limits based on NVIDIA's tier — check build.nvidia.com for quotas.
If NVIDIA NIM is not yet configured in OpenCode, the tool:
- Shows installation instructions in your terminal
- Creates a `prompt` file in `$HOME/prompt` with the exact configuration
- Launches OpenCode, which will detect and display the prompt automatically
OpenClaw is an autonomous AI agent daemon. free-coding-models can configure it to use NVIDIA NIM models as its default provider — no download or local setup needed, everything runs via the NIM remote API.
free-coding-models --openclaw

Or run without flags and choose OpenClaw from the startup menu.
- Wait for models to be pinged
- Navigate with ↑↓ arrows to your preferred model
- Press Enter — the tool automatically:
  - Reads `~/.openclaw/openclaw.json`
  - Adds the `nvidia` provider block (NIM base URL + your API key) if missing
  - Sets `agents.defaults.model.primary` to `nvidia/<model-id>`
  - Saves config and prints next steps
{
"models": {
"providers": {
"nvidia": {
"baseUrl": "https://integrate.api.nvidia.com/v1",
"api": "openai-completions"
}
}
},
"env": {
"NVIDIA_API_KEY": "nvapi-xxxx-your-key"
},
"agents": {
"defaults": {
"model": {
"primary": "nvidia/deepseek-ai/deepseek-v3.2"
},
"models": {
"nvidia/deepseek-ai/deepseek-v3.2": {}
}
}
}
}
⚠️ Note: `providers` must be nested under `models.providers`, not at the config root. A root-level `providers` key is ignored by OpenClaw.
⚠️ Note: The model must also be listed in `agents.defaults.models` (the allowlist). Without this entry, OpenClaw rejects the model with "not allowed" even if it is set as primary.
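Both notes can be enforced mechanically. Below is a hedged sketch in plain JavaScript; the field names follow the sample config above, but the helper itself is an assumption, not OpenClaw's or the tool's own code:

```javascript
// Sketch: make a config object satisfy both notes above, i.e. the nvidia
// provider nested under models.providers, and the chosen model set as
// primary AND present in the agents.defaults.models allowlist.
function setOpenclawModel(config, modelId) {
  const cfg = structuredClone(config);
  cfg.models ??= {};
  cfg.models.providers ??= {};
  cfg.models.providers.nvidia ??= {
    baseUrl: "https://integrate.api.nvidia.com/v1",
    api: "openai-completions",
  };
  cfg.agents ??= {};
  cfg.agents.defaults ??= {};
  cfg.agents.defaults.model = { primary: modelId };
  cfg.agents.defaults.models ??= {};
  cfg.agents.defaults.models[modelId] ??= {}; // the allowlist entry
  return cfg;
}
```

Running it on an empty object produces exactly the nesting shown in the sample config, which is why a root-level `providers` key written by hand never takes effect.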
OpenClaw's gateway auto-reloads config file changes (depending on `gateway.reload.mode`). To apply the change manually:
# Apply via CLI
openclaw models set nvidia/deepseek-ai/deepseek-v3.2
# Or re-run the interactive setup wizard
openclaw configure
⚠️ Note: `openclaw restart` does not exist as a CLI command. Kill and relaunch the process manually if you need a full restart.
💡 Why use remote NIM models with OpenClaw? NVIDIA NIM serves models via a fast API — no local GPU required, no VRAM limits, free credits for developers. You get frontier-class coding models (DeepSeek V3, Kimi K2, Qwen3 Coder) without downloading anything.
Problem: By default, OpenClaw only allows a few specific NVIDIA models in its allowlist. If you try to use a model that's not in the list, you'll get this error:
Model "nvidia/mistralai/devstral-2-123b-instruct-2512" is not allowed. Use /models to list providers, or /models <provider> to list models.
Solution: Patch OpenClaw's configuration to add ALL 47 NVIDIA models from free-coding-models to the allowlist:
# From the free-coding-models package directory
node patch-openclaw.js

This script:
- Backs up `~/.openclaw/agents/main/agent/models.json` and `~/.openclaw/openclaw.json`
- Adds all 47 NVIDIA models with proper context window and token limits
- Preserves existing models and configuration
- Prints a summary of what was added
After patching:
1. Restart the OpenClaw gateway: `systemctl --user restart openclaw-gateway`
2. Verify models are available: `free-coding-models --openclaw`
3. Select any model — no more "not allowed" errors!
Why this is needed: OpenClaw uses a strict allowlist system to prevent typos and invalid models. The patch-openclaw.js script populates the allowlist with all known working NVIDIA models, so you can freely switch between them without manually editing config files.
┌─────────────────────────────────────────────────────────────┐
│ 1. Enter alternate screen buffer (like vim/htop/less) │
│ 2. Ping ALL models in parallel │
│ 3. Display real-time table with Latest/Avg/Up% columns │
│ 4. Re-ping ALL models every 2 seconds (forever) │
│ 5. Update rolling averages from ALL successful pings │
│ 6. User can navigate with ↑↓ and select with Enter │
│ 7. On Enter (OpenCode): set model, launch OpenCode │
│ 8. On Enter (OpenClaw): update ~/.openclaw/openclaw.json │
└─────────────────────────────────────────────────────────────┘
Result: Continuous monitoring interface that stays open until you select a model or press Ctrl+C. Rolling averages give you accurate long-term latency data, uptime percentage tracks reliability, and you can configure your tool of choice with your chosen model in one keystroke.
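Steps 2–5 of the loop can be sketched with native `fetch` and `Promise.allSettled`. Everything below is illustrative; `pingOne` is injected so the loop itself stays testable without a network, and `timedPing` assumes an OpenAI-compatible endpoint URL:

```javascript
// Illustrative monitoring step matching the diagram: ping all models in
// parallel, report per-model success and latency, then the caller folds
// results into rolling stats and repeats on an interval.
async function pingAll(models, pingOne) {
  const results = await Promise.allSettled(models.map((m) => pingOne(m)));
  return results.map((r, i) => ({
    model: models[i],
    ok: r.status === "fulfilled",
    latencyMs: r.status === "fulfilled" ? r.value : null,
  }));
}

// One possible pingOne: a timed fetch with the README's 15 s timeout.
async function timedPing(url, timeoutMs = 15000) {
  const t0 = Date.now();
  await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
  return Date.now() - t0;
}
```

`Promise.allSettled` (rather than `Promise.all`) is what lets one timed-out model show ⏳ while the rest keep updating.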
Environment variables (override config file):
| Variable | Description |
|---|---|
| `NVIDIA_API_KEY` | NVIDIA NIM key |
| `GROQ_API_KEY` | Groq key |
| `CEREBRAS_API_KEY` | Cerebras key |
| `SAMBANOVA_API_KEY` | SambaNova key |
| `OPENROUTER_API_KEY` | OpenRouter key |
| `HUGGINGFACE_API_KEY` / `HF_TOKEN` | Hugging Face token |
| `REPLICATE_API_TOKEN` | Replicate token |
| `DEEPINFRA_API_KEY` / `DEEPINFRA_TOKEN` | DeepInfra key |
| `CODESTRAL_API_KEY` | Mistral Codestral key |
| `HYPERBOLIC_API_KEY` | Hyperbolic key |
| `SCALEWAY_API_KEY` | Scaleway key |
| `GOOGLE_API_KEY` | Google AI Studio key |
| `SILICONFLOW_API_KEY` | SiliconFlow key |
| `TOGETHER_API_KEY` | Together AI key |
| `CLOUDFLARE_API_TOKEN` / `CLOUDFLARE_API_KEY` | Cloudflare Workers AI token/key |
| `CLOUDFLARE_ACCOUNT_ID` | Cloudflare account ID (required for the Workers AI endpoint URL) |
| `PERPLEXITY_API_KEY` / `PPLX_API_KEY` | Perplexity API key |
| `FREE_CODING_MODELS_TELEMETRY` | `0` disables analytics, `1` enables analytics |
| `FREE_CODING_MODELS_POSTHOG_KEY` | PostHog project API key used for anonymous event capture |
| `FREE_CODING_MODELS_POSTHOG_HOST` | Optional PostHog ingest host (defaults to https://eu.i.posthog.com) |
Config file: ~/.free-coding-models.json (created automatically, permissions 0600)
{
"apiKeys": {
"nvidia": "nvapi-xxx",
"groq": "gsk_xxx",
"cerebras": "csk_xxx",
"openrouter": "sk-or-xxx",
"huggingface": "hf_xxx",
"replicate": "r8_xxx",
"deepinfra": "di_xxx",
"siliconflow": "sk_xxx",
"together": "together_xxx",
"cloudflare": "cf_xxx",
"perplexity": "pplx_xxx"
},
"providers": {
"nvidia": { "enabled": true },
"groq": { "enabled": true },
"cerebras": { "enabled": true },
"openrouter": { "enabled": true },
"huggingface": { "enabled": true },
"replicate": { "enabled": true },
"deepinfra": { "enabled": true },
"siliconflow": { "enabled": true },
"together": { "enabled": true },
"cloudflare": { "enabled": true },
"perplexity": { "enabled": true }
},
"favorites": [
"nvidia/deepseek-ai/deepseek-v3.2"
],
"telemetry": {
"enabled": true,
"consentVersion": 1,
"anonymousId": "anon_550e8400-e29b-41d4-a716-446655440000"
}
}

Configuration:
- Ping timeout: 15 seconds per attempt (slow models get more time)
- Ping interval: 2 seconds between complete re-pings of all models (adjustable with W/X keys)
- Monitor mode: Interface stays open forever, press Ctrl+C to exit
Flags:
| Flag | Description |
|---|---|
| (none) | Show startup menu to choose OpenCode or OpenClaw |
| `--opencode` | OpenCode CLI mode — Enter launches OpenCode CLI with the selected model |
| `--opencode-desktop` | OpenCode Desktop mode — Enter sets the model and opens the OpenCode Desktop app |
| `--openclaw` | OpenClaw mode — Enter sets the selected model as default in OpenClaw |
| `--best` | Show only top-tier models (A+, S, S+) |
| `--fiable` | Analyze for 10 seconds, then output the most reliable model as `provider/model_id` |
| `--no-telemetry` | Disable anonymous analytics for this run |
| `--tier S` | Show only S+ and S tier models |
| `--tier A` | Show only A+, A, and A- tier models |
| `--tier B` | Show only B+ and B tier models |
| `--tier C` | Show only C tier models |
Keyboard shortcuts (main TUI):
- ↑↓ — Navigate models
- Enter — Select model (launches OpenCode or sets OpenClaw default, depending on mode)
- R/Y/O/M/L/A/S/N/H/V/U — Sort by Rank/Tier/Origin/Model/LatestPing/Avg/SWE/Ctx/Health/Verdict/Uptime
- F — Toggle favorite on selected model (⭐ in Model column, pinned at top)
- T — Cycle tier filter (All → S+ → S → A+ → A → A- → B+ → B → C → All)
- Z — Cycle mode (OpenCode CLI → OpenCode Desktop → OpenClaw)
- P — Open Settings (manage API keys, provider toggles, analytics toggle, manual update)
- W — Decrease ping interval (faster pings)
- X — Increase ping interval (slower pings)
- K / Esc — Show/hide help overlay
- Ctrl+C — Exit
Pressing K shows a full in-app reference: main hotkeys, settings hotkeys, and CLI flags with usage examples.
Keyboard shortcuts (Settings screen — P key):
- ↑↓ — Navigate providers, analytics row, and maintenance row
- Enter — Edit API key inline, toggle analytics on analytics row, or check/install update on maintenance row
- Space — Toggle provider enabled/disabled, or toggle analytics on analytics row
- T — Test current provider's API key (fires a live ping)
- U — Check for updates manually from settings
- Esc — Close settings and return to main TUI
git clone https://github.com/vava-nessa/free-coding-models
cd free-coding-models
npm install
npm start -- YOUR_API_KEY

- Make your changes and commit them with a descriptive message
- Update `CHANGELOG.md` with the new version entry
- Bump `"version"` in `package.json` (e.g. `0.1.3` → `0.1.4`)
- Commit with just the version number as the message:
git add .
git commit -m "0.1.4"
git push

The GitHub Actions workflow automatically publishes to npm on every push to main.
MIT © vava
Built with ☕ and 🌹 by vava
We welcome contributions! Feel free to open issues, submit pull requests, or get involved in the project.
Q: Can I use this with other providers? A: Yes, the tool is designed to be extensible; see the source for examples of customizing endpoints.
Q: How accurate are the latency numbers? A: They are real round-trip times measured live from your machine to each provider; results vary with your network conditions and provider load.
Q: Do I need to download models locally for OpenClaw?
A: No — free-coding-models configures OpenClaw to use NVIDIA NIM's remote API, so models run on NVIDIA's infrastructure. No GPU or local setup required.
For questions or issues, open a GitHub issue.
💬 Let's talk about the project on Discord: https://discord.gg/5MbTnDC3Md
⚠️ free-coding-models is a BETA TUI — it might crash or have problems. Use at your own risk and feel free to report issues!
