`@fastagent/co2` is a local CLI gateway that translates between the OpenAI-compatible and Claude-compatible protocols, so existing clients can talk to the other upstream without changing their request protocol. It runs as a local HTTP service and supports two modes: `o2c` (OpenAI request -> Claude upstream) and `c2o` (Claude request -> OpenAI upstream).
```bash
npm install -g @fastagent/co2
```

Or run it directly without installing:

```bash
npx @fastagent/co2 start --config ./co2.config.json
```

- Create `co2.config.json`:
```json
{
  "server": {
    "host": "127.0.0.1",
    "port": 8000,
    "mode": "openai-to-claude",
    "logLevel": "info"
  },
  "providers": {
    "openai": {
      "apiKey": "OPENAI_API_KEY_PLACEHOLDER",
      "baseUrl": "https://api.openai.com/v1",
      "defaultHeaders": {
        "user-agent": "co2-cli/0.2.1"
      }
    },
    "anthropic": {
      "apiKey": "ANTHROPIC_API_KEY_PLACEHOLDER",
      "baseUrl": "https://api.anthropic.com",
      "version": "2023-06-01",
      "defaultHeaders": {
        "user-agent": "co2-cli/0.2.1"
      }
    }
  },
  "routing": {
    "defaultOpenAIModel": "gpt-5.4",
    "defaultClaudeModel": "claude-opus-4.6",
    "openAIReasoningEffort": "high",
    "claudeOutputEffort": "high",
    "skipInboundFields": {
      "claudeMessages": ["context_management"],
      "openAIResponses": [],
      "openAIChatCompletions": []
    }
  },
  "modelMap": {
    "claude-opus-4.6": "gpt-5.4",
    "gpt-5.4": "claude-opus-4.6"
  }
}
```

Notes:
- The example shows both `openai` and `anthropic` providers so the full config shape is visible in one place.
- In `openai-to-claude` / `o2c`, only `providers.anthropic` is used. `providers.openai` can be omitted without affecting startup or request handling.
- In `claude-to-openai` / `c2o`, only `providers.openai` is used. `providers.anthropic` can be omitted without affecting startup or request handling.
- `routing.defaultClaudeModel` and `routing.claudeOutputEffort` only affect `o2c`; `routing.defaultOpenAIModel` and `routing.openAIReasoningEffort` only affect `c2o`.
- `routing.skipInboundFields` lets you explicitly drop known top-level request fields before validation, so you can keep a local gateway working with newer SDK/client fields without waiting for a new `co2` release.
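As a concrete illustration of the provider notes above, a `claude-to-openai` deployment can drop the `anthropic` provider block entirely. This is a sketch, not a tested minimal config — which of the remaining fields are optional beyond `providers.anthropic` is an assumption:

```json
{
  "server": {
    "host": "127.0.0.1",
    "port": 8000,
    "mode": "claude-to-openai",
    "logLevel": "info"
  },
  "providers": {
    "openai": {
      "apiKey": "OPENAI_API_KEY_PLACEHOLDER",
      "baseUrl": "https://api.openai.com/v1"
    }
  },
  "routing": {
    "defaultOpenAIModel": "gpt-5.4",
    "openAIReasoningEffort": "high"
  }
}
```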
`skipInboundFields` is split by inbound protocol:

- `claudeMessages`: applies to `POST /v1/messages`
- `openAIResponses`: applies to `POST /v1/responses`
- `openAIChatCompletions`: applies to `POST /v1/chat/completions`
Behavior:
- Matching is exact and only applies to top-level fields.
- When a field is skipped, `co2` removes it at the boundary, logs a `warn`, and continues processing the request.
- This is intended for fields that are known to be sent by real clients but are not yet modeled by the gateway.
- It does not relax typo protection for other fields; unconfigured misspellings such as `thinkingg` still fail validation.
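The skip-and-warn behavior described above can be sketched roughly as follows. This is a hypothetical illustration of the boundary logic in Python, not co2's actual implementation (co2 is a Node.js tool):

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("gateway")

def apply_skip_list(body: dict, skip_fields: list[str]) -> dict:
    """Drop configured top-level fields before validation.

    Matching is exact and top-level only: nested occurrences of a
    configured key are untouched, and any unconfigured unknown field
    still reaches validation afterwards (so typos keep failing).
    """
    cleaned = dict(body)
    for field in skip_fields:
        if field in cleaned:
            del cleaned[field]
            logger.warning("dropped inbound field %r before validation", field)
    return cleaned

# Hypothetical inbound /v1/messages payload; the shape of
# "context_management" is an assumption for illustration only.
request = {
    "model": "claude-opus-4.6",
    "max_tokens": 128,
    "messages": [{"role": "user", "content": "Hello"}],
    "context_management": {"strategy": "auto"},
}
cleaned = apply_skip_list(request, ["context_management"])
```

After this step, `cleaned` no longer carries `context_management`, while every other field (including any misspelled one) proceeds to normal validation.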
Common examples:
- Claude Code currently sends `context_management` on some `c2o` `/v1/messages` requests. Add it to `skipInboundFields.claudeMessages` to keep those requests working.
- If a future OpenAI SDK starts sending a new top-level field on `/v1/responses` or `/v1/chat/completions`, add that exact field name to the corresponding skip list instead of waiting for a new release.
- Start the server:

```bash
co2 start --config ./co2.config.json
```

- Call the route that matches the current mode:
  - `openai-to-claude`: `POST /v1/chat/completions`, `POST /v1/responses`
  - `claude-to-openai`: `POST /v1/messages`
- `o2c` = OpenAI request -> Claude upstream
- `c2o` = Claude request -> OpenAI upstream
- The mode name is always `incoming protocol -> upstream protocol`; the response protocol stays aligned with the incoming side by default.
| What protocol your client speaks | What upstream you want | Mode to use |
|---|---|---|
| OpenAI `chat/completions` / `responses` | Claude | `o2c` |
| Claude `messages` | OpenAI | `c2o` |
Common cases:
- OpenAI SDKs and other OpenAI-compatible clients usually use `o2c`.
- Claude-compatible clients and Claude `messages` clients usually use `c2o`.
```bash
curl http://127.0.0.1:8000/v1/responses \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-5.4",
    "input": [
      {
        "role": "user",
        "content": [
          { "type": "input_text", "text": "Hello" }
        ]
      }
    ],
    "instructions": "You are concise."
  }'
```

After changing `server.mode` to `claude-to-openai`:
```bash
curl http://127.0.0.1:8000/v1/messages \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "claude-opus-4.6",
    "max_tokens": 128,
    "messages": [
      { "role": "user", "content": "Hello" }
    ]
  }'
```

Notes:
- For production, prefer environment variables for API keys; environment variables take precedence over the config file.
- `ANTHROPIC_AUTH_TOKEN` is accepted as a compatibility alias, but if it is set together with `ANTHROPIC_API_KEY`, they must be identical.
- Node.js `>= 20.19.0` is required.