A generic API proxy that converts Anthropic/Claude API requests to the OpenAI Chat Completions format and forwards them to any OpenAI-compatible upstream service.

It exposes `POST /v1/messages` (Anthropic/Claude style), converts requests to OpenAI Chat Completions, and proxies them to the upstream configured in `config.json`.
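At its core this is a format translation. Below is a minimal sketch of the request-side conversion with simplified types; the real proxy also handles system prompts, tool definitions, and structured content blocks, and all type and function names here are illustrative, not the project's own:

```go
package main

// Simplified request shapes; Anthropic "content" can also be an array
// of blocks, which this sketch ignores.
type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type anthropicRequest struct {
	Model     string    `json:"model"`
	MaxTokens int       `json:"max_tokens"`
	Stream    bool      `json:"stream,omitempty"`
	Messages  []message `json:"messages"`
}

type openAIRequest struct {
	Model     string    `json:"model"`
	MaxTokens int       `json:"max_tokens,omitempty"`
	Stream    bool      `json:"stream,omitempty"`
	Messages  []message `json:"messages"`
}

// toOpenAI rewrites an Anthropic-style request as an OpenAI Chat
// Completions request, substituting the upstream model ID.
func toOpenAI(in anthropicRequest, remoteID string) openAIRequest {
	return openAIRequest{
		Model:     remoteID,
		MaxTokens: in.MaxTokens,
		Stream:    in.Stream,
		Messages:  in.Messages,
	}
}
```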
Edit `config.json`:
- `ak` (optional): API key for inbound authentication. If set, clients must send an `Authorization: Bearer <ak>` or `x-api-key: <ak>` header.
- `port` (optional): server port (default `8888`)
- `upstream_timeout_seconds` (optional): upstream request timeout in seconds (default `300`)
- `log_body_max_chars` (optional): maximum characters to log for request/response bodies (default `4096`; set to `0` to disable)
- `log_stream_text_preview_chars` (optional): maximum characters to log for streaming response previews (default `256`; set to `0` to disable)
- `providers` (required): array of upstream service providers
  - `base_url` (required): base URL of the upstream service (e.g., `https://api.example.com`); the completions endpoint is constructed as `{base_url}/v1/chat/completions`
  - `api_key` (required): used for upstream auth, sent as `Authorization: Bearer ...`
  - `models` (required): array of model objects to expose via the `/v1/models` endpoint
    - `id` (required): model identifier used by clients (e.g., `glm`)
    - `display_name` (optional): display name (defaults to `id` if not provided)
    - `remote_id` (optional): model ID sent to the upstream service (defaults to `id` if not provided)
Example:

```json
{
  "ak": "your-proxy-api-key",
  "port": 8888,
  "upstream_timeout_seconds": 300,
  "log_body_max_chars": 4096,
  "log_stream_text_preview_chars": 256,
  "providers": [
    {
      "base_url": "https://api.example.com",
      "api_key": "your-upstream-api-key",
      "models": [
        {
          "id": "glm",
          "display_name": "glm4.7",
          "remote_id": "deepseek-chat"
        },
        {
          "id": "minimax",
          "display_name": "minimax2.1",
          "remote_id": "mimo-v2-flash"
        }
      ]
    }
  ]
}
```

Do not commit your real `ak` or `api_key` values.
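For orientation, here is a sketch of how this file might map onto Go types; the struct and function names are illustrative, not necessarily the project's actual ones:

```go
package main

import (
	"encoding/json"
	"os"
)

// Config mirrors the documented config.json schema.
type Config struct {
	AK                        string     `json:"ak"`
	Port                      int        `json:"port"`
	UpstreamTimeoutSeconds    int        `json:"upstream_timeout_seconds"`
	LogBodyMaxChars           int        `json:"log_body_max_chars"`
	LogStreamTextPreviewChars int        `json:"log_stream_text_preview_chars"`
	Providers                 []Provider `json:"providers"`
}

type Provider struct {
	BaseURL string  `json:"base_url"`
	APIKey  string  `json:"api_key"`
	Models  []Model `json:"models"`
}

type Model struct {
	ID          string `json:"id"`
	DisplayName string `json:"display_name"`
	RemoteID    string `json:"remote_id"`
}

// loadConfig reads and parses the config file from the given path.
func loadConfig(path string) (*Config, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var c Config
	if err := json.Unmarshal(b, &c); err != nil {
		return nil, err
	}
	return &c, nil
}
```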
The config file location is controlled by the `CONFIG_PATH` environment variable (default `config.json`, relative to the working directory).
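A sketch of the lookup this implies, assuming the default applies when the variable is empty or unset (illustrative, not the project's code):

```go
package main

import "os"

// configPath returns the config file location, honoring CONFIG_PATH
// and falling back to the documented default.
func configPath() string {
	if p := os.Getenv("CONFIG_PATH"); p != "" {
		return p
	}
	return "config.json"
}
```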
```sh
go run .
```

Or build and run:
```sh
go build -o claude-proxy
./claude-proxy
```

Use the `glm` model:

```sh
export ANTHROPIC_BASE_URL=http://localhost:8888
export ANTHROPIC_AUTH_TOKEN=your-proxy-api-key
export ANTHROPIC_DEFAULT_HAIKU_MODEL=glm
export ANTHROPIC_DEFAULT_SONNET_MODEL=glm
export ANTHROPIC_DEFAULT_OPUS_MODEL=glm
claude
```

Use the `minimax` model:

```sh
export ANTHROPIC_BASE_URL=http://localhost:8888
export ANTHROPIC_AUTH_TOKEN=your-proxy-api-key
export ANTHROPIC_DEFAULT_HAIKU_MODEL=minimax
export ANTHROPIC_DEFAULT_SONNET_MODEL=minimax
export ANTHROPIC_DEFAULT_OPUS_MODEL=minimax
claude
```

`GET /v1/models`: returns the list of available models configured in `config.json`.
- Inbound auth:
  - If `ak` is set in config, you must send `Authorization: Bearer <ak>` (or `x-api-key: <ak>`).
  - If `ak` is not set, no inbound authentication is required.
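A sketch of what that check amounts to (function and parameter names are illustrative):

```go
package main

import "net/http"

// authorized reports whether an inbound request carries the configured
// proxy key via either accepted header.
func authorized(r *http.Request, ak string) bool {
	if ak == "" {
		return true // no "ak" in config: inbound auth is not enforced
	}
	if r.Header.Get("Authorization") == "Bearer "+ak {
		return true
	}
	return r.Header.Get("x-api-key") == ak
}
```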
Response format (Anthropic style):

```json
{
  "object": "list",
  "data": [
    {
      "id": "glm",
      "object": "model",
      "created": 1234567890,
      "display_name": "glm4.7"
    },
    {
      "id": "minimax",
      "object": "model",
      "created": 1234567890,
      "display_name": "minimax2.1"
    }
  ]
}
```

Example:

```sh
curl -sS http://127.0.0.1:8888/v1/models \
  -H 'Authorization: Bearer your-proxy-api-key'
```

`POST /v1/messages`: sends a message to the upstream service.
- Inbound auth:
  - If `ak` is set in config, you must send `Authorization: Bearer <ak>` (or `x-api-key: <ak>`).
  - If `ak` is not set, no inbound authentication is required.
- Upstream auth:
  - Always sends `Authorization: Bearer <api_key>` to upstream.
- Model mapping:
  - The `model` field in the request uses the client-side `id` (e.g., `glm`).
  - The proxy converts it to the upstream `remote_id` (e.g., `deepseek-chat`) before forwarding (see the sketch below).
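Putting the last two points together, here is a sketch of the lookup and the forwarded upstream request, reusing the illustrative `Config` types from the configuration section (names are not the project's own):

```go
package main

import (
	"bytes"
	"net/http"
	"strings"
)

// resolve finds the provider and upstream model ID for a client-side id.
func resolve(cfg *Config, id string) (*Provider, string, bool) {
	for i := range cfg.Providers {
		p := &cfg.Providers[i]
		for _, m := range p.Models {
			if m.ID == id {
				remote := m.RemoteID
				if remote == "" {
					remote = m.ID // remote_id defaults to id
				}
				return p, remote, true
			}
		}
	}
	return nil, "", false
}

// newUpstreamRequest builds the forwarded request: the endpoint is
// {base_url}/v1/chat/completions, auth is a bearer token, and body is
// the converted OpenAI payload with the model already rewritten.
func newUpstreamRequest(p *Provider, body []byte) (*http.Request, error) {
	url := strings.TrimRight(p.BaseURL, "/") + "/v1/chat/completions"
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+p.APIKey)
	return req, nil
}
```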
Example (non-stream):

```sh
curl -sS http://127.0.0.1:8888/v1/messages \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer your-proxy-api-key' \
  -d '{
    "model": "glm",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "hello"}]
  }'
```

Example (stream):

```sh
curl -N http://127.0.0.1:8888/v1/messages \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer your-proxy-api-key' \
  -d '{
    "model": "glm",
    "max_tokens": 256,
    "stream": true,
    "messages": [{"role": "user", "content": "hello"}]
  }'
```

Health check endpoint.
Response:

```json
{
  "message": "claude-proxy",
  "health": "ok"
}
```

This project uses only the Go standard library (no external dependencies). If your environment blocks the default Go build cache path, set:

```sh
export GOCACHE=/tmp/tmp-go-build-cache
export GOMODCACHE=/tmp/tmp-gomodcache
```

Linux (amd64):

```sh
mkdir -p dist
GOOS=linux GOARCH=amd64 go build -trimpath -ldflags "-s -w" -o dist/claude-proxy_linux_amd64 .
```

Windows (amd64):

```sh
mkdir -p dist
GOOS=windows GOARCH=amd64 go build -trimpath -ldflags "-s -w" -o dist/claude-proxy_windows_amd64.exe .
```

macOS (arm64):

```sh
mkdir -p dist
GOOS=darwin GOARCH=arm64 go build -trimpath -ldflags "-s -w" -o dist/claude-proxy_darwin_arm64 .
```

- Multiple providers can be configured, each with its own `base_url`, `api_key`, and models
- Model IDs are mapped from the client-side `id` to the upstream `remote_id` before forwarding requests
- Streaming conversion supports `delta.content` text and `delta.tool_calls` tool-use blocks; other Anthropic block types are not fully implemented (see the sketch after this list)
- Logs show forwarded request bodies; keep `log_body_max_chars` small and avoid putting secrets in prompts
- The `/v1/models` endpoint returns a static list from config, not from the upstream service
- All configuration is read from `config.json`; environment variable overrides are not supported (except `CONFIG_PATH`)
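A sketch of the text-delta side of that streaming conversion, turning the JSON payload of one upstream OpenAI chunk into an Anthropic-style `content_block_delta` SSE event (tool-call deltas and the surrounding `message_start`/`message_stop` events are omitted; names are illustrative, not the project's own):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// openAIChunk is the illustrative shape of one upstream streaming chunk
// (text deltas only; tool_calls deltas are omitted for brevity).
type openAIChunk struct {
	Choices []struct {
		Delta struct {
			Content string `json:"content"`
		} `json:"delta"`
	} `json:"choices"`
}

// toAnthropicDelta converts the payload of one upstream SSE "data:" line
// into an Anthropic content_block_delta event, or "" if there is no text.
func toAnthropicDelta(payload []byte, index int) (string, error) {
	var c openAIChunk
	if err := json.Unmarshal(payload, &c); err != nil {
		return "", err
	}
	if len(c.Choices) == 0 || c.Choices[0].Delta.Content == "" {
		return "", nil // nothing to emit for this chunk
	}
	ev := map[string]any{
		"type":  "content_block_delta",
		"index": index,
		"delta": map[string]string{
			"type": "text_delta",
			"text": c.Choices[0].Delta.Content,
		},
	}
	b, err := json.Marshal(ev)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("event: content_block_delta\ndata: %s\n\n", b), nil
}
```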