A Python Flask application that serves as a proxy server for the GitHub Copilot API, providing OpenAI and Anthropic API compatibility with caching and monitoring capabilities.
- OpenAI API Compatibility: `/v1/chat/completions` endpoint
- Anthropic API Compatibility: `/v1/messages` endpoint with automatic request/response translation
- Model Listing: `/v1/models` endpoint listing available models
- Model Name Mapping: Translate model names with exact and prefix-based matching
- Token Management: Automatic GitHub Copilot token refresh
- Vision Support: Handle image inputs and enable vision capabilities
- Memory Caching: Cache all requests and responses (up to 1000 entries)
- Web Dashboard: Real-time statistics and request browser
- Request Details: View full request/response bodies with JSON formatting
- Export/Import: Export and import request history as JSON Lines files
- Optional Request File Logging: Save completed requests to daily JSON Lines files
- Content Filtering: Remove strings from or append strings to system prompts, and strip suffixes from tool results
- Code Agent Manager UI: Install Codex/Claude/Copilot CLI and manage config sync from dashboard
- Code Agent Interaction: Web UI to create and interact with Claude Code, Codex, and Copilot CLI agents via the Agent Client Protocol (ACP)
- Config Sync: Sync Claude Code, Codex, and ghc-api config files with OneDrive
- Safe Backups: Auto-backup overwritten config files as `*.YYYYMMDD_HHMMSS.bak`
- Machine Token Usage Logs: Periodic token usage JSONL per machine, with a cross-machine overview in the dashboard
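The exact and prefix-based model name mapping works roughly as in this sketch (the mapping tables here just mirror the sample configuration below, and the longest-prefix-wins ordering is an assumption about the implementation, not confirmed behavior):

```python
# Illustrative sketch of exact-then-prefix model name mapping.
# These tables are examples; real values come from config.yaml.
EXACT = {"opus": "claude-opus-4.5", "sonnet": "claude-sonnet-4.5"}
PREFIX = {"claude-sonnet-4-": "claude-sonnet-4"}

def map_model(name: str) -> str:
    # Exact matches take precedence over prefix matches
    if name in EXACT:
        return EXACT[name]
    # Try longer prefixes first so more specific rules win (assumption)
    for prefix in sorted(PREFIX, key=len, reverse=True):
        if name.startswith(prefix):
            return PREFIX[prefix]
    # Unknown names pass through unchanged
    return name
```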
Install the package using pip:

```shell
pip install ghc-api
```

Or install from source:

```shell
pip install .
```

Start the server with the `ghc-api` command:

```shell
ghc-api
```

By default, the server will start on http://localhost:8313.
- `-p PORT` or `--port PORT`: Specify the port to listen on (default: 8313)
- `-a ADDRESS` or `--address ADDRESS`: Specify the address to listen on (default: localhost)
- `-c` or `--config`: Generate a YAML config file at `~/.ghc-api/config.yaml`
- `-v` or `--version`: Show version (for example `ghc-api 1.0.15`)
- `--help`: Show help message
The application looks for a configuration file at `~/.ghc-api/config.yaml`. You can generate this file using:

```shell
ghc-api --config
```

The config file contains:
```yaml
# Server Settings
address: localhost
port: 8313
debug: false

# GitHub Copilot Account Type
# Options: "individual", "business", "enterprise"
account_type: individual

# Version settings (used to build request headers)
vscode_version: "1.93.0"
api_version: "2025-04-01"
copilot_version: "0.26.7"

# Model Name Mappings
model_mappings:
  # Exact match mappings
  exact:
    opus: claude-opus-4.5
    sonnet: claude-sonnet-4.5
    haiku: claude-haiku-4.5
  # Prefix match mappings
  prefix:
    claude-sonnet-4-: claude-sonnet-4
    claude-opus-4.5-: claude-opus-4.5

# Content Filtering
system_prompt_remove: []       # Strings to remove from system prompts
system_prompt_add: []          # Strings to append to system prompts
tool_result_suffix_remove: []  # Strings to remove from end of tool results

# Optional request persistence
save_request_to_file: false  # If true, save completed requests to requests/YYYY-MM-DD.jl

# Optional OneDrive access gate
disable_onedrive_access: true  # If true, skip all OneDrive detection/sync/shared reads
```

The application follows this priority for getting the GitHub token:
1. `GITHUB_TOKEN` environment variable
2. Token file at `~/.ghc-api/github_token.txt`
3. Interactive GitHub Device Flow authentication
ghc-api can manage and sync these files:
- Claude Code: `~/.claude/settings.json`
- Codex: `~/.codex/config.toml`
- ghc-api: `~/.ghc-api/config.yaml` (or `%APPDATA%/ghc-api/config.yaml` on Windows)
OneDrive detection priority:

1. `~/OneDrive`, then `~/OneDrive - *`
2. In WSL: `/mnt/c/Users/<username>/OneDrive`, then `/mnt/c/Users/<username>/OneDrive - *`
To disable all OneDrive-dependent operations, set `disable_onedrive_access: true` in `config.yaml`. With this flag set, ghc-api skips OneDrive detection, config sync actions, and shared OneDrive hash reads.
- Sync target folder: `.ghc-api/configSync` under the detected OneDrive root
- Machine folder: `.ghc-api/agents/{hostname}_{os}`, where `os` is `Win`, `Linux`, or `WSL`
- Hash files: `.ghc-api/configSync/config.sha1` and `.ghc-api/agents/{hostname}_{os}/ghc-api/config.sha1`
Hashes are recalculated when the local config file's timestamp is newer than the hash file's. On startup, ghc-api checks the synced files, prints any config differences to stdout, and shows an indicator in the UI when they differ.
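The timestamp-gated recalculation can be sketched like this (the function name and the exact comparison are assumptions):

```python
import hashlib
from pathlib import Path

def refresh_hash(config: Path, hash_file: Path) -> str:
    """Recompute the SHA-1 only when the config is newer than the stored hash."""
    if hash_file.exists() and hash_file.stat().st_mtime >= config.stat().st_mtime:
        # Stored hash is still current; reuse it
        return hash_file.read_text().strip()
    digest = hashlib.sha1(config.read_bytes()).hexdigest()
    hash_file.write_text(digest)
    return digest
```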
Every 5 minutes, ghc-api writes the token usage delta (if non-zero) to:

- OneDrive mode: `.ghc-api/agents/{hostname}_{os}/token_usage.jl`
- Fallback when OneDrive is unavailable: `~/.ghc-api/token_usage.jl`
Pending usage is also flushed on shutdown (Ctrl+C, termination, or normal exit).
Each JSONL line includes:

- `timestamp` (Unix seconds)
- `models`: a list of entries with `model`, `request_count`, `input_tokens`, `output_tokens`, and `total_tokens`
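Given those fields, a consumer could aggregate a `token_usage.jl` file like this sketch (the helper name is illustrative):

```python
import json

def usage_by_model(lines):
    """Sum per-model request and token counts across JSONL entries."""
    totals = {}
    for line in lines:
        entry = json.loads(line)
        for m in entry["models"]:
            bucket = totals.setdefault(m["model"], {"requests": 0, "tokens": 0})
            bucket["requests"] += m["request_count"]
            bucket["tokens"] += m["total_tokens"]
    return totals
```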
When `save_request_to_file: true`, ghc-api appends each completed request to `<ghc-api config dir>/requests/YYYY-MM-DD.jl`.
The saved `.jl` line format is the same as the dashboard export (`/api/requests/export`) and can be imported via the dashboard import (`/api/requests/import`).
The Code Agent page (`/agent`) provides a web interface for interacting with AI coding agents via the Agent Client Protocol (ACP). Supported agents:
| Agent | Package | Install |
|---|---|---|
| Claude Code | `@agentclientprotocol/claude-agent-acp` | `npm install -g @agentclientprotocol/claude-agent-acp` |
| Codex | `codex-acp` | Download from GitHub releases |
| Copilot CLI | `@github/copilot` | `npm install -g @github/copilot` |
Agent binaries are resolved in order: environment variable override (`CLAUDE_ACP_BINARY`, `CODEX_ACP_BINARY`, `COPILOT_CLI_BINARY`), then PATH lookup, then npm global packages.
Session data is stored in:

- OneDrive mode: `.ghc-api/agents/{hostname}_{os}/sessions/`
- Fallback: `~/.ghc-api/sessions/` (or `%APPDATA%/ghc-api/sessions/` on Windows)
Recent working directories are persisted to `workdirs.json` in the same location. Sessions from other machines are browsable via the machine selector dropdown when OneDrive is enabled.
- `POST /v1/chat/completions` - Chat completions
- `POST /chat/completions` - Chat completions (without v1 prefix)
- `GET /v1/models` - List available models
- `GET /models` - List available models (without v1 prefix)
- `POST /v1/messages` - Messages API (Anthropic format)
- `GET /` - Web dashboard with statistics
- `GET /requests` - Request browser page
- `GET /api/runtime-config` - Read in-memory runtime config
- `POST /api/runtime-config` - Update in-memory runtime config (no file write)
- `GET /api/stats` - JSON statistics endpoint
- `GET /api/requests` - Paginated list of requests
- `GET /api/requests/search` - Full-text search in request/response bodies
- `GET /api/requests/export` - Export all requests as a JSON Lines file
- `POST /api/requests/import` - Import requests from a JSON Lines file
- `GET /api/request/<id>` - Individual request details
- `GET /api/request/<id>/request-body` - Request body only
- `GET /api/request/<id>/response-body` - Response body only
- `GET /api/config-manager/status` - Config manager status and diff info
- `POST /api/config-manager/install-tools` - Install Codex/Claude/Copilot CLI
- `POST /api/config-manager/sync-to-onedrive` - Sync local config to OneDrive
- `POST /api/config-manager/sync-from-onedrive` - Copy OneDrive config to the local machine with backups
- `GET /api/config-manager/token-usage?range=all|day|week|month` - Cross-machine token usage overview
- `GET /api/config-manager/config-hashes` - Config hash overview for shared OneDrive and each machine (with create time)
- `GET /agent` - Code agent interaction page
- `POST /api/agent/sessions` - Create a new agent session
- `GET /api/agent/sessions` - List sessions (paginated, filterable by machine)
- `GET /api/agent/sessions/<id>` - Get session detail with message history
- `POST /api/agent/sessions/<id>/prompt` - Send a prompt (returns SSE stream)
- `POST /api/agent/sessions/<id>/cancel` - Cancel the current prompt
- `DELETE /api/agent/sessions/<id>` - Terminate a session
- `GET /api/agent/machines` - List available machine names
- `GET /api/agent/workdirs` - List recent working directories
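The prompt endpoint streams Server-Sent Events; a minimal sketch of splitting such a stream body into JSON payloads follows (the event field names in the test data are assumptions, not the actual wire format):

```python
import json

def parse_sse(stream_text: str):
    """Split a raw SSE body into decoded `data:` payloads.

    Events are separated by blank lines; each event may carry one or more
    `data:` lines. Payloads are assumed here to be JSON objects.
    """
    events = []
    for raw_event in stream_text.split("\n\n"):
        data_lines = [line[len("data:"):].strip()
                      for line in raw_event.splitlines()
                      if line.startswith("data:")]
        if data_lines:
            events.append(json.loads("\n".join(data_lines)))
    return events
```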
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8313/v1",
    api_key="not-needed"  # Token is managed by the proxy
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```

```python
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:8313",
    api_key="not-needed"  # Token is managed by the proxy
)

message = client.messages.create(
    model="claude-sonnet-4",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)
print(message.content[0].text)
```

```shell
# Chat completions
curl http://localhost:8313/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

# List models
curl http://localhost:8313/v1/models
```

Access the web dashboard at http://localhost:8313/ to:
- View overall statistics (total requests, data transfer)
- See per-model usage statistics
- See per-endpoint analytics
- Browse recent requests
- View detailed request/response bodies
- Use the Code Agent Manager to:
  - Install code-agent CLIs
  - Sync config files to/from OneDrive
  - See config mismatch alerts
  - View token usage overview by machine/model with time-range and machine filters
  - View config hash overview by machine and the shared OneDrive hash, with create times
- Use the Code Agent page (`/agent`) to:
  - Create interactive sessions with Claude Code, Codex, or Copilot CLI
  - Send prompts and receive real-time streaming responses (text, tool calls, thinking)
  - Toggle verbose mode for detailed tool inputs/outputs, stdout/stderr, and token usage
  - Browse sessions across machines via OneDrive
  - Resume viewing past session history
- Modular Design: Organized into separate modules for maintainability
  - `main.py` - Entry point and configuration loading
  - `app.py` - Flask application factory
  - `config.py` - Configuration constants and model mappings
  - `cache.py` - Request caching and statistics
  - `translator.py` - OpenAI/Anthropic format translation
  - `streaming.py` - Streaming response handling
  - `token_manager.py` - GitHub token management
  - `routes/` - API endpoint handlers (openai, anthropic, dashboard, agent)
  - `acp/` - Agent Client Protocol implementation (JSON-RPC 2.0 over subprocess stdio)
- Thread-Safe Caching: Uses threading locks for concurrent access
- Memory-Based Storage: No external database dependencies
- RESTful API Design: Follows REST conventions
MIT License