GitHub Copilot CLI: Model Name Discrepancy Report
Date: December 6, 2025
User: ellertsmari
CLI Version: 0.0.358
Environment: Linux (Node.js v22.20.0)
Executive Summary
The GitHub Copilot CLI allows users to select different models via the --model flag. However, investigation reveals a critical routing bug: user-selected models are not being routed correctly.
All Claude model selections (regardless of choice) route to Claude 3.5 Sonnet, and GPT-5 selection routes to GPT-4.1. The backend is not implementing model-specific routing; instead, it ignores the model parameter and defaults to a single model.
Investigation Methodology
Tools Used
- GitHub Copilot CLI v0.0.358
- curl for API endpoint analysis
- Process monitoring and log analysis
- Direct model identity verification via LLM responses
API Endpoint Discovered
- Base URL: https://api.individual.githubcopilot.com/chat/completions
- Method: POST
- Authentication: Bearer token (GitHub credentials)
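Based on the endpoint details above, a request can be replayed by hand to check whether the model parameter survives the round trip. This is a sketch, not a confirmed reproduction: the payload shape is assumed from the common chat-completions format (not taken from CLI source), and COPILOT_TOKEN is a placeholder for a real bearer token.

```shell
# Build the hypothesized request body for /chat/completions.
# NOTE: the exact field names ("model", "messages") are an assumption
# modeled on the standard chat-completions format.
PAYLOAD=$(cat <<'EOF'
{
  "model": "claude-sonnet-4.5",
  "messages": [
    {"role": "user",
     "content": "What model are you? Answer with only the model name, nothing else."}
  ]
}
EOF
)
echo "$PAYLOAD"

# To replay against the live endpoint (requires a valid token):
# curl -s https://api.individual.githubcopilot.com/chat/completions \
#   -H "Authorization: Bearer $COPILOT_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```

If the self-identification in the response still reads claude-3-5-sonnet while the request body clearly carries "model": "claude-sonnet-4.5", that would localize the bug to the backend rather than the CLI.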
Findings
1. Model Identity Mismatch
Multiple CLI model options were tested by submitting the prompt: "What model are you? Answer with only the model name, nothing else."
The following results were obtained:
Test Results Table
| CLI Flag | Advertised Name | Actual Model Response | Usage Logged As |
|---|---|---|---|
| --model claude-sonnet-4.5 | Claude Sonnet 4.5 | claude-3-5-sonnet-20241022 | claude-sonnet-4.5 |
| --model claude-haiku-4.5 | Claude Haiku 4.5 | Claude 3.5 Sonnet | claude-haiku-4.5 |
| --model claude-sonnet-4 | Claude Sonnet 4 | Claude 3.5 Sonnet | claude-sonnet-4 |
| --model gpt-5 | GPT-5 | gpt-4.1 | gpt-5 |
2. Model Routing Bug
The CLI accepts user selection of various models but fails to route to them. Instead of routing based on user selection, the backend ignores the model parameter and uses default models:
| User Selects | Actual Route |
|---|---|
| claude-sonnet-4.5 | Claude 3.5 Sonnet (claude-3-5-sonnet-20241022) |
| claude-haiku-4.5 | Claude 3.5 Sonnet (claude-3-5-sonnet-20241022) |
| claude-sonnet-4 | Claude 3.5 Sonnet (claude-3-5-sonnet-20241022) |
| gpt-5 | GPT-4.1 |
This means:
- The model selection parameter is not being routed to the correct backends
- All Claude options provide identical responses
- Users cannot access the models they requested
3. Log Evidence
The following logs from ~/.copilot/logs/session-1dd74f7a-1c01-478c-8484-fb9005ee994f.log confirm API routing:
2025-11-15T00:56:59.271Z [INFO] Using default model: claude-sonnet-4.5
2025-11-15T00:56:59.844Z [INFO]
2025-11-15T00:57:04.218Z [INFO] [log_e28150, x-request-id: "00000-45726e33-d63d-4336-8b94-02f8bc478908"] post https://api.individual.githubcopilot.com/chat/completions succeeded with status 200 in 2914ms
All requests route to https://api.individual.githubcopilot.com/chat/completions regardless of the advertised model selection.
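The log lines above can be audited mechanically. The snippet below embeds the session log lines quoted in this report as sample data and extracts the model the CLI claims to use plus the endpoint it posts to; pointing the same sed expressions at ~/.copilot/logs/ would audit a real session. The log format is taken verbatim from this report, not from any documented schema.

```shell
# Sample log lines copied from the session log quoted above.
LOG='2025-11-15T00:56:59.271Z [INFO] Using default model: claude-sonnet-4.5
2025-11-15T00:57:04.218Z [INFO] [log_e28150, x-request-id: "00000-45726e33-d63d-4336-8b94-02f8bc478908"] post https://api.individual.githubcopilot.com/chat/completions succeeded with status 200 in 2914ms'

# Model the CLI says it is using:
MODEL=$(printf '%s\n' "$LOG" | sed -n 's/.*Using default model: //p')
echo "requested model: $MODEL"

# Endpoint the request was posted to:
ENDPOINT=$(printf '%s\n' "$LOG" | sed -n 's/.*post \(https[^ ]*\) succeeded.*/\1/p')
echo "endpoint: $ENDPOINT"
```

Note that the log records only the requested model name; nothing in it confirms which model actually served the request, which is why the self-identification prompt was needed.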
Test Commands & Output
Test 1: Claude Sonnet 4.5
$ echo "What model are you? Answer with only the model name, nothing else." | \
copilot --model claude-sonnet-4.5
claude-3-5-sonnet-20241022
Usage by model:
claude-sonnet-4.5 13.6k input, 15 output, 0 cache read, 0 cache write (Est. 1 Premium request)

Test 2: Claude Haiku 4.5
$ echo "What model are you? Answer with only the model name, nothing else." | \
copilot --model claude-haiku-4.5
Claude 3.5 Sonnet
Usage by model:
claude-haiku-4.5 13.5k input, 12 output, 0 cache read, 0 cache write (Est. 0.33 Premium requests)

Test 3: Claude Sonnet 4
$ echo "What model are you? Answer with only the model name, nothing else." | \
copilot --model claude-sonnet-4
Claude 3.5 Sonnet
Usage by model:
claude-sonnet-4 13.4k input, 12 output, 0 cache read, 0 cache write (Est. 1 Premium request)

Test 4: GPT-5
$ echo "What model are you? Answer with only the model name, nothing else." | \
copilot --model gpt-5
gpt-4.1
Usage by model:
gpt-5 10.0k input, 80 output, 0 cache read, 0 cache write (Est. 1 Premium request)

Impact Analysis
For Users
- Model Selection Ignored: User selection of different models has no effect; all Claude selections produce identical results.
- Performance Expectations Unmet: Users selecting "Haiku" expect faster responses, but receive full Sonnet-level responses instead.
- Capability Mismatch: Users selecting specific models don't get those models' unique capabilities.
- No Feature Control: Users cannot switch between models to optimize for speed, cost, or capability as intended.
For GitHub Copilot Service
- Routing Logic Failure: The backend is not properly routing requests based on the model selection parameter.
- Feature Not Functional: The model selection feature appears to work in the UI but fails completely in the backend.
- Support Burden: Users will report issues when they cannot switch between models as documented.
Possible Explanations
- Model Routing Not Implemented: The backend may not have implemented support for receiving the model selection parameter from the CLI.
- Request Parameter Not Passed: The CLI may not be properly passing the model selection to the backend API request.
- Hardcoded Default Model: The backend API endpoint may have a hardcoded default model instead of reading from the request.
- Backend Misconfiguration: The routing rules mapping model names to backend endpoints may be misconfigured or missing.
Recommendations
- Immediate Action: Investigate backend routing logic to verify that the model selection parameter is being received and processed correctly.
- Fix Model Routing: Implement proper model routing so that:
  - claude-haiku-4.5 routes to Claude Haiku 4.5
  - claude-sonnet-4.5 routes to Claude Sonnet 4.5
  - claude-sonnet-4 routes to Claude Sonnet 4
  - gpt-5 routes to GPT-5
  - Other advertised models route to their respective backends
- Verify Model Parameter Transfer:
  - Confirm the CLI is sending the model parameter in the POST request to /chat/completions
  - Confirm the backend is receiving and parsing this parameter
  - Confirm the backend is using this parameter for routing decisions
- Add Logging/Debugging:
  - Add server-side logging to show which model was requested vs. which model was used
  - This will help identify where the routing logic is breaking down
- Update Testing:
  - Add tests to verify that each model option routes to the correct backend
  - Test end-to-end: CLI selection → API request → correct model response
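As a sketch of such a test, the helper below compares the model a user requested against the model the LLM self-reports, using the observations from this report as fixture data. The name normalization is a heuristic so that "Claude Sonnet 4.5" and "claude-sonnet-4.5" compare equal; wiring check_route to live copilot output is the hypothetical end-to-end step.

```shell
# check_route REQUESTED REPORTED -> prints OK/MISMATCH, returns 1 on mismatch.
# Normalization (lowercase; spaces and dots become dashes) is a heuristic,
# not an official naming rule.
check_route() {
  req=$(printf '%s' "$1" | tr 'A-Z' 'a-z' | tr ' .' '--')
  rep=$(printf '%s' "$2" | tr 'A-Z' 'a-z' | tr ' .' '--')
  if [ "$req" = "$rep" ]; then
    echo "OK:       $1 -> $2"
    return 0
  else
    echo "MISMATCH: $1 -> $2"
    return 1
  fi
}

# Fixture data: requested model vs. self-reported model from this report.
FAILURES=0
check_route "claude-sonnet-4.5" "claude-3-5-sonnet-20241022" || FAILURES=$((FAILURES+1))
check_route "claude-haiku-4.5"  "Claude 3.5 Sonnet"          || FAILURES=$((FAILURES+1))
check_route "claude-sonnet-4"   "Claude 3.5 Sonnet"          || FAILURES=$((FAILURES+1))
check_route "gpt-5"             "gpt-4.1"                    || FAILURES=$((FAILURES+1))
echo "mismatches: $FAILURES"
```

Run against the data in this report, all four rows mismatch; after a backend fix, the same check fed with fresh self-identification output should report zero mismatches.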
Appendix: CLI Help Output
--model <model> Set the AI model to use (choices:
"claude-sonnet-4.5", "claude-sonnet-4",
"claude-haiku-4.5", "gpt-5", "gpt-5.1",
"gpt-5.1-codex-mini", "gpt-5.1-codex")
Note: The CLI also advertises gpt-5.1, gpt-5.1-codex-mini, and gpt-5.1-codex which were not tested but are likely to have similar discrepancies given the pattern observed.
System Information
- GitHub Copilot CLI Version: 0.0.358 (Commit: f5a8b1e76)
- Node.js Version: v22.20.0
- Operating System: Linux
- Package: @github/copilot v0.0.358
- Repository: https://github.com/github/copilot-cli
Conclusion
The GitHub Copilot CLI has a routing bug where the backend is not properly directing requests to the models selected by the user.
Model selection is not working:
- Users can choose different model options in the CLI
- Backend ignores the model parameter and always uses default models
- Users cannot access different models despite the feature appearing available
Severity: High (core feature doesn't work)
Status: Pending GitHub Investigation
Report generated: December 6, 2025
Prepared by: User ellertsmari
Evidence: Direct API testing, LLM model self-identification, log analysis