Expand OpenAI model regression coverage #943
Conversation
Review Summary by Qodo: Expand test coverage for OpenAI models and predicates
Walkthrough

Description
- Expand predicate test coverage with data-driven model key iterations
- Add GPT-5 token parameter selection tests for multiple model variants
- Test mapped model values and provider boundary behavior for OpenAI
- Extend model-name conversion coverage for custom and preset segments

Diagram

```mermaid
flowchart LR
    A["Test Coverage Expansion"] --> B["Config Predicates"]
    A --> C["OpenAI API Compat"]
    A --> D["Token Parameters"]
    A --> E["Model Name Convert"]
    B --> B1["Data-driven model key tests"]
    C --> C1["GPT-5 token parameter selection"]
    C --> C2["Mapped model values"]
    C --> C3["Provider boundary behavior"]
    D --> D1["Multiple GPT-5.x variants"]
    E --> E1["Preset and custom segments"]
```
File Changes
1. tests/unit/config/config-predicates.test.mjs
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the test suite for model handling logic, focusing on improving the reliability and maintainability of how different AI models are identified, configured, and interact with various API providers. By introducing data-driven tests and expanding coverage for key utility functions, it aims to prevent regressions related to model identification, token parameter processing, and model name conversions across a range of OpenAI and compatible services.
Activity
Code Review by Qodo
1. Self-referential mapping asserts
📝 Walkthrough

Adds exported model-key arrays (chatgptApiModelKeys, gptApiModelKeys, claudeApiModelKeys, openRouterApiModelKeys, aimlApiModelKeys), a predicate export (isUsingGptCompletionApiModel), and a utility export (modelNameToPresetPart); tests are expanded to exercise these exports and GPT-5 compatibility/token-parameter scenarios.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Code Review
This pull request significantly improves test coverage for OpenAI and compatible models by making many tests data-driven. The changes are well-structured and increase confidence in the model handling logic. I've left one suggestion to reduce code duplication in the new test cases, which would improve maintainability.
```js
for (const modelName of gpt5CompatModelNames) {
  capturedInit = undefined
  const session = {
    modelName,
    conversationRecords: [],
    isRetry: false,
  }
  const port = createFakePort()

  await generateAnswersWithChatgptApiCompat(
    'https://api.example.com/v1',
    port,
    'CurrentQ',
    session,
    'sk-test',
    {},
    'openai',
  )

  const body = JSON.parse(capturedInit.body)
  assert.equal(body.max_completion_tokens, 321)
  assert.equal(Object.hasOwn(body, 'max_tokens'), false)
}
```
While converting these tests to be data-driven is a great improvement, it has introduced significant code duplication across several tests. Many tests now contain a for loop with nearly identical setup and execution logic for mocking fetch, creating a session, and calling generateAnswersWithChatgptApiCompat. Consider refactoring this repeated logic into a helper function. This would make the tests more concise, easier to read, and simpler to maintain.
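A helper along these lines could absorb the repeated setup (names like `makeCompatSession` and `runCompatRequest` are hypothetical, and the collaborators are passed in via `deps` rather than imported, since this is only a sketch of the suggested refactor, not the project's actual API):

```javascript
// Hypothetical helpers factoring out the per-model setup repeated in the diff.
// The real suite would supply its own createFakePort,
// generateAnswersWithChatgptApiCompat, and capturedInit accessor through `deps`.
function makeCompatSession(modelName) {
  return {
    modelName,
    conversationRecords: [],
    isRetry: false,
  }
}

async function runCompatRequest(modelName, deps) {
  const session = makeCompatSession(modelName)
  const port = deps.createFakePort()
  await deps.generateAnswersWithChatgptApiCompat(
    'https://api.example.com/v1',
    port,
    'CurrentQ',
    session,
    'sk-test',
    {},
    'openai',
  )
  // Return whatever the fetch mock captured so each test only asserts on the body.
  return deps.getCapturedInit()
}
```

Each data-driven test would then reduce to a loop of `runCompatRequest(modelName, deps)` calls followed by its scenario-specific body assertions.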
Pull request overview
Expands unit/regression coverage for model-name conversion, OpenAI token parameter selection for GPT‑5 variants, and provider/model-group predicate behavior to prevent future config/mapping regressions.
Changes:
- Added tests for preset/custom segment extraction and mapped “latest” ChatGPT API model values.
- Broadened GPT‑5 token param coverage (max_completion_tokens vs max_tokens) across OpenAI vs compat providers.
- Parameterized predicate tests across configured model-key groups (OpenAI chat/completion, Claude, OpenRouter, AI/ML).
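The rule these cases pin down can be sketched as follows (the regex is the `(^|\/)gpt-5([.-]|$)` pattern the review comments quote; the function body itself is an assumption about the behavior under test, not the project's implementation):

```javascript
// Sketch of the token-parameter selection the tests exercise: GPT-5-family
// models on the 'openai' provider get max_completion_tokens, while other
// providers (or non-GPT-5 models) keep the legacy max_tokens field.
const GPT5_PATTERN = /(^|\/)gpt-5([.-]|$)/

function getChatCompletionsTokenParamsSketch(provider, model, limit) {
  if (provider === 'openai' && GPT5_PATTERN.test(model)) {
    return { max_completion_tokens: limit }
  }
  return { max_tokens: limit }
}
```

Under this rule `('openai', 'gpt-5.2')` and the slash-prefixed `('openai', 'openai/gpt-5.2')` both select `max_completion_tokens`, while `('some-proxy-provider', 'gpt-5.2')` falls back to `max_tokens` — the provider-boundary behavior several reviews below argue must stay covered.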
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| tests/unit/utils/model-name-convert.test.mjs | Adds coverage for preset/custom parsing and mapped model value lookups. |
| tests/unit/services/apis/openai-token-params.test.mjs | Broadens GPT‑5 token param selection cases. |
| tests/unit/services/apis/openai-api-compat.test.mjs | Adds data-driven compat vs OpenAI provider behavior tests for GPT‑5 variants and mapped models. |
| tests/unit/config/config-predicates.test.mjs | Parameterizes predicate tests across exported model-key groups. |
```js
test('uses max_completion_tokens for all recent gpt-5.x chat family models', () => {
  const models = [
    'gpt-5.1',
    'gpt-5.1-chat-latest',
    'gpt-5.2',
    'gpt-5.2-chat-latest',
    'gpt-5.3',
    'gpt-5.3-chat-latest',
  ]
```
The previous coverage for provider-prefixed model identifiers (e.g. openai/gpt-5.2) was removed, which can allow regressions if getChatCompletionsTokenParams still needs to handle prefixed model strings (common with proxy/aggregator formats). Add explicit cases for prefixed GPT‑5 model ids (at least one prefixed + one unprefixed per branch) for both provider === 'openai' and provider !== 'openai'.
```js
test('uses max_completion_tokens for all recent gpt-5.x chat family models', () => {
  const models = [
    'gpt-5.1',
    'gpt-5.1-chat-latest',
    'gpt-5.2',
    'gpt-5.2-chat-latest',
    'gpt-5.3',
    'gpt-5.3-chat-latest',
  ]
```
The test name says “chat family models”, but the table includes base models like gpt-5.1 / gpt-5.2 / gpt-5.3 (without -chat-latest). Either rename the test to reflect “GPT‑5.x family models” or restrict the list to chat-suffixed models so the name matches what’s being asserted.
```js
test('isUsingChatgptApiModel detects chatgpt API models and excludes custom model', () => {
  assert.equal(isUsingChatgptApiModel({ modelName: 'chatgptApi4oMini' }), true)
  for (const modelName of chatgptApiModelKeys) {
    assert.equal(isUsingChatgptApiModel({ modelName }), true)
  }
  assert.equal(isUsingChatgptApiModel({ modelName: 'customModel' }), false)
})
```
These predicate tests now iterate over the exported *ModelKeys lists; if the predicate implementations are also derived directly from those same lists, the assertions can become tautological (they won’t catch mistakes where both the predicate and list drift together). Consider keeping a small set of hard-coded representative model names per predicate (plus at most one “completeness” assertion elsewhere) so the tests independently validate behavior.
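One shape such an independent check could take (the representative keys and helper name here are illustrative, not the project's exports):

```javascript
// Illustrative guard: hard-coded representative model keys asserted directly,
// so the expected values do not come from the same exported list the predicate
// under test may be derived from.
const representativeChatgptApiKeys = ['chatgptApi4oMini', 'chatgptApi5Latest']

function checkRepresentatives(predicate) {
  for (const modelName of representativeChatgptApiKeys) {
    if (predicate({ modelName }) !== true) {
      throw new Error(`expected ${modelName} to be detected`)
    }
  }
  if (predicate({ modelName: 'customModel' }) !== false) {
    throw new Error('expected customModel to be excluded')
  }
}
```

A single completeness assertion over the exported list can then live elsewhere, so list drift and predicate drift are caught by different tests.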
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1399e84af9
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```diff
 test('uses max_tokens for non OpenAI providers even with gpt-5 models', () => {
-  assert.deepEqual(getChatCompletionsTokenParams('some-proxy-provider', 'openai/gpt-5.2', 257), {
+  assert.deepEqual(getChatCompletionsTokenParams('some-proxy-provider', 'gpt-5.2', 257), {
```
Restore provider-prefixed GPT-5 coverage in token tests
This assertion now uses gpt-5.2 instead of openai/gpt-5.2, so the suite no longer verifies provider-prefixed GPT-5 model IDs despite getChatCompletionsTokenParams explicitly matching slash-prefixed names via /(^|\/)gpt-5.../. That drops an important provider-boundary regression check: a future break in prefixed-model handling would pass these tests undetected.
Actionable comments posted: 1
🧹 Nitpick comments (4)
tests/unit/services/apis/openai-token-params.test.mjs (2)
59-59: Retain one slash-prefixed model test to cover the regex branch. Line 59 now uses plain `gpt-5.2`; adding one case like `openai/gpt-5.2` with `provider: 'openai'` would preserve coverage for the `(^|/)gpt-5([.-]|$)` path in src/services/apis/openai-token-params.mjs (lines 1-13).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/services/apis/openai-token-params.test.mjs` at line 59, Add a test case in tests/unit/services/apis/openai-token-params.test.mjs that covers the slash-prefixed model regex branch by calling getChatCompletionsTokenParams with provider 'openai' and model 'openai/gpt-5.2' (and the same token limit used elsewhere, e.g., 257) and assert the expected return value; this preserves coverage for the (^|/)gpt-5([.-]|$) path in src/services/apis/openai-token-params.mjs and ensures the slash-prefixed model handling is tested alongside the existing plain 'gpt-5.2' case.
28-30: Consider removing the standalone `gpt-5.2` case if the table remains authoritative. Line 28 overlaps with line 15 in the data-driven list, so this test may be redundant unless you want an intentionally explicit single-case guard.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/services/apis/openai-token-params.test.mjs` around lines 28 - 30, The test case that explicitly asserts getChatCompletionsTokenParams('openai','gpt-5.2',2048) is redundant with the data-driven list; delete this standalone test (the test block starting with test('uses max_completion_tokens for gpt-5.2 in OpenAI provider', ...)) or convert it into an entry in the existing data-driven table so the single-case guard is no longer duplicated—ensure the data-driven list still covers 'gpt-5.2' and update/remove the redundant assertion accordingly.

tests/unit/services/apis/openai-api-compat.test.mjs (1)
12-23: Add non-empty assertions for GPT-5 model arrays used in loops. If `gpt5CompatModelNames` or `gpt5MappedModelNames` becomes empty, multiple tests would pass without validating behavior. Assert array length up front in each loop-based test (or once in a dedicated test).

Proposed hardening diff:

```diff
 test('generateAnswersWithChatgptApiCompat uses max_completion_tokens for OpenAI gpt-5 models', async (t) => {
+  assert.equal(gpt5CompatModelNames.length > 0, true)
   t.mock.method(console, 'debug', () => {})
   setStorage({
@@
 test('generateAnswersWithChatgptApiCompat uses mapped gpt-5 API model values', async (t) => {
+  assert.equal(gpt5MappedModelNames.length > 0, true)
   t.mock.method(console, 'debug', () => {})
   setStorage({
@@
 test('generateAnswersWithChatgptApiCompat keeps max_tokens for mapped gpt-5 models in compat provider', async (t) => {
+  assert.equal(gpt5MappedModelNames.length > 0, true)
   t.mock.method(console, 'debug', () => {})
```

Also applies to: 112-135, 152-176, 228-252, 310-337, 395-422
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/services/apis/openai-api-compat.test.mjs` around lines 12 - 23, Add upfront non-empty assertions for the arrays used in looped tests: ensure gpt5CompatModelNames and gpt5MappedModelNames are not empty before any test that iterates them. Locate the tests referencing these symbols (e.g., the loops that iterate gpt5CompatModelNames/gpt5MappedModelNames in openai-api-compat.test.mjs) and add an assertion like expect(gpt5CompatModelNames.length).toBeGreaterThan(0) and expect(gpt5MappedModelNames.length).toBeGreaterThan(0) (or a single combined assertion) at the start of each relevant test block (or in one dedicated test) so an empty array will fail rather than silently skip checks.

tests/unit/config/config-predicates.test.mjs (1)
73-75: Guard against vacuous passes in loop-driven tests. These loops all pass if a key array is empty. Add a non-empty assertion before iterating so accidental empty exports fail loudly.
Proposed hardening diff
```diff
 test('isUsingChatgptApiModel detects chatgpt API models and excludes custom model', () => {
+  assert.equal(chatgptApiModelKeys.length > 0, true)
   for (const modelName of chatgptApiModelKeys) {
     assert.equal(isUsingChatgptApiModel({ modelName }), true)
   }
   assert.equal(isUsingChatgptApiModel({ modelName: 'customModel' }), false)
 })

 test('isUsingOpenAiApiModel accepts both chat and completion API model groups', () => {
+  assert.equal(chatgptApiModelKeys.length > 0, true)
   for (const modelName of chatgptApiModelKeys) {
     assert.equal(isUsingOpenAiApiModel({ modelName }), true)
   }
+  assert.equal(gptApiModelKeys.length > 0, true)
   for (const modelName of gptApiModelKeys) {
     assert.equal(isUsingOpenAiApiModel({ modelName }), true)
   }
 })

 test('isUsingGptCompletionApiModel accepts all completion API model keys', () => {
+  assert.equal(gptApiModelKeys.length > 0, true)
   for (const modelName of gptApiModelKeys) {
     assert.equal(isUsingGptCompletionApiModel({ modelName }), true)
   }
```

Also applies to: 80-85, 92-95, 135-137, 154-156, 161-163
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/config/config-predicates.test.mjs` around lines 73 - 75, Tests iterate over exported key arrays (e.g., chatgptApiModelKeys) and will vacuously pass if those arrays are empty; before each for...of loop (the ones using isUsingChatgptApiModel and the other predicate functions referenced in the other ranges) add an explicit non-empty assertion such as assert.ok(chatgptApiModelKeys.length > 0, 'Expected chatgptApiModelKeys to be non-empty') (and similarly for the other key arrays used near lines 80-85, 92-95, 135-137, 154-156, 161-163) so the test fails loudly if an export is accidentally empty.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/unit/utils/model-name-convert.test.mjs`:
- Around line 238-243: The test uses self-referential expectations by comparing
modelNameToValue outputs to values pulled from Models (Models.chatgptApi5Latest,
Models.chatgptApi5_1Latest, Models.chatgptApi5_2Latest,
Models.chatgptApi5_3Latest); change the assertions in the test case (the test
named/modelNameToValue returns endpoint for latest chatgptApi models) to compare
to hard-coded expected endpoint strings (the canonical contract values) instead
of reading from Models so regressions in Models will fail deterministically.
Ensure each assert.equal uses the literal expected endpoint for the
corresponding input ('chatgptApi5Latest', 'chatgptApi5_1Latest',
'chatgptApi5_2Latest', 'chatgptApi5_3Latest') rather than referencing Models.
---
Nitpick comments:
In `@tests/unit/config/config-predicates.test.mjs`:
- Around line 73-75: Tests iterate over exported key arrays (e.g.,
chatgptApiModelKeys) and will vacuously pass if those arrays are empty; before
each for...of loop (the ones using isUsingChatgptApiModel and the other
predicate functions referenced in the other ranges) add an explicit non-empty
assertion such as assert.ok(chatgptApiModelKeys.length > 0, 'Expected
chatgptApiModelKeys to be non-empty') (and similarly for the other key arrays
used near lines 80-85, 92-95, 135-137, 154-156, 161-163) so the test fails
loudly if an export is accidentally empty.
In `@tests/unit/services/apis/openai-api-compat.test.mjs`:
- Around line 12-23: Add upfront non-empty assertions for the arrays used in
looped tests: ensure gpt5CompatModelNames and gpt5MappedModelNames are not empty
before any test that iterates them. Locate the tests referencing these symbols
(e.g., the loops that iterate gpt5CompatModelNames/gpt5MappedModelNames in
openai-api-compat.test.mjs) and add an assertion like
expect(gpt5CompatModelNames.length).toBeGreaterThan(0) and
expect(gpt5MappedModelNames.length).toBeGreaterThan(0) (or a single combined
assertion) at the start of each relevant test block (or in one dedicated test)
so an empty array will fail rather than silently skip checks.
In `@tests/unit/services/apis/openai-token-params.test.mjs`:
- Line 59: Add a test case in
tests/unit/services/apis/openai-token-params.test.mjs that covers the
slash-prefixed model regex branch by calling getChatCompletionsTokenParams with
provider 'openai' and model 'openai/gpt-5.2' (and the same token limit used
elsewhere, e.g., 257) and assert the expected return value; this preserves
coverage for the (^|/)gpt-5([.-]|$) path in
src/services/apis/openai-token-params.mjs and ensures the slash-prefixed model
handling is tested alongside the existing plain 'gpt-5.2' case.
- Around line 28-30: The test case that explicitly asserts
getChatCompletionsTokenParams('openai','gpt-5.2',2048) is redundant with the
data-driven list; delete this standalone test (the test block starting with
test('uses max_completion_tokens for gpt-5.2 in OpenAI provider', ...)) or
convert it into an entry in the existing data-driven table so the single-case
guard is no longer duplicated—ensure the data-driven list still covers 'gpt-5.2'
and update/remove the redundant assertion accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0765818f-4cca-4e44-86bb-22a40e699a95
📒 Files selected for processing (4)
- tests/unit/config/config-predicates.test.mjs
- tests/unit/services/apis/openai-api-compat.test.mjs
- tests/unit/services/apis/openai-token-params.test.mjs
- tests/unit/utils/model-name-convert.test.mjs
Add data-driven tests around OpenAI and compat model handling, including GPT-5 token parameter selection, mapped model values, and provider boundary behavior. Also extend predicate and model-name conversion coverage across completion, Claude, OpenRouter, and AI/ML model-key groups so future config and mapping regressions fail deterministically.
1399e84 to e6185a6 — Compare
@codex review
Pull request overview
Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.
```js
  })
}
})
```
The updated test suite no longer asserts behavior for provider-prefixed model strings (e.g. openai/gpt-5.2), even though GPT5_CHAT_COMPLETIONS_MODEL_PATTERN explicitly supports a leading .../ segment. Please add back a dedicated assertion covering a slash-prefixed GPT‑5 model value so regressions in that parsing behavior are caught.
```js
test('uses max_completion_tokens for slash-prefixed gpt-5.2 model', () => {
  assert.deepEqual(getChatCompletionsTokenParams('openai', 'openai/gpt-5.2', 4096), {
    max_completion_tokens: 4096,
  })
})
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e6185a6ec6
```diff
 test('uses max_tokens for non OpenAI providers even with gpt-5 models', () => {
-  assert.deepEqual(getChatCompletionsTokenParams('some-proxy-provider', 'openai/gpt-5.2', 257), {
+  assert.deepEqual(getChatCompletionsTokenParams('some-proxy-provider', 'gpt-5.2', 257), {
```
Restore provider-prefixed GPT-5 regression case
This assertion now uses plain gpt-5.2, which removes coverage for provider-prefixed model IDs (for example openai/gpt-5.2) that getChatCompletionsTokenParams explicitly supports via GPT5_CHAT_COMPLETIONS_MODEL_PATTERN in src/services/apis/openai-token-params.mjs. If slash-prefixed handling regresses, OpenAI-compatible requests can silently fall back to max_tokens and only fail at runtime, but this test will no longer catch it.
```js
  )

  const body = JSON.parse(capturedInit.body)
  assert.equal(body.model, Models[modelName].value)
```
Avoid self-referential expectations for mapped model IDs
Using Models[modelName].value as the expected value makes this regression test read the expected model ID from the same config object the implementation uses (getModelValue in src/services/apis/openai-api.mjs), so accidental mapping changes in Models can pass both code and test together. This weakens the new coverage because it no longer verifies that specific stable model IDs are preserved.
Actionable comments posted: 2
♻️ Duplicate comments (1)
tests/unit/services/apis/openai-api-compat.test.mjs (1)
8-8: ⚠️ Potential issue | 🟠 Major — Hard-code the mapped GPT-5 endpoints in these expectations.

These assertions pull expected values from `Models`, which is the same config source `getModelValue` uses. If a GPT-5 mapping regresses in src/config/index.mjs, both production and test will drift together and this suite still passes, which undercuts the PR's "fail deterministically" goal.

Suggested refactor:

```diff
-import { Models } from '../../../../src/config/index.mjs'
 import { createFakePort } from '../../helpers/port.mjs'
 import { createMockSseResponse } from '../../helpers/sse-response.mjs'

 const gpt5CompatModelNames = [
   'chatgptApi-gpt-5-chat-latest',
   'chatgptApi-gpt-5.1-chat-latest',
   'chatgptApi-gpt-5.2-chat-latest',
   'chatgptApi-gpt-5.3-chat-latest',
 ]

-const gpt5MappedModelNames = [
-  'chatgptApi5Latest',
-  'chatgptApi5_1Latest',
-  'chatgptApi5_2Latest',
-  'chatgptApi5_3Latest',
+const gpt5MappedModels = [
+  ['chatgptApi5Latest', 'gpt-5-chat-latest'],
+  ['chatgptApi5_1Latest', 'gpt-5.1-chat-latest'],
+  ['chatgptApi5_2Latest', 'gpt-5.2-chat-latest'],
+  ['chatgptApi5_3Latest', 'gpt-5.3-chat-latest'],
 ]

 test('generateAnswersWithChatgptApiCompat uses mapped gpt-5 API model values', async (t) => {
   // ...
-  for (const modelName of gpt5MappedModelNames) {
+  for (const [modelName, expectedModel] of gpt5MappedModels) {
     // ...
     const body = JSON.parse(capturedInit.body)
-    assert.equal(body.model, Models[modelName].value)
+    assert.equal(body.model, expectedModel)
     assert.equal(body.max_completion_tokens, 111)
     assert.equal(Object.hasOwn(body, 'max_tokens'), false)
   }
 })

 test('generateAnswersWithChatgptApi uses OpenAI token params for mapped gpt-5 models', async (t) => {
   // ...
-  assert.equal(body.model, Models.chatgptApi5_2Latest.value)
+  assert.equal(body.model, 'gpt-5.2-chat-latest')
   assert.equal(body.max_completion_tokens, 222)
   assert.equal(Object.hasOwn(body, 'max_tokens'), false)
 })

 test('generateAnswersWithChatgptApiCompat keeps max_tokens for mapped gpt-5 models in compat provider', async (t) => {
   // ...
-  for (const modelName of gpt5MappedModelNames) {
+  for (const [modelName, expectedModel] of gpt5MappedModels) {
     // ...
     const body = JSON.parse(capturedInit.body)
-    assert.equal(body.model, Models[modelName].value)
+    assert.equal(body.model, expectedModel)
     assert.equal(body.max_tokens, 223)
     assert.equal(Object.hasOwn(body, 'max_completion_tokens'), false)
   }
 })
```

Also applies to: 18-23, 137-176, 178-210, 213-252
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/services/apis/openai-api-compat.test.mjs` at line 8, The tests currently import and assert against the runtime config object Models (symbol: Models) which mirrors getModelValue, making the tests non-deterministic; update the assertions in openai-api-compat.test.mjs to hard-code the expected GPT-5 endpoint strings instead of reading from Models (replace uses of Models[...] in expectations with the literal endpoint URLs/IDs you expect for GPT-5), and do the same for the other expectation blocks mentioned (lines covering 18-23, 137-176, 178-210, 213-252) so the test fails if the GPT-5 mapping in the config regresses. Ensure you only change the expected values in the assertions (not getModelValue or Models usage elsewhere).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/unit/services/apis/openai-token-params.test.mjs`:
- Around line 11-26: The test suite currently only uses plain GPT-5 model names
so it no longer exercises the provider-prefixed regex branch in
getChatCompletionsTokenParams (the /(^|\/)gpt-5([.-]|$)/ case); update the
models array in the test to include one slash-prefixed GPT-5 identifier (for
example 'openai/gpt-5.2' or similar) so the branch matching a leading '/' is
exercised, and mirror the same change in the other analogous test(s) that check
GPT-5 behavior to ensure both plain and provider-prefixed forms are covered.
In `@tests/unit/utils/model-name-convert.test.mjs`:
- Around line 152-159: The tests use the wrong Azure OpenAI model-name prefix;
update the test cases so they match the actual persisted shape (use the same
prefix used by apiModeToModelName for always-custom Azure entries). Replace
occurrences of 'azureOpenAi-...' in the two assertions for modelNameToPresetPart
and modelNameToCustomPart with the real shape (the prefix used by
apiModeToModelName, e.g. azureOpenAiApiModelKeys-<deployment>) so the helper
tests exercise the true round-trip format for modelNameToPresetPart and
modelNameToCustomPart.
---
Duplicate comments:
In `@tests/unit/services/apis/openai-api-compat.test.mjs`:
- Line 8: The tests currently import and assert against the runtime config
object Models (symbol: Models) which mirrors getModelValue, making the tests
non-deterministic; update the assertions in openai-api-compat.test.mjs to
hard-code the expected GPT-5 endpoint strings instead of reading from Models
(replace uses of Models[...] in expectations with the literal endpoint URLs/IDs
you expect for GPT-5), and do the same for the other expectation blocks
mentioned (lines covering 18-23, 137-176, 178-210, 213-252) so the test fails if
the GPT-5 mapping in the config regresses. Ensure you only change the expected
values in the assertions (not getModelValue or Models usage elsewhere).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1732a5f6-42fb-4552-b3b2-29d8b6b8edf7
📒 Files selected for processing (4)
- tests/unit/config/config-predicates.test.mjs
- tests/unit/services/apis/openai-api-compat.test.mjs
- tests/unit/services/apis/openai-token-params.test.mjs
- tests/unit/utils/model-name-convert.test.mjs
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/unit/config/config-predicates.test.mjs
```js
test('uses max_completion_tokens for all recent gpt-5.x chat family models', () => {
  const models = [
    'gpt-5.1',
    'gpt-5.1-chat-latest',
    'gpt-5.2',
    'gpt-5.2-chat-latest',
    'gpt-5.3',
    'gpt-5.3-chat-latest',
  ]

  for (const model of models) {
    assert.deepEqual(getChatCompletionsTokenParams('openai', model, 333), {
      max_completion_tokens: 333,
    })
  }
})
```
Keep one slash-prefixed GPT-5 case in this suite.
src/services/apis/openai-token-params.mjs explicitly supports the /(^|\/)gpt-5([.-]|$)/ branch, but after swapping these assertions to plain model names this file no longer exercises it. That makes it easier for a future regex change to break provider-prefixed values without failing tests.
Suggested test addition
```diff
+test('uses max_completion_tokens for slash-prefixed gpt-5 model values', () => {
+  assert.deepEqual(getChatCompletionsTokenParams('openai', 'openai/gpt-5.2', 333), {
+    max_completion_tokens: 333,
+  })
+})
```

Also applies to: 58-60
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/services/apis/openai-token-params.test.mjs` around lines 11 - 26,
The test suite currently only uses plain GPT-5 model names so it no longer
exercises the provider-prefixed regex branch in getChatCompletionsTokenParams
(the /(^|\/)gpt-5([.-]|$)/ case); update the models array in the test to include
one slash-prefixed GPT-5 identifier (for example 'openai/gpt-5.2' or similar) so
the branch matching a leading '/' is exercised, and mirror the same change in
the other analogous test(s) that check GPT-5 behavior to ensure both plain and
provider-prefixed forms are covered.
```js
test('modelNameToPresetPart returns preset segment for custom names', () => {
  assert.equal(modelNameToPresetPart('azureOpenAi-my-deploy'), 'azureOpenAi')
  assert.equal(modelNameToPresetPart('chatgptApi5_3Latest-chatgpt'), 'chatgptApi5_3Latest')
})

test('modelNameToCustomPart keeps entire suffix for multi-hyphen custom names', () => {
  assert.equal(modelNameToCustomPart('azureOpenAi-my-eu-1'), 'my-eu-1')
  assert.equal(modelNameToCustomPart('chatgptApi5_3Latest-blue-green'), 'blue-green')
```
Use the real Azure OpenAI stored model-name shape in these helper tests.
Earlier in this file, apiModeToModelName for always-custom Azure entries is asserted as azureOpenAiApiModelKeys-<deployment>, but these new helper tests use azureOpenAi-.... That means they are not covering the actual round-trip format the app persists.
Suggested update
```diff
 test('modelNameToPresetPart returns preset segment for custom names', () => {
-  assert.equal(modelNameToPresetPart('azureOpenAi-my-deploy'), 'azureOpenAi')
+  assert.equal(
+    modelNameToPresetPart('azureOpenAiApiModelKeys-my-deploy'),
+    'azureOpenAiApiModelKeys',
+  )
   assert.equal(modelNameToPresetPart('chatgptApi5_3Latest-chatgpt'), 'chatgptApi5_3Latest')
 })

 test('modelNameToCustomPart keeps entire suffix for multi-hyphen custom names', () => {
-  assert.equal(modelNameToCustomPart('azureOpenAi-my-eu-1'), 'my-eu-1')
+  assert.equal(modelNameToCustomPart('azureOpenAiApiModelKeys-my-eu-1'), 'my-eu-1')
   assert.equal(modelNameToCustomPart('chatgptApi5_3Latest-blue-green'), 'blue-green')
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/utils/model-name-convert.test.mjs` around lines 152 - 159, The
tests use the wrong Azure OpenAI model-name prefix; update the test cases so
they match the actual persisted shape (use the same prefix used by
apiModeToModelName for always-custom Azure entries). Replace occurrences of
'azureOpenAi-...' in the two assertions for modelNameToPresetPart and
modelNameToCustomPart with the real shape (the prefix used by
apiModeToModelName, e.g. azureOpenAiApiModelKeys-<deployment>) so the helper
tests exercise the true round-trip format for modelNameToPresetPart and
modelNameToCustomPart.
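For reference, the first-hyphen split contract these helpers are expected to satisfy can be sketched as follows (an assumed contract inferred from the test expectations above, not the project's actual implementation):

```javascript
// Assumed contract: a stored model name is '<presetPart>-<customPart>', and only
// the first hyphen separates the two, so the custom part may itself contain hyphens.
function modelNameToPresetPartSketch(modelName) {
  const i = modelName.indexOf('-')
  return i === -1 ? modelName : modelName.slice(0, i)
}

function modelNameToCustomPartSketch(modelName) {
  const i = modelName.indexOf('-')
  return i === -1 ? '' : modelName.slice(i + 1)
}
```

With this split, `'azureOpenAiApiModelKeys-my-eu-1'` yields preset `'azureOpenAiApiModelKeys'` and custom `'my-eu-1'`, matching the round-trip shape the review asks the tests to exercise.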