feat: add Claude Opus 4.6 and GPT-5.3-Codex model support #757
Conversation
- Upgraded `@openai/codex-sdk` from version 0.77.0 to 0.98.0 in package-lock.json and package.json.
- Introduced new model 'GPT-5.3-Codex' with enhanced capabilities in codex-models.ts and related files.
- Updated descriptions for existing models to reflect their latest features and improvements.
- Adjusted Codex model configuration and display to include the new model and its attributes.

These changes enhance the Codex model offerings and ensure compatibility with the latest SDK version.
…upport

- Changed model identifier from `claude-opus-4-5-20251101` to `claude-opus-4-6` across various files, including documentation and code references.
- Updated the SDK to support adaptive thinking for Opus 4.6, allowing the model to determine its own reasoning depth.
- Enhanced the thinking level options to include 'adaptive' and adjusted related components to reflect this change.
- Updated tests to ensure compatibility with the new model and its features.

These changes improve the model's capabilities and user experience by leveraging adaptive reasoning.
📝 Walkthrough

Upgrades the Claude Opus alias to 4.6, adds an "adaptive" thinking level (omitting explicit token budgets for adaptive), introduces codex-gpt-5.3-codex, updates SDK deps, augments Windows CLI path detection for NVM symlinks, and updates UI/tests/docs to reflect these changes.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User
    participant UI as UI (Selector)
    participant Types as Types (libs/types)
    participant Options as SDK Options
    participant Provider as Claude Provider
    participant API as Claude API
    User->>UI: Select model (e.g., Opus 4.6)
    UI->>Types: getThinkingLevelsForModel(model)
    Types-->>UI: ['none','adaptive'] or full levels
    UI->>User: Show available thinking levels
    User->>UI: Choose 'adaptive'
    UI->>Options: buildThinkingOptions(level='adaptive')
    Options->>Options: detect 'adaptive' -> omit maxThinkingTokens
    Options-->>Provider: {} (no maxThinkingTokens)
    Provider->>API: Send request (model: claude-opus-4-6, no maxThinkingTokens)
    API-->>Provider: Response (model-managed reasoning)
    Provider-->>UI: Deliver result
```
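The branch the diagram highlights is the 'adaptive' case, where the options builder deliberately omits `maxThinkingTokens`. A minimal sketch of that logic, using the `buildThinkingOptions` name from the diagram and purely illustrative token budgets for the fixed levels (the real values live in the SDK options code):

```ts
type ThinkingLevel = 'none' | 'low' | 'medium' | 'high' | 'adaptive';

function buildThinkingOptions(level: ThinkingLevel): { maxThinkingTokens?: number } {
  // 'adaptive' and 'none' send no budget; for adaptive the model manages
  // its own reasoning depth, per the walkthrough above.
  if (level === 'adaptive' || level === 'none') return {};
  // Hypothetical budgets for illustration only.
  const budgets: Record<'low' | 'medium' | 'high', number> = {
    low: 4_000,
    medium: 10_000,
    high: 32_000,
  };
  return { maxThinkingTokens: budgets[level] };
}
```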
Estimated Code Review Effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly Related PRs
Suggested Labels
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Summary of Changes

Hello @Shironex, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces significant advancements in the platform's AI model capabilities by integrating the newest Claude Opus 4.6 and GPT-5.3-Codex models. It brings sophisticated features such as adaptive thinking for Claude Opus, allowing for more intelligent and efficient reasoning processes. The changes also encompass essential SDK updates and user interface enhancements to streamline model selection, ensuring the system leverages the most current and powerful AI technologies available.

Highlights
Changelog
Activity
Code Review
This pull request introduces support for the new Claude Opus 4.6 and GPT-5.3-Codex models, along with updates to the corresponding SDKs. The changes are comprehensive, covering model maps, UI components, documentation, and tests to integrate the new models and their features, such as adaptive thinking for Opus 4.6. Overall, the implementation is solid. I've identified a couple of minor opportunities for improvement related to code duplication and ensuring consistency in model defaults.
```ts
const nvmSymlink = process.env.NVM_SYMLINK;
const paths = [
  path.join(os.homedir(), '.local', 'bin', 'claude.exe'),
  path.join(appData, 'npm', 'claude.cmd'),
  path.join(appData, 'npm', 'claude'),
  path.join(appData, '.npm-global', 'bin', 'claude.cmd'),
  path.join(appData, '.npm-global', 'bin', 'claude'),
];
// nvm4w (NVM for Windows) symlink path
if (nvmSymlink) {
  paths.push(path.join(nvmSymlink, 'claude.cmd'), path.join(nvmSymlink, 'claude'));
}
return paths;
```
```ts
supportsVision: true,
supportsTools: true,
tier: 'premium' as const,
default: true,
```
This correctly sets `gpt53Codex` as the new default model. However, there's a related constant in `libs/types/src/model.ts` that appears to have been missed: `DEFAULT_MODELS.codex` is still set to `CODEX_MODEL_MAP.gpt52Codex`. Please update it to `CODEX_MODEL_MAP.gpt53Codex` for consistency across the codebase.
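A sketch of the one-line follow-up this implies, assuming the shape of `DEFAULT_MODELS` in `libs/types/src/model.ts`:

```ts
// libs/types/src/model.ts (hypothetical excerpt)
export const DEFAULT_MODELS = {
  // ...other providers unchanged...
  codex: CODEX_MODEL_MAP.gpt53Codex, // was: CODEX_MODEL_MAP.gpt52Codex
};
```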
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
apps/server/package.json (1)
27-38: ⚠️ Potential issue | 🔴 Critical

Fix critical version discrepancies and missing SDK resolutions.
The specified versions have serious issues:
- `@anthropic-ai/claude-agent-sdk` declares `0.2.32` in package.json, but this version does not exist on npm (latest public is `0.2.29`). The lockfile shows `0.1.69` with specifier `^0.1.61`, a significant downgrade from the declared version.
- `@openai/codex-sdk` is not resolved in the lockfile, indicating a missing or unresolved dependency.

These mismatches will cause build failures and inconsistent environments. Verify the correct versions and ensure package.json and lockfile are synchronized.
apps/ui/src/lib/agent-context-parser.ts (1)
37-47: ⚠️ Potential issue | 🟡 Minor

Add display names for GPT‑5.3 Codex models.
With the new `codex-gpt-5.3-codex` entry, this formatter will fall back to generic labeling. Adding explicit cases keeps UI display consistent.

✅ Suggested addition
```diff
 // Codex/GPT models - specific formatting
+if (model === 'codex-gpt-5.3-codex') return 'GPT-5.3 Codex';
+if (model === 'codex-gpt-5.3') return 'GPT-5.3';
 if (model === 'codex-gpt-5.2-codex') return 'GPT-5.2 Codex';
 if (model === 'codex-gpt-5.2') return 'GPT-5.2';
```
🤖 Fix all issues with AI agents
In `@apps/ui/src/components/views/board-view/shared/thinking-level-selector.tsx`:
- Around line 8-23: The component calls getThinkingLevelsForModel with the raw
model prop (in ThinkingLevelSelector) which can be an alias like "opus" and
causes adaptive detection to fail; normalize the model string first (using
resolveModelString from model-config.ts or `@automaker/model-resolver`) and pass
the resolved canonical id to getThinkingLevelsForModel in all places where model
is used (e.g., the initial levels = ... call and any other calls inside
ThinkingLevelSelector), ensuring you default to empty string when
resolveModelString returns undefined.
In `@libs/types/src/codex-models.ts`:
- Around line 33-39: The resolver is passing the internal alias
"codex-gpt-5.3-codex" through to the Codex SDK; update the model-resolver logic
in libs/model-resolver/src/resolver.ts to strip the "codex-" provider prefix
before calling the Codex SDK by using the existing stripProviderPrefix utility
(same approach used for other prefixed providers). Locate the Codex branch in
the resolver (where Codex models are detected — around the code that imports
stripProviderPrefix and handles provider-specific mapping) and ensure you call
stripProviderPrefix(modelId) (or an equivalent helper) to produce
"gpt-5.3-codex" and pass that to the SDK call (e.g., wherever the Codex API
parameter or variable is set), leaving internal IDs unchanged elsewhere.
🧹 Nitpick comments (5)
apps/ui/docs/AGENT_ARCHITECTURE.md (1)
200-215: Consider documenting adaptive thinking support.

While the model update is correct, Opus 4.6 introduces adaptive thinking as a notable new feature (per PR objectives). Consider adding a brief note or example in this configuration section to help developers leverage this capability.
📝 Example documentation addition
You could add a comment or additional section after line 215:
```diff
   sandbox: {
     enabled: true, // Sandboxed bash execution
     autoAllowBashIfSandboxed: true
   }
 }
+
+// Note: Claude Opus 4.6 supports adaptive thinking mode, which automatically
+// adjusts the thinking budget based on task complexity (max 128K output tokens).
```

libs/platform/src/system-paths.ts (1)
1141-1163: Consistent implementation; consider extracting a helper to reduce duplication.

The NVM symlink logic is now duplicated across three functions (`getClaudeCliPaths`, `getCodexCliPaths`, `getOpenCodeCliPaths`). Consider extracting a small helper to DRY this up.

♻️ Optional: Extract NVM symlink helper
Add a helper function near the other private helpers:
```ts
/**
 * Get NVM for Windows symlink paths for a given CLI tool
 */
function getNvmSymlinkPaths(baseName: string): string[] {
  const nvmSymlink = process.env.NVM_SYMLINK;
  if (!nvmSymlink) return [];
  return [path.join(nvmSymlink, `${baseName}.cmd`), path.join(nvmSymlink, baseName)];
}
```

Then simplify each function:
```diff
 if (isWindows) {
   const appData = process.env.APPDATA || path.join(homeDir, 'AppData', 'Roaming');
   const localAppData = process.env.LOCALAPPDATA || path.join(homeDir, 'AppData', 'Local');
-  const nvmSymlink = process.env.NVM_SYMLINK;
-  const paths = [
+  return [
     // ... existing paths ...
-  ];
-  // nvm4w (NVM for Windows) symlink path
-  if (nvmSymlink) {
-    paths.push(path.join(nvmSymlink, 'opencode.cmd'), path.join(nvmSymlink, 'opencode'));
-  }
-  return paths;
+    // nvm4w (NVM for Windows) symlink path
+    ...getNvmSymlinkPaths('opencode'),
+  ];
 }
```

apps/ui/src/components/views/settings-view/providers/codex-model-configuration.tsx (1)
23-60: Derive reasoning-effort support from model metadata to avoid drift.
The hardcoded `reasoningModels` list duplicates model capability data and will need manual updates whenever new models are added.

♻️ Suggested refactor (metadata-driven)
```diff
 interface CodexModelInfo {
   id: CodexModelId;
   label: string;
   description: string;
+  supportsReasoningEffort?: boolean;
 }

 const CODEX_MODEL_INFO: Record<CodexModelId, CodexModelInfo> = {
   'codex-gpt-5.3-codex': {
     id: 'codex-gpt-5.3-codex',
     label: 'GPT-5.3-Codex',
     description: 'Latest frontier agentic coding model',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.2-codex': {
     id: 'codex-gpt-5.2-codex',
     label: 'GPT-5.2-Codex',
     description: 'Frontier agentic coding model',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.1-codex-max': {
     id: 'codex-gpt-5.1-codex-max',
     label: 'GPT-5.1-Codex-Max',
     description: 'Codex-optimized flagship for deep and fast reasoning',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.1-codex-mini': {
     id: 'codex-gpt-5.1-codex-mini',
     label: 'GPT-5.1-Codex-Mini',
     description: 'Optimized for codex. Cheaper, faster, but less capable',
+    supportsReasoningEffort: false,
   },
   'codex-gpt-5.2': {
     id: 'codex-gpt-5.2',
     label: 'GPT-5.2',
     description: 'Latest frontier model with improvements across knowledge, reasoning and coding',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.1': {
     id: 'codex-gpt-5.1',
     label: 'GPT-5.1',
     description: 'Great for coding and agentic tasks across domains',
+    supportsReasoningEffort: true,
   },
 };

-function supportsReasoningEffort(modelId: string): boolean {
-  const reasoningModels = [
-    'codex-gpt-5.3-codex',
-    'codex-gpt-5.2-codex',
-    'codex-gpt-5.1-codex-max',
-    'codex-gpt-5.2',
-    'codex-gpt-5.1',
-  ];
-  return reasoningModels.includes(modelId);
+function supportsReasoningEffort(modelId: CodexModelId): boolean {
+  return CODEX_MODEL_INFO[modelId]?.supportsReasoningEffort ?? false;
 }
```

Based on learnings: When implementing AI model capability flags (like thinking/reasoning support), always add per-model flags to model definitions and check the specific model being used, rather than assuming all models from a provider have the same capabilities.
Also applies to: 166-174
apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx (1)
266-281: Consider using the `isAdaptiveThinkingModel()` helper for consistency and maintainability.

The current inline checks duplicate the same logic as the shared helper function. Using `isAdaptiveThinkingModel(entry.model)` consolidates the adaptive model detection and aligns with how `getThinkingLevelsForModel()` also uses this helper, making it a single source of truth if the adaptation rules ever change.

♻️ Suggested update
```diff
-import { supportsReasoningEffort } from '@automaker/types';
+import { supportsReasoningEffort, isAdaptiveThinkingModel } from '@automaker/types';
```

```diff
-  const isNewModelAdaptive =
-    entry.model === 'claude-opus' ||
-    (typeof entry.model === 'string' && entry.model.includes('opus-4-6'));
+  const isNewModelAdaptive = isAdaptiveThinkingModel(entry.model);
```
49-51: Reuse the computed `levels` for the helper text.

This avoids a second `getThinkingLevelsForModel` call and keeps the messaging in sync with the rendered options.

♻️ Suggested diff
```diff
-  const levels = model ? getThinkingLevelsForModel(model) : getThinkingLevelsForModel('');
+  const levels = getThinkingLevelsForModel(model ?? '');
+  const hasAdaptive = levels.includes('adaptive');
@@
-  {model && getThinkingLevelsForModel(model).includes('adaptive')
+  {model && hasAdaptive
     ? 'Adaptive thinking lets the model decide how much reasoning to use.'
     : 'Higher levels give more time to reason through complex problems.'}
```
```tsx
interface ThinkingLevelSelectorProps {
  selectedLevel: ThinkingLevel;
  onLevelSelect: (level: ThinkingLevel) => void;
  testIdPrefix?: string;
  /** Optional model ID to filter available thinking levels (e.g., Opus 4.6 only shows None/Adaptive) */
  model?: string;
}

export function ThinkingLevelSelector({
  selectedLevel,
  onLevelSelect,
  testIdPrefix = 'thinking-level',
  model,
}: ThinkingLevelSelectorProps) {
  const levels = model ? getThinkingLevelsForModel(model) : getThinkingLevelsForModel('');
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Search for ThinkingLevelSelector usage to see what shape model prop has at call sites
rg -A 3 "ThinkingLevelSelector" --type=tsx
```

Repository: AutoMaker-Org/automaker

Length of output: 94
🏁 Script executed:
```shell
# Examine getThinkingLevelsForModel definition and what it expects
rg -B 2 -A 5 "getThinkingLevelsForModel" --type=ts
```

Repository: AutoMaker-Org/automaker

Length of output: 8446
🏁 Script executed:
```shell
# Check how models are typically passed and if resolveModelString is used
rg "resolveModelString" --type=ts --type=tsx
```

Repository: AutoMaker-Org/automaker

Length of output: 94
🏁 Script executed:
```shell
# Look at the isAdaptiveThinkingModel and related model checking logic
rg -B 2 -A 8 "isAdaptiveThinkingModel" --type=ts
```

Repository: AutoMaker-Org/automaker

Length of output: 2035
🏁 Script executed:
```shell
rg -B 2 -A 2 "<ThinkingLevelSelector" --type ts
```

Repository: AutoMaker-Org/automaker

Length of output: 49
🏁 Script executed:
```shell
# Check resolveModelString availability and usage
rg -A 3 "from.*model-resolver" --type ts
```

Repository: AutoMaker-Org/automaker

Length of output: 6929
🏁 Script executed:
```shell
# Check the model-constants file referenced in thinking-level-selector
cat -n apps/ui/src/components/views/board-view/shared/model-constants.ts
```

Repository: AutoMaker-Org/automaker

Length of output: 7384
🏁 Script executed:
```shell
cat -n apps/ui/src/components/views/board-view/shared/thinking-level-selector.tsx
```

Repository: AutoMaker-Org/automaker

Length of output: 2466
Normalize the model parameter before passing it to `getThinkingLevelsForModel`.

The component calls `getThinkingLevelsForModel(model)` three times without normalization. If `model` receives an alias like `'opus'` instead of the full ID `'claude-opus'`, the adaptive thinking detection fails because `isAdaptiveThinkingModel` checks for the exact match `model === 'claude-opus'`. Use `resolveModelString()` from `@automaker/model-resolver` (available via model-config.ts) or a canonical lookup to normalize the model ID before use.
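A minimal sketch of the suggested normalization, assuming `resolveModelString` returns the canonical ID or `undefined` for unknown aliases:

```ts
import { resolveModelString } from '@automaker/model-resolver';

// Resolve aliases like 'opus' to their canonical ID before any capability checks,
// defaulting to '' when the model prop is absent or unresolvable.
const canonicalModel = (model && resolveModelString(model)) || '';
const levels = getThinkingLevelsForModel(canonicalModel);
```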
🤖 Prompt for AI Agents
In `@apps/ui/src/components/views/board-view/shared/thinking-level-selector.tsx`
around lines 8 - 23, The component calls getThinkingLevelsForModel with the raw
model prop (in ThinkingLevelSelector) which can be an alias like "opus" and
causes adaptive detection to fail; normalize the model string first (using
resolveModelString from model-config.ts or `@automaker/model-resolver`) and pass
the resolved canonical id to getThinkingLevelsForModel in all places where model
is used (e.g., the initial levels = ... call and any other calls inside
ThinkingLevelSelector), ensuring you default to empty string when
resolveModelString returns undefined.
```ts
'codex-gpt-5.3-codex': {
  id: 'codex-gpt-5.3-codex',
  label: 'GPT-5.3-Codex',
  description: 'Latest frontier agentic coding model',
  hasThinking: true,
  supportsVision: true,
},
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n libs/types/src/codex-models.ts | head -50
```

Repository: AutoMaker-Org/automaker

Length of output: 1814
🌐 Web query:
OpenAI Codex SDK 0.98.0 GPT-5.3-Codex model identifier documentation
💡 Result:
For Codex CLI + Codex SDK (v0.98.0), the GPT‑5.3‑Codex model identifier is:
`gpt-5.3-codex` [1]
Where it’s documented / how to use it:
- Codex changelog (entry for Codex CLI 0.98.0, dated 2026‑02‑05) shows switching with `codex --model gpt-5.3-codex` [1].
- OpenAI's release post notes GPT‑5.3‑Codex is available across Codex surfaces (app/CLI/IDE/web) and that API access is "coming soon" [2]. For API-key workflows, the changelog says to keep using `gpt-5.2-codex` while API support rolls out [1].
Sources: [1], [2]
🏁 Script executed:
```shell
# Search for the model resolver function mentioned in guidelines
fd -t f -e ts -e tsx -e js -e jsx | xargs rg -l "resolveModelString|model.*resolver" | head -20
```

Repository: AutoMaker-Org/automaker

Length of output: 998
🏁 Script executed:
```shell
# Search for usages of the model ID in the codebase
rg "codex-gpt-5\.3-codex" --type ts --type tsx --type js --type jsx -B 3 -A 3
```

Repository: AutoMaker-Org/automaker

Length of output: 94
🏁 Script executed:
```shell
# Look for @automaker/model-resolver imports and usage
rg "@automaker/model-resolver|resolveModelString" --type ts --type tsx --type js --type jsx -A 2
```

Repository: AutoMaker-Org/automaker

Length of output: 94
🏁 Script executed:
```shell
# Read the model resolver implementation
cat libs/model-resolver/src/resolver.ts
```

Repository: AutoMaker-Org/automaker

Length of output: 9207
🏁 Script executed:
```shell
# Search for codex-gpt-5.3-codex usage without tsx filter
rg "codex-gpt-5\.3-codex" -B 3 -A 3
```

Repository: AutoMaker-Org/automaker

Length of output: 3214
🏁 Script executed:
```shell
# Check the model resolver tests
cat libs/model-resolver/tests/resolver.test.ts
```

Repository: AutoMaker-Org/automaker

Length of output: 20075
Fix the model resolver to strip the `codex-` prefix before passing the model to the Codex SDK API.

The model ID `codex-gpt-5.3-codex` is correctly defined as an internal alias, but the resolver passes it unchanged to the Codex SDK. The actual SDK specification expects `gpt-5.3-codex` without the prefix. Update libs/model-resolver/src/resolver.ts to strip the `codex-` prefix for Codex models before API calls, similar to how other prefixed models are handled. The resolver currently imports `stripProviderPrefix` but doesn't apply it to Codex models (lines 83-86).
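A sketch of the kind of change this points at, assuming `stripProviderPrefix` removes a known provider prefix such as `codex-` (the real signature lives in the resolver's existing imports):

```ts
// libs/model-resolver/src/resolver.ts (hypothetical excerpt)
// Internal IDs keep the 'codex-' prefix; only the SDK should see the bare ID.
const sdkModelId = stripProviderPrefix(modelId); // 'codex-gpt-5.3-codex' -> 'gpt-5.3-codex'
// ...pass sdkModelId wherever the Codex SDK model parameter is set...
```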
🤖 Prompt for AI Agents
In `@libs/types/src/codex-models.ts` around lines 33 - 39, The resolver is passing
the internal alias "codex-gpt-5.3-codex" through to the Codex SDK; update the
model-resolver logic in libs/model-resolver/src/resolver.ts to strip the
"codex-" provider prefix before calling the Codex SDK by using the existing
stripProviderPrefix utility (same approach used for other prefixed providers).
Locate the Codex branch in the resolver (where Codex models are detected —
around the code that imports stripProviderPrefix and handles provider-specific
mapping) and ensure you call stripProviderPrefix(modelId) (or an equivalent
helper) to produce "gpt-5.3-codex" and pass that to the SDK call (e.g., wherever
the Codex API parameter or variable is set), leaving internal IDs unchanged
elsewhere.
- Added `isAdaptiveThinkingModel` utility to improve model identification logic in the AddFeatureDialog.
- Updated the ThinkingLevelSelector to conditionally display information based on available thinking levels.
- Enhanced model name formatting in agent-context-parser to include 'GPT-5.3 Codex' for better clarity.

These changes improve the user experience by refining model handling and UI feedback related to adaptive thinking capabilities.
- Introduced a new event type 'dev-server:url-detected' to enhance event handling for the development server.
- This addition allows for better tracking and response to URL detection during server operations.

These changes improve the event system's capability to manage server-related events effectively.
Summary
- Added `claude-opus-4-6` with 128K max output tokens and adaptive thinking support
- Upgraded `@openai/codex-sdk` to 0.98.0
- Upgraded `@anthropic-ai/claude-agent-sdk` from 0.1.76 to 0.2.32

Changes
Claude Opus 4.6 Integration
- Updated model maps (`CLAUDE_CANONICAL_MAP`, `CLAUDE_MODEL_MAP`, `DEFAULT_MODELS`) to resolve `opus` → `claude-opus-4-6`
- Added `'adaptive'` thinking level for Opus 4.6 (model decides its own reasoning depth)
- Added `isAdaptiveThinkingModel()` and `getThinkingLevelsForModel()` helpers to `@automaker/types` (sketched below)
- … Opus 4.6) and available models list (128K output)
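A hedged sketch of those helpers (exact signatures in `@automaker/types` may differ; the alias checks mirror the `isNewModelAdaptive` logic quoted in the review comments above):

```ts
type ThinkingLevel = 'none' | 'low' | 'medium' | 'high' | 'adaptive';

export function isAdaptiveThinkingModel(model: string): boolean {
  // Opus 4.6 manages its own reasoning depth, so it gets the adaptive level.
  return model === 'claude-opus' || model.includes('opus-4-6');
}

export function getThinkingLevelsForModel(model: string): ThinkingLevel[] {
  return isAdaptiveThinkingModel(model)
    ? ['none', 'adaptive']
    : ['none', 'low', 'medium', 'high'];
}
```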
GPT-5.3-Codex Model

- Added `gpt53Codex: 'codex-gpt-5.3-codex'` to `CODEX_MODEL_MAP`
- Upgraded `@openai/codex-sdk` from 0.97.0 to 0.98.0

SDK Updates
- `@anthropic-ai/claude-agent-sdk` 0.1.76 → 0.2.32
- `@openai/codex-sdk` 0.97.0 → 0.98.0
- `outputFormat` property confirmed still valid (no deprecation)

Test plan
- … `claude-opus-4-6` model
- … `claude-opus-4-5-20251101` in codebase

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Updates
Chores