
feat: add Claude Opus 4.6 and GPT-5.3-Codex model support #757

Open
Shironex wants to merge 4 commits into v0.14.0rc from feat/new-claude-and-codex-models

Conversation

Collaborator

@Shironex Shironex commented Feb 5, 2026

Summary

  • Claude Opus 4.6: Update default Opus model to claude-opus-4-6 with 128K max output tokens and adaptive thinking support
  • GPT-5.3-Codex: Add latest Codex frontier model and update @openai/codex-sdk to 0.98.0
  • Claude Agent SDK: Update @anthropic-ai/claude-agent-sdk from 0.1.76 to 0.2.32

Changes

Claude Opus 4.6 Integration

  • Updated all 3 model maps (CLAUDE_CANONICAL_MAP, CLAUDE_MODEL_MAP, DEFAULT_MODELS) to resolve opus → claude-opus-4-6
  • Added 'adaptive' thinking level for Opus 4.6 (model decides its own reasoning depth)
  • Added isAdaptiveThinkingModel() and getThinkingLevelsForModel() helpers to @automaker/types (sketched after this list)
  • Updated thinking level UI: Opus 4.6 shows only "None" / "Adaptive", other models keep full 5-level range
  • Auto-normalizes thinking level on model switch (to/from Opus 4.6)
  • Updated display names (Opus 4.6) and available models list (128K output)
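A minimal TypeScript sketch of how these two helpers could fit together. The detection logic mirrors the inline check used elsewhere in this PR; the intermediate level names ('low'/'medium'/'high'/'max') and the function bodies are assumptions, not the shipped implementation:

type ThinkingLevel = 'none' | 'low' | 'medium' | 'high' | 'max' | 'adaptive';

// Assumption: Opus 4.6 (or its 'claude-opus' alias) is the only adaptive model.
function isAdaptiveThinkingModel(model: string): boolean {
  return model === 'claude-opus' || model.includes('opus-4-6');
}

// Adaptive models expose only None/Adaptive; all others keep the full range.
function getThinkingLevelsForModel(model: string): ThinkingLevel[] {
  return isAdaptiveThinkingModel(model)
    ? ['none', 'adaptive']
    : ['none', 'low', 'medium', 'high', 'max'];
}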

GPT-5.3-Codex Model

  • Added gpt53Codex: 'codex-gpt-5.3-codex' to CODEX_MODEL_MAP (see the sketch after this list)
  • Added to UI model options, reasoning effort support, and available models list
  • Updated @openai/codex-sdk from 0.97.0 to 0.98.0
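A sketch of the corresponding mapping change in libs/types/src/model.ts, assuming CODEX_MODEL_MAP is a plain alias-to-id record (surrounding entries abbreviated):

const CODEX_MODEL_MAP = {
  // ...existing entries, e.g. gpt52Codex: 'codex-gpt-5.2-codex'...
  gpt53Codex: 'codex-gpt-5.3-codex', // new in this PR
} as const;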

SDK Updates

  • @anthropic-ai/claude-agent-sdk 0.1.76 → 0.2.32
  • @openai/codex-sdk 0.97.0 → 0.98.0
  • outputFormat property confirmed still valid (no deprecation)

Test plan

  • All 547 package tests passing
  • All 1418 server tests passing (including new adaptive thinking tests)
  • 0 lint errors
  • Successfully executed a feature with claude-opus-4-6 model
  • No remaining references to claude-opus-4-5-20251101 in codebase

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Adaptive thinking mode available for Opus 4.6 so models can auto-adjust reasoning.
    • GPT-5.3‑Codex added as a new reasoning-capable coding model.
  • Updates

    • Opus upgraded to 4.6 with much larger output capacity (up to 128K tokens).
    • Thinking-level UI and docs now show model-specific options including "Adaptive" and refreshed model descriptions/defaults.
  • Chores

    • Improved Windows CLI path detection to handle NVM symlinks.

- Upgraded @openai/codex-sdk from version 0.97.0 to 0.98.0 in package-lock.json and package.json.
- Introduced new model 'GPT-5.3-Codex' with enhanced capabilities in codex-models.ts and related files.
- Updated descriptions for existing models to reflect their latest features and improvements.
- Adjusted Codex model configuration and display to include the new model and its attributes.

These changes enhance the Codex model offerings and ensure compatibility with the latest SDK version.
…upport

- Changed model identifier from `claude-opus-4-5-20251101` to `claude-opus-4-6` across various files, including documentation and code references.
- Updated the SDK to support adaptive thinking for Opus 4.6, allowing the model to determine its own reasoning depth.
- Enhanced the thinking level options to include 'adaptive' and adjusted related components to reflect this change.
- Updated tests to ensure compatibility with the new model and its features.

These changes improve the model's capabilities and user experience by leveraging adaptive reasoning.

coderabbitai bot commented Feb 5, 2026

📝 Walkthrough

Upgrades Claude Opus alias to 4.6, adds an "adaptive" thinking level (omitting explicit token budgets for adaptive), introduces codex-gpt-5.3-codex, updates SDK deps, augments Windows CLI path detection for NVM symlinks, and updates UI/tests/docs to reflect these changes.
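A hedged sketch of the token-budget shape this implies. Only the adaptive → undefined mapping (and 'none' behaving the same way) is stated by the PR; the numeric budgets are placeholders:

type ThinkingLevel = 'none' | 'low' | 'medium' | 'high' | 'max' | 'adaptive';

const THINKING_TOKEN_BUDGET: Record<ThinkingLevel, number | undefined> = {
  none: undefined,     // thinking disabled
  low: 4_000,          // placeholder value
  medium: 10_000,      // placeholder value
  high: 32_000,        // placeholder value
  max: 63_999,         // placeholder value
  adaptive: undefined, // omit the budget; the model manages its own depth
};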

Changes

Cohort / File(s) / Summary

Claude Opus 4.6 Migration
Files: CLAUDE.md, docs/llm-shared-packages.md, docs/server/utilities.md, docs/server/providers.md, libs/model-resolver/README.md, libs/types/src/model.ts, apps/server/src/providers/provider-factory.ts, apps/server/src/providers/claude-provider.ts, apps/ui/src/lib/agent-context-parser.ts, apps/ui/docs/AGENT_ARCHITECTURE.md, apps/server/tests/unit/.../model-resolver.test.ts, apps/server/tests/unit/providers/claude-provider.test.ts, apps/server/tests/unit/providers/provider-factory.test.ts, libs/model-resolver/tests/resolver.test.ts
Summary: Replaced claude-opus-4-5-20251101 with claude-opus-4-6 across mappings, docs, tests; updated Opus metadata (maxOutputTokens changed).

Adaptive Thinking Level Support
Files: libs/types/src/settings.ts, libs/types/src/index.ts, apps/server/src/lib/sdk-options.ts, apps/server/src/providers/claude-provider.ts, apps/ui/src/components/views/board-view/shared/model-constants.ts, apps/ui/src/components/views/board-view/shared/thinking-level-selector.tsx, apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx, apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx, apps/server/tests/unit/lib/sdk-options.test.ts
Summary: Added the adaptive thinking level, updated the ThinkingLevel type and token budget map (adaptive → undefined), added isAdaptiveThinkingModel() and getThinkingLevelsForModel(), SDK now omits maxThinkingTokens for adaptive, and UI components fetch and normalize model-specific thinking levels.

Codex Model Additions & Updates
Files: libs/types/src/codex-models.ts, libs/types/src/model-display.ts, libs/types/src/model.ts, apps/server/src/providers/codex-models.ts, apps/ui/src/components/views/settings-view/providers/codex-model-configuration.tsx, libs/types/src/index.ts, apps/server/package.json
Summary: Added codex-gpt-5.3-codex (GPT-5.3-Codex), updated Codex descriptions and reasoning flags, adjusted CODEX/REASONING mappings; bumped @anthropic-ai/claude-agent-sdk and @openai/codex-sdk versions.

System Path Detection
Files: libs/platform/src/system-paths.ts
Summary: Augmented Windows CLI path detection to include NVM for Windows symlink targets when NVM_SYMLINK is set (appends symlinked claude/codex/opencode paths).

UI & Tests Adjustments
Files: apps/ui/src/components/.../thinking-level-selector.tsx, apps/ui/src/components/.../phase-model-selector.tsx, apps/ui/src/lib/agent-context-parser.ts, various tests under apps/server/tests and libs/model-resolver/tests
Summary: UI components now source model-specific thinking levels; thinking-level normalization added when switching models; multiple tests updated to expect new model identifiers and adaptive behavior.

Sequence Diagram

sequenceDiagram
    participant User as User
    participant UI as UI (Selector)
    participant Types as Types (libs/types)
    participant Options as SDK Options
    participant Provider as Claude Provider
    participant API as Claude API

    User->>UI: Select model (e.g., Opus 4.6)
    UI->>Types: getThinkingLevelsForModel(model)
    Types-->>UI: ['none','adaptive'] or full levels
    UI->>User: Show available thinking levels

    User->>UI: Choose 'adaptive'
    UI->>Options: buildThinkingOptions(level='adaptive')
    Options->>Options: detect 'adaptive' -> omit maxThinkingTokens
    Options-->>Provider: {} (no maxThinkingTokens)

    Provider->>API: Send request (model: claude-opus-4-6, no maxThinkingTokens)
    API-->>Provider: Response (model-managed reasoning)
    Provider-->>UI: Deliver result

Estimated Code Review Effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Suggested Labels

Enhancement, scope: ui, Tests

Poem

🐇 I hopped from 4.5 into 4.6 bright,
Adaptive thoughts now let the model write,
Codex 5.3 pops in to help code play,
Symlinks found paths to guide the way,
A tiny rabbit cheers this upgrade night.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 69.23%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title accurately summarizes the main changes: adding Claude Opus 4.6 and GPT-5.3-Codex model support, the primary features introduced across all modified files.


@gemini-code-assist
Contributor

Summary of Changes

Hello @Shironex, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces significant advancements in the platform's AI model capabilities by integrating the newest Claude Opus 4.6 and GPT-5.3-Codex models. It brings sophisticated features such as adaptive thinking for Claude Opus, allowing for more intelligent and efficient reasoning processes. The changes also encompass essential SDK updates and user interface enhancements to streamline model selection, ensuring the system leverages the most current and powerful AI technologies available.

Highlights

  • Claude Opus 4.6 Integration: The default Claude Opus model has been updated to claude-opus-4-6, featuring an increased maximum output token limit of 128K and the introduction of 'adaptive thinking'. This new thinking level allows the model to dynamically determine its own reasoning depth, optimizing performance.
  • GPT-5.3-Codex Model Support: Support for the latest frontier model, GPT-5.3-Codex, has been added. This includes its integration into the UI, reasoning effort capabilities, and updates to the underlying @openai/codex-sdk to version 0.98.0.
  • SDK Updates: The @anthropic-ai/claude-agent-sdk has been upgraded from 0.1.76 to 0.2.32, and the @openai/codex-sdk has been updated from 0.97.0 to 0.98.0, ensuring compatibility with the latest model features and improvements.
  • Dynamic Thinking Level UI/UX: The user interface for selecting thinking levels now dynamically adjusts based on the chosen model. For Claude Opus 4.6, only 'None' and 'Adaptive' options are presented, while other models retain the full range of manual thinking levels. This also includes auto-normalization of thinking levels when switching between adaptive and non-adaptive models.
  • Enhanced CLI Path Resolution for Windows: CLI tool path resolution on Windows has been improved by adding support for NVM for Windows (nvm4w) symlink paths, which enhances the discovery of claude, codex, and opencode executables installed via npm global under NVM.


Changelog
  • CLAUDE.md
    • Updated Claude Opus model reference from claude-opus-4-5-20251101 to claude-opus-4-6.
  • apps/server/package.json
    • Updated @anthropic-ai/claude-agent-sdk to 0.2.32.
    • Updated @openai/codex-sdk to 0.98.0.
  • apps/server/src/lib/sdk-options.ts
    • Modified buildThinkingOptions to handle the 'adaptive' thinking level by omitting maxThinkingTokens, allowing the model to decide its reasoning depth (a sketch follows this changelog).
  • apps/server/src/providers/claude-provider.ts
    • Updated getAvailableModels to reflect Claude Opus 4.6 with 128K output tokens and an adaptive thinking description.
    • Adjusted maxThinkingTokens logic to support adaptive thinking for Opus 4.6.
  • apps/server/src/providers/codex-models.ts
    • Added GPT-5.3-Codex model definition.
    • Updated descriptions for existing Codex models (GPT-5.2-Codex, GPT-5.1-Codex-Max, GPT-5.1-Codex-Mini, GPT-5.2).
  • apps/server/src/providers/provider-factory.ts
    • Updated model ID reference in JSDoc for getProviderForModel to claude-opus-4-6.
  • apps/server/tests/unit/lib/model-resolver.test.ts
    • Updated expected model string for 'opus' alias to claude-opus-4-6 in resolver tests.
  • apps/server/tests/unit/lib/sdk-options.test.ts
    • Added new test cases for adaptive thinking, ensuring maxThinkingTokens is undefined for 'adaptive' and 'none' levels.
  • apps/server/tests/unit/providers/claude-provider.test.ts
    • Updated all test cases to use claude-opus-4-6 instead of the previous Opus model ID.
  • apps/server/tests/unit/providers/provider-factory.test.ts
    • Updated test cases to use claude-opus-4-6 for ClaudeProvider instantiation.
  • apps/ui/docs/AGENT_ARCHITECTURE.md
    • Updated the example model ID to claude-opus-4-6.
  • apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx
    • Implemented logic in handleModelChange to normalize thinking levels when switching between adaptive and non-adaptive models.
  • apps/ui/src/components/views/board-view/shared/model-constants.ts
    • Added 'adaptive' to THINKING_LEVELS and THINKING_LEVEL_LABELS.
  • apps/ui/src/components/views/board-view/shared/thinking-level-selector.tsx
    • Modified to use getThinkingLevelsForModel to dynamically display available thinking levels based on the selected model.
    • Updated descriptive text to explain adaptive thinking.
  • apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx
    • Integrated getThinkingLevelsForModel to filter thinking level options.
    • Added description for 'adaptive' thinking in the UI.
  • apps/ui/src/components/views/settings-view/providers/codex-model-configuration.tsx
    • Added GPT-5.3-Codex to CODEX_MODEL_INFO.
    • Included GPT-5.3-Codex in the supportsReasoningEffort function.
    • Updated descriptions for other Codex models.
  • apps/ui/src/lib/agent-context-parser.ts
    • Updated DEFAULT_MODEL to claude-opus-4-6.
    • Added specific formatting for opus-4-6 in formatModelName.
  • docs/llm-shared-packages.md
    • Updated Claude Opus model reference to claude-opus-4-6.
  • docs/server/providers.md
    • Updated Claude Opus model reference in documentation examples to claude-opus-4-6.
  • docs/server/utilities.md
    • Updated Claude Opus model reference in documentation examples for CLAUDE_MODEL_MAP, DEFAULT_MODELS, and resolveModelString outputs to claude-opus-4-6.
  • libs/model-resolver/README.md
    • Updated Claude Opus model reference in examples to claude-opus-4-6.
  • libs/model-resolver/tests/resolver.test.ts
    • Updated expected model string for claude-opus-4-6 in resolvePhaseModel tests.
  • libs/platform/src/system-paths.ts
    • Added NVM for Windows symlink paths to getClaudeCliPaths, getCodexCliPaths, and getOpenCodeCliPaths.
  • libs/types/src/codex-models.ts
    • Added codex-gpt-5.3-codex to CodexModelId type.
    • Added codex-gpt-5.3-codex configuration to CODEX_MODEL_CONFIG_MAP.
  • libs/types/src/index.ts
    • Exported new helper functions: isAdaptiveThinkingModel and getThinkingLevelsForModel.
  • libs/types/src/model-display.ts
    • Added GPT-5.3-Codex to CODEX_MODELS and getModelDisplayName.
    • Added 'adaptive' to THINKING_LEVELS and THINKING_LEVEL_LABELS.
  • libs/types/src/model.ts
    • Updated CLAUDE_CANONICAL_MAP, CLAUDE_MODEL_MAP, and DEFAULT_MODELS to reference claude-opus-4-6.
    • Added gpt53Codex to CODEX_MODEL_MAP and REASONING_CAPABLE_MODELS.
  • libs/types/src/settings.ts
    • Added 'adaptive' to the ThinkingLevel type.
    • Added 'adaptive' with undefined token budget to THINKING_TOKEN_BUDGET.
    • Implemented isAdaptiveThinkingModel and getThinkingLevelsForModel functions to manage adaptive thinking logic.
  • package-lock.json
    • Updated package versions for @anthropic-ai/claude-agent-sdk and @openai/codex-sdk.
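As referenced in the apps/server/src/lib/sdk-options.ts entry above, a minimal sketch of the adaptive handling. The function shape is inferred from the changelog, not copied from the diff, and the import path is an assumed export:

import { THINKING_TOKEN_BUDGET, type ThinkingLevel } from '@automaker/types'; // assumed export

function buildThinkingOptions(level: ThinkingLevel): { maxThinkingTokens?: number } {
  if (level === 'adaptive' || level === 'none') {
    return {}; // omit maxThinkingTokens so the model decides its own reasoning depth
  }
  return { maxThinkingTokens: THINKING_TOKEN_BUDGET[level] };
}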
Activity
  • The pull request introduces support for new AI models and updates existing SDKs.
  • Extensive test coverage is maintained, with all 547 package tests and 1418 server tests passing, including new adaptive thinking tests.
  • No lint errors were introduced during the changes.
  • The changes were generated with Claude Code, indicating AI assistance in the development process.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for the new Claude Opus 4.6 and GPT-5.3-Codex models, along with updates to the corresponding SDKs. The changes are comprehensive, covering model maps, UI components, documentation, and tests to integrate the new models and their features, such as adaptive thinking for Opus 4.6. Overall, the implementation is solid. I've identified a couple of minor opportunities for improvement related to code duplication and ensuring consistency in model defaults.

Comment on lines +57 to +69
const nvmSymlink = process.env.NVM_SYMLINK;
const paths = [
  path.join(os.homedir(), '.local', 'bin', 'claude.exe'),
  path.join(appData, 'npm', 'claude.cmd'),
  path.join(appData, 'npm', 'claude'),
  path.join(appData, '.npm-global', 'bin', 'claude.cmd'),
  path.join(appData, '.npm-global', 'bin', 'claude'),
];
// nvm4w (NVM for Windows) symlink path
if (nvmSymlink) {
  paths.push(path.join(nvmSymlink, 'claude.cmd'), path.join(nvmSymlink, 'claude'));
}
return paths;

medium

The logic for finding CLIs installed via NVM for Windows (nvm4w) by checking NVM_SYMLINK is duplicated in getCodexCliPaths and getOpenCodeCliPaths. To improve maintainability and reduce redundancy, consider extracting this into a helper function that can be reused across these three functions.

supportsVision: true,
supportsTools: true,
tier: 'premium' as const,
default: true,

medium

This correctly sets gpt53Codex as the new default model. However, there's a related constant in libs/types/src/model.ts that appears to have been missed. The DEFAULT_MODELS.codex is still set to CODEX_MODEL_MAP.gpt52Codex. Please update it to CODEX_MODEL_MAP.gpt53Codex for consistency across the codebase.

@Shironex Shironex self-assigned this Feb 5, 2026
@Shironex Shironex added the labels Testers-Requested (Request for others to test an enhancement or bug fix/etc.), Work-In-Progress (Currently being addressed.), and Do Not Merge (Use this label if something should not be merged.) on Feb 5, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
apps/server/package.json (1)

27-38: ⚠️ Potential issue | 🔴 Critical

Fix critical version discrepancies and missing SDK resolutions.

The specified versions have serious issues:

  • @anthropic-ai/claude-agent-sdk declares 0.2.32 in package.json, but this version does not exist on npm (latest public is 0.2.29). The lockfile shows 0.1.69 with specifier ^0.1.61—a significant downgrade from the declared version.
  • @openai/codex-sdk is not resolved in the lockfile, indicating a missing or unresolved dependency.

These mismatches will cause build failures and inconsistent environments. Verify the correct versions and ensure package.json and lockfile are synchronized.

apps/ui/src/lib/agent-context-parser.ts (1)

37-47: ⚠️ Potential issue | 🟡 Minor

Add display names for GPT‑5.3 Codex models.

With the new codex-gpt-5.3-codex entry, this formatter will fall back to generic labeling. Adding explicit cases keeps UI display consistent.

✅ Suggested addition
   // Codex/GPT models - specific formatting
+  if (model === 'codex-gpt-5.3-codex') return 'GPT-5.3 Codex';
+  if (model === 'codex-gpt-5.3') return 'GPT-5.3';
   if (model === 'codex-gpt-5.2-codex') return 'GPT-5.2 Codex';
   if (model === 'codex-gpt-5.2') return 'GPT-5.2';
🤖 Fix all issues with AI agents
In `@apps/ui/src/components/views/board-view/shared/thinking-level-selector.tsx`:
- Around line 8-23: The component calls getThinkingLevelsForModel with the raw
model prop (in ThinkingLevelSelector) which can be an alias like "opus" and
causes adaptive detection to fail; normalize the model string first (using
resolveModelString from model-config.ts or `@automaker/model-resolver`) and pass
the resolved canonical id to getThinkingLevelsForModel in all places where model
is used (e.g., the initial levels = ... call and any other calls inside
ThinkingLevelSelector), ensuring you default to empty string when
resolveModelString returns undefined.

In `@libs/types/src/codex-models.ts`:
- Around line 33-39: The resolver is passing the internal alias
"codex-gpt-5.3-codex" through to the Codex SDK; update the model-resolver logic
in libs/model-resolver/src/resolver.ts to strip the "codex-" provider prefix
before calling the Codex SDK by using the existing stripProviderPrefix utility
(same approach used for other prefixed providers). Locate the Codex branch in
the resolver (where Codex models are detected — around the code that imports
stripProviderPrefix and handles provider-specific mapping) and ensure you call
stripProviderPrefix(modelId) (or an equivalent helper) to produce
"gpt-5.3-codex" and pass that to the SDK call (e.g., wherever the Codex API
parameter or variable is set), leaving internal IDs unchanged elsewhere.
🧹 Nitpick comments (5)
apps/ui/docs/AGENT_ARCHITECTURE.md (1)

200-215: Consider documenting adaptive thinking support.

While the model update is correct, Opus 4.6 introduces adaptive thinking as a notable new feature (per PR objectives). Consider adding a brief note or example in this configuration section to help developers leverage this capability.

📝 Example documentation addition

You could add a comment or additional section after line 215:

   sandbox: {
     enabled: true,                // Sandboxed bash execution
     autoAllowBashIfSandboxed: true
   }
 }
+
+// Note: Claude Opus 4.6 supports adaptive thinking mode, which automatically
+// adjusts the thinking budget based on task complexity (max 128K output tokens).
libs/platform/src/system-paths.ts (1)

1141-1163: Consistent implementation, consider extracting helper to reduce duplication.

The NVM symlink logic is now duplicated across three functions (getClaudeCliPaths, getCodexCliPaths, getOpenCodeCliPaths). Consider extracting a small helper to DRY this up.

♻️ Optional: Extract NVM symlink helper

Add a helper function near the other private helpers:

/**
 * Get NVM for Windows symlink paths for a given CLI tool
 */
function getNvmSymlinkPaths(baseName: string): string[] {
  const nvmSymlink = process.env.NVM_SYMLINK;
  if (!nvmSymlink) return [];
  return [path.join(nvmSymlink, `${baseName}.cmd`), path.join(nvmSymlink, baseName)];
}

Then simplify each function:

 if (isWindows) {
   const appData = process.env.APPDATA || path.join(homeDir, 'AppData', 'Roaming');
   const localAppData = process.env.LOCALAPPDATA || path.join(homeDir, 'AppData', 'Local');
-  const nvmSymlink = process.env.NVM_SYMLINK;
-  const paths = [
+  return [
     // ... existing paths ...
-  ];
-  // nvm4w (NVM for Windows) symlink path
-  if (nvmSymlink) {
-    paths.push(path.join(nvmSymlink, 'opencode.cmd'), path.join(nvmSymlink, 'opencode'));
-  }
-  return paths;
+    // nvm4w (NVM for Windows) symlink path
+    ...getNvmSymlinkPaths('opencode'),
+  ];
 }
apps/ui/src/components/views/settings-view/providers/codex-model-configuration.tsx (1)

23-60: Derive reasoning-effort support from model metadata to avoid drift.
The hardcoded reasoningModels list duplicates model capability data and will need manual updates whenever new models are added.

♻️ Suggested refactor (metadata-driven)
 interface CodexModelInfo {
   id: CodexModelId;
   label: string;
   description: string;
+  supportsReasoningEffort?: boolean;
 }

 const CODEX_MODEL_INFO: Record<CodexModelId, CodexModelInfo> = {
   'codex-gpt-5.3-codex': {
     id: 'codex-gpt-5.3-codex',
     label: 'GPT-5.3-Codex',
     description: 'Latest frontier agentic coding model',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.2-codex': {
     id: 'codex-gpt-5.2-codex',
     label: 'GPT-5.2-Codex',
     description: 'Frontier agentic coding model',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.1-codex-max': {
     id: 'codex-gpt-5.1-codex-max',
     label: 'GPT-5.1-Codex-Max',
     description: 'Codex-optimized flagship for deep and fast reasoning',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.1-codex-mini': {
     id: 'codex-gpt-5.1-codex-mini',
     label: 'GPT-5.1-Codex-Mini',
     description: 'Optimized for codex. Cheaper, faster, but less capable',
+    supportsReasoningEffort: false,
   },
   'codex-gpt-5.2': {
     id: 'codex-gpt-5.2',
     label: 'GPT-5.2',
     description: 'Latest frontier model with improvements across knowledge, reasoning and coding',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.1': {
     id: 'codex-gpt-5.1',
     label: 'GPT-5.1',
     description: 'Great for coding and agentic tasks across domains',
+    supportsReasoningEffort: true,
   },
 };

-function supportsReasoningEffort(modelId: string): boolean {
-  const reasoningModels = [
-    'codex-gpt-5.3-codex',
-    'codex-gpt-5.2-codex',
-    'codex-gpt-5.1-codex-max',
-    'codex-gpt-5.2',
-    'codex-gpt-5.1',
-  ];
-  return reasoningModels.includes(modelId);
+function supportsReasoningEffort(modelId: CodexModelId): boolean {
+  return CODEX_MODEL_INFO[modelId]?.supportsReasoningEffort ?? false;
 }

Based on learnings: When implementing AI model capability flags (like thinking/reasoning support), always add per-model flags to model definitions and check the specific model being used, rather than assuming all models from a provider have the same capabilities.

Also applies to: 166-174

apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx (1)

266-281: Consider using the isAdaptiveThinkingModel() helper for consistency and maintainability.

The current inline checks duplicate the same logic as the shared helper function. Using isAdaptiveThinkingModel(entry.model) consolidates the adaptive model detection and aligns with how getThinkingLevelsForModel() also uses this helper—making it a single source of truth if the adaptation rules ever change.

♻️ Suggested update
-import { supportsReasoningEffort } from '@automaker/types';
+import { supportsReasoningEffort, isAdaptiveThinkingModel } from '@automaker/types';
-    const isNewModelAdaptive =
-      entry.model === 'claude-opus' ||
-      (typeof entry.model === 'string' && entry.model.includes('opus-4-6'));
+    const isNewModelAdaptive = isAdaptiveThinkingModel(entry.model);
apps/ui/src/components/views/board-view/shared/thinking-level-selector.tsx (1)

49-51: Reuse the computed levels for the helper text.
This avoids a second getThinkingLevelsForModel call and keeps the messaging in sync with the rendered options.

♻️ Suggested diff
-  const levels = model ? getThinkingLevelsForModel(model) : getThinkingLevelsForModel('');
+  const levels = getThinkingLevelsForModel(model ?? '');
+  const hasAdaptive = levels.includes('adaptive');
@@
-        {model && getThinkingLevelsForModel(model).includes('adaptive')
+        {model && hasAdaptive
           ? 'Adaptive thinking lets the model decide how much reasoning to use.'
           : 'Higher levels give more time to reason through complex problems.'}

Comment on lines 8 to +23
interface ThinkingLevelSelectorProps {
  selectedLevel: ThinkingLevel;
  onLevelSelect: (level: ThinkingLevel) => void;
  testIdPrefix?: string;
  /** Optional model ID to filter available thinking levels (e.g., Opus 4.6 only shows None/Adaptive) */
  model?: string;
}

export function ThinkingLevelSelector({
  selectedLevel,
  onLevelSelect,
  testIdPrefix = 'thinking-level',
  model,
}: ThinkingLevelSelectorProps) {
  const levels = model ? getThinkingLevelsForModel(model) : getThinkingLevelsForModel('');


⚠️ Potential issue | 🟡 Minor


Normalize model parameter before passing to getThinkingLevelsForModel.

The component calls getThinkingLevelsForModel(model) three times without normalization. If model receives an alias like 'opus' instead of the full ID 'claude-opus', the adaptive thinking detection fails because isAdaptiveThinkingModel checks for exact match model === 'claude-opus'. Use resolveModelString() from @automaker/model-resolver (available via model-config.ts) or a canonical lookup to normalize the model ID before use.
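A sketch of the suggested normalization, assuming resolveModelString maps aliases such as 'opus' to canonical ids and may return undefined:

import { resolveModelString } from '@automaker/model-resolver';
import { getThinkingLevelsForModel } from '@automaker/types';

// Normalize once, then reuse for both the rendered options and the helper text.
const resolved = resolveModelString(model ?? '') ?? '';
const levels = getThinkingLevelsForModel(resolved);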


Comment on lines +33 to +39
'codex-gpt-5.3-codex': {
  id: 'codex-gpt-5.3-codex',
  label: 'GPT-5.3-Codex',
  description: 'Latest frontier agentic coding model',
  hasThinking: true,
  supportsVision: true,
},

⚠️ Potential issue | 🔴 Critical



Fix the model resolver to strip the codex- prefix before passing to the Codex SDK API.

The model ID codex-gpt-5.3-codex is correctly defined as an internal alias, but the resolver passes it unchanged to the Codex SDK. The actual SDK specification expects gpt-5.3-codex without the prefix. Update libs/model-resolver/src/resolver.ts to strip the codex- prefix for Codex models before API calls, similar to how other prefixed models are handled. The resolver currently imports stripProviderPrefix but doesn't apply it to Codex models (lines 83-86).
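A sketch of that fix in the resolver's Codex branch. Here isCodexModel and invokeCodexSdk are hypothetical stand-ins for the actual guard and call site, while stripProviderPrefix is the existing utility named above:

if (isCodexModel(modelId)) {                     // hypothetical guard
  const sdkModel = stripProviderPrefix(modelId); // 'codex-gpt-5.3-codex' -> 'gpt-5.3-codex'
  return invokeCodexSdk({ model: sdkModel });    // hypothetical call site
}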


- Added `isAdaptiveThinkingModel` utility to improve model identification logic in the AddFeatureDialog.
- Updated the ThinkingLevelSelector to conditionally display information based on available thinking levels.
- Enhanced model name formatting in agent-context-parser to include 'GPT-5.3 Codex' for better clarity.

These changes improve the user experience by refining model handling and UI feedback related to adaptive thinking capabilities.
- Introduced a new event type 'dev-server:url-detected' to enhance event handling for the development server.
- This addition allows for better tracking and response to URL detection during server operations.

These changes improve the event system's capability to manage server-related events effectively.

Labels

  • Do Not Merge: Use this label if something should not be merged.
  • Testers-Requested: Request for others to test an enhancement or bug fix/etc.
  • Work-In-Progress: Currently being addressed.

Projects

None yet

Development


1 participant