
fix(provider): prefer per-model temperature over agent override #64

Open

Alezander9 wants to merge 1 commit into main from fix/strict-temperature-required-models


Conversation

@Alezander9 Alezander9 commented May 15, 2026

Problem

`ProviderTransform.temperature(model)` in `src/provider/transform.ts` returns the temperature value a given model family requires (kimi-k2.* -> 1.0 or 0.6; gemini / glm-4.6 / glm-4.7 / minimax-m2 -> 1.0; qwen -> 0.55). Several of these are hard requirements: Moonshot rejects any other value for the kimi-k2 family with HTTP 400 `invalid temperature: only 1 is allowed for this model`.
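For context, the shape of that transform (a minimal sketch only; the real matching logic and exact values live in `src/provider/transform.ts`, and the kimi 1.0-vs-0.6 split is simplified here):

```ts
// Sketch of the idea behind ProviderTransform.temperature, not the real code.
// Values follow the families listed above; the matching logic is assumed.
export namespace ProviderTransform {
  export function temperature(modelID: string): number | undefined {
    if (modelID.includes("kimi-k2")) return 1.0 // 1.0 or 0.6 depending on variant; Moonshot 400s on anything else
    if (modelID.includes("gemini")) return 1.0
    if (modelID.includes("glm-4.6") || modelID.includes("glm-4.7")) return 1.0
    if (modelID.includes("minimax-m2")) return 1.0
    if (modelID.includes("qwen")) return 0.55
    return undefined // uncovered families (claude, openai, ...) defer to the caller
  }
}
```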

The call site in `src/session/llm.ts` consulted that function in the wrong order:

```ts
input.agent.temperature ?? ProviderTransform.temperature(input.model)
```

The built-in title agent (`src/agent/agent.ts:271`) hard-codes `temperature: 0.5`. Because `??` short-circuits on the first non-nullish operand, the agent's 0.5 always won, and a temperature of 0.5 was sent to kimi-k2.6, which then returned HTTP 400.
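Concretely, with the values involved here:

```ts
// `??` keeps the left operand whenever it is not null/undefined,
// so the title agent's 0.5 always masked the model requirement.
const agentTemperature: number | undefined = 0.5 // hard-coded by the title agent
const modelTemperature: number | undefined = 1.0 // what kimi-k2.6 requires

const sent = agentTemperature ?? modelTemperature
console.log(sent) // 0.5 -> Moonshot rejects the request with HTTP 400
```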

I observed this in production telemetry across multiple users on kimi-k2.6 (the most common case) and the same code path will trip whenever an agent sets a temperature on any of the strict-required model families.

Fix

Swap the operands at the single call site:

```ts
ProviderTransform.temperature(input.model) ?? input.agent.temperature
```

The transform function exists precisely because the maintainer encoded model-aware knowledge of what each family expects; that knowledge should beat a generic agent default. For models not covered by the transform map (claude, openai, etc.), the function returns `undefined` and the agent's preference still applies through normal `??` fallthrough, so the only behavior change is on the families the maintainer has already flagged as needing specific values.

This also aligns `temperature` with the adjacent `topK` (no agent override) and brings it closer to `topP` (still `agent ?? transform`, though `topP` has no known strict-requirement cases today).
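Paraphrased, the post-fix resolution of the three sampling parameters would read roughly like this (the `topP`/`topK` helper names and the shape of `input` are assumptions for illustration, not the literal code in `src/session/llm.ts`):

```ts
// Assumed shapes, for illustration only.
declare const input: { model: string; agent: { temperature?: number; topP?: number } }
declare namespace ProviderTransform {
  function temperature(model: string): number | undefined
  function topP(model: string): number | undefined
  function topK(model: string): number | undefined
}

const temperature = ProviderTransform.temperature(input.model) ?? input.agent.temperature // model first (this PR)
const topP = input.agent.topP ?? ProviderTransform.topP(input.model) // agent first (unchanged)
const topK = ProviderTransform.topK(input.model) // transform only, no agent override (unchanged)
```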

Diff

 packages/opencode/src/session/llm.ts | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Test plan

  • `bun typecheck` clean.
  • Existing `test/provider/transform.test.ts` suite: 220 pass / 0 fail.
  • Manual verification against Moonshot kimi-k2.6: title generation no longer 400s.
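If useful, a couple of assertions along these lines could be folded into the existing suite (a sketch only; the import path, the argument type, and the claude model ID are assumptions):

```ts
import { describe, expect, test } from "bun:test"
import { ProviderTransform } from "../../src/provider/transform" // path assumed

describe("ProviderTransform.temperature", () => {
  test("strict families return the provider-required value", () => {
    expect(ProviderTransform.temperature("kimi-k2.6")).toBe(1.0)
  })
  test("uncovered families return undefined so the agent value applies", () => {
    expect(ProviderTransform.temperature("claude-sonnet-4")).toBeUndefined()
  })
})
```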


@cubic-dev-ai cubic-dev-ai Bot left a comment


No issues found across 3 files

`ProviderTransform.temperature(model)` encodes per-model-family temperature
values that the model's provider requires (e.g. Moonshot's kimi-k2.* returns
HTTP 400 "invalid temperature: only 1 is allowed for this model" if any
other value is sent). The session/llm.ts call site consulted it in the wrong
order:

  input.agent.temperature ?? ProviderTransform.temperature(input.model)

so the built-in title agent's hard-coded `temperature: 0.5` (agent.ts:271)
won over the kimi-required 1.0, and Moonshot rejected the request.

Swap the operands. The transform value, when present, is the model-aware
answer and should beat a generic agent default. For models that aren't in
the transform map, the function returns `undefined` and the agent's
preference still applies through `??` fallthrough.

Affects: kimi-k2.* (incl. k2.6, thinking, k2-5), gemini, glm-4.6/4.7,
minimax-m2, qwen — anywhere the transform has an entry.
@Alezander9 Alezander9 force-pushed the fix/strict-temperature-required-models branch from 29f118e to f386eb3 on May 15, 2026 19:04
@Alezander9 Alezander9 changed the title fix(provider): honor strict temperature requirements over agent override fix(provider): prefer per-model temperature over agent override May 15, 2026
