From f386eb3eb13c5f28b440312ce71f372164646899 Mon Sep 17 00:00:00 2001
From: bcode
Date: Fri, 15 May 2026 19:03:56 +0000
Subject: [PATCH] fix(provider): prefer per-model temperature over agent
 override
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

`ProviderTransform.temperature(model)` encodes per-model-family temperature
values that the model's provider requires (e.g. Moonshot's kimi-k2.* returns
HTTP 400 "invalid temperature: only 1 is allowed for this model" if any
other value is sent). The session/llm.ts call site consulted it in the
wrong order:

    input.agent.temperature ?? ProviderTransform.temperature(input.model)

so the built-in title agent's hard-coded `temperature: 0.5` (agent.ts:271)
won over the kimi-required 1.0, and Moonshot rejected the request.

Swap the operands. The transform value, when present, is the model-aware
answer and should beat a generic agent default. For models that aren't in
the transform map, the function returns `undefined` and the agent's
preference still applies through `??` fallthrough.

Affects: kimi-k2.* (incl. k2.6, thinking, k2-5), gemini, glm-4.6/4.7,
minimax-m2, qwen — anywhere the transform has an entry.
---
 packages/opencode/src/session/llm.ts | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/packages/opencode/src/session/llm.ts b/packages/opencode/src/session/llm.ts
index 9f73a7b05..4266b246d 100644
--- a/packages/opencode/src/session/llm.ts
+++ b/packages/opencode/src/session/llm.ts
@@ -169,7 +169,7 @@ const live: Layer.Layer<
         },
         {
           temperature: input.model.capabilities.temperature
-            ? (input.agent.temperature ?? ProviderTransform.temperature(input.model))
+            ? (ProviderTransform.temperature(input.model) ?? input.agent.temperature)
             : undefined,
           topP: input.agent.topP ?? ProviderTransform.topP(input.model),
           topK: ProviderTransform.topK(input.model),
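
The swapped `??` precedence can be sketched standalone. This is an illustrative model, not the real opencode code: `Model`, `transformTemperature`, and `resolveTemperature` are stand-in names for `input.model`, `ProviderTransform.temperature`, and the call-site expression in llm.ts; the kimi-k2 prefix check is an assumed shape for the transform's per-model-family lookup.

```typescript
// Hypothetical stand-in for the model object passed through llm.ts.
type Model = { id: string };

// Assumed sketch of ProviderTransform.temperature: returns a value only
// for model families the provider constrains, else undefined.
function transformTemperature(model: Model): number | undefined {
  if (model.id.startsWith("kimi-k2")) return 1; // Moonshot requires exactly 1
  return undefined; // model not in the transform map: defer to the agent
}

// The fixed precedence: model-aware transform first, agent default second.
function resolveTemperature(
  model: Model,
  agentTemperature?: number,
): number | undefined {
  return transformTemperature(model) ?? agentTemperature;
}

console.log(resolveTemperature({ id: "kimi-k2.5" }, 0.5)); // → 1
console.log(resolveTemperature({ id: "gpt-x" }, 0.5)); // → 0.5
```

With the old operand order, the `0.5` agent default would short-circuit the `??` and the transform's required `1` would never be consulted; the fixed order only falls back to the agent when the transform genuinely has no opinion (`undefined`).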