Ollama's /api/chat endpoint supports several top-level request body parameters that are siblings of model, messages, and stream — notably the think parameter for enabling/disabling extended thinking in reasoning models (QwQ, DeepSeek-R1, Gemma 3, etc.).
Currently, OllamaLanguageModel.CustomGenerationOptions ([String: JSONValue]) only flows into the nested "options" dictionary in the request body via convertOptions() / createChatParams(). There is no way to inject arbitrary top-level keys.
Current behavior
Custom options end up nested under "options", which is the incorrect location for parameters like think:
{
"model": "...",
"messages": [...],
"options": { "think": false }
}
Code usage:
var opts = GenerationOptions()
opts[custom: OllamaLanguageModel.self] = ["think": .bool(false)]
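A minimal sketch of why this happens (hypothetical code, not the library's actual implementation): if convertOptions() folds CustomGenerationOptions into the same dictionary as the standard sampling options, and createChatParams() nests that dictionary under "options", then any custom key — including think — can only ever land inside the nested object:

```swift
// Hypothetical sketch of the current flow; names mirror the issue's
// description (convertOptions / createChatParams), bodies are illustrative.
enum JSONValue {
    case bool(Bool)
    case string(String)
    case object([String: JSONValue])
}

func convertOptions(custom: [String: JSONValue]) -> [String: JSONValue] {
    var options: [String: JSONValue] = [:]  // standard sampling options would merge here
    for (key, value) in custom {
        options[key] = value                // "think" gets swallowed along with the rest
    }
    return options
}

func createChatParams(model: String, custom: [String: JSONValue]) -> [String: JSONValue] {
    // The entire custom dictionary is nested — there is no path to the top level.
    return [
        "model": .string(model),
        "options": .object(convertOptions(custom: custom)),
    ]
}
```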
Expected behavior
A way to set top-level parameters like think:
{
"model": "...",
"messages": [...],
"think": false
}