voice: add response_format to ModelSettings and chat() #4559
```diff
@@ -119,6 +119,7 @@ def chat(
         conn_options: APIConnectOptions = DEFAULT_API_CONNECT_OPTIONS,
         parallel_tool_calls: NotGivenOr[bool] = NOT_GIVEN,
         tool_choice: NotGivenOr[ToolChoice] = NOT_GIVEN,
+        response_format: NotGivenOr[Any] = NOT_GIVEN,
         extra_kwargs: NotGivenOr[dict[str, Any]] = NOT_GIVEN,
     ) -> LLMStream:
         extra = {}
```
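For context, here is a minimal usage sketch of the new parameter. The `Recipe` model and the surrounding setup are illustrative assumptions, not part of this PR; since the parameter is typed `NotGivenOr[Any]`, the values actually accepted depend on the plugin it is forwarded to.

```python
from pydantic import BaseModel

from livekit.agents.llm import ChatContext
from livekit.plugins import openai


class Recipe(BaseModel):
    # Illustrative schema; any structure accepted by the target
    # provider's structured-output support could be used instead.
    name: str
    steps: list[str]


async def main() -> None:
    llm = openai.LLM(model="gpt-4o-mini")
    chat_ctx = ChatContext()
    chat_ctx.add_message(role="user", content="Give me a pancake recipe as JSON.")

    # response_format is passed through to the provider as-is.
    stream = llm.chat(chat_ctx=chat_ctx, response_format=Recipe)
    async for chunk in stream:
        print(chunk)
```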
Comment on lines +122 to 123
Contributor

🧩 Analysis chain

🌐 Web query: (query text not captured)

💡 Result: Short answer — structure and examples (2025)

Sources: (links not captured)
🏁 Scripts executed:

```shell
# First, let's check the actual file to understand the context
fd -type f -name "llm.py" | grep -i openai | head -5

# Find the llm.py file in openai responses directory
find . -path "*/openai/responses/llm.py" -type f

# Also check the directory structure
fd "llm.py" | grep openai

# Read the llm.py file to examine the response_format parameter and API call
cat -n ./livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py | sed -n '110,170p'

# Search for all usages of response_format in the openai plugin
rg "response_format" livekit-plugins/livekit-plugins-openai/

# Check LLMStream class definition to see if it handles response_format
cat -n ./livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py | sed -n '170,250p'

# Check the regular llm.py file to see response_format handling
cat -n ./livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/llm.py | sed -n '1,50p'

# Search for the chat() method in llm.py to see how response_format is used
rg -A 30 "def chat" ./livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/llm.py | head -80

# Check llm_utils to understand to_openai_response_format
fd "llm_utils" livekit-plugins/livekit-plugins-openai/

# Find where llm_utils is located
fd -name "*.py" | xargs grep -l "def to_openai_response_format" | head -3

# Check how response_format is used in the regular llm.py
rg -B 5 -A 10 "extra\[\"response_format\"\]" ./livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/llm.py

# Check if there are any tests or examples showing how response_format is used
rg -B 5 -A 5 "response_format" ./livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/ | head -40

# Find and examine the llm_utils to_openai_response_format function
find . -path "*/agents/llm/utils.py" -o -path "*/livekit/agents/llm/utils.py" | xargs cat 2>/dev/null | head -200

# Or search more broadly
rg -B 2 -A 15 "def to_openai_response_format" --type py

# Let's verify the exact structure of response_format for Responses API
# by checking if there are any tests or examples
rg -r "responses.create" ./livekit-plugins/livekit-plugins-openai/ --type py -A 10 | head -50

# Check the full to_openai_response_format function to understand the structure better
rg -B 5 -A 25 "def to_openai_response_format" livekit-agents/
```

(Script outputs elided; repository: livekit/agents.)
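Several of the scripts above probe `to_openai_response_format`. For readers unfamiliar with the envelope being discussed, here is a hedged sketch of what such a helper typically produces for the Chat Completions API; the function below is an illustration under that assumption, not the repository's actual implementation.

```python
from typing import Any

from pydantic import BaseModel


def pydantic_to_response_format(model: type[BaseModel]) -> dict[str, Any]:
    # Illustrative helper (NOT the repository's to_openai_response_format):
    # wraps a Pydantic model's JSON schema in the Chat Completions
    # response_format envelope.
    return {
        "type": "json_schema",
        "json_schema": {
            "name": model.__name__,
            "schema": model.model_json_schema(),
            "strict": True,
        },
    }


class Recipe(BaseModel):
    name: str
    steps: list[str]


print(pydantic_to_response_format(Recipe))
```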
OpenAI's Responses API supports structured output, but it expects the format under the `text` parameter rather than as a top-level `response_format`; as written, the new argument is accepted by `chat()` and then silently dropped.

🔧 Suggested fix to propagate `response_format`:

```diff
         if is_given(tool_choice):
             oai_tool_choice: response_create_params.ToolChoice
             if isinstance(tool_choice, dict):
                 oai_tool_choice = {
                     "type": "function",
                     "name": tool_choice["function"]["name"],
                 }
                 extra["tool_choice"] = oai_tool_choice
             elif tool_choice in ("auto", "required", "none"):
                 oai_tool_choice = tool_choice  # type: ignore
                 extra["tool_choice"] = oai_tool_choice

+        if is_given(response_format):
+            extra["text"] = {"format": response_format}
+
         return LLMStream(
```
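The suggested fix relies on the Responses API taking the schema under `text.format` instead of a top-level `response_format`. A standalone sketch of that request shape with the OpenAI Python SDK, with an illustrative schema:

```python
from openai import OpenAI

client = OpenAI()

# Responses API: structured output lives under text.format,
# not under a top-level response_format parameter.
response = client.responses.create(
    model="gpt-4o-mini",
    input="Give me a pancake recipe as JSON.",
    text={
        "format": {
            "type": "json_schema",
            "name": "recipe",  # illustrative name
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "steps": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["name", "steps"],
                "additionalProperties": False,
            },
            "strict": True,
        }
    },
)
print(response.output_text)
```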
Author

Since the goal of this PR is to make it work only for Google, leaving this as a passthrough.
Contributor

✏️ Learnings added

🧠 Learnings used
Contributor

Since not all LLMs support response format, perhaps a better way is to use it in a custom `llm_node` instead of adding it to `ModelSettings`; here is an example. BTW, I didn't see how you want to pass the `response_format` to the agent from the user's code.
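The linked example was not captured here; below is a hedged sketch of the custom `llm_node` approach being suggested. It assumes the documented `Agent.llm_node` hook, an agent constructed with an LLM plugin whose `chat()` accepts `response_format` (e.g. the OpenAI plugin), and an illustrative `Recipe` model.

```python
from collections.abc import AsyncIterable

from pydantic import BaseModel

from livekit.agents import Agent, FunctionTool, ModelSettings, llm


class Recipe(BaseModel):
    # Illustrative output schema, not from the PR.
    name: str
    steps: list[str]


class StructuredAgent(Agent):
    async def llm_node(
        self,
        chat_ctx: llm.ChatContext,
        tools: list[FunctionTool],
        model_settings: ModelSettings,
    ) -> AsyncIterable[llm.ChatChunk]:
        # Call the plugin LLM directly instead of the default node so a
        # provider-specific response_format can be forwarded; this only
        # works for plugins whose chat() accepts that keyword.
        async with self.llm.chat(
            chat_ctx=chat_ctx,
            tools=tools,
            response_format=Recipe,
        ) as stream:
            async for chunk in stream:
                yield chunk
```

This keeps `ModelSettings` provider-agnostic: the structured-output concern lives entirely in user code, next to the agent that needs it.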