Conversation
lines = _get_dict_lines_from_response(response)

- assert len(lines) == 6
+ assert len(lines) == 7
I added another tool call here so things are bumped up by one!
def map_messages_to_pydantic_ai_format(messages: Sequence[Message]) -> list[ModelMessage]:
-     return [map_message(message) for message in messages]
+     part_lists = [map_message_to_part(message) for message in messages]
This changed because Pydantic AI will strip the run ID from a message when it collapses messages itself. To remedy that, we make sure each message is in the right format before we pass it in.
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail=str(e)) from e

async def jsonl_stream(stream: AsyncIterator[ChatStreamOutput]) -> AsyncIterator[str]:
The stream from chat_service now streams chunks, so we need to format them here. If FastAPI's JSONL handling turns out to handle our exceptions correctly, we can use that instead in the future!
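For reference, JSONL formatting of a chunk stream can be as small as this sketch, which assumes each chunk serializes to a dict (the real `ChatStreamOutput` would need its own serialization step):

```python
import asyncio
import json
from typing import AsyncIterator

async def jsonl_stream(stream) -> AsyncIterator[str]:
    # JSONL: one JSON object per line, newline-terminated.
    async for chunk in stream:
        yield json.dumps(chunk) + "\n"

async def demo() -> list[str]:
    # Hypothetical chunk source standing in for the chat_service stream.
    async def chunks():
        yield {"delta": "Hel"}
        yield {"delta": "lo"}
    return [line async for line in jsonl_stream(chunks())]
```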
mcp_servers = [_get_mcp_server(config) for config in MCP_SERVERS]

if not settings.ENV.is_production and settings.INCLUDE_TEST_MCP_SERVERS:
    fake_mcp_server_module = importlib.import_module("api.test_utils.fake_mcp_server")
I wanted to be able to test failing tools without hitting Asta's MCP server. This lets us hit the fake tools in e2e tests and when hitting the API.
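A hedged sketch of the conditional import: the `Settings` stub and the `FAKE_SERVERS` attribute are assumptions for illustration; only the `importlib.import_module` pattern mirrors the diff.

```python
import importlib
import sys
import types

# Hypothetical stand-in for the app's settings object.
class Env:
    def __init__(self, is_production: bool):
        self.is_production = is_production

class Settings:
    def __init__(self, is_production: bool, include_test_servers: bool):
        self.ENV = Env(is_production)
        self.INCLUDE_TEST_MCP_SERVERS = include_test_servers

def _get_mcp_server(config: str) -> str:
    # Placeholder for the real server construction.
    return f"server:{config}"

MCP_SERVERS = ["asta"]

def build_mcp_servers(settings: Settings, fake_module: str) -> list[str]:
    servers = [_get_mcp_server(config) for config in MCP_SERVERS]
    # Outside production, optionally pull in fake servers so e2e tests
    # can exercise failing tools without hitting the real MCP server.
    if not settings.ENV.is_production and settings.INCLUDE_TEST_MCP_SERVERS:
        fake_mcp_server_module = importlib.import_module(fake_module)
        servers.extend(fake_mcp_server_module.FAKE_SERVERS)
    return servers

# Register a throwaway fake module so the non-production path is runnable.
fake = types.ModuleType("demo_fake_mcp")
fake.FAKE_SERVERS = ["server:fake"]
sys.modules["demo_fake_mcp"] = fake
```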
- return await self._get_tools_for_servers(general_mcp_servers)
+ tools = await self._get_tools_for_servers(general_mcp_servers)

if not settings.ENV.is_production and settings.INCLUDE_TEST_MCP_SERVERS:
This lets us use the fake tools in the UI.
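As a sketch of how the tool aggregation might look with the test branch (the class and attribute names here are illustrative guesses, not the real service API):

```python
import asyncio

class ToolService:
    def __init__(self, is_production: bool, include_test_servers: bool):
        self.is_production = is_production
        self.include_test_servers = include_test_servers

    async def _get_tools_for_servers(self, servers: list[str]) -> list[str]:
        # Placeholder: the real method asks each MCP server for its tools.
        return [f"tool:{server}" for server in servers]

    async def get_tools(self) -> list[str]:
        tools = await self._get_tools_for_servers(["general"])
        # Append the fake tools outside production so they show up in the UI.
        if not self.is_production and self.include_test_servers:
            tools += await self._get_tools_for_servers(["fake"])
        return tools
```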
bronzle left a comment
This seems good -- shame about having to break out of pydantic agent so much.
Closes https://github.com/allenai/playground-issues-repo/issues/1028
Now that we've released this, it's time for me to regret the decision to use Pydantic AI's UI event streaming handling!

I had to hack around it to get error saving working. Pydantic AI doesn't return errors in the run output handling, or even if you use something like `capture_run_messages`. To get them, I'm gathering the relevant chunks while streaming by yielding through `stream_chat_message`. Since that could cause issues with throwing exceptions properly, I split `stream_chat_message` into `initialize_stream_adapter` and `stream_chat_message`. Since those chunks don't have the full response, we still need to map from the Pydantic chunks. And since we now have the errors as well, we can map those over by their message ID.
Image of me loading a thread that had an error:

Loading a thread that had a failing tool call:
