Add inworld router #530
Draft
maxkahan wants to merge 101 commits into chore/audio-processing-v2 from
Conversation
`Stream` is a kind of queue with `__aiter__` support that can be cleared or closed. Clearing the `Stream` keeps iterators running but drops the queued data; closing it signals the running iterators to stop.
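A minimal sketch of such a stream, assuming an `asyncio.Queue` backing store and a sentinel for shutdown (the internals here are illustrative, not the actual implementation):

```python
import asyncio


class Stream:
    """Async-iterable queue that can be cleared (drop data) or closed (stop iterators)."""

    _SENTINEL = object()

    def __init__(self) -> None:
        self._queue: asyncio.Queue = asyncio.Queue()
        self._closed = False

    def send_nowait(self, item) -> None:
        if self._closed:
            raise RuntimeError("stream is closed")
        self._queue.put_nowait(item)

    def clear(self) -> None:
        # Drop queued data; running iterators keep waiting for new items.
        while not self._queue.empty():
            self._queue.get_nowait()

    def close(self) -> None:
        # Signal running iterators to stop by enqueueing a sentinel.
        self._closed = True
        self._queue.put_nowait(self._SENTINEL)

    def __aiter__(self):
        return self

    async def __anext__(self):
        item = await self._queue.get()
        if item is self._SENTINEL:
            raise StopAsyncIteration
        return item
```

With this sketch, `clear()` after two sends and before a third leaves only the third item for consumers, while `close()` ends iteration cleanly.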
…ripts without deleting them
Also: the OpenRouter LLM no longer inherits from `openai.LLM`
Also simplified the LLM integration
They flood the logs
…ushed on final chunk
- Replace assert with ValueError in collect_simple_response
- Add TODOs to fix some undefined behavior later
`send_nowait()` was infinitely accumulating the carryover buffer because of incorrect handling of 2-D numpy arrays
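A hedged sketch of the kind of fix this describes: when frames are sliced from a `(channels, samples)` array, slicing and concatenation must happen along the last axis so the carryover actually shrinks. The helper name `split_frames` and its signature are assumptions for illustration, not the PR's actual code:

```python
import numpy as np


def split_frames(carryover: np.ndarray, new_data: np.ndarray, frame_size: int):
    """Split buffered samples into fixed-size frames; return (frames, new_carryover).

    Using axis=-1 makes the same code handle 1-D (samples,) and
    2-D (channels, samples) arrays; slicing the wrong axis is what
    lets a carryover buffer grow without bound.
    """
    buf = np.concatenate([carryover, new_data], axis=-1)
    n_full = buf.shape[-1] // frame_size
    split_at = n_full * frame_size
    frames = [buf[..., i * frame_size:(i + 1) * frame_size] for i in range(n_full)]
    return frames, buf[..., split_at:]
```

Each call consumes as many full frames as possible and carries over at most `frame_size - 1` samples per channel, so repeated calls cannot accumulate an ever-growing buffer.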
Force-pushed from fd3ea5a to f288778
This pull request introduces asynchronous backend persistence for Stream Chat message syncing in the `StreamConversation` class, improving latency for voice and LLM-driven pipelines by dispatching REST calls as background tasks. It also ensures test and shutdown reliability by allowing callers to await completion of all in-flight syncs. Additionally, the changes update tests to await these background tasks and improve error handling and documentation.

Asynchronous backend persistence and ordering:
- Updated `StreamConversation` to dispatch Stream Chat syncs as fire-and-forget background tasks, serializing them with an `asyncio.Lock` to preserve message ordering and reduce latency on the critical path.
- Added a `wait_for_pending_syncs()` method to allow draining of in-flight tasks during tests and shutdown.
- Defined `wait_for_pending_syncs()` in the base conversation class, with an override in `StreamConversation` that performs the actual draining.

Test reliability improvements:
- Updated `test_stream_conversation.py` and `test_message_chunking.py` to await `wait_for_pending_syncs()` before making assertions, ensuring test correctness with asynchronous persistence.

Robustness and documentation:
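The fire-and-forget persistence pattern summarized above can be sketched as follows. This is an assumed shape, not the PR's actual code: the `_send_to_backend` stub stands in for the real REST call, and only the lock-serialized dispatch plus the drain method mirror what the summary describes:

```python
import asyncio


class StreamConversation:
    """Sketch: dispatch syncs as background tasks, serialized by a lock."""

    def __init__(self) -> None:
        self._sync_lock = asyncio.Lock()
        self._pending: set[asyncio.Task] = set()
        self.synced: list[dict] = []  # stand-in for the persisted backend state

    async def _send_to_backend(self, message: dict) -> None:
        # Stand-in for the real REST call to Stream Chat.
        await asyncio.sleep(0)
        self.synced.append(message)

    async def _sync_message(self, message: dict) -> None:
        # The lock serializes syncs so messages persist in order,
        # even though each sync runs as an independent task.
        async with self._sync_lock:
            await self._send_to_backend(message)

    def add_message(self, message: dict) -> None:
        # Fire-and-forget: the caller is not blocked on the REST round-trip.
        task = asyncio.create_task(self._sync_message(message))
        self._pending.add(task)
        task.add_done_callback(self._pending.discard)

    async def wait_for_pending_syncs(self) -> None:
        # Drain in-flight syncs; used by tests and during shutdown.
        await asyncio.gather(*self._pending)
```

Tests then call `await conversation.wait_for_pending_syncs()` before asserting on persisted state, which is exactly the pattern the updated test files follow.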
Bug fixes and pipeline improvements:
Documentation update:
- Updated `inworld/README.md` to better describe plugin capabilities.