bidi - remove python 3.11+ features #1302
```diff
@@ -390,9 +390,22 @@ async def run_outputs(inputs_task: asyncio.Task) -> None:
         for start in [*input_starts, *output_starts]:
             await start(self)

-        async with asyncio.TaskGroup() as task_group:
-            inputs_task = task_group.create_task(run_inputs())
-            task_group.create_task(run_outputs(inputs_task))
+        inputs_task = asyncio.create_task(run_inputs())
+        outputs_task = asyncio.create_task(run_outputs(inputs_task))
+
+        try:
+            await asyncio.gather(inputs_task, outputs_task)
+        except (Exception, asyncio.CancelledError) as error:
+            inputs_task.cancel()
+            outputs_task.cancel()
+            await asyncio.gather(inputs_task, outputs_task, return_exceptions=True)
+
+            if not isinstance(error, asyncio.CancelledError):
+                raise
+
+            run_task = asyncio.current_task()
+            if run_task and run_task.cancelling() > 0:  # externally cancelled
+                raise
         finally:
             input_stops = [input_.stop for input_ in inputs if isinstance(input_, BidiInput)]
```

**Member (Author)**, commenting on the `# externally cancelled` check:

> As an example here, if the user is running […]. Internal cancellations however, like with the […].
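Some background on the change above: `asyncio.TaskGroup` was added in Python 3.11, while `asyncio.create_task` plus `asyncio.gather` also work on 3.10, which is what this PR is after. The following is a small standalone demo (hypothetical, not from the PR) of how the `Task.cancelling()` check separates the two cancellation cases: a cancel requested against the task itself bumps its `cancelling()` count, while a `CancelledError` that merely propagates out of an awaited sub-task leaves the count at zero.

```python
import asyncio
import contextlib


async def internal_case() -> None:
    # A CancelledError that propagates out of an awaited sub-task:
    # no cancel request was made against *this* task.
    child = asyncio.create_task(asyncio.sleep(10))
    await asyncio.sleep(0)  # let the child get scheduled
    child.cancel()
    try:
        await child
    except asyncio.CancelledError:
        task = asyncio.current_task()
        assert task is not None
        print("internal: cancelling() =", task.cancelling())  # prints 0
        raise


async def main() -> None:
    inner = asyncio.create_task(internal_case())
    with contextlib.suppress(asyncio.CancelledError):
        await inner

    # External case: the task itself receives the cancel request.
    outer = asyncio.create_task(asyncio.sleep(10))
    await asyncio.sleep(0)
    outer.cancel()
    with contextlib.suppress(asyncio.CancelledError):
        await outer
    print("external: cancelling() =", outer.cancelling())  # prints 1


asyncio.run(main())
```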
---

```diff
@@ -1,3 +1,4 @@
+import traceback
 from unittest.mock import AsyncMock

 import pytest
```

**Member (Author):**

> There are some lint fixes in these 2 test files. They weren't caught previously because there is a bug in our lint command used specifically for bidi. I am addressing this as part of #1299, which resolves the issue by checking bidi using the existing hatch scripts we have configured for the rest of strands.

```diff
@@ -10,17 +11,19 @@ async def test_stop_exception():
     func1 = AsyncMock()
     func2 = AsyncMock(side_effect=ValueError("stop 2 failed"))
     func3 = AsyncMock()
+    func4 = AsyncMock(side_effect=ValueError("stop 4 failed"))

-    with pytest.raises(ExceptionGroup) as exc_info:
-        await stop_all(func1, func2, func3)
+    with pytest.raises(RuntimeError, match=r"failed stop sequence") as exc_info:
+        await stop_all(func1, func2, func3, func4)

     func1.assert_called_once()
     func2.assert_called_once()
     func3.assert_called_once()
+    func4.assert_called_once()

-    assert len(exc_info.value.exceptions) == 1
-    with pytest.raises(ValueError, match=r"stop 2 failed"):
-        raise exc_info.value.exceptions[0]
+    tru_tb = "".join(traceback.format_exception(RuntimeError, exc_info.value, exc_info.tb))
+    assert "ValueError: stop 2 failed" in tru_tb
+    assert "ValueError: stop 4 failed" in tru_tb


 @pytest.mark.asyncio
```
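To make the test readable on its own, here is a rough sketch of what a `stop_all` helper matching these assertions could look like. This is an assumption, not the PR's actual code (which lives in `src/strands/experimental/bidi/_async/__init__.py`): every stop callable still runs when earlier ones fail, the captured errors are chained, and a single `RuntimeError("failed stop sequence")` is raised at the end.

```python
from collections.abc import Awaitable, Callable


async def stop_all(*funcs: Callable[[], Awaitable[None]]) -> None:
    """Run every stop callable even when earlier ones fail, then raise once."""
    exceptions: list[BaseException] = []
    for func in funcs:
        try:
            await func()
        except Exception as error:  # keep stopping the remaining resources
            if exceptions:
                error.__context__ = exceptions[-1]  # chain earlier failures
            exceptions.append(error)

    if exceptions:
        failure = RuntimeError("failed stop sequence")
        failure.__context__ = exceptions[-1]
        raise failure
```

With the four mocks from the test, this raises a `RuntimeError` whose formatted traceback walks the `__context__` chain and therefore contains both "stop 2 failed" and "stop 4 failed".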
---

```diff
@@ -131,7 +131,9 @@ def test_audio_config_defaults(api_key, model_name):
 def test_audio_config_partial_override(api_key, model_name):
     """Test partial audio configuration override."""
     provider_config = {"audio": {"output_rate": 48000, "voice": "echo"}}
-    model = BidiOpenAIRealtimeModel(model_id=model_name, client_config={"api_key": api_key}, provider_config=provider_config)
+    model = BidiOpenAIRealtimeModel(
+        model_id=model_name, client_config={"api_key": api_key}, provider_config=provider_config
+    )

     # Overridden values
     assert model.config["audio"]["output_rate"] == 48000
```

```diff
@@ -154,7 +156,9 @@ def test_audio_config_full_override(api_key, model_name):
             "voice": "shimmer",
         }
     }
-    model = BidiOpenAIRealtimeModel(model_id=model_name, client_config={"api_key": api_key}, provider_config=provider_config)
+    model = BidiOpenAIRealtimeModel(
+        model_id=model_name, client_config={"api_key": api_key}, provider_config=provider_config
+    )

     assert model.config["audio"]["input_rate"] == 48000
     assert model.config["audio"]["output_rate"] == 48000
```

```diff
@@ -349,7 +353,7 @@ async def async_connect(*args, **kwargs):
     model4 = BidiOpenAIRealtimeModel(model_id=model_name, client_config={"api_key": api_key})
     await model4.start()
     mock_ws.close.side_effect = Exception("Close failed")
-    with pytest.raises(ExceptionGroup):
+    with pytest.raises(RuntimeError, match=r"failed stop sequence"):
         await model4.stop()
```

**Member:**

> Do we have guidelines on when we use a custom exception vs a general one?

**Member (Author):**

> We don't have guidelines and probably should. In this particular case, users wouldn't have reason to act specifically on this error, which is why I made it a `RuntimeError`. They would just likely catch this generally with `except Exception`.
>
> So if we are to develop guidelines, I would probably start with something like this: would the exception be used to drive a specific action/workflow? If not, and the user would just end up catching it with `except Exception` anyway, a general exception type is probably the right choice.
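To make that proposed guideline concrete, here is a hypothetical sketch (the names and retry policy are invented for illustration): a custom exception type earns its keep when callers drive a specific workflow off of it, whereas an error nobody branches on can stay a `RuntimeError`.

```python
import asyncio


class ThrottledError(Exception):
    """Hypothetical custom type: callers branch on it to retry."""


async def call_model(attempt: int) -> str:
    if attempt == 0:
        raise ThrottledError("slow down")
    return "ok"


async def invoke() -> str:
    for attempt in range(3):
        try:
            return await call_model(attempt)
        except ThrottledError:
            # The custom type drives a specific workflow: back off and retry.
            await asyncio.sleep(2**attempt * 0.1)
    # An error nobody acts on specifically can stay generic.
    raise RuntimeError("retries exhausted")


print(asyncio.run(invoke()))  # -> ok (succeeds on the second attempt)
```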
```diff
@@ -510,7 +514,7 @@ async def test_receive_lifecycle_events(mock_websocket, model):
             format="pcm",
             sample_rate=24000,
             channels=1,
-        )
+        ),
     ]
     assert tru_events == exp_events
```
**Member (Author):**

> The exception traceback will show all the causes chained together. We have this unit tested down below.

**Member:**

> This is slightly misleading, right? E.g. in reality they didn't really all cause each other; we're just presenting it that way. Should we indicate that somehow (like via the message or something)? Can we also document this in the code?

**Member (Author):**

> Ah shoot, I actually meant to get a traceback message that looks something like:
>
> ```
> Traceback (most recent call last):
>   File "/Users/pgrayy/Projects/Strands/sdk-python/src/strands/experimental/bidi/_async/__init__.py", line 24, in stop_all
>     await func()
>   File "/Users/pgrayy/Library/Application Support/hatch/env/virtual/.pythons/3.12/python/lib/python3.12/unittest/mock.py", line 2291, in _execute_mock_call
>     raise effect
> ValueError: stop 2 failed
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "/Users/pgrayy/Projects/Strands/sdk-python/src/strands/experimental/bidi/_async/__init__.py", line 24, in stop_all
>     await func()
>   File "/Users/pgrayy/Library/Application Support/hatch/env/virtual/.pythons/3.12/python/lib/python3.12/unittest/mock.py", line 2291, in _execute_mock_call
>     raise effect
> ValueError: stop 4 failed
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "/Users/pgrayy/Projects/Strands/sdk-python/tests/strands/experimental/bidi/_async/test__init__.py", line 17, in test_stop_exception
>     await stop_all(func1, func2, func3, func4)
>   File "/Users/pgrayy/Projects/Strands/sdk-python/src/strands/experimental/bidi/_async/__init__.py", line 33, in stop_all
>     raise exceptions[-1]
> RuntimeError: failed stop sequence
> ```
>
> Note, this says `During handling of the above exception, another exception occurred` instead of `The above exception was the direct cause of the following exception`. I would just need to switch `__cause__` to `__context__`. What would you think of this?
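For readers following the chaining detail: the two traceback phrasings map to Python's two chaining attributes. A minimal illustration (not from the PR):

```python
def chained_via_cause() -> None:
    try:
        raise ValueError("stop 2 failed")
    except ValueError as error:
        # `raise ... from ...` sets __cause__; the traceback then reads
        # "The above exception was the direct cause of the following exception".
        raise RuntimeError("failed stop sequence") from error


def chained_via_context() -> None:
    try:
        raise ValueError("stop 2 failed")
    except ValueError:
        # Raising while another exception is being handled sets __context__;
        # the traceback then reads "During handling of the above exception,
        # another exception occurred".
        raise RuntimeError("failed stop sequence")


chained_via_context()  # run this to see the __context__-style traceback
```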