Greptile Summary

This PR introduces a ToolStream mechanism that broadcasts skill updates over LCM to an McpServer, which relays them to SSE clients.

Key issues found:

- `ToolStream.stop()` calls `broadcast()` with the wrong argument count, raising a `TypeError` at runtime.
- `start()` unconditionally calls `self._tool_stream.start()` while `_tool_stream` is still `None`, raising an `AttributeError`.
- `app.state.sse_queues` is iterated and mutated from concurrent contexts without synchronisation.
Confidence Score: 1/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant Skill as Skill (PersonFollow / PerceiveLoop)
    participant TS as ToolStream
    participant LCM as pLCMTransport (LCM /tool_streams)
    participant McpSrv as McpServer (subscriber)
    participant SSEQueue as asyncio.Queue (per client)
    participant McpClient as McpClient (SSE thread)
    participant Agent as Agent / McpClient LLM queue
    Skill->>TS: start()
    TS->>LCM: transport.start()
    Skill->>TS: send("update message")
    TS->>LCM: broadcast(None, {type:"update", ...})
    LCM-->>McpSrv: _on_tool_stream_message(msg)
    McpSrv->>SSEQueue: queue.put(msg) [run_coroutine_threadsafe]
    SSEQueue-->>McpClient: SSE line "data: {...}"
    McpClient->>Agent: message_queue.put(HumanMessage)
    Skill->>TS: stop()
    TS->>LCM: broadcast(None, {type:"close", ...})
    LCM-->>McpSrv: _on_tool_stream_message(msg)
    McpSrv->>SSEQueue: queue.put({type:"close"})
    TS->>LCM: transport.stop()
```
Last reviewed commit: 4e1e816
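The skill-side lifecycle in the diagram above can be sketched with a stub transport. Note that `StubTransport`, the message fields, and the method bodies here are assumptions inferred from the diagram and review comments, not the PR's actual implementation:

```python
from typing import Any


class StubTransport:
    """Hypothetical stand-in for pLCMTransport: records broadcasts in memory."""

    def __init__(self) -> None:
        self.sent: list[dict[str, Any]] = []
        self.running = False

    def start(self) -> None:
        self.running = True

    def stop(self) -> None:
        self.running = False

    def broadcast(self, _: Any, msg: dict[str, Any]) -> None:
        self.sent.append(msg)


class ToolStream:
    """Sketch of the skill-side lifecycle shown in the sequence diagram."""

    def __init__(self, stream_id: str, tool_name: str, transport: StubTransport) -> None:
        self.id = stream_id
        self.tool_name = tool_name
        self._transport = transport

    def start(self) -> None:
        self._transport.start()

    def send(self, text: str) -> None:
        self._transport.broadcast(
            None,
            {"stream_id": self.id, "tool_name": self.tool_name, "type": "update", "text": text},
        )

    def stop(self) -> None:
        # Close message first, then stop the transport, matching the diagram.
        self._transport.broadcast(
            None,
            {"stream_id": self.id, "tool_name": self.tool_name, "type": "close"},
        )
        self._transport.stop()


transport = StubTransport()
stream = ToolStream("s1", "person_follow", transport)
stream.start()
stream.send("update message")
stream.stop()
print([m["type"] for m in transport.sent])  # ['update', 'close']
```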
```python
self._transport.broadcast(
    {
        "stream_id": self.id,
        "tool_name": self.tool_name,
        "type": "close",
    },
)
```
Wrong argument count in broadcast() call
`pLCMTransport.broadcast` has the signature `def broadcast(self, _: Out[T] | None, msg: T) -> None:` — it requires two positional arguments. In `ToolStream.send()` this is called correctly as `self._transport.broadcast(None, {...})`. In `stop()`, however, the close message is passed as the first argument (`_`) and the required `msg` positional argument is omitted entirely. This will raise `TypeError: broadcast() missing 1 required positional argument: 'msg'` at runtime every time a `ToolStream` is stopped.
```diff
-        self._transport.broadcast(
-            {
-                "stream_id": self.id,
-                "tool_name": self.tool_name,
-                "type": "close",
-            },
-        )
+        try:
+            self._transport.broadcast(
+                None,
+                {
+                    "stream_id": self.id,
+                    "tool_name": self.tool_name,
+                    "type": "close",
+                },
+            )
```
```python
def start(self) -> None:
    super().start()
    self._tool_stream.start()
```
AttributeError on None during start()
`self._tool_stream` is initialised to `None` in `__init__` and is only assigned a real `ToolStream` instance inside `look_out_for()`. Calling `self._tool_stream.start()` unconditionally in `start()` will therefore raise `AttributeError: 'NoneType' object has no attribute 'start'` every time the module is started, before any lookout has been requested.
The `ToolStream` instance is already started immediately after it is created inside `look_out_for()`, so this call in `start()` appears to be leftover and should be removed entirely.
```diff
-    def start(self) -> None:
-        super().start()
-        self._tool_stream.start()
+    @rpc
+    def start(self) -> None:
+        super().start()
```
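A sketch of the corrected flow under the reviewer's description: the stream begins as `None` and is created and started lazily in `look_out_for()`. Class names and bodies other than the quoted attributes are hypothetical:

```python
from typing import Optional


class ToolStream:
    def __init__(self) -> None:
        self.started = False

    def start(self) -> None:
        self.started = True


class LookoutModule:
    """_tool_stream begins as None and is only created on first lookout."""

    def __init__(self) -> None:
        self._tool_stream: Optional[ToolStream] = None

    def start(self) -> None:
        # No self._tool_stream.start() here: the stream does not exist yet,
        # and calling it unconditionally would raise AttributeError.
        pass

    def look_out_for(self, target: str) -> None:
        self._tool_stream = ToolStream()
        self._tool_stream.start()  # started immediately after creation


module = LookoutModule()
module.start()                 # safe even though no stream exists yet
module.look_out_for("person")
```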
```python
def _on_tool_stream_message(msg: dict[str, Any]) -> None:
    if loop is None:
        return
    for queue in app.state.sse_queues:
        asyncio.run_coroutine_threadsafe(queue.put(msg), loop)
```
Race condition on sse_queues list
`app.state.sse_queues` is a plain list that is mutated from multiple concurrent contexts without any synchronisation:

- Async context — `streams_sse_endpoint` appends a queue, and the `event_generator` finaliser removes it.
- Sync thread context — `_on_tool_stream_message` (called from the RxPY subscriber thread) iterates over the list with `for queue in app.state.sse_queues`.

Python's GIL protects individual atomic operations, but iterating over a list while another coroutine or thread appends or removes items can still raise `RuntimeError: list changed size during iteration`, or silently skip or double-process entries.

Consider protecting mutations and the iteration with a `threading.Lock`, or replacing the list with a lock-guarded set, to make concurrent access safe.
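One way to apply that suggestion. The helper names below are hypothetical, and a module-level list stands in for `app.state.sse_queues`; the snapshot-then-fan-out pattern keeps `queue.put` scheduling outside the lock:

```python
import asyncio
import threading
from typing import Any

# Hypothetical registry standing in for app.state.sse_queues; a lock guards
# every mutation and the snapshot taken before fan-out.
_queues_lock = threading.Lock()
sse_queues: list[asyncio.Queue] = []


def register_queue(queue: asyncio.Queue) -> None:
    with _queues_lock:
        sse_queues.append(queue)


def unregister_queue(queue: asyncio.Queue) -> None:
    with _queues_lock:
        sse_queues.remove(queue)


def on_tool_stream_message(msg: dict[str, Any], loop: asyncio.AbstractEventLoop) -> None:
    # Snapshot under the lock, fan out outside it, so delivering messages
    # never races with registration or removal.
    with _queues_lock:
        targets = list(sse_queues)
    for queue in targets:
        asyncio.run_coroutine_threadsafe(queue.put(msg), loop)


async def _demo() -> dict[str, Any]:
    q: asyncio.Queue = asyncio.Queue()
    register_queue(q)
    loop = asyncio.get_running_loop()
    # Simulate the RxPY subscriber thread delivering one message.
    t = threading.Thread(target=on_tool_stream_message, args=({"type": "update"}, loop))
    t.start()
    msg = await asyncio.wait_for(q.get(), timeout=2.0)
    t.join()
    unregister_queue(q)
    return msg


received = asyncio.run(_demo())
print(received)
```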
Problem
Closes DIM-XXX
Solution
Breaking Changes
How to Test
Contributor License Agreement