feat(langgraph): add custom stream mode support in LangChain LLMAdapter #4511
base: main
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 7b0639d783
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
(review thread resolved; now outdated)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 6eb8409988
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
(review thread resolved)
@davidzhao I saw you’ve reviewed the related PR #3112 before.
Force-pushed from 6eb8409 to 15f7e53.
Note: CodeRabbit detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in its review comments. This may lead to a less comprehensive review.

📝 Walkthrough
Adds StreamMode-based streaming support to the LangGraph integration. Introduces validation and storage of the stream_mode parameter in LLMAdapter and LangGraphStream, supporting both single-mode and multi-mode streaming, with different chunk-emission logic depending on the mode type.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py`:
- Around lines 141-175: In the multi-mode loop (is_multi_mode), unexpected items that are not 2-tuples with a string mode currently fall through into the single-mode checks and are silently dropped, because self._stream_mode is a list. Fix this by adding an explicit else/guard in the async for loop to handle unexpected tuple shapes or non-string modes: when an item arrives in multi-mode but does not satisfy isinstance(item, tuple) and len(item) == 2 with a string mode, take a diagnostic path (e.g., log a warning via the existing logger/context or send an error chunk) and continue, and ensure the single-mode handling runs only when not is_multi_mode. Update the references around is_multi_mode, _stream_mode, _to_chat_chunk, _extract_message_chunk, and _event_ch.send_nowait to implement this defensive branch.
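A minimal sketch of the defensive branch this comment asks for, written as a standalone consumer so it runs on its own. The to_chat_chunk/send callables and the (mode, payload) tuple shape are assumptions drawn from the review discussion, not the PR's actual code:

```python
import logging
from typing import Any, AsyncIterator, Callable, Optional, Union

logger = logging.getLogger(__name__)


async def forward_stream(
    stream: AsyncIterator[Any],
    stream_mode: Union[str, list],
    to_chat_chunk: Callable[[Any], Optional[Any]],
    send: Callable[[Any], None],
) -> None:
    """Forward graph stream items to an event channel (sketch)."""
    is_multi_mode = isinstance(stream_mode, list)
    async for item in stream:
        if is_multi_mode:
            # Multi-mode streams are expected to yield (mode, payload) pairs.
            if isinstance(item, tuple) and len(item) == 2 and isinstance(item[0], str):
                mode, payload = item
                if mode in stream_mode:
                    chunk = to_chat_chunk(payload)
                    if chunk is not None:
                        send(chunk)
            else:
                # Defensive branch: surface unexpected shapes instead of
                # letting them fall through and get silently dropped.
                logger.warning("unexpected multi-mode stream item: %r", item)
            continue
        # Single-mode handling runs only when not is_multi_mode.
        chunk = to_chat_chunk(item)
        if chunk is not None:
            send(chunk)
```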
🧹 Nitpick comments (1)
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py (1)
50-55: Consider validating a non-empty stream_mode. An empty list stream_mode=[] passes validation (no unsupported modes) but would set is_multi_mode=True in _run() with no modes to match, potentially causing unexpected behavior where all items are silently dropped.
💡 Suggested validation

```diff
 modes = {stream_mode} if isinstance(stream_mode, str) else set(stream_mode)
+if not modes:
+    raise ValueError("stream_mode must specify at least one mode.")
 unsupported = modes - _SUPPORTED_MODES
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .claude/settings.json
- livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py (📄 CodeRabbit inference engine, AGENTS.md):
- Format code with ruff
- Run the ruff linter and auto-fix issues
- Run the mypy type checker in strict mode
- Maintain a line length of 100 characters maximum
- Ensure Python 3.9+ compatibility
- Use Google-style docstrings
Files:
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
⏰ Context from checks skipped due to timeout of 90000ms (3). You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms).
- GitHub Check: unit-tests
- GitHub Check: type-check (3.9)
- GitHub Check: type-check (3.13)
🔇 Additional comments (3)
.claude/settings.json (1)
1-10: LGTM! The Claude settings file appropriately scopes permissions to development tooling (mypy, pytest, make) and restricts web fetches to the LangChain documentation domain. This follows the principle of least privilege.
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py (2)
34-36: LGTM! Good design: the broad StreamMode type alias enables future extensibility and type-checker compatibility, while _SUPPORTED_MODES enforces the runtime constraints. This separation is clean.
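As a rough illustration of that separation (the literal members here are an assumption; the plugin may simply re-export langgraph's own StreamMode):

```python
from typing import Literal

# Broad alias for type-checker compatibility and future extensibility
# (hypothetical member list; langgraph defines more modes than shown here).
StreamMode = Literal["values", "updates", "messages", "custom", "debug"]

# Runtime constraint: the adapter only handles these modes.
_SUPPORTED_MODES = {"messages", "custom"}
```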
247-254: LGTM! The extended _to_chat_chunk correctly handles dict and object inputs from StreamWriter. The defensive isinstance(raw, str) checks ensure only valid string content is processed.
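A sketch of the normalization being praised, reduced to the string-extraction step; the function name extract_content is hypothetical, and the ChatChunk construction in the real helper is omitted since its exact shape isn't shown in this review:

```python
from typing import Any, Optional


def extract_content(raw: Any) -> Optional[str]:
    """Pull a text payload out of a StreamWriter item (sketch).

    Accepts plain strings, dicts with a "content" key, and objects with a
    .content attribute; anything else yields None.
    """
    if isinstance(raw, str):
        return raw
    if isinstance(raw, dict):
        content = raw.get("content")
    else:
        content = getattr(raw, "content", None)
    # Defensive check: only valid string content is forwarded.
    return content if isinstance(content, str) else None
```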
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py`:
- Around lines 46-54: In the constructor (__init__), add validation that rejects an empty list for the stream_mode parameter: detect when stream_mode is a list and is empty (so that modes would become an empty set) and raise a ValueError with a clear message, in the same style as the existing _SUPPORTED_MODES check. This prevents downstream logic that checks is_multi_mode (e.g., is_multi_mode = isinstance(self._stream_mode, list)) from treating an empty list as multi-mode and stalling event emission. Perform the check before computing unsupported = modes - _SUPPORTED_MODES so it fails fast on empty lists.
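A sketch of that fail-fast check in context, assuming the constructor shape implied by the review (other parameters omitted):

```python
from typing import Union

_SUPPORTED_MODES = {"messages", "custom"}


class LLMAdapter:
    """Constructor sketch; only the stream_mode handling is shown."""

    def __init__(self, graph: object, *, stream_mode: Union[str, list] = "messages") -> None:
        # Fail fast on empty lists before the unsupported-mode check, so an
        # empty stream_mode can never be treated as multi-mode downstream.
        if not isinstance(stream_mode, str) and not stream_mode:
            raise ValueError("stream_mode must specify at least one mode.")
        modes = {stream_mode} if isinstance(stream_mode, str) else set(stream_mode)
        unsupported = modes - _SUPPORTED_MODES
        if unsupported:
            raise ValueError(f"unsupported stream_mode(s): {sorted(unsupported)}")
        self._graph = graph
        self._stream_mode = stream_mode
```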
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
🧰 Additional context used
📓 Path-based instructions (1): **/*.py (same CodeRabbit guidelines from AGENTS.md as in the previous review)
Files:
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py
⏰ Context from checks skipped due to timeout of 90000ms (3). You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms).
- GitHub Check: type-check (3.9)
- GitHub Check: type-check (3.13)
- GitHub Check: unit-tests
🔇 Additional comments (4)
livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/langgraph.py (4)
79-90: Config propagation looks good. Passing stream_mode through the adapter keeps stream construction consistent.
94-118: Storing stream_mode on the stream is clear. It keeps the _run() logic straightforward and avoids re-deriving configuration.
119-175: Multi- vs single-mode handling is clean and readable. The branching logic is easy to follow and matches the intended behavior.
236-253: Nice normalization for dict/object payloads. Handling "content" in dicts and objects expands compatibility with custom stream payloads.
Force-pushed from 04b4d26 to c9bd9b2.
LangGraph supports not only messages streams but also custom streams, and custom mode is the only way to control what gets streamed in the LiveKit-LangGraph integration. So this PR:
- Adds a stream_mode parameter supporting the "messages" and "custom" modes
- Extends _to_chat_chunk() to handle dict and object inputs

A usage sketch follows below.
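This sketch pairs a graph node that pushes extra text through LangGraph's StreamWriter with the adapter's new parameter. The LLMAdapter(graph, stream_mode=...) call follows this PR's description; the graph itself is illustrative:

```python
from langchain_core.messages import AIMessage
from langgraph.config import get_stream_writer
from langgraph.graph import START, MessagesState, StateGraph

from livekit.plugins import langchain


def answer(state: MessagesState) -> dict:
    # Emitted on the "custom" channel and surfaced by the adapter.
    writer = get_stream_writer()
    writer({"content": "Let me look that up..."})
    return {"messages": [AIMessage(content="Here's what I found.")]}


builder = StateGraph(MessagesState)
builder.add_node("answer", answer)
builder.add_edge(START, "answer")
graph = builder.compile()

# Stream both model tokens ("messages") and StreamWriter payloads ("custom"):
llm = langchain.LLMAdapter(graph, stream_mode=["messages", "custom"])
```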