fix(core): officially support direct tool observation #815
Conversation
Walkthrough
This change adds support for direct tool output (DirectToolOutput) to the agent observation types, with the corresponding definitions newly added and exported.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Tool
    participant AgentExecutor
    participant MessageBuilder
    participant Storage
    Tool->>AgentExecutor: return DirectToolOutput / string / complex content
    AgentExecutor->>AgentExecutor: isAgentObservation() check (including DirectToolOutput)
    AgentExecutor->>MessageBuilder: observationToMessageContent(observation)
    MessageBuilder->>Storage: create ToolMessage(name, tool_call_id, content)
    Storage-->>AgentExecutor: store / acknowledge
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Code Review
This pull request introduces support for DirectToolOutput within the agent's observation handling. It updates the AgentObservation type definition and modifies the logic in the legacy executor, OpenAI agent, sub-agent, and database history to correctly identify and process direct tool outputs, typically by setting the ToolMessage content to an empty string when such outputs are detected. I have no feedback to provide.
🧹 Nitpick comments (1)
packages/core/src/llm-core/agent/legacy-executor.ts (1)
312-320: Accessing the replyEmitted property of a direct tool output this way is a type-safety hazard. On lines 314 and 319, the code uses index access: last.observation['replyEmitted']. Although the AgentDirectToolObservation type defines this optional property, AgentObservation is a union type; when observation is a string or AgentObservationComplexContent[], the access simply returns undefined without erroring, so it is safe at runtime. Still, once the code flagged by the TODO comment is removed, consider using a type guard to improve readability.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/llm-core/agent/legacy-executor.ts` around lines 312 - 320, The code in legacy-executor.ts indexes last.observation['replyEmitted'] (used when building the output object and replyEmitted field) but observation is a union (AgentDirectToolObservation | string | AgentObservationComplexContent[]); add a type guard that narrows last.observation to AgentDirectToolObservation before accessing replyEmitted (e.g., create/inline an isAgentDirectToolObservation predicate that checks typeof last.observation === 'object' && last.observation !== null && 'replyEmitted' in last.observation), then use that guard when computing output (toOutput(last.observation) branch) and replyEmitted to avoid unsafe indexing and improve readability.
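The guard the review asks for can be sketched as follows. The interfaces here are hypothetical stand-ins for the real definitions in packages/core/src/llm-core/agent/types.ts, and the lc_direct_tool_output discriminant is assumed from the PR summary rather than taken from the actual source:

```typescript
// Hypothetical shapes mirroring the union described in the review; the real
// definitions live in packages/core/src/llm-core/agent/types.ts.
interface AgentDirectToolObservation {
    lc_direct_tool_output: true
    replyEmitted?: boolean
}

interface AgentObservationComplexContent {
    type: string
    text?: string
}

type AgentObservation =
    | string
    | AgentDirectToolObservation
    | AgentObservationComplexContent[]

// Narrow the union before touching replyEmitted instead of indexing blindly.
function isAgentDirectToolObservation(
    obs: AgentObservation
): obs is AgentDirectToolObservation {
    return (
        typeof obs === 'object' &&
        obs !== null &&
        !Array.isArray(obs) &&
        'lc_direct_tool_output' in obs
    )
}

// Safe accessor: false for string / complex-content observations.
function replyEmittedOf(obs: AgentObservation): boolean {
    return isAgentDirectToolObservation(obs) && obs.replyEmitted === true
}
```

With the predicate in place, both the toOutput(last.observation) branch and the replyEmitted field can be computed behind a single narrowing check instead of two index accesses.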
📒 Files selected for processing (5)
- packages/core/src/llm-core/agent/legacy-executor.ts
- packages/core/src/llm-core/agent/openai/index.ts
- packages/core/src/llm-core/agent/sub-agent.ts
- packages/core/src/llm-core/agent/types.ts
- packages/core/src/llm-core/memory/message/database_history.ts
Reuse a shared observationToMessageContent helper so direct tool observations are converted consistently and ToolMessage content stays type-safe across agent flows.
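A minimal sketch of such a shared helper, assuming the observation union described in the walkthrough; the type names and shapes below are illustrative, not the package's actual exports:

```typescript
// Illustrative stand-ins; the actual types are exported from packages/core.
type ComplexContent = { type: 'text'; text: string }
type Observation =
    | string
    | ComplexContent[]
    | { lc_direct_tool_output: true; replyEmitted?: boolean }

// One conversion path for every agent flow: strings pass through, complex
// content keeps its structure, and direct tool outputs collapse to an empty
// string because their content was already delivered to the user directly.
function observationToMessageContent(
    observation: Observation
): string | ComplexContent[] {
    if (typeof observation === 'string') return observation
    if (Array.isArray(observation)) return observation
    return ''
}
```

Centralizing this in one helper keeps the "empty string for direct outputs" rule consistent across the legacy executor, OpenAI agent, sub-agent, and database history.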
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/extension-agent/src/sub-agent/session.ts`:
- Around lines 9-12: The named imports from 'koishi-plugin-chatluna/llm-core/agent' are not in alphabetical order, which triggers the sort-imports lint warning; reorder the import specifiers alphabetically (ensuring observationToMessageContent and AgentStep appear in alphabetical order) to silence the warning while keeping the exported names unchanged (locate the edit by the imported symbols observationToMessageContent and AgentStep).
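For reference, ESLint's sort-imports rule compares specifiers case-sensitively by default (ignoreCase: false), so uppercase 'AgentStep' sorts before lowercase 'observationToMessageContent', matching plain Array.prototype.sort():

```typescript
// sort-imports (with its default ignoreCase: false) compares specifiers by
// code point, so 'AgentStep' (uppercase A) sorts before
// 'observationToMessageContent' (lowercase o).
const specifiers = ['observationToMessageContent', 'AgentStep']
const sorted = [...specifiers].sort()
// Hence the fixed import would list AgentStep first:
//   import { AgentStep, observationToMessageContent }
//       from 'koishi-plugin-chatluna/llm-core/agent'
console.log(sorted)
```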
📒 Files selected for processing (6)
- packages/core/src/llm-core/agent/executor.ts
- packages/core/src/llm-core/agent/legacy-executor.ts
- packages/core/src/llm-core/agent/openai/index.ts
- packages/core/src/llm-core/agent/sub-agent.ts
- packages/core/src/llm-core/memory/message/database_history.ts
- packages/extension-agent/src/sub-agent/session.ts
✅ Files skipped from review due to trivial changes (2)
- packages/core/src/llm-core/agent/sub-agent.ts
- packages/core/src/llm-core/agent/executor.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/core/src/llm-core/memory/message/database_history.ts
- packages/core/src/llm-core/agent/openai/index.ts
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Summary
DirectToolOutput is now part of AgentObservation, so the final reply object is no longer prematurely treated as an invalid observation and coerced to a string during the agent flow. Final-reply tools such as character_reply can now terminate normally through LangChain's native lc_direct_tool_output instead of relying on an extra fallback.
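As a sketch of what a final-reply tool's return value might look like under this convention: the PR states that the lc_direct_tool_output marker comes from LangChain, but the function name and the other field names here (content, replyEmitted) are illustrative assumptions:

```typescript
// Sketch only: the marker field lc_direct_tool_output is taken from the PR
// summary; the content and replyEmitted fields are illustrative assumptions.
interface DirectToolOutput {
    lc_direct_tool_output: true
    content: string
    replyEmitted?: boolean
}

// A final-reply tool returns a marked object so the agent's observation
// check treats it as a terminal observation instead of coercing it to a
// string and continuing the loop.
function characterReply(text: string): DirectToolOutput {
    return { lc_direct_tool_output: true, content: text, replyEmitted: true }
}
```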