
fix(core): Treat empty responses as failures and let the model retry #809

Merged
dingyi222666 merged 4 commits into ChatLunaLab:v1-dev from CookSleep:fix/empty-response-retry
Apr 5, 2026

Conversation

@CookSleep
Member

Summary of changes

  • Treat empty non-streaming responses as API_REQUEST_FAILED and let the model layer's existing maxRetries handle them.
  • Streaming responses now check whether valid text or a tool call was received, so an empty stream is no longer treated as a successful return.
  • Preserve the success path for tool-call-only responses so they are not misclassified as empty-response failures.
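The rule described in the bullets above can be sketched as follows; `FakeResponse` and `isEmptyResponse` are illustrative names, not the actual types or methods in model.ts:

```typescript
// Illustrative sketch of the empty-response rule; the shapes and names
// here are hypothetical stand-ins, not the real model.ts API.
interface FakeResponse {
    content: string
    toolCalls: unknown[]
}

// A response counts as empty only when it has neither trimmed text nor
// tool calls; a tool-call-only response is still a success.
function isEmptyResponse(r: FakeResponse): boolean {
    return r.content.trim().length === 0 && r.toolCalls.length === 0
}

console.log(isEmptyResponse({ content: '   ', toolCalls: [] })) // true
console.log(isEmptyResponse({ content: '', toolCalls: [{ name: 'search' }] })) // false
```

An empty response under this rule is thrown as a failure and picked up by the existing retry loop, rather than silently returned to the caller.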

Verification

  • yarn fast-build core

@coderabbitai
Contributor

coderabbitai bot commented Apr 1, 2026

Walkthrough

Response validation on both the streaming and non-streaming paths changes from merely detecting whether any data chunk was received (hasChunk) to detecting actual response content or evidence of a tool/function call (hasResponse); the retry fallback becomes exponential backoff, and the tool-call detection type and its call sites are extended.

Changes

Cohort / File(s) Summary
Stream and single-response validation and retry
packages/core/src/llm-core/platform/model.ts
Adds a hasResponse check (non-empty text or tool/function-call evidence); removes the post-hoc _ensureChunksReceived (hasChunk) validation; both the end-of-stream path and the non-streaming _generateWithRetry path now use hasResponse to decide whether to throw ChatLunaError(API_REQUEST_FAILED); the retry wait becomes exponential backoff, sleep(2000 * 2 ** attempt)
Tool-call/chunk detection signature change
packages/core/src/llm-core/platform/model.ts
The _hasToolCallChunk signature is extended to accept `AIMessage | AIMessageChunk`
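The exponential-backoff change summarized above can be sketched like this; `backoffMs` and `withRetries` are illustrative names, and only the `2000 * 2 ** attempt` formula comes from the changeset:

```typescript
// Wait duration per the changeset's formula: attempt 0 → 2s, 1 → 4s, 2 → 8s.
const backoffMs = (attempt: number): number => 2000 * 2 ** attempt

const sleep = (ms: number): Promise<void> =>
    new Promise((resolve) => setTimeout(resolve, ms))

// Hypothetical retry wrapper showing where the backoff sleep would sit;
// the real logic lives in _generateWithRetry in model.ts.
async function withRetries<T>(
    fn: () => Promise<T>,
    maxRetries = 3
): Promise<T> {
    let lastError: unknown
    for (let attempt = 0; attempt < maxRetries; attempt++) {
        try {
            return await fn()
        } catch (e) {
            lastError = e
            // No sleep after the final failed attempt.
            if (attempt < maxRetries - 1) await sleep(backoffMs(attempt))
        }
    }
    throw lastError
}

console.log(backoffMs(0), backoffMs(1), backoffMs(2)) // 2000 4000 8000
```

Compared with a fixed delay, exponential backoff gives a transiently failing upstream API progressively more time to recover between attempts.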

Sequence Diagram(s)

(none)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Poem

🐰 In the stream I listen close, chunks from meaning I discern,
Empty noise counts as no reply, each function call I note in turn,
With a gentle exponential hop, off to test the bunny goes ✨🐇

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The PR title clearly describes the main change: treating empty responses as failures and delegating to model retry logic, which aligns with the core modifications in the changeset.
Description check ✅ Passed The PR description is directly related to the changeset, explaining the handling of empty responses in both streaming and non-streaming cases, along with validation steps.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
packages/core/src/llm-core/platform/model.ts (2)

330-337: The type-signature extension is reasonable; the formatting needs fixing

Widening the parameter type from AIMessageChunk to AIMessage | AIMessageChunk makes sense, since the method is now reused on the non-streaming path (lines 507-509). The cast when accessing tool_call_chunks is necessary because only AIMessageChunk has that property.

🛠️ Fix Prettier formatting
     private _hasToolCallChunk(message?: AIMessage | AIMessageChunk): boolean {
         return (
             (message?.tool_calls?.length ?? 0) > 0 ||
-            ((message as AIMessageChunk | undefined)?.tool_call_chunks?.length ??
-                0) > 0 ||
+            ((message as AIMessageChunk | undefined)?.tool_call_chunks
+                ?.length ?? 0) > 0 ||
             (message?.invalid_tool_calls?.length ?? 0) > 0
         )
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/llm-core/platform/model.ts` around lines 330 - 337, The
_hasToolCallChunk method signature change to accept AIMessage | AIMessageChunk
is fine; fix the formatting so it meets Prettier/style rules: ensure the
conditional expression is line-broken and indented cleanly, keep the explicit
cast for accessing tool_call_chunks on AIMessageChunk (e.g., (message as
AIMessageChunk | undefined)?.tool_call_chunks) and align the nullish coalescing
checks for tool_calls, tool_call_chunks, and invalid_tool_calls so the
parentheses and operators are on the correct lines to satisfy the project's
formatter.

256-260: The logic is correct, but there is a formatting issue

The hasResponse tracking correctly distinguishes "received a data chunk" from "received valid content". Static analysis flags a formatting issue here.

🛠️ Fix Prettier formatting
                     hasResponse =
                         hasResponse ||
                         hasTool ||
-                        getMessageContent(chunk.message.content).trim().length > 0
+                        getMessageContent(chunk.message.content).trim()
+                            .length > 0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/llm-core/platform/model.ts` around lines 256 - 260, the
logic is correct but there is a Prettier formatting issue: in the block
containing the hasResponse update and the yield chunk statement (involving
hasResponse, hasTool, getMessageContent, and chunk), apply the project's
Prettier/formatting rules to fix the line breaks and indentation so the
conditional expression and the yield statement follow the unified style (e.g.
split them onto separate lines with correct spacing/semicolons), then save so
the static checks pass.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@packages/core/src/llm-core/platform/model.ts`:
- Around line 330-337: The _hasToolCallChunk method signature change to accept
AIMessage | AIMessageChunk is fine; fix the formatting so it meets
Prettier/style rules: ensure the conditional expression is line-broken and
indented cleanly, keep the explicit cast for accessing tool_call_chunks on
AIMessageChunk (e.g., (message as AIMessageChunk | undefined)?.tool_call_chunks)
and align the nullish coalescing checks for tool_calls, tool_call_chunks, and
invalid_tool_calls so the parentheses and operators are on the correct lines to
satisfy the project's formatter.
- Around line 256-260: The logic is correct but there is a Prettier formatting
issue: in the block containing the hasResponse update and the yield chunk
statement (involving hasResponse, hasTool, getMessageContent, and chunk), apply
the project's Prettier/formatting rules to fix the line breaks and indentation
so the conditional expression and the yield statement follow the unified style
(e.g. split them onto separate lines with correct spacing/semicolons), then
save so the static checks pass.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: fa60dab4-877f-439b-b914-d78f7e70a20d

📥 Commits

Reviewing files that changed from the base of the PR and between 086a1f7 and d805a89.

📒 Files selected for processing (1)
  • packages/core/src/llm-core/platform/model.ts

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request improves response validation in ChatLunaChatModel by introducing a hasResponse flag to track whether a stream or completion contains actual content or tool calls, ensuring that empty responses are correctly identified as failures. Feedback suggests optimizing the streaming loop to avoid redundant string processing once a response is detected and notes that the final validation check is redundant for streaming paths already handled by internal assertions.

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
packages/core/src/llm-core/platform/model.ts (1)

516-524: Empty-response validation on the non-streaming path

The _hasResponse check for non-streaming responses is correct and necessary. For streaming responses, _streamResponseChunks already performs this check internally, but keeping it here as a safety net is also reasonable.

A minor semantic note: per the relevant code snippets, API_REQUEST_FAILED (103) semantically denotes an API/transport-layer failure in the error-code definitions. Using it for "the response was empty" may confuse upstream error handling (e.g. code that distinguishes network failures from a model returning no content). If finer-grained error distinctions are needed in the future, consider adding a dedicated code such as EMPTY_RESPONSE. For now, the implementation effectively reuses the existing retry machinery and satisfies the PR's goal.
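A minimal sketch of that suggestion; `ErrorCode` and `ModelError` are hypothetical stand-ins for the real ChatLunaErrorCode / ChatLunaError, and the numeric value 104 is invented for illustration (only 103 = API_REQUEST_FAILED comes from the review text):

```typescript
// Hypothetical error types; EMPTY_RESPONSE = 104 is an invented example
// value, not a code that exists in ChatLuna today.
enum ErrorCode {
    API_REQUEST_FAILED = 103,
    EMPTY_RESPONSE = 104
}

class ModelError extends Error {
    constructor(
        public code: ErrorCode,
        message: string
    ) {
        super(message)
    }
}

// With a dedicated code, upstream handlers can tell a transport failure
// apart from a model that simply returned no content.
function validateContent(text: string): void {
    if (text.trim().length === 0) {
        throw new ModelError(ErrorCode.EMPTY_RESPONSE, 'model returned no content')
    }
}
```

The retry loop can still treat both codes as retryable while logging and metrics stay unambiguous about the underlying cause.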

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/llm-core/platform/model.ts` around lines 516 - 524, The
non-stream empty-response check inside model.ts currently throws ChatLunaError
with ChatLunaErrorCode.API_REQUEST_FAILED which conflates transport failures
with empty model output; create a new enum value
ChatLunaErrorCode.EMPTY_RESPONSE (or similar) and throw
ChatLunaError(ChatLunaErrorCode.EMPTY_RESPONSE) in the branch where
!_hasResponse(response.message ...) instead of API_REQUEST_FAILED, leaving
_streamResponseChunks and _hasResponse logic intact; update any error
constructors or handlers as needed to recognize the new EMPTY_RESPONSE code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@packages/core/src/llm-core/platform/model.ts`:
- Around line 516-524: The non-stream empty-response check inside model.ts
currently throws ChatLunaError with ChatLunaErrorCode.API_REQUEST_FAILED which
conflates transport failures with empty model output; create a new enum value
ChatLunaErrorCode.EMPTY_RESPONSE (or similar) and throw
ChatLunaError(ChatLunaErrorCode.EMPTY_RESPONSE) in the branch where
!_hasResponse(response.message ...) instead of API_REQUEST_FAILED, leaving
_streamResponseChunks and _hasResponse logic intact; update any error
constructors or handlers as needed to recognize the new EMPTY_RESPONSE code.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 25f7b321-ab35-4c55-802a-840ffc5e5f91

📥 Commits

Reviewing files that changed from the base of the PR and between d805a89 and 561717b.

📒 Files selected for processing (1)
  • packages/core/src/llm-core/platform/model.ts

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
packages/core/src/llm-core/platform/model.ts (1)

345-357: The _hasResponse detection logic is thorough, with one optional optimization.

The method covers multiple response sources: content, tool-call chunks, and the legacy formats in additional_kwargs. Per the relevant code snippets, the adapters (shared-adapter, gemini, claude) place tool calls in tool_call_chunks rather than additional_kwargs.tool_calls.

The check of additional_kwargs.tool_calls at lines 353-354 may be redundant, since _hasToolCallChunk already checks tool_call_chunks. But keeping it as defensive code for edge cases in raw API responses is also reasonable.
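The check ordering the comment suggests could look roughly like this; the `Msg` shape and the free-standing `hasResponse` function are illustrative, not the real LangChain AIMessage type or the method in model.ts:

```typescript
// Illustrative message shape; the real code works with LangChain's
// AIMessage / AIMessageChunk types.
interface Msg {
    content: string
    tool_calls?: unknown[]
    additional_kwargs?: { tool_calls?: unknown[]; function_call?: unknown }
}

function hasResponse(m: Msg): boolean {
    // Structured tool-call fields first, since adapters populate these.
    if ((m.tool_calls?.length ?? 0) > 0) return true
    // Legacy additional_kwargs formats kept only as a defensive fallback.
    if ((m.additional_kwargs?.tool_calls?.length ?? 0) > 0) return true
    if (m.additional_kwargs?.function_call !== undefined) return true
    // Finally, non-empty text content.
    return m.content.trim().length > 0
}
```

Ordering the checks this way makes the common adapter path the first hit while preserving the legacy fallbacks, so behavior is unchanged.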

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/llm-core/platform/model.ts` around lines 345 - 357, The
_hasResponse method currently checks content, _hasToolCallChunk,
additional_kwargs.tool_calls, and additional_kwargs.function_call; to optimize
and clarify intent, reorder the logic to call this._hasToolCallChunk(message)
first (since adapters put tool calls in tool_call_chunks) and keep the
additional_kwargs.tool_calls check only as a defensive fallback; ensure you
still null-check and cast (as unknown[]|undefined) when inspecting
message?.additional_kwargs?.tool_calls and preserve the function_call check so
behavior doesn't change, referencing the _hasResponse and _hasToolCallChunk
symbols.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@packages/core/src/llm-core/platform/model.ts`:
- Around line 345-357: The _hasResponse method currently checks content,
_hasToolCallChunk, additional_kwargs.tool_calls, and
additional_kwargs.function_call; to optimize and clarify intent, reorder the
logic to call this._hasToolCallChunk(message) first (since adapters put tool
calls in tool_call_chunks) and keep the additional_kwargs.tool_calls check only
as a defensive fallback; ensure you still null-check and cast (as
unknown[]|undefined) when inspecting message?.additional_kwargs?.tool_calls and
preserve the function_call check so behavior doesn't change, referencing the
_hasResponse and _hasToolCallChunk symbols.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: b144baf8-3167-4f23-bdb8-30c398e6f76b

📥 Commits

Reviewing files that changed from the base of the PR and between 561717b and e1855c3.

📒 Files selected for processing (1)
  • packages/core/src/llm-core/platform/model.ts

@dingyi222666 dingyi222666 merged commit d24ed26 into ChatLunaLab:v1-dev Apr 5, 2026
5 checks passed