fix(session): safeguard message conversion against invalid prompts (#… #8888
base: dev
Conversation
The following comment was made by an LLM, it may be inaccurate: No duplicate PRs found
```typescript
const filtered = result
  .filter((msg) => msg.parts.length > 0)
  .filter((msg) => msg.parts.some((part) => part.type !== "step-start"))
```
The filter can do both at the same time. And I still think the empty string checks across the codebase should handle spaces properly: #7086

```typescript
const filtered = result.filter((msg) =>
  msg.parts.length > 0 &&
  msg.parts.some((part) => part.type !== "step-start")
)
```
Adding unit tests would prevent such a regression from happening again.
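A minimal sketch of what such a regression test could look like, assuming the filter is extracted into a standalone helper. The helper name `filterConvertibleMessages`, the simplified message types, and the use of `bun:test` are illustrative assumptions, not code from this PR:

```typescript
import { describe, expect, test } from "bun:test"

// Simplified stand-ins for the UI message shape used here.
type UIPart = { type: string }
type UIMessage = { role: string; parts: UIPart[] }

// Hypothetical extraction of the filter introduced by this PR.
function filterConvertibleMessages(messages: UIMessage[]): UIMessage[] {
  return messages.filter(
    (msg) =>
      msg.parts.length > 0 &&
      msg.parts.some((part) => part.type !== "step-start"),
  )
}

describe("message conversion safeguards", () => {
  test("drops messages with no parts", () => {
    const input: UIMessage[] = [{ role: "assistant", parts: [] }]
    expect(filterConvertibleMessages(input)).toHaveLength(0)
  })

  test("drops messages that only contain step-start parts", () => {
    const input: UIMessage[] = [
      { role: "assistant", parts: [{ type: "step-start" }] },
    ]
    expect(filterConvertibleMessages(input)).toHaveLength(0)
  })

  test("keeps messages with real content", () => {
    const input: UIMessage[] = [{ role: "user", parts: [{ type: "text" }] }]
    expect(filterConvertibleMessages(input)).toHaveLength(1)
  })
})
```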
Can confirm that I've encountered the same problem and this fixed it.
@jerome-benoit Thank you for the review! I've updated the PR with the following change:
- Optimization: combined the .filter() calls as requested.

Ready for re-review.
What does this PR do?
Fixes #8862 - Resolves "Invalid prompt at compaction" error (AI_InvalidPromptError) during session compaction.
Problem: The error "The messages must be a ModelMessage[]" occurred when convertToModelMessages failed validation. This happened because toModelMessage was occasionally passing (both cases are sketched below):
- Empty messages (with 0 parts) to the converter.
- undefined values for attachments or providerMetadata in tool-result parts, which some AI providers (such as Anthropic) check strictly.
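An illustrative sketch of the two problematic input shapes; the part fields loosely follow the AI SDK UI message format and are assumptions for illustration, not code from the repository:

```typescript
// Case 1: a message with zero parts is handed to convertToModelMessages.
const emptyMessage = { role: "assistant", parts: [] as { type: string }[] }

// Case 2: a tool-result part whose optional fields are left undefined,
// which strict providers (e.g. Anthropic) reject during validation.
const underspecifiedToolResult = {
  role: "assistant",
  parts: [
    {
      type: "tool-result",
      attachments: undefined, // should default to []
      providerMetadata: undefined, // should default to {}
    },
  ],
}
```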
Solution:
- Filter empty messages: added a filter to remove messages with parts.length === 0 before conversion.
- Safeguard tool results (a sketch of this safeguard follows the snippet below):
  - Default attachments to [] if undefined.
  - Default callProviderMetadata to {} if undefined.
```typescript
const filtered = result
  .filter((msg) => msg.parts.length > 0) // Filter out empty messages
  .filter((msg) => msg.parts.some((part) => part.type !== "step-start"))

return convertToModelMessages(filtered, {
  tools: options?.tools,
})
```
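The tool-result safeguard described above is not shown in the snippet. A minimal sketch of what it could look like, assuming the field names state.attachments and callProviderMetadata from the PR description; the part type and function are hypothetical, not code copied from the repository:

```typescript
// Assumed shape of a tool-result part, based on the PR description.
type ToolResultPart = {
  type: "tool-result"
  state?: { attachments?: unknown[] }
  callProviderMetadata?: Record<string, unknown>
}

// Default undefined optional fields so strict provider validators
// (e.g. Anthropic) never receive `undefined`.
function safeguardToolResult(part: ToolResultPart): ToolResultPart & {
  attachments: unknown[]
  callProviderMetadata: Record<string, unknown>
} {
  return {
    ...part,
    attachments: part.state?.attachments ?? [],
    callProviderMetadata: part.callProviderMetadata ?? {},
  }
}
```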
How did you verify your code works?
- Verified by code inspection that the filtered array contains only messages with content, satisfying the ModelMessage requirements.
- Verified that part.state.attachments and part.metadata access is now null-safe, preventing undefined values from reaching the AI SDK validators.