Disclaimer: LLM Generated Content Contained Herein. I verified it before posting.
Summary
After condensation, the CondensationSummaryEvent can be inserted at a position that breaks action-observation pairs or splits parallel tool call batches. This causes LLM API errors because the message ordering requirements are violated.
Problem
The LLMSummarizingCondenser uses a fixed summary_offset equal to keep_first when creating a Condensation event. However, after View.from_events() applies filtering (via _enforce_batch_atomicity and filter_unmatched_tool_calls), the actual kept_events list structure may differ from the original.
This causes the summary to be inserted at an unsafe position:
- Between an ActionEvent and its matching ObservationEvent - The LLM API expects tool results to immediately follow the assistant's tool call
- Between two ActionEvents from the same parallel batch - Parallel tool calls must be grouped in a single assistant message
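To make the mismatch concrete, here is a minimal, self-contained sketch. It uses plain dataclasses rather than the real SDK event types, and the filtering step is a simplified stand-in for what View.from_events() does; the tool_call_ids are abbreviated to match the debug log below. It shows how an offset computed against the unfiltered list can land inside an action-observation pair once filtering shifts later events:

```python
from dataclasses import dataclass

@dataclass
class Ev:
    kind: str                       # "system", "message", "action", "observation", "summary"
    tool_call_id: str | None = None

# Events as the condenser saw them when it computed summary_offset = keep_first = 4.
original = [
    Ev("system"),
    Ev("message"),
    Ev("action", "toolu_01TL"),     # dropped by filtering in this example
    Ev("observation", "toolu_01TL"),
    Ev("action", "toolu_01Vk"),
    Ev("observation", "toolu_01Vk"),
]

# Simplified stand-in for the filtering in View.from_events(): dropping one
# early event shifts every later event one slot to the left.
filtered = [
    e for e in original
    if not (e.kind == "action" and e.tool_call_id == "toolu_01TL")
]

offset = 4  # fixed summary_offset, computed against the *unfiltered* list
filtered.insert(offset, Ev("summary"))

for i, e in enumerate(filtered):
    print(i, e.kind, e.tool_call_id or "")
# The summary lands between the "toolu_01Vk" action and its observation,
# reproducing the unsafe ordering shown in the debug output below.
```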
Observed Behavior
Debug output showing the bug:
[DEBUG] Events BEFORE summary insertion:
[0] SystemPromptEvent
[1] MessageEvent
[2] ObservationEvent [tool_call_id=toolu_01TL...] ← orphaned from previous condensation
[3] ActionEvent [tool_call_id=toolu_01Vk...]
[4] ObservationEvent [tool_call_id=toolu_01Vk...] --> INSERT HERE (offset=4)
...
[DEBUG] ⚠️ WARNING: Inserting BETWEEN action and observation!
Action at [3] with tool_call_id=toolu_01Vk...
Observation at [4] with tool_call_id=toolu_01Vk...
After insertion, the event order becomes:
ActionEvent → CondensationSummaryEvent → ObservationEvent
When converted to LLM messages:
assistant (tool_call) → user (summary) → tool (result)
This violates Anthropic's API requirement that tool results must immediately follow the assistant's tool_use block.
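For reference, this is roughly the message payload that results. It is a hand-written illustration, not SDK output; the tool name, input, and result contents are invented, and only the ordering matters:

```python
# Illustrative Anthropic-style message list after the bad insertion.
messages = [
    {
        "role": "assistant",
        "content": [
            {"type": "tool_use", "id": "toolu_01Vk", "name": "bash", "input": {"command": "ls"}},
        ],
    },
    # The condensation summary is rendered as an ordinary user text message...
    {
        "role": "user",
        "content": [{"type": "text", "text": "Summary of the condensed conversation..."}],
    },
    # ...so the tool_result no longer appears in the user turn immediately
    # following the tool_use block, and the API rejects the request.
    {
        "role": "user",
        "content": [{"type": "tool_result", "tool_use_id": "toolu_01Vk", "content": "file1 file2"}],
    },
]
```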
Impact
- LLM API errors due to malformed message sequences
- Conversation state corruption after condensation
- Intermittent failures (they only occur when summary_offset happens to land at an unsafe position)
Root Cause
View.from_events() inserts the summary at the offset stored in the Condensation event without verifying the position is safe after filtering has modified the event list structure.
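One possible direction for a fix, sketched against the simplified Ev type from the earlier example rather than the real View internals: after filtering, validate the stored offset and move it to the nearest safe position before inserting the summary.

```python
def safe_summary_offset(events: list[Ev], offset: int) -> int:
    """Return the nearest offset >= the requested one that is safe to insert at.

    Sketch only: real events would expose parallel-batch membership directly
    (e.g. via a shared response id) rather than being approximated here by
    "consecutive actions".
    """
    offset = min(max(offset, 0), len(events))
    while 0 < offset < len(events):
        prev, nxt = events[offset - 1], events[offset]
        # Unsafe: would separate a tool call from its matching result.
        splits_pair = (
            prev.kind == "action"
            and nxt.kind == "observation"
            and prev.tool_call_id == nxt.tool_call_id
        )
        # Unsafe (approximation): would split a parallel tool-call batch.
        splits_batch = prev.kind == "action" and nxt.kind == "action"
        if not (splits_pair or splits_batch):
            return offset
        offset += 1
    return offset
```

With the earlier example, safe_summary_offset(filtered, 4) returns 5, so the summary is inserted after the observation instead of between the action-observation pair.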
See: https://openhands-ai.slack.com/archives/C06U8UTKSAD/p1765744162377229