
Upgraded harness #38

Merged

xprilion merged 5 commits into main from dev
May 3, 2026

Conversation

@xprilion
Owner

xprilion commented May 3, 2026

This pull request introduces several improvements to the agent's context management, prompt handling, and compatibility with Anthropic's API. The main focus is on making context compaction more robust and research-friendly, improving tool output handling, and ensuring that system/user message alternation is Anthropic-compliant. The system prompt is also enhanced to clarify how persistent memory and long-running tasks should be used.

Agent Context Management & Compaction:

  • Implements a structured, four-phase context compaction in ContextManager, including pruning old tool outputs, protecting a token-budgeted tail, generating structured research summaries, and assembling compressed messages. The summary now uses a research-adapted template and supports iterative updates. [1] [2] [3]
  • Adds a more accurate token estimation using tiktoken if available, falling back to the previous method if not.
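The fallback behavior described above can be sketched roughly as follows (function name and the chars-per-token heuristic are illustrative assumptions, not the PR's actual code):

```python
def estimate_tokens(text: str) -> int:
    """Count tokens with tiktoken when it is installed; otherwise fall
    back to a rough character-based heuristic."""
    try:
        import tiktoken
        enc = tiktoken.get_encoding("cl100k_base")
        return len(enc.encode(text))
    except ImportError:
        # Rough heuristic: roughly 4 characters per token for English text.
        return max(1, len(text) // 4)
```

Wrapping the import in a `try`/`except` keeps tiktoken an optional dependency while still improving accuracy when it is available.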

Prompt & System Message Improvements:

  • Updates system_prompt.yaml to inject memory, project, and knowledge context dynamically, and provides clearer instructions for using persistent memory, session search, and long-running background tasks. [1] [2]
  • Per-message mode hints are now appended to user messages (instead of using system role), ensuring compatibility with Anthropic's strict alternation rules. [1] [2]
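The hint-appending change can be illustrated with a minimal sketch (names are assumptions): instead of emitting a separate `{"role": "system"}` message mid-conversation, the mode hint is folded into the most recent user turn so strict user/assistant alternation is preserved.

```python
def append_mode_hint(messages: list[dict], hint: str) -> list[dict]:
    """Append a mode hint to the last user message in place, rather than
    inserting a new system-role message mid-conversation."""
    for msg in reversed(messages):
        if msg["role"] == "user":
            msg["content"] = f"{msg['content']}\n\n[mode: {hint}]"
            break
    return messages
```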

Anthropic API Compatibility & Tool Output Handling:

  • Refactors message conversion for Anthropic to merge consecutive tool outputs into a single user message, and merges consecutive user messages to comply with Anthropic's user/assistant alternation.
  • Sets system prompt as a typed block and adds prompt caching headers for Anthropic API calls. [1] [2]
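A minimal sketch of the alternation fix (function and variable names are illustrative; the `cache_control` block shape follows Anthropic's documented prompt-caching API):

```python
SYSTEM_PROMPT = "You are a research agent."  # placeholder prompt

# System prompt as a typed content block with a prompt-caching marker.
system_blocks = [
    {"type": "text", "text": SYSTEM_PROMPT,
     "cache_control": {"type": "ephemeral"}},
]

def merge_consecutive(messages: list[dict]) -> list[dict]:
    """Collapse consecutive messages with the same role into one message,
    so user/assistant turns strictly alternate as Anthropic requires."""
    merged: list[dict] = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            merged[-1]["content"] += "\n\n" + msg["content"]
        else:
            merged.append({"role": msg["role"], "content": msg["content"]})
    return merged
```

Merging consecutive tool outputs into one user message is the same operation: tool results arrive as user-role content, so back-to-back results collapse into a single turn.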

User Guidance & Nudges:

  • Appends urgent compaction and knowledge persistence hints directly to the last user message, and ensures only one hint is injected per loop iteration, prioritizing doom loop detection. [1] [2]

These changes collectively improve the agent's ability to manage long research conversations, reduce context loss, and provide a more seamless experience across different LLM providers.


@sonarqubecloud

sonarqubecloud Bot commented May 3, 2026

Quality Gate failed

Failed conditions
19 Security Hotspots

See analysis details on SonarQube Cloud

@xprilion xprilion merged commit 054ea18 into main May 3, 2026
5 of 6 checks passed