
context_compaction added #4398

Draft

jayy-77 wants to merge 1 commit into google:main from jayy-77:context_compaction

Conversation

@jayy-77 commented Feb 6, 2026

Please ensure you have read the contribution guide before creating a pull request.

Link to Issue or Description of Change

#4146

1. Link to an existing issue (if applicable):

  • Closes: #issue_number
  • Related: #issue_number

2. Or, if no issue exists, describe the change:

If applicable, please follow the issue templates to provide as much detail as
possible.

Problem:
A clear and concise description of what the problem is.

Solution:
A clear and concise description of what you want to happen and why you chose
this solution.

Testing Plan

Please describe the tests that you ran to verify your changes. This is required
for all PRs that are not small documentation or typo fixes.

Unit Tests:

  • I have added or updated unit tests for my change.
  • All unit tests pass locally.

Please include a summary of the passing pytest results.

Manual End-to-End (E2E) Tests:

Please provide instructions on how to manually test your changes, including any
necessary setup or configuration. Please provide logs or screenshots to help
reviewers better understand the fix.

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • I have manually tested my changes end-to-end.
  • Any dependent changes have been merged and published in downstream modules.

Additional context

Add any other context or screenshots about the feature request here.

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @jayy-77, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the event compaction mechanism by introducing configurable thresholds for token count and recent event retention. These additions provide more control over when and how events are summarized, improving the efficiency and relevance of context management within LLM-backed applications.

Highlights

  • New Compaction Configuration: Introduced token_threshold and retain_recent_events fields to EventsCompactionConfig for more granular control over when and how event compaction occurs (see the config sketch after this list).
  • Conditional Event Summarization: The maybe_summarize_events function now utilizes the new configuration fields to conditionally trigger compaction based on token count and to retain a specified number of recent events.
  • Integration Test for Compaction: Added a new integration test, test_context_compaction_with_thresholds, to validate the functionality of the updated context compaction logic.
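
For orientation, the two new fields would sit on the config roughly as below. This is a sketch only: the pydantic base class and the elided pre-existing fields are assumptions, not taken from this PR.

    from typing import Optional

    from pydantic import BaseModel


    class EventsCompactionConfig(BaseModel):
        # ... pre-existing compaction fields elided ...
        token_threshold: Optional[int] = None  # compact only once the context reaches this many tokens
        retain_recent_events: Optional[int] = None  # keep this many trailing events out of the summary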


Changelog
  • src/google/adk/apps/app.py
    • Added token_threshold: Optional[int] to EventsCompactionConfig to specify a token count that triggers compaction.
    • Added retain_recent_events: Optional[int] to EventsCompactionConfig to define the number of most recent events to keep during compaction.
  • src/google/adk/apps/compaction.py
    • Modified the call to maybe_summarize_events to pass the app.events_compaction_config object, providing the summarizer with the necessary configuration.
  • src/google/adk/apps/llm_event_summarizer.py
    • Updated the maybe_summarize_events method signature to accept config: EventsCompactionConfig.
    • Implemented logic to check config.token_threshold and skip compaction if the current token count is below the threshold.
    • Implemented logic to retain a specified number of recent events by slicing the events list based on config.retain_recent_events before formatting for the prompt (sketched after this changelog).
    • Added a placeholder count_tokens function to estimate token count based on event content parts.
  • tests/integration/test_multi_turn.py
    • Added a new asynchronous integration test, test_context_compaction_with_thresholds, to verify the new compaction features with token thresholds and event retention.
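
Taken together, the changelog implies control flow along these lines. A hedged sketch: the method and field names follow the PR description, but the body, the _summarize helper, and the return type are illustrative assumptions rather than the PR's actual implementation.

    async def maybe_summarize_events(
        self, events: list[Event], config: EventsCompactionConfig
    ) -> Optional[Event]:
        # Skip compaction while the running context is below the token threshold.
        if config.token_threshold is not None and count_tokens(events) < config.token_threshold:
            return None

        # Carve off the most recent events so they survive compaction verbatim.
        to_compact = events
        if config.retain_recent_events:
            to_compact = events[: -config.retain_recent_events]
        if not to_compact:
            return None  # nothing old enough to summarize yet

        # Summarize only the older events into a single compaction event;
        # _summarize is a hypothetical stand-in for the LLM summarization call.
        return await self._summarize(to_compact)
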
Activity
  • No human activity (comments, reviews, or progress updates) has been recorded for this pull request yet.

@adk-bot added the core [Component] label on Feb 6, 2026
@adk-bot (Collaborator) commented Feb 6, 2026

Response from ADK Triaging Agent

Hello @jayy-77, thank you for creating this PR!

Could you please fill out the PR template in the description? This includes linking to the issue this PR is addressing and providing a testing plan.

This information will help reviewers to review your PR more efficiently. Thanks!

@gemini-code-assist bot left a comment


Code Review

This pull request introduces new context compaction features, specifically token_threshold and retain_recent_events, to better manage conversation context in LLM-backed applications. The changes include adding these configuration options to EventsCompactionConfig, updating the maybe_summarize_events function to utilize them, and adding a new integration test. The overall structure and intent of the changes are clear and address a common challenge in LLM applications.

Comment on lines +99 to +100:

    def count_tokens(events: list[Event]) -> int:
        return sum(len(event.content.parts) for event in events if event.content)


Severity: high

The count_tokens function is currently a placeholder that sums the number of parts in an event's content. This is a very rough approximation and does not accurately reflect the actual token count that an LLM would use. For the token_threshold feature to be effective and reliable, this logic needs to be replaced with a proper tokenization mechanism, ideally one that is specific to the LLM being used or a widely accepted tokenizer (e.g., tiktoken). An inaccurate token count could lead to either premature compaction or exceeding the LLM's context window, impacting performance and context retention.

Suggested change:

    def count_tokens(events: list[Event]) -> int:
        # TODO(developer): Replace this placeholder with actual LLM-specific token counting logic.
        # For example, using a tokenizer from the LLM or a library like tiktoken.
        return sum(len(event.content.parts) for event in events if event.content)
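
As one illustration of the reviewer's point, a tokenizer-backed counter could look like the sketch below. It assumes the tiktoken package is installed and that content parts expose an optional text attribute; neither assumption comes from this PR, and a production version should prefer the serving model's own tokenizer or token-counting API.

    import tiktoken


    def count_tokens(events: list[Event]) -> int:
        # cl100k_base is a generic BPE encoding, not Gemini-specific, but it
        # tracks real token counts far more closely than counting parts does.
        enc = tiktoken.get_encoding("cl100k_base")
        total = 0
        for event in events:
            if not event.content:
                continue
            for part in event.content.parts:
                text = getattr(part, "text", None)
                if text:
                    total += len(enc.encode(text))
        return total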

