
Conversation

@dlarocque
Contributor

Adds all remaining general-use models to the integration tests. This includes the `lite` models and the new `gemini-3-pro-preview`.

@dlarocque requested a review from a team as a code owner on December 5, 2025 at 21:27
@changeset-bot

changeset-bot bot commented Dec 5, 2025

⚠️ No Changeset found

Latest commit: 53f02d3

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types.


@gemini-code-assist
Contributor

Summary of Changes

Hello @dlarocque, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the integration test suite by incorporating all general-use AI models, including the new gemini-3-pro-preview and various lite models. The changes ensure comprehensive testing coverage for these new models, adapting existing tests for token counting, content generation, and chat sessions, while also adjusting test timeouts and feature support checks to align with model capabilities.

Highlights

  • Expanded Model Coverage: Integration tests now include gemini-3-pro-preview and lite versions of the gemini-2.0-flash and gemini-2.5-flash models, ensuring comprehensive testing for these new additions (see the sketch after this list).
  • Updated Token Count Expectations: Adjusted countTokens test assertions to reflect varying token counts for image and file parts across different models, specifically accommodating the different tokenization behavior of gemini-3-pro-preview.
  • Increased Test Timeouts: Extended timeouts for Chat Session and Generate Content tests to accommodate potentially longer response times from newer and more complex models, particularly gemini-3-pro-preview.
  • Refined Test Conditions: Modified URL Context and Code Execution tests to conditionally skip for models that do not support these features (e.g., lite models for code execution), improving test accuracy and efficiency.
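
A rough sketch of what this expanded coverage might look like in the test harness (the array name and the exact IDs for the 2.5 models are illustrative assumptions; only gemini-2.0-flash-001, gemini-2.0-flash-lite-001, and gemini-3-pro-preview appear verbatim in the snippets reviewed below):

// Hypothetical model list the integration suite could iterate over (names are illustrative).
const GENERAL_USE_MODELS: readonly string[] = [
  'gemini-2.0-flash-001',
  'gemini-2.0-flash-lite-001', // lite variant added by this PR
  'gemini-2.5-flash',
  'gemini-2.5-flash-lite', // lite variant added by this PR
  'gemini-3-pro-preview' // new preview model added by this PR
];

// One test configuration per model, so every suite runs against each model.
const testConfigs = GENERAL_USE_MODELS.map(model => ({ model }));
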
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in sharing feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.


gemini-code-assist bot left a comment


Code Review

This pull request expands the integration test suite to include several new general-use models, which is a great addition for ensuring broader compatibility. The changes correctly update the model lists and adjust test expectations, such as token counts and timeouts, to accommodate the new models.

My review includes a couple of suggestions to improve code maintainability by reducing duplication and updating a stale comment. These are minor points in an otherwise solid update.

Comment on lines +121 to +129
let expectedImageTokens: number;
if (testConfig.model === 'gemini-3-pro-preview') {
  expectedImageTokens =
    testConfig.ai.backend.backendType === BackendType.GOOGLE_AI
      ? 1089
      : 1120;
} else {
  expectedImageTokens = 258;
}

Severity: medium

This logic for determining expectedImageTokens is duplicated in a few places in this file (e.g., in the 'text, image, and audio input' test). To improve maintainability, consider extracting it into a shared helper function. This would centralize the logic and make future updates easier.
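
One possible shape for such a helper, as a minimal sketch (the function name and parameter type are hypothetical, and it assumes BackendType is imported in the test file as in the snippet above):

// Hypothetical helper that centralizes the per-model image token expectation.
function getExpectedImageTokens(testConfig: {
  model: string;
  ai: { backend: { backendType: BackendType } };
}): number {
  if (testConfig.model === 'gemini-3-pro-preview') {
    // gemini-3-pro-preview tokenizes images differently depending on the backend.
    return testConfig.ai.backend.backendType === BackendType.GOOGLE_AI
      ? 1089
      : 1120;
  }
  // The other general-use models count a single image as 258 tokens.
  return 258;
}

Each test could then assert against getExpectedImageTokens(testConfig) instead of repeating the branching.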

Comment on lines 177 to 181
  if (
-    testConfig.ai.backend.backendType === BackendType.GOOGLE_AI &&
-    testConfig.model === 'gemini-2.0-flash'
+    ['gemini-2.0-flash-001', 'gemini-2.0-flash-lite-001'].includes(
+      testConfig.model
+    ) // Models that don't support URL Context
  ) {

Severity: medium

The logic in this `if` condition has been updated to be model-specific, rather than backend-specific. However, the comment on the preceding line (176), `// URL Context is not supported in Google AI for gemini-2.0-flash`, is now outdated and potentially misleading. Please consider updating or removing it to align with the new implementation.
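
For instance, the stale comment could be rewritten to match the new model-specific check (a suggestion only, not code from this PR):

// URL Context is not supported for gemini-2.0-flash-001 or gemini-2.0-flash-lite-001.
if (
  ['gemini-2.0-flash-001', 'gemini-2.0-flash-lite-001'].includes(
    testConfig.model
  )
) {
  // ...skip the URL Context assertions for these models.
}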
