
Add separate agent_base_url for OpenAI-compatible completions endpoint & Fix CLI test terminal width issue #154

Open
salvirezwan wants to merge 1 commit into vlm-run:main from salvirezwan:sal/test

Conversation

@salvirezwan

Summary

  • Introduces a new agent_base_url parameter on the VLMRun client, allowing the agent/OpenAI-compatible completions endpoint to be configured independently from the main API base URL. Falls back to the VLMRUN_AGENT_BASE_URL env var, then to base_url (see the sketch after this list).
  • Updates Agent class, CLI chat command, type protocol, and mock client to use the new agent_base_url.
  • Fixes CLI prediction tests by setting COLUMNS=200 to prevent Rich table truncation in narrow terminal widths.
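
A minimal sketch of that resolution order, assuming a dataclass-style client with a __post_init__ hook (the changelog below mentions post_init logic); the default URL and the api_key field are illustrative, not taken from the diff:

    import os
    from dataclasses import dataclass
    from typing import Optional

    DEFAULT_BASE_URL = "https://api.vlm.run/v1"  # illustrative default

    @dataclass
    class VLMRun:
        api_key: Optional[str] = None
        base_url: str = DEFAULT_BASE_URL
        agent_base_url: Optional[str] = None

        def __post_init__(self) -> None:
            # Resolution order: explicit argument > VLMRUN_AGENT_BASE_URL > base_url.
            if self.agent_base_url is None:
                self.agent_base_url = os.getenv("VLMRUN_AGENT_BASE_URL", self.base_url)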

Changes

  • vlmrun/client/client.py — Add agent_base_url attribute with env var / fallback resolution
  • vlmrun/client/agent.py — Use agent_base_url for the OpenAI client (sync + async); see the sketch after this list
  • vlmrun/types/abstract.py — Add agent_base_url to VLMRunProtocol
  • vlmrun/cli/_cli/chat.py — Update --base-url to check VLMRUN_AGENT_BASE_URL first
  • tests/conftest.py — Update mock client to support agent_base_url
  • tests/cli/test_cli_predictions.py — Fix terminal width for table output tests
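
For agent.py, the change presumably amounts to pointing both OpenAI clients at agent_base_url. A hedged sketch against the openai v1 SDK; cached_property and the api_key attribute are assumptions, while the property names and the base_url routing come from the changelog below:

    from functools import cached_property

    from openai import AsyncOpenAI, OpenAI

    class Agent:
        def __init__(self, client) -> None:
            self._client = client  # a VLMRun instance

        @cached_property
        def completions(self) -> OpenAI:
            # Sync completions now routed through the agent-specific base URL.
            return OpenAI(
                api_key=self._client.api_key,
                base_url=self._client.agent_base_url,
            )

        @cached_property
        def async_completions(self) -> AsyncOpenAI:
            # Async counterpart, same routing.
            return AsyncOpenAI(
                api_key=self._client.api_key,
                base_url=self._client.agent_base_url,
            )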

@gemini-code-assist

Summary of Changes

Hello @salvirezwan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the 'VLMRun' client's flexibility by introducing a dedicated configuration option for its OpenAI-compatible completions endpoint, allowing it to operate independently from the main API base URL. This change provides more granular control over endpoint routing. Additionally, it addresses a testing fragility by ensuring CLI output tests are robust against varying terminal widths.

Highlights

  • Agent Base URL Introduction: Introduced a new 'agent_base_url' parameter in the 'VLMRun' client, allowing for independent configuration of OpenAI-compatible completions endpoints.
  • Flexible Endpoint Configuration: Implemented fallback logic for 'agent_base_url', prioritizing the 'VLMRUN_AGENT_BASE_URL' environment variable, then defaulting to the main 'base_url'.
  • Component Integration: Updated relevant components, including the 'Agent' class, CLI 'chat' command, type protocol, and mock client, to properly utilize the new 'agent_base_url'.
  • CLI Test Reliability Fix: Resolved an issue in CLI prediction tests by explicitly setting the 'COLUMNS' environment variable to '200', preventing Rich table truncation on narrow terminals (see the sketch below).
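
The fix itself is a one-liner per test. A sketch of the pattern with Typer's CliRunner, where the app import path and the CLI arguments are assumptions:

    from typer.testing import CliRunner

    from vlmrun.cli.cli import app  # assumed CLI entry point

    runner = CliRunner()

    def test_list_predictions(monkeypatch):
        # Rich reads the COLUMNS env var when sizing tables, so pinning it
        # to 200 keeps columns from being truncated on narrow CI terminals.
        monkeypatch.setenv("COLUMNS", "200")
        result = runner.invoke(app, ["predictions", "list"])
        assert result.exit_code == 0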

Changelog
  • tests/cli/test_cli_predictions.py
    • Modified 'test_list_predictions' to use 'monkeypatch' to set the 'COLUMNS' environment variable to '200'.
    • Modified 'test_list_predictions_table_format' to use 'monkeypatch' to set the 'COLUMNS' environment variable to '200'.
  • tests/conftest.py
    • Updated 'MockVLMRunClient' constructor to accept 'agent_base_url' and initialize it with a fallback to 'base_url'.
  • vlmrun/cli/_cli/chat.py
    • Updated the 'base_url' Typer option to check the 'VLMRUN_AGENT_BASE_URL' environment variable first, then 'VLMRUN_BASE_URL', and revised its help text.
  • vlmrun/client/agent.py
    • Modified the 'completions' property to use 'self._client.agent_base_url' for the OpenAI client's 'base_url'.
    • Modified the 'async_completions' property to use 'self._client.agent_base_url' for the AsyncOpenAI client's 'base_url'.
  • vlmrun/client/client.py
    • Added 'agent_base_url' parameter to the 'VLMRun' class constructor documentation.
    • Added 'agent_base_url' as an optional string attribute to the 'VLMRun' class.
    • Implemented 'post_init' logic to resolve 'agent_base_url' from environment variables or fallback to 'base_url'.
  • vlmrun/types/abstract.py
    • Added 'agent_base_url' as a string attribute to the 'VLMRunProtocol' interface.
Activity

  • No human activity recorded for this pull request.

@gemini-code-assist bot left a comment

Code Review

The pull request successfully introduces the agent_base_url parameter to the VLMRun client, allowing for independent configuration of the OpenAI-compatible completions endpoint. It also correctly addresses a terminal width issue in CLI tests. I have identified one medium-severity issue regarding an unused parameter in the CLI chat command that could lead to confusion.

Comment on lines 490 to 494 of vlmrun/cli/_cli/chat.py:

    base_url: Optional[str] = typer.Option(
    -   os.getenv("VLMRUN_BASE_URL", DEFAULT_BASE_URL),
    +   os.getenv("VLMRUN_AGENT_BASE_URL", os.getenv("VLMRUN_BASE_URL", DEFAULT_BASE_URL)),
        "--base-url",
    -   help="VLM Run Agent API base URL.",
    +   help="VLM Run Agent API base URL. Falls back to VLMRUN_AGENT_BASE_URL, then VLMRUN_BASE_URL.",
    ),


Severity: medium

The base_url parameter defined here is currently unused within the chat function body. Since the VLMRun client is initialized in the parent command group (vlmrun/cli/cli.py) and passed via ctx.obj, this local option does not affect the client's configuration.

If the intention is to allow overriding the agent URL specifically for the chat command, you should apply this value to the client instance inside the function (e.g., client.agent_base_url = base_url). Additionally, consider renaming this option to --agent-base-url to maintain consistency with the client property and avoid confusion with the global --base-url option.
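
A sketch of what applying that suggestion could look like; the command skeleton is assumed, and only the ctx.obj wiring and the attribute assignment come from the comment itself:

    from typing import Optional

    import typer

    app = typer.Typer()

    @app.command()
    def chat(
        ctx: typer.Context,
        base_url: Optional[str] = typer.Option(
            None,
            "--agent-base-url",  # renamed per the suggestion above
            help="Override the agent completions endpoint for this command.",
        ),
    ) -> None:
        client = ctx.obj  # VLMRun client initialized by the parent command group
        if base_url is not None:
            # Apply the override so the Agent's OpenAI clients pick it up.
            client.agent_base_url = base_url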
