
fix(api): include full model metadata in /v1/models response#417

Open
ahostbr wants to merge 1 commit into router-for-me:main from ahostbr:fix/openai-models-include-metadata

Conversation

@ahostbr ahostbr commented Mar 6, 2026

Summary

The /v1/models endpoint (OpenAI-compatible) was stripping all metadata except id, object, created, and owned_by. Meanwhile:

  • The model registry's convertModelToMap("openai") already provides rich fields: type, display_name, context_length, max_completion_tokens, supported_parameters, supported_endpoints
  • The Claude /v1/models endpoint (ClaudeModels) already returns all fields without filtering

This forced downstream consumers (dashboards, IDEs, proxy UIs) to maintain separate hardcoded model lists instead of fetching from the API — because the API didn't return enough metadata to group models by provider (type) or display human-readable names (display_name).

Changes

  • sdk/api/handlers/openai/openai_handlers.go: Remove the filtering loop in OpenAIModels() that stripped each model down to four fields. The handler now passes through every field from Models() (which calls registry.GetAvailableModels("openai")), matching the Claude handler's behavior.

Before

{
  "object": "list",
  "data": [
    {"id": "gpt-5.4", "object": "model", "created": 1780000000, "owned_by": "openai"}
  ]
}

After

{
  "object": "list",
  "data": [
    {
      "id": "gpt-5.4",
      "object": "model",
      "created": 1780000000,
      "owned_by": "openai",
      "type": "openai",
      "display_name": "GPT 5.4",
      "context_length": 1050000,
      "max_completion_tokens": 100000
    }
  ]
}

Backwards Compatibility

  • The 4 original fields (id, object, created, owned_by) are still present
  • Additional fields are additive — existing clients that only read those 4 fields are unaffected
  • Matches what the Claude models endpoint already returns (all fields)

The OpenAI /v1/models endpoint was stripping all metadata except id,
object, created, and owned_by. The model registry already provides
rich fields (type, display_name, context_length, max_completion_tokens,
supported_parameters, supported_endpoints) via convertModelToMap, and
the Claude /v1/models endpoint already returns them all.

This caused downstream consumers (dashboards, IDEs, proxy UIs) to be
unable to group models by provider or show capabilities without
maintaining a separate hardcoded model list.

Now the endpoint passes through all fields from the registry, matching
the behavior of the Claude models endpoint.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue where the /v1/models endpoint was overly restrictive in the metadata it returned, hindering the ability of downstream applications to effectively utilize model information. By removing the filtering logic, the API now provides a complete set of model details, significantly improving its utility for clients that need rich model context for display and grouping purposes, and ensuring consistency across different model endpoints.

Highlights

  • API Response Enhancement: The /v1/models endpoint now returns comprehensive model metadata, moving beyond the previous four-field limitation (id, object, created, owned_by).
  • Downstream Consumer Support: This change enables dashboards, IDEs, and proxy UIs to dynamically group and display models using API data, eliminating the need for hardcoded lists.
  • Consistency with Claude API: The OpenAI-compatible /v1/models endpoint now aligns with the behavior of the Claude /v1/models endpoint, which already returned all available fields.


Changelog
  • sdk/api/handlers/openai/openai_handlers.go
    • Removed explicit filtering of model metadata, allowing all available fields to be returned in the /v1/models response.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates the /v1/models OpenAI-compatible endpoint to return the full model metadata from the model registry, rather than a filtered subset of fields. This is achieved by removing the filtering logic within the OpenAIModels handler in sdk/api/handlers/openai/openai_handlers.go. The change makes the endpoint's response richer and more useful for downstream applications, and aligns its behavior with other model endpoints in the service. The change is additive and should not break existing clients. The implementation is a straightforward removal of now-unnecessary code.

