
feat: prioritize explicit provider config over model registry lookup #6

Open

wolvever wants to merge 18 commits into badlogic:main from wolvever:prioritize-provider-config

Conversation

@wolvever

This update enables access to models through an LLM proxy that fronts multiple model backends. For example, given a proxy (e.g., http://example-proxy.com) that serves models such as gpt-o3 and claude3.7 behind an OpenAI-compatible API, this change allows seamless integration with it.

By leveraging this functionality, users can interact with different models through a single proxy endpoint, which simplifies integration.
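
A minimal sketch of the intended precedence, assuming a config shape like the following (names such as `ProviderConfig`, `resolveProvider`, and `modelRegistry` are illustrative, not the project's actual API):

```typescript
// Hypothetical sketch: the exact lemmy config shape may differ.
interface ProviderConfig {
	provider: "openai" | "anthropic"; // API dialect the endpoint speaks
	baseUrl: string;                  // proxy endpoint, e.g. an LLM gateway
	apiKey: string;
}

// Stub standing in for the real model registry; illustrative only.
const modelRegistry = {
	lookup(model: string): ProviderConfig {
		throw new Error(`no registry entry for ${model}`);
	},
};

function resolveProvider(model: string, explicit?: ProviderConfig): ProviderConfig {
	// Explicit config takes precedence, so a model the registry would map to
	// Anthropic (e.g. claude-opus-4-20250514) can still be routed through
	// an OpenAI-compatible proxy.
	if (explicit) return explicit;
	return modelRegistry.lookup(model); // fallback: infer from the model name
}

// Route an Anthropic model through a single OpenAI-compatible proxy:
const config = resolveProvider("claude-opus-4-20250514", {
	provider: "openai",
	baseUrl: "http://example-proxy.com/v1",
	apiKey: process.env.PROXY_API_KEY!,
});
```

The key design point is the order of the two branches: the registry is only consulted when no explicit provider config is given.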

jucheng added 16 commits June 12, 2025 13:56
…uments and add LLM response debugging"

This reverts commit d8f9d97.
wolvever added 2 commits July 4, 2025 16:31
- Fix request format issue where the OpenAI client with a custom baseURL was sending malformed requests
- Add claude-opus-4-20250514 model support to the OpenAI provider for proxy usage
- Enable debug logging for all providers to show request/response details
- Remove duplicate logging that was interfering with request transformation
- Update claude-bridge version to 1.0.13

This resolves the issue where requests were being sent in {"content": "..."} form instead of the proper OpenAI format with {"messages": [...], "model": "..."}.
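
For reference, a hedged sketch of the two request shapes (the proxy URL and client code are illustrative, not the project's actual implementation):

```typescript
// Before the fix, the custom-baseURL path sent a malformed body:
const malformed = { content: "Write a haiku about TypeScript" };

// After the fix, requests use the standard OpenAI chat-completions shape:
const wellFormed = {
	model: "claude-opus-4-20250514", // served via the proxy's OpenAI-compatible API
	messages: [{ role: "user", content: "Write a haiku about TypeScript" }],
};

// POST to the proxy's OpenAI-compatible endpoint (URL is illustrative):
await fetch("http://example-proxy.com/v1/chat/completions", {
	method: "POST",
	headers: {
		"Content-Type": "application/json",
		Authorization: `Bearer ${process.env.PROXY_API_KEY}`,
	},
	body: JSON.stringify(wellFormed),
});
```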
- Fix missing tool parameters in the lemmy-to-anthropic transformation
- Tool call content_block.input now uses toolCall.arguments instead of an empty object
- Resolves issue where the Write tool's 'content' parameter was missing
- Bump version from 1.0.13 to 1.0.14
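
A hedged sketch of the fixed mapping (the type names are assumptions, not the project's actual definitions): the tool_use content block's input must carry the parsed toolCall.arguments rather than an empty object.

```typescript
// Illustrative types; the real lemmy/Anthropic types may differ.
interface ToolCall {
	id: string;
	name: string;
	arguments: Record<string, unknown>; // e.g. { file_path: "...", content: "..." }
}

interface AnthropicToolUseBlock {
	type: "tool_use";
	id: string;
	name: string;
	input: Record<string, unknown>;
}

function toAnthropicToolUse(toolCall: ToolCall): AnthropicToolUseBlock {
	return {
		type: "tool_use",
		id: toolCall.id,
		name: toolCall.name,
		// The bug: input was previously {}, which dropped parameters such as
		// the Write tool's 'content'. The fix passes the arguments through.
		input: toolCall.arguments,
	};
}
```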