feat: prioritize explicit provider config over model registry lookup #6
Open
wolvever wants to merge 18 commits into badlogic:main from
Conversation
added 16 commits on June 12, 2025 13:56
- …nd add LLM response debugging
- … to arrays" This reverts commit 7d4b1a1.
- …th instead of fallback" This reverts commit 80f3246.
- …uments and add LLM response debugging" This reverts commit d8f9d97.
- This reverts commit 7515774.
- …gnosis" This reverts commit 0866741.
- This reverts commit c8fc2be.
- … lookup" This reverts commit 4c9cfdd.
- Fix request format issue where OpenAI client with custom baseURL was sending malformed requests
- Add claude-opus-4-20250514 model support to OpenAI provider for proxy usage
- Enable debug logging for all providers to show request/response details
- Remove duplicate logging that was interfering with request transformation
- Update claude-bridge version to 1.0.13
This resolves the issue where requests were being sent in the {"content": "..."} format instead of the proper OpenAI chat format with {"messages": [...], "model": "..."}.
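The request shape fix above can be sketched as follows. This is an illustrative example only, assuming the bridge at some point holds a bare `{"content": "..."}` payload; the function name `toOpenAIRequest` and the default role are assumptions, not the actual claude-bridge code.

```typescript
// Shape of an OpenAI-compatible chat completions request body.
interface OpenAIChatRequest {
	model: string;
	messages: { role: "user" | "assistant" | "system"; content: string }[];
}

// Wrap a malformed bare-content payload into the proper OpenAI chat
// format: {"messages": [...], "model": "..."} instead of {"content": "..."}.
function toOpenAIRequest(raw: { content: string }, model: string): OpenAIChatRequest {
	return {
		model,
		messages: [{ role: "user", content: raw.content }],
	};
}

const req = toOpenAIRequest({ content: "Hello" }, "claude-opus-4-20250514");
console.log(JSON.stringify(req));
```

A proxy expecting the OpenAI wire format would reject or misread the bare-content form, which is why the malformed requests failed against custom baseURL endpoints.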
- Fix missing tool parameters in lemmy-to-anthropic transformation
- Tool call content_block.input now uses toolCall.arguments instead of an empty object
- Resolves issue where the Write tool's 'content' parameter was missing
- Bump version from 1.0.13 to 1.0.14
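The tool-parameter fix above amounts to passing the parsed arguments through when building the Anthropic `tool_use` content block. This is a hedged sketch under assumed type names (`ToolCall`, `toToolUseBlock`), not the actual lemmy/claude-bridge source:

```typescript
// Assumed internal representation of a tool call on the lemmy side.
interface ToolCall {
	id: string;
	name: string;
	arguments: Record<string, unknown>;
}

// Anthropic-style tool_use content block.
interface AnthropicToolUseBlock {
	type: "tool_use";
	id: string;
	name: string;
	input: Record<string, unknown>;
}

function toToolUseBlock(toolCall: ToolCall): AnthropicToolUseBlock {
	return {
		type: "tool_use",
		id: toolCall.id,
		name: toolCall.name,
		// Before the fix this was effectively `input: {}`, which dropped
		// parameters such as the Write tool's 'content'.
		input: toolCall.arguments,
	};
}

const block = toToolUseBlock({
	id: "toolu_1",
	name: "Write",
	arguments: { path: "a.txt", content: "hello" },
});
console.log(JSON.stringify(block.input));
```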
This update enables access to models through an LLM proxy that fronts multiple model backends. For example, if a proxy (e.g., http://example-proxy.com) exposes models such as gpt-o3 and claude3.7 behind an OpenAI-compatible API, this change allows seamless integration with it. Users can then interact with different models through a single proxy endpoint, which simplifies integration.
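The resolution order the PR title describes can be sketched as below: an explicit provider config (for instance, a custom baseURL pointing at an OpenAI-compatible proxy) takes priority, and the built-in model registry is only consulted as a fallback. All names here (`ProviderConfig`, `resolveProvider`, the registry contents) are illustrative assumptions, not the project's actual API.

```typescript
interface ProviderConfig {
	provider: string;
	baseURL?: string;
}

// Hypothetical built-in registry mapping model names to default providers.
const modelRegistry: Record<string, ProviderConfig> = {
	"gpt-o3": { provider: "openai" },
	"claude3.7": { provider: "anthropic" },
};

// Explicit config wins; the registry lookup is only a fallback.
function resolveProvider(model: string, explicit?: ProviderConfig): ProviderConfig {
	if (explicit) return explicit;
	const entry = modelRegistry[model];
	if (!entry) throw new Error(`Unknown model: ${model}`);
	return entry;
}

// Both models route through the same OpenAI-compatible proxy endpoint.
const proxy: ProviderConfig = { provider: "openai", baseURL: "http://example-proxy.com" };
console.log(resolveProvider("claude3.7", proxy).baseURL); // http://example-proxy.com
```

Without the explicit-config priority, a registry hit for "claude3.7" would route the request to the Anthropic backend and bypass the proxy entirely.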