Chinese Model provider #1131
Conversation
We want to support this by allowing folks to configure something like
--completions-api --base-url --api-key etc
rather than using a lot of specific env variables per provider -- since a lot of providers rely on the exact same conventions
Using the completions API will basically unlock, all at once, every model that relies on completions-API compatibility
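A minimal sketch of the flag-based configuration proposed above. The flag names follow the comment, but the `parse_provider_args` helper and its exact surface are illustrative assumptions, not the project's actual CLI:

```python
import argparse

def parse_provider_args(argv):
    """Parse generic completions-API flags into a provider-agnostic config.

    Flag names mirror the suggestion above; they are an assumption,
    not the project's real CLI surface.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument("--completions-api", action="store_true",
                        help="treat the endpoint as OpenAI-completions-compatible")
    parser.add_argument("--base-url", default=None)
    parser.add_argument("--api-key", default=None)
    parser.add_argument("--model", default=None)
    args = parser.parse_args(argv)
    # argparse converts hyphenated flags to underscored attributes.
    return {
        "completions_api": args.completions_api,
        "base_url": args.base_url,
        "api_key": args.api_key,
        "model": args.model,
    }

# Any OpenAI-compatible provider (Qwen, DeepSeek, vLLM, ...) would then work
# with the same three flags, with no per-provider environment variables.
cfg = parse_provider_args([
    "--completions-api",
    "--base-url", "https://api.deepseek.com/v1",
    "--api-key", "sk-example",
    "--model", "deepseek-chat",
])
```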
Even if we add this kind of configuration when starting the service, Google and Anthropic still have their own API invocation patterns, so we would still need a lot of conditional logic to distinguish the Completions API from other API styles. If we instead introduce a .env file and configure the model through environment variables, the create_model method would only need to read parameters from that file, without requiring changes in many other places. I'm not sure whether this approach is feasible, so I would greatly appreciate your advice. Thank you.
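The env-driven `create_model` idea above can be sketched as follows. The variable names (`MODEL_PROVIDER`, `MODEL_NAME`, `MODEL_API_KEY`, `MODEL_BASE_URL`) are hypothetical placeholders, not keys the project actually defines:

```python
import os

def create_model(env=os.environ):
    """Build model settings from environment variables.

    The variable names here are hypothetical; a real .env scheme
    may use different keys.
    """
    provider = env.get("MODEL_PROVIDER", "openai").lower()
    settings = {
        "provider": provider,
        "model": env.get("MODEL_NAME"),
        "api_key": env.get("MODEL_API_KEY"),
    }
    if provider in ("google", "anthropic"):
        # These providers keep their own invocation patterns; only the
        # credentials come from the environment.
        settings["api_style"] = provider
    else:
        # Everything else is treated as OpenAI-completions-compatible,
        # so a base URL alone is enough to switch providers.
        settings["api_style"] = "completions"
        settings["base_url"] = env.get("MODEL_BASE_URL")
    return settings

cfg = create_model({
    "MODEL_PROVIDER": "deepseek",
    "MODEL_NAME": "deepseek-chat",
    "MODEL_API_KEY": "sk-example",
    "MODEL_BASE_URL": "https://api.deepseek.com/v1",
})
```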
Closing in favor of #1127
feat(cli): add Qwen and DeepSeek model support via ChatOpenAI
Adds Qwen and DeepSeek model support using the langchain_community.chat_models.ChatOpenAI base implementation. Extends the model providers while maintaining existing functionality.
Fixes #ISSUE_NUMBER
Changes:
Verification:
make format and make lint passed.
No breaking changes.
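The extension described above can be sketched as a small provider table that yields ChatOpenAI-style keyword arguments. The default base URLs and the `chat_openai_kwargs` helper are assumptions for illustration; the PR itself wires these into langchain_community.chat_models.ChatOpenAI:

```python
# Assumed OpenAI-compatible endpoints; verify against each provider's docs.
OPENAI_COMPATIBLE_PROVIDERS = {
    "qwen": "https://dashscope.aliyuncs.com/compatible-mode/v1",
    "deepseek": "https://api.deepseek.com/v1",
}

def chat_openai_kwargs(provider, model, api_key):
    """Return keyword arguments for a ChatOpenAI-style client.

    chat_openai_kwargs is an illustrative helper, not a function
    from the PR itself.
    """
    if provider not in OPENAI_COMPATIBLE_PROVIDERS:
        raise ValueError(f"unsupported provider: {provider}")
    return {
        "model_name": model,
        "openai_api_key": api_key,
        "openai_api_base": OPENAI_COMPATIBLE_PROVIDERS[provider],
    }

kwargs = chat_openai_kwargs("qwen", "qwen-plus", "sk-example")
# The PR would then construct the model as ChatOpenAI(**kwargs),
# reusing the same code path for every OpenAI-compatible provider.
```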