feat: support provider_base_urls for native-openai path #809
Open
leoge007 wants to merge 1 commit into garrytan:master from
Conversation
When `provider_base_urls.openai` is configured in `~/.gbrain/config.json`, gbrain now correctly routes requests through the custom base URL for all native-openai touchpoints (embedding, expansion, chat). Previously, `provider_base_urls` was only used by the openai-compatible recipe tier, causing requests to go directly to OpenAI's API even when a custom endpoint was configured for the native-openai recipe.

Three changes:

1. gateway.ts: Pass `cfg.base_urls[recipe.id]` as `baseURL` to `createOpenAI()` in the native-openai embedding instantiation path (the expansion and chat paths already had this). Also pass `hasCustomBaseUrl` to `assertTouchpoint()` and `dimsProviderOptions()` for all three touchpoints.
2. model-resolver.ts: `assertTouchpoint()` accepts a `hasCustomBaseUrl` param. When true, it skips the model-name whitelist check for native-tier recipes, since custom endpoints may offer models not in the OpenAI recipe list.
3. dims.ts: `dimsProviderOptions()` accepts a `hasCustomBaseUrl` param. When true, it passes the `dimensions` parameter for native-openai embeddings even for non-text-embedding-3 models, since OpenAI-compatible endpoints (e.g. SiliconFlow, Azure OpenAI) typically support this parameter.

This enables using providers like SiliconFlow, Azure OpenAI, vLLM, or any OpenAI-compatible endpoint through the native-openai recipe with proper base URL routing, model name flexibility, and dimension control.
Problem
When `provider_base_urls.openai` is configured in `~/.gbrain/config.json` (e.g. pointing to SiliconFlow, Azure OpenAI, vLLM, or any OpenAI-compatible endpoint), gbrain still sends requests directly to OpenAI's API for the native-openai recipe. The `base_urls` config is only respected by the `openai-compat` recipe tier. This means users who want to use OpenAI-compatible embedding/chat/expansion providers through the native-openai path cannot do so, even though `provider_base_urls` is a documented config field in `GBrainConfig`.

Changes
1. `gateway.ts` — Pass baseURL to createOpenAI() in embedding path

The expansion and chat native-openai paths already read `cfg.base_urls?.[recipe.id]` and pass it as `baseURL`. The embedding path was missing this. Now all three paths consistently route through the configured base URL. This change also passes `hasCustomBaseUrl` to `assertTouchpoint()` and `dimsProviderOptions()` so downstream logic can adapt.
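A minimal sketch of what the embedding path looks like after this change. The gbrain internals here (`Cfg`, `Recipe`, `makeEmbeddingClient`) are illustrative assumptions, and the real code uses `createOpenAI` from `@ai-sdk/openai` rather than the stub below:

```typescript
// Hedged sketch of the gateway.ts embedding path; only cfg.base_urls,
// recipe.id, hasCustomBaseUrl, and createOpenAI come from the PR text.
type Cfg = { base_urls?: Record<string, string> };
type Recipe = { id: string };

// Stub standing in for createOpenAI from @ai-sdk/openai, so this sketch
// is self-contained. The real factory accepts { apiKey, baseURL? }.
function createOpenAI(opts: { apiKey: string; baseURL?: string }) {
  return { apiKey: opts.apiKey, baseURL: opts.baseURL ?? "https://api.openai.com/v1" };
}

function makeEmbeddingClient(cfg: Cfg, recipe: Recipe, apiKey: string) {
  // Previously the embedding path omitted baseURL; expansion/chat had it.
  const baseURL = cfg.base_urls?.[recipe.id];
  const hasCustomBaseUrl = baseURL !== undefined;
  const client = createOpenAI(baseURL ? { apiKey, baseURL } : { apiKey });
  // hasCustomBaseUrl then flows into assertTouchpoint()/dimsProviderOptions().
  return { client, hasCustomBaseUrl };
}
```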
2. `model-resolver.ts` — Relax model whitelist when custom base URL is set

`assertTouchpoint()` now accepts a `hasCustomBaseUrl` parameter. When true, it skips the model-name whitelist check for native-tier recipes. This is safe because custom endpoints configured via `provider_base_urls` may serve models that are not in the OpenAI recipe list (e.g. `Qwen/Qwen3-Embedding-8B`).
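The whitelist relaxation can be sketched as follows. The model set and function shape are assumptions for illustration; only `assertTouchpoint` and `hasCustomBaseUrl` come from the PR:

```typescript
// Hedged sketch of the model-resolver.ts change; the whitelist contents
// and tier names are illustrative, not gbrain's actual data.
const NATIVE_OPENAI_MODELS = new Set([
  "text-embedding-3-small",
  "text-embedding-3-large",
  "gpt-4o-mini",
]);

function assertTouchpoint(
  model: string,
  tier: "native" | "openai-compat",
  hasCustomBaseUrl: boolean,
): void {
  // With a custom base URL, the endpoint may serve models outside the
  // OpenAI recipe list (e.g. Qwen/Qwen3-Embedding-8B), so skip the check.
  if (tier === "native" && !hasCustomBaseUrl && !NATIVE_OPENAI_MODELS.has(model)) {
    throw new Error(`model ${model} is not in the native-openai recipe list`);
  }
}
```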
3. `dims.ts` — Pass dimensions when using custom base URL

`dimsProviderOptions()` now accepts a `hasCustomBaseUrl` parameter. When true for the native-openai path, it passes the `dimensions` parameter unconditionally (not just for `text-embedding-3-*`). This is needed because models behind custom endpoints (e.g. `Qwen/Qwen3-Embedding-8B` on SiliconFlow) fall outside the `text-embedding-3-*` family, yet OpenAI-compatible endpoints typically support the `dimensions` parameter.

Usage Example
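The config snippet for this section did not survive the page capture. A hedged sketch of what `~/.gbrain/config.json` might contain, assuming the documented `provider_base_urls` field; the SiliconFlow URL shown is an assumption, so check your provider's docs for the actual endpoint:

```json
{
  "provider_base_urls": {
    "openai": "https://api.siliconflow.cn/v1"
  }
}
```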
Then set `OPENAI_API_KEY` to your SiliconFlow API key, and gbrain will route all OpenAI requests through SiliconFlow with correct base URL, model name, and dimension handling.

Testing
- `bun run verify` passes (tsc, privacy checks, admin build, all linting)
- `gbrain embed` successfully embeds pages using SiliconFlow + Qwen3-Embedding-8B → 1536 dimensions

Scope
This PR only changes behavior when `provider_base_urls` is explicitly configured. Default behavior (routing directly to OpenAI) is completely unchanged.

Need help on this PR? Tag @codesmith with what you need.