Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 961b3797d8
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```ts
const handleDelete = (id: string) => {
  if (window.confirm("Are you sure you want to delete this provider?")) {
    if (activeProviderId === id) {
      setActiveProviderId(null)
    }
```
Reset selected model when deleting active provider
When a user deletes the currently active custom provider, you clear activeProviderId but leave lastSelectedModelId pointing at the custom model. The transport fallback uses lastSelectedModelId even when no provider is active (see ipc-chat-transport.ts modelString construction), so after deletion the app can still try to send a model id that only existed in the removed provider, which can fail for default Claude Code. This happens if a user deletes the active provider without first switching to a standard model. Consider resetting lastSelectedModelId to a standard model (or the first available base model) when removing the active provider.
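The fix suggested above can be sketched as a pure state update. This is a minimal sketch: `DEFAULT_MODEL_ID`, the `SelectionState` shape, and the function name are illustrative assumptions, not the app's actual identifiers.

```typescript
interface SelectionState {
  activeProviderId: string | null
  lastSelectedModelId: string | null
}

// Assumed standard fallback model id; the real app would pick the
// first available base model instead of a hard-coded constant.
const DEFAULT_MODEL_ID = "claude-sonnet-4"

function stateAfterProviderDelete(
  state: SelectionState,
  deletedId: string,
): SelectionState {
  if (state.activeProviderId !== deletedId) return state
  // Clearing only activeProviderId would leave lastSelectedModelId
  // dangling on a model that no longer exists; reset both together.
  return { activeProviderId: null, lastSelectedModelId: DEFAULT_MODEL_ID }
}
```

Resetting both fields in one update avoids the window where the transport reads a model id whose provider is gone.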
```ts
// New: Model providers storage
export const modelProvidersAtom = atomWithStorage<ModelProvider[]>(
  "agents:model-providers",
  [OFFLINE_PROVIDER], // Start with offline provider
  undefined,
```
Migrate legacy custom Claude config to providers
The new model provider storage only initializes from agents:model-providers and drops the previous agents:claude-custom-config key entirely. There’s no migration path in this commit, so existing users who set a custom API key/model via the legacy storage will lose that configuration on upgrade and fall back to the default Claude Code setup until they re‑enter credentials. Consider migrating the old customClaudeConfig into a provider entry (or reading it as a fallback) to avoid silent regression for existing installs.
Pull request overview
This PR implements a comprehensive provider-based model configuration system replacing the previous single custom configuration approach. The changes enable users to configure multiple model providers (e.g., Ollama, LM Studio, custom endpoints) with multi-model support per provider.
Changes:
- Refactored model configuration from a single customClaudeConfigAtom to multiple ModelProvider entries in modelProvidersAtom
- Added immediate token encryption during the onboarding flow using Electron's safeStorage API
- Implemented IPC model discovery endpoint that supports both Ollama and OpenAI-compatible APIs
- Updated UI components to display and select from multiple providers and their models
- Added provider management dialog in settings with model discovery feature
Reviewed changes
Copilot reviewed 10 out of 11 changed files in this pull request and generated 13 comments.
| File | Description |
|---|---|
| src/renderer/lib/atoms/index.ts | Replaced single custom config with ModelProvider type and provider array storage |
| src/renderer/features/onboarding/api-key-onboarding-page.tsx | Updated onboarding to encrypt tokens immediately and create provider entries |
| src/renderer/features/agents/main/new-chat-form.tsx | Added provider-aware model dropdown with sections for standard and custom models |
| src/renderer/features/agents/main/chat-input-area.tsx | Updated chat input model selector to support custom providers |
| src/renderer/features/agents/main/active-chat.tsx | Updated active chat display to show custom model indicator |
| src/renderer/features/agents/lib/models.ts | Added getAvailableModels function to combine standard and custom provider models |
| src/renderer/features/agents/lib/ipc-chat-transport.ts | Updated transport to use provider-based configuration and encrypt tokens on-the-fly |
| src/renderer/components/dialogs/settings-tabs/agents-models-tab.tsx | Replaced simple override fields with full provider management UI including add/edit/delete |
| src/main/lib/trpc/routers/claude.ts | Added encryptToken and fetchModels endpoints, updated query to decrypt tokens |
| package.json | Version bump to 0.0.55 and added TypeScript native preview dependency |
| bun.lock | Updated lockfile with new dependencies and version changes |
src/main/lib/trpc/routers/claude.ts
Outdated
```ts
try {
  const ollamaRes = await fetch(`${cleanUrl}/api/tags`)
  if (ollamaRes.ok) {
    const data = await ollamaRes.json()
    if (Array.isArray(data.models)) {
      const models = data.models
        .map((model: { name?: string }) => model.name)
        .filter((name: string | undefined): name is string => Boolean(name))
      return { models }
    }
  }
} catch (error) {
  console.warn("[models] Failed to fetch Ollama tags:", error)
}

try {
  const res = await fetch(`${cleanUrl}/v1/models`, {
    headers: authToken ? { Authorization: `Bearer ${authToken}` } : {},
  })
  if (res.ok) {
    const data = await res.json()
    if (Array.isArray(data.data)) {
      const models = data.data
        .map((model: { id?: string }) => model.id)
        .filter((id: string | undefined): id is string => Boolean(id))
      return { models }
    }
  }
} catch (error) {
  console.warn("[models] Failed to fetch OpenAI-compatible models:", error)
}

return { models: [] as string[] }
```
The fetchModels endpoint tries Ollama format first, then falls back to OpenAI-compatible format. However, both attempts swallow errors and return empty models array on failure. This makes it difficult to distinguish between:
- A server that responds but has no models
- A server that's unreachable
- A server that returns an error
Consider returning more detailed information about what failed, especially for debugging purposes. At minimum, the response could include a status field indicating whether the request succeeded or failed and why.
Suggested change:

```ts
const debugInfo: {
  ollama?: { ok: boolean; status?: number; error?: string }
  openai?: { ok: boolean; status?: number; error?: string }
} = {}

// Try Ollama-compatible endpoint first
try {
  const ollamaRes = await fetch(`${cleanUrl}/api/tags`)
  debugInfo.ollama = {
    ok: ollamaRes.ok,
    status: ollamaRes.status,
  }
  if (ollamaRes.ok) {
    const data = await ollamaRes.json()
    if (Array.isArray((data as any).models)) {
      const models = (data as any).models
        .map((model: { name?: string }) => model.name)
        .filter((name: string | undefined): name is string => Boolean(name))
      if (models.length > 0) {
        return {
          models,
          status: {
            success: true,
            source: "ollama" as const,
            details: debugInfo,
          },
        }
      }
      debugInfo.ollama.error = "No models found in Ollama response"
    } else {
      debugInfo.ollama.error = "Ollama response missing 'models' array"
    }
  } else {
    debugInfo.ollama.error = `Ollama endpoint returned HTTP ${ollamaRes.status}`
  }
} catch (error) {
  const message = error instanceof Error ? error.message : String(error)
  debugInfo.ollama = {
    ok: false,
    error: `Failed to fetch Ollama tags: ${message}`,
  }
  console.warn("[models] Failed to fetch Ollama tags:", error)
}

// Fallback: try OpenAI-compatible /v1/models endpoint
try {
  const res = await fetch(`${cleanUrl}/v1/models`, {
    headers: authToken ? { Authorization: `Bearer ${authToken}` } : {},
  })
  debugInfo.openai = {
    ok: res.ok,
    status: res.status,
  }
  if (res.ok) {
    const data = await res.json()
    if (Array.isArray((data as any).data)) {
      const models = (data as any).data
        .map((model: { id?: string }) => model.id)
        .filter((id: string | undefined): id is string => Boolean(id))
      if (models.length > 0) {
        return {
          models,
          status: {
            success: true,
            source: "openai" as const,
            details: debugInfo,
          },
        }
      }
      debugInfo.openai.error = "No models found in OpenAI-compatible response"
    } else {
      debugInfo.openai.error = "OpenAI-compatible response missing 'data' array"
    }
  } else {
    debugInfo.openai.error = `OpenAI-compatible endpoint returned HTTP ${res.status}`
  }
} catch (error) {
  const message = error instanceof Error ? error.message : String(error)
  debugInfo.openai = {
    ok: false,
    error: `Failed to fetch OpenAI-compatible models: ${message}`,
  }
  console.warn("[models] Failed to fetch OpenAI-compatible models:", error)
}

// Both attempts failed to return any models
return {
  models: [] as string[],
  status: {
    success: false,
    source: "none" as const,
    reason: "Failed to fetch models from both Ollama and OpenAI-compatible endpoints",
    details: debugInfo,
  },
}
```
```tsx
<div className="bg-background rounded-lg border border-border overflow-hidden divide-y divide-border">
  {providers.map(provider => (
    <div key={provider.id} className="flex items-center justify-between p-3 hover:bg-muted/50">
      <div className="flex items-center gap-3">
        <div className="flex-shrink-0">
          {provider.isOffline ? (
            <div className="h-8 w-8 rounded bg-blue-500/10 flex items-center justify-center text-blue-500">
              <Server className="h-4 w-4" />
            </div>
          ) : (
            <div className="h-8 w-8 rounded bg-purple-500/10 flex items-center justify-center text-purple-500">
              <Globe className="h-4 w-4" />
            </div>
          )}
        </div>
        <div>
          <div className="text-sm font-medium flex items-center gap-2">
            {provider.name}
            {provider.isOffline && <Badge variant="secondary" className="text-[10px] h-4">OFFLINE</Badge>}
          </div>
          <div className="text-xs text-muted-foreground">
            {provider.models.length} model{provider.models.length === 1 ? "" : "s"} • {provider.baseUrl}
          </div>
        </div>
      </div>
      <div className="flex items-center gap-1">
        <Button size="icon" variant="ghost" className="h-8 w-8 text-muted-foreground hover:text-foreground" onClick={() => handleEdit(provider)}>
          <Pencil className="h-3.5 w-3.5" />
        </Button>
        {!provider.isOffline && (
          <Button size="icon" variant="ghost" className="h-8 w-8 text-muted-foreground hover:text-red-500" onClick={() => handleDelete(provider.id)}>
            <Trash className="h-3.5 w-3.5" />
          </Button>
        )}
      </div>
    </div>
  ))}
</div>
```
The provider list includes the offline Ollama provider by default (OFFLINE_PROVIDER). However, the settings UI shows all providers in the list including the offline one. Users might be confused why they can't delete the offline provider (line 549 prevents deletion with !provider.isOffline check). Consider adding a visual indicator or help text explaining that the offline provider is built-in and cannot be removed.
Suggested change:

```tsx
<>
  <div className="bg-background rounded-lg border border-border overflow-hidden divide-y divide-border">
    {providers.map(provider => (
      <div key={provider.id} className="flex items-center justify-between p-3 hover:bg-muted/50">
        <div className="flex items-center gap-3">
          <div className="flex-shrink-0">
            {provider.isOffline ? (
              <div className="h-8 w-8 rounded bg-blue-500/10 flex items-center justify-center text-blue-500">
                <Server className="h-4 w-4" />
              </div>
            ) : (
              <div className="h-8 w-8 rounded bg-purple-500/10 flex items-center justify-center text-purple-500">
                <Globe className="h-4 w-4" />
              </div>
            )}
          </div>
          <div>
            <div className="text-sm font-medium flex items-center gap-2">
              {provider.name}
              {provider.isOffline && <Badge variant="secondary" className="text-[10px] h-4">OFFLINE</Badge>}
            </div>
            <div className="text-xs text-muted-foreground">
              {provider.models.length} model{provider.models.length === 1 ? "" : "s"} • {provider.baseUrl}
            </div>
          </div>
        </div>
        <div className="flex items-center gap-1">
          <Button size="icon" variant="ghost" className="h-8 w-8 text-muted-foreground hover:text-foreground" onClick={() => handleEdit(provider)}>
            <Pencil className="h-3.5 w-3.5" />
          </Button>
          {!provider.isOffline && (
            <Button size="icon" variant="ghost" className="h-8 w-8 text-muted-foreground hover:text-red-500" onClick={() => handleDelete(provider.id)}>
              <Trash className="h-3.5 w-3.5" />
            </Button>
          )}
        </div>
      </div>
    ))}
  </div>
  {providers.some(p => p.isOffline) && (
    <div className="text-[11px] text-muted-foreground">
      The offline provider is built in and cannot be removed.
    </div>
  )}
</>
```
```ts
const encrypted = safeStorage.isEncryptionAvailable()
  ? safeStorage.encryptString(input.token).toString("base64")
  : Buffer.from(input.token, "utf-8").toString("base64")
```
The fallback encryption when safeStorage is unavailable uses plain base64 encoding, which provides no actual encryption. While it's prefixed with "enc:", this gives a false sense of security. Consider either:
- Refusing to store tokens when encryption is unavailable and displaying an error to the user
- Clearly documenting this limitation and warning users
- Using an alternative encryption method as fallback
This is critical because tokens stored with base64 can be easily decoded by anyone with file system access.
Suggested change:

```ts
if (!safeStorage.isEncryptionAvailable()) {
  throw new Error(
    "Secure token storage is not available on this system; cannot encrypt token.",
  )
}
const encrypted = safeStorage.encryptString(input.token).toString("base64")
```
```ts
    appStore.set(modelProvidersAtom, updatedProviders)
  }
} catch (err) {
  console.error("[models] Failed to encrypt custom model token:", err)
```
The encryption error is caught and logged but the function continues with an unencrypted token in the customConfig. This could lead to tokens being sent in plain text. After a failed encryption attempt, the token should either:
- Not be used at all (abort the operation)
- Be set to undefined/empty to prevent accidental plain text transmission
- Re-throw the error to prevent the chat from proceeding
Additionally, the user should be notified that their token could not be secured.
Suggested change:

```ts
console.error("[models] Failed to encrypt custom model token:", err)
Sentry.captureException(err)
// Ensure we do not continue with a plaintext token after a failed encryption
customConfig = { ...customConfig, token: undefined }
if (activeProviderId) {
  const providers = appStore.get(modelProvidersAtom)
  const updatedProviders = providers.map((provider) =>
    provider.id === activeProviderId
      ? { ...provider, token: undefined }
      : provider,
  )
  appStore.set(modelProvidersAtom, updatedProviders)
}
toast.error(
  "Your API token could not be securely encrypted. It has been cleared. Please re-enter it in settings.",
)
```
```ts
const { encrypted } = await encryptTokenMutation.mutateAsync({
  token: trimmedToken,
})
const providerId = `provider-${Date.now()}`
```
Using Date.now() for ID generation can lead to collisions if multiple providers are created in rapid succession (e.g., within the same millisecond). Consider using a more robust ID generation method such as:
- crypto.randomUUID() for truly unique IDs
- A counter combined with timestamp
- nanoid or similar library
This is particularly important in the onboarding flow where users might submit quickly or the same code path could be triggered multiple times.
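A collision-free id helper along these lines would address the comment; this is a sketch, and the `Math.random`-based fallback branch is an illustrative assumption for runtimes without Web Crypto, not code from this PR.

```typescript
// Generate a provider id that cannot collide across rapid calls.
function newProviderId(): string {
  const hasUUID =
    typeof crypto !== "undefined" && typeof crypto.randomUUID === "function"
  return hasUUID
    ? `provider-${crypto.randomUUID()}`
    : // Fallback: timestamp plus a random suffix, so two calls within the
      // same millisecond still produce distinct ids.
      `provider-${Date.now()}-${Math.random().toString(36).slice(2, 10)}`
}
```

Electron's renderer and modern Node both expose `crypto.randomUUID`, so in practice the first branch is the one that runs.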
```ts
)
.mutation(async ({ input }) => {
  const cleanUrl = input.baseUrl.replace(/\/$/, "")
  const authToken = input.token ? decryptIfNeeded(input.token) : ""
```
The token field is optional (z.string().optional()) but the decryptIfNeeded function is called on it without checking if it exists first. While decryptIfNeeded does handle falsy values, the type signature suggests the token could be undefined. Consider making the validation more explicit by using .nullable() or adding a default value to make the intent clearer.
```ts
export const modelProvidersAtom = atomWithStorage<ModelProvider[]>(
  "agents:model-providers",
  [OFFLINE_PROVIDER], // Start with offline provider
  undefined,
  { getOnInit: true },
)
```
The old customClaudeConfigAtom (stored at "agents:claude-custom-config") has been completely removed without providing a migration path for existing users. Users who configured custom models in previous versions will lose their configuration after this update. Consider adding a one-time migration that:
- Checks for existing data in the old storage key
- Converts it to the new ModelProvider format
- Adds it to modelProvidersAtom
- Clears the old storage key
This should be done during app initialization to ensure a smooth upgrade experience.
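The migration steps above could be sketched as follows. The two storage keys come from this PR; everything else (the legacy config shape, field names, and the `StringStore` abstraction over localStorage) is an illustrative assumption.

```typescript
interface LegacyClaudeConfig {
  baseUrl?: string
  token?: string
  model?: string
}

interface ModelProvider {
  id: string
  name: string
  baseUrl: string
  token?: string
  models: string[]
}

// Minimal localStorage-like interface so the sketch is testable.
interface StringStore {
  getItem(key: string): string | null
  setItem(key: string, value: string): void
  removeItem(key: string): void
}

const LEGACY_KEY = "agents:claude-custom-config"
const PROVIDERS_KEY = "agents:model-providers"

function migrateLegacyConfig(storage: StringStore): void {
  const raw = storage.getItem(LEGACY_KEY)
  if (raw === null) return // nothing to migrate
  try {
    const legacy: LegacyClaudeConfig = JSON.parse(raw)
    if (legacy.baseUrl) {
      // Append the legacy config as a new provider entry.
      const providers: ModelProvider[] = JSON.parse(
        storage.getItem(PROVIDERS_KEY) ?? "[]",
      )
      providers.push({
        id: `provider-migrated-${Date.now()}`,
        name: "Migrated custom config",
        baseUrl: legacy.baseUrl,
        token: legacy.token,
        models: legacy.model ? [legacy.model] : [],
      })
      storage.setItem(PROVIDERS_KEY, JSON.stringify(providers))
    }
  } catch {
    // Malformed legacy data: drop it rather than crash startup
  } finally {
    // Clear the old key so the migration runs only once
    storage.removeItem(LEGACY_KEY)
  }
}
```

Running this once before `modelProvidersAtom` is first read would preserve existing custom configurations across the upgrade.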
```ts
const { encrypted } = await encryptTokenMutation.mutateAsync({
  token: key.trim(),
})
const providerId = `provider-${Date.now()}`
```
Using Date.now() for ID generation can lead to collisions if multiple providers are created in rapid succession (e.g., within the same millisecond). Consider using a more robust ID generation method such as:
- crypto.randomUUID() for truly unique IDs
- A counter combined with timestamp
- nanoid or similar library
This is particularly important in the onboarding flow where users might submit quickly or the same code path could be triggered multiple times.
Suggested change:

```ts
const providerId = `provider-${
  typeof crypto !== "undefined" && typeof crypto.randomUUID === "function"
    ? crypto.randomUUID()
    : Date.now()
}`
```
```ts
}

const newProvider: ModelProvider = {
  id: provider?.id || `provider-${Date.now()}`,
```
Using Date.now() for ID generation can lead to collisions if multiple providers are created in rapid succession. Consider using a more robust ID generation method such as crypto.randomUUID(), a counter combined with timestamp, or nanoid.
Suggested change:

```ts
id: provider?.id || `provider-${crypto.randomUUID()}`,
```
```ts
import { useAtom, useAtomValue, useSetAtom } from "jotai"
import { ChevronDown, Loader2, RefreshCw, Zap, Globe } from "lucide-react"
```
Unused imports: Loader2 and RefreshCw.
Suggested change:

```ts
import { ChevronDown, Zap, Globe } from "lucide-react"
```
Addressed review feedback: added legacy custom config migration, reset lastSelectedModelId on provider deletion, improved fetchModels status/debug info, switched provider id to randomUUID, and removed unused imports. |
Implements custom providers with multi-model support, adds IPC model discovery, and updates onboarding to encrypt tokens immediately.