Context
When the LLM server is unreachable or returns an error during ping/model listing, the UI can silently degrade — e.g. showing "model returned empty response" instead of surfacing the actual connection or model-not-found error. The `Ping()` method is a no-op for providers that don't support model listing (like Anthropic), so errors from those providers may also go unnoticed.
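A minimal sketch of how the no-op degrades error reporting. The method names `Ping()` and `ListModels()` come from this issue; the interface shape and the `anthropicLike` type are assumptions for illustration only:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical provider interface; the exact signatures in the real
// codebase may differ.
type Provider interface {
	Ping() error
	ListModels() ([]string, error)
}

// anthropicLike mimics a provider without model listing: Ping is a
// no-op, so an unreachable server is not detected at ping time.
type anthropicLike struct{}

// Always reports success, even if the server is down or the
// configured model does not exist.
func (anthropicLike) Ping() error { return nil }

func (anthropicLike) ListModels() ([]string, error) {
	return nil, errors.New("model listing not supported")
}

func main() {
	var p Provider = anthropicLike{}
	// Prints a nil error; the real failure only appears later, e.g.
	// as a misleading "model returned empty response".
	fmt.Println("ping error:", p.Ping())
}
```

Because the no-op succeeds unconditionally, any caller that treats a nil `Ping()` error as "connection OK" will defer the failure to the first real request.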
What needs auditing
- Trace every code path where `Ping()` and `ListModels()` are called and verify the error propagates to a user-visible surface (status bar, error overlay, etc.)
- Ensure connection-refused / timeout / model-not-found errors from the LLM client are not swallowed or masked by downstream checks (e.g. empty response guards) in any LLM-dependent feature: insights, SQL chat, document extraction, model picker
- Verify all LLM connectivity errors surface with actionable messages rather than confusing secondary symptoms
- Consider whether `Ping()` should run proactively when any LLM feature is first used, with the error shown before attempting the full `ChatComplete` call
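The last point can be sketched as a preflight wrapper. `Ping()` and `ChatComplete` are named in this issue; the `Client` interface, `chatWithPreflight`, and `downClient` are hypothetical, shown only to illustrate surfacing the connectivity error before the chat call:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical client shape; real signatures may differ.
type Client interface {
	Ping() error
	ChatComplete(prompt string) (string, error)
}

// chatWithPreflight runs Ping before the full ChatComplete call so a
// connection or model-not-found error surfaces with an actionable
// message instead of a confusing downstream symptom.
func chatWithPreflight(c Client, prompt string) (string, error) {
	if err := c.Ping(); err != nil {
		return "", fmt.Errorf("LLM server unreachable: %w", err)
	}
	return c.ChatComplete(prompt)
}

// downClient simulates a refused connection: without the preflight,
// the caller would only see an empty response.
type downClient struct{}

func (downClient) Ping() error { return errors.New("connection refused") }

func (downClient) ChatComplete(string) (string, error) {
	return "", nil // the misleading "empty response" path
}

func main() {
	_, err := chatWithPreflight(downClient{}, "hello")
	fmt.Println(err)
}
```

Wrapping with `%w` keeps the underlying error available to `errors.Is`/`errors.As`, so downstream checks can still distinguish timeouts from refused connections while the user sees one clear message.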