fix(llm): audit ping/model listing errors to ensure they surface properly in the UI #739

@cpcloud

Description

Context

When the LLM server is unreachable or returns an error during ping/model listing, the UI can silently degrade, e.g. by showing "model returned empty response" instead of surfacing the actual connection or model-not-found error. The Ping() method is also a no-op for providers that don't support model listing (such as Anthropic), so errors from those providers may go unnoticed entirely.
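A minimal Go sketch of the failure mode described above (all names here are illustrative, not the project's actual API): a no-op Ping() reports the provider as healthy, the ChatComplete error is discarded, and the user sees only the secondary empty-response symptom.

```go
package main

import (
	"errors"
	"fmt"
)

// Provider is a hypothetical stand-in for the project's LLM client interface.
type Provider interface {
	Ping() error
	ChatComplete(prompt string) (string, error)
}

// anthropicLike models a provider whose Ping is a no-op because it
// cannot list models, so connectivity problems go undetected there.
type anthropicLike struct{ reachable bool }

func (p anthropicLike) Ping() error { return nil } // no-op: always reports healthy

func (p anthropicLike) ChatComplete(prompt string) (string, error) {
	if !p.reachable {
		return "", errors.New("connection refused") // the real failure
	}
	return "ok", nil
}

// askInsights reproduces the masking bug: the ChatComplete error is
// swallowed, and the empty-response guard shows a misleading message.
func askInsights(p Provider) string {
	if err := p.Ping(); err != nil {
		return "LLM unreachable: " + err.Error()
	}
	resp, _ := p.ChatComplete("summarize") // error discarded here
	if resp == "" {
		return "model returned empty response" // secondary symptom shown to user
	}
	return resp
}

func main() {
	fmt.Println(askInsights(anthropicLike{reachable: false}))
}
```

Running this prints "model returned empty response" even though the underlying error was a connection refusal, which is exactly the confusing behavior the audit should eliminate.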

What needs auditing

  • Trace every code path where Ping() and ListModels() are called and verify the error propagates to a user-visible surface (status bar, error overlay, etc.)
  • Ensure connection-refused / timeout / model-not-found errors from the LLM client are not swallowed or masked by downstream checks (e.g. empty response guards) in any LLM-dependent feature: insights, SQL chat, document extraction, model picker
  • Verify all LLM connectivity errors surface with actionable messages rather than confusing secondary symptoms
  • Consider whether Ping() should run proactively when any LLM feature is first used, with the error shown before attempting the full ChatComplete call
