
Add support for local custom LLM providers (OpenAI-compatible, e.g. Ollama) #4828

@amalakprm

Description

Clear and concise description of the problem

As a user of Folo Desktop, I want to use a local LLM (such as Ollama) as the AI backend so that I can run summaries and analysis offline, privately, and without per-request costs. It is currently impossible to integrate Folo with locally hosted models, even though many users already run such models successfully in tools like RSS-Deck or Open WebUI.

Suggested solution

Add a “Custom / OpenAI-compatible LLM provider” option in AI settings with the following fields:

  • Provider type: OpenAI-compatible
  • Base URL (e.g. http://127.0.0.1:11434/v1)
  • API key (optional / dummy for local runtimes)
  • Model name (free text or auto-discovered)
  • Toggles for:
      • Streaming (on/off)
      • Tools / function calling (on/off)
      • Vision (on/off)

This would allow seamless integration with (see the sketch after this list):

  • Ollama
  • LM Studio
  • Open WebUI
  • Any self-hosted OpenAI-compatible API
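
For illustration, a rough sketch of how such settings could map onto a request against any OpenAI-compatible endpoint. The interface and field names below are hypothetical, not Folo's actual configuration schema:

```ts
// Hypothetical settings shape for the proposed provider option; the names
// here are illustrative only, not Folo's real configuration schema.
interface CustomProviderSettings {
  baseUrl: string // e.g. "http://127.0.0.1:11434/v1"
  apiKey?: string // optional / dummy for local runtimes
  model: string // e.g. "mistral"
  streaming: boolean
}

// Minimal non-streaming chat completion against any OpenAI-compatible endpoint.
async function summarize(settings: CustomProviderSettings, text: string): Promise<string> {
  const response = await fetch(`${settings.baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Local runtimes such as Ollama accept any bearer token here.
      Authorization: `Bearer ${settings.apiKey ?? "ollama"}`,
    },
    body: JSON.stringify({
      model: settings.model,
      messages: [
        { role: "system", content: "Summarize the following article." },
        { role: "user", content: text },
      ],
      stream: false,
    }),
  })
  if (!response.ok) {
    throw new Error(`LLM request failed: ${response.status} ${response.statusText}`)
  }
  const data = await response.json()
  return data.choices[0].message.content
}
```

Because all of the listed runtimes speak the same OpenAI-style protocol, a single provider type with a configurable base URL should cover them without runtime-specific code paths.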

Alternative

No response

Additional context

I have a working local Ollama setup (http://127.0.0.1:11434) with models like mistral, but there is currently no place in the Folo UI to configure or select a custom LLM endpoint.
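
As a rough sketch of the auto-discovery idea mentioned above, assuming a default local Ollama endpoint (Ollama's OpenAI-compatible layer exposes a /v1/models listing):

```ts
// Sketch: discover available model names from a local Ollama instance via its
// OpenAI-style /v1/models endpoint. The default base URL below is an assumption.
async function listLocalModels(baseUrl = "http://127.0.0.1:11434/v1"): Promise<string[]> {
  const response = await fetch(`${baseUrl}/models`, {
    // Ollama ignores the key, but OpenAI-style clients usually send one anyway.
    headers: { Authorization: "Bearer ollama" },
  })
  if (!response.ok) {
    throw new Error(`Model listing failed: ${response.status}`)
  }
  const data = await response.json()
  // OpenAI-style responses wrap models in a `data` array of { id, ... } objects.
  return data.data.map((m: { id: string }) => m.id)
}

listLocalModels().then((models) => console.log("Available local models:", models))
```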

Happy to test or provide feedback if this feature is implemented.

Validations

  • Check that there isn't already an issue that requests the same feature, to avoid creating a duplicate.
  • This issue is valid

Metadata

Assignees: No one assigned
Labels: enhancement (New feature or request)
Projects: No projects
Milestone: No milestone
