Add an Ollama provider, just like the existing Gemini and OpenAI ones, to access a model running locally.
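A minimal sketch of what such a provider could look like, assuming Ollama's default local HTTP endpoint (`http://localhost:11434/api/generate`). The `OllamaProvider` class name and its `generate` method are hypothetical placeholders; the real provider would need to match whatever interface the Gemini and OpenAI providers implement in this codebase.

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"


class OllamaProvider:
    """Hypothetical provider that talks to a locally running Ollama server."""

    def __init__(self, model: str = "llama3"):
        self.model = model

    def build_request(self, prompt: str) -> dict:
        # Payload shape for Ollama's /api/generate endpoint;
        # stream=False asks for one JSON object instead of a stream.
        return {"model": self.model, "prompt": prompt, "stream": False}

    def generate(self, prompt: str) -> str:
        data = json.dumps(self.build_request(prompt)).encode("utf-8")
        req = urllib.request.Request(
            OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
        )
        # Requires `ollama serve` running locally with the model pulled.
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]
```

Unlike the Gemini and OpenAI providers, no API key is needed, since the server runs on the user's own machine.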