LlmComposer is an Elixir library that simplifies interaction with large language models (LLMs). It provides a unified interface to OpenAI (Chat Completions and Responses API), OpenRouter, Ollama, AWS Bedrock, and Google (Gemini), with support for streaming, function calls, structured outputs, cost tracking, and multi-provider failover routing.
Add `llm_composer` to your dependencies in `mix.exs`:
```elixir
def deps do
  [
    {:llm_composer, "~> 0.19"}
  ]
end
```

LlmComposer uses Tesla for HTTP. The default adapter is `Tesla.Adapter.Mint`, which supports streaming out of the box. Finch is also supported if you prefer its connection pooling:
```elixir
# config/config.exs
config :llm_composer, :tesla_adapter, {Tesla.Adapter.Finch, name: MyApp.Finch}

# application.ex supervision tree
{Finch, name: MyApp.Finch}
```

You can also customize the JSON engine (defaults to `JSON`, falls back to `Jason`):

```elixir
config :llm_composer, :json_engine, Jason
```

| Feature | OpenAI | OpenRouter | Ollama | Bedrock | Google |
|---|---|---|---|---|---|
| Basic Chat | ✅ | ✅ | ✅ | ✅ | ✅ |
| Streaming | ✅ | ✅ | ✅ | ✅ | ✅ |
| Function Calls | ✅ | ✅ | ✅¹ | ✅ | ✅ |
| Structured Outputs | ✅ | ✅ | ✅¹ | ✅ | ✅ |
| Cost Tracking | ✅ | ✅ | ❌ | ✅ | ✅ |
| Fallback Models | ❌ | ✅ | ❌ | ❌ | ❌ |
| Provider Routing | ❌ | ✅ | ❌ | ❌ | ❌ |
¹ Via Ollama's OpenAI-compatible endpoint — see the Providers guide.
A minimal chat using OpenAI:

```elixir
Application.put_env(:llm_composer, :open_ai, api_key: "<your api key>")

settings = %LlmComposer.Settings{
  providers: [
    {LlmComposer.Providers.OpenAI, [model: "gpt-4.1-mini"]}
  ],
  system_prompt: "You are a helpful assistant."
}

{:ok, res} = LlmComposer.simple_chat(settings, "hi")
IO.inspect(res.main_response)
```

For multi-turn conversations, use `run_completion/2` with an explicit message list:
```elixir
messages = [
  LlmComposer.Message.new(:user, "What is the Roman Empire?"),
  LlmComposer.Message.new(:assistant, "The Roman Empire was a period of ancient Roman civilization."),
  LlmComposer.Message.new(:user, "When did it begin?")
]

{:ok, res} = LlmComposer.run_completion(settings, messages)
IO.inspect(res.main_response)
```
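Either call returns a result whose `main_response` carries the assistant's reply. As a hedged sketch, assuming `main_response` is a `%LlmComposer.Message{}` like those built with `Message.new/2` above, with the reply text under `:content` (check your `IO.inspect` output to confirm):

```elixir
# Assumption: main_response mirrors LlmComposer.Message.new/2 and
# stores the assistant's reply text under :content.
%LlmComposer.Message{content: text} = res.main_response
IO.puts(text)
```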
All five providers share the same interface. Quick references:

| Provider | Setup |
|---|---|
| OpenAI | `Application.put_env(:llm_composer, :open_ai, api_key: "...")` |
| OpenRouter | `Application.put_env(:llm_composer, :open_router, api_key: "...")` |
| Ollama | No API key — start Ollama server locally |
| AWS Bedrock | Configure via ExAws |
| Google | `Application.put_env(:llm_composer, :google, api_key: "...")` or Goth/Vertex |
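Because the interface is shared, switching providers is a one-line change to the `providers` list. A minimal sketch against a local Ollama server, assuming a `LlmComposer.Providers.Ollama` module named like the OpenAI one above ("llama3.1" is a placeholder model name):

```elixir
# Same Settings shape as the OpenAI quick start; only the provider tuple changes.
settings = %LlmComposer.Settings{
  providers: [
    {LlmComposer.Providers.Ollama, [model: "llama3.1"]}
  ],
  system_prompt: "You are a helpful assistant."
}

{:ok, res} = LlmComposer.simple_chat(settings, "hi")
```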
See the Providers guide for full examples, Vertex AI setup, OpenAI-compatible servers, and provider-specific options.
Complete reference documentation is available on HexDocs:
- Providers — per-provider setup, Vertex AI, OpenAI-compatible servers, structured outputs, custom request params
- Streaming — Finch setup, StreamChunk fields, token tracking
- Function Calls — 3-step manual function call workflow, FunctionExecutor API
- Provider Router — multi-provider failover with exponential backoff (sketched below)
- Cost Tracking — automatic and manual pricing, ETS cache setup
- Custom Providers — implementing the Provider behaviour
- Configuration — all global options, retry configuration
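As a taste of the Provider Router guide, failover is expressed by listing more than one provider in `settings.providers`. A hedged sketch, assuming the router tries entries in order and falls back on failure (the OpenRouter model name is a placeholder; retry and backoff options live in the Configuration guide):

```elixir
# Assumption: with several providers listed, a failed request falls
# through to the next entry (see the Provider Router guide).
settings = %LlmComposer.Settings{
  providers: [
    {LlmComposer.Providers.OpenAI, [model: "gpt-4.1-mini"]},
    {LlmComposer.Providers.OpenRouter, [model: "anthropic/claude-3.5-haiku"]}
  ],
  system_prompt: "You are a helpful assistant."
}

{:ok, res} = LlmComposer.simple_chat(settings, "hi")
```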