LlmComposer

LlmComposer is an Elixir library that simplifies interaction with large language models (LLMs). It provides a unified interface to OpenAI (Chat Completions and Responses API), OpenRouter, Ollama, AWS Bedrock, and Google (Gemini), with support for streaming, function calls, structured outputs, cost tracking, and multi-provider failover routing.

Table of Contents

  • Installation
  • Tesla Configuration
  • Provider Compatibility
  • Usage
  • Providers
  • Full Documentation

Installation

Add llm_composer to your dependencies in mix.exs:

def deps do
  [
    {:llm_composer, "~> 0.19"}
  ]
end
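
Then run mix deps.get to fetch it.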

Tesla Configuration

LlmComposer uses Tesla for HTTP. The default adapter is Tesla.Adapter.Mint, which supports streaming out of the box. Finch is also supported if you prefer its connection pooling:

# config/config.exs
config :llm_composer, :tesla_adapter, {Tesla.Adapter.Finch, name: MyApp.Finch}

# application.ex supervision tree
{Finch, name: MyApp.Finch}
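
Mint needs no extra setup; with Finch, the named pool must be started under your application's supervisor, as shown above.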

You can also customize the JSON engine (defaults to the built-in JSON module on Elixir 1.18+, falling back to Jason):

config :llm_composer, :json_engine, Jason

Provider Compatibility

Feature              OpenAI   OpenRouter   Ollama   Bedrock   Google
Basic Chat           ✓        ✓            ✓        ✓         ✓
Streaming            ✓        ✓            ✓        ✓         ✓
Function Calls       ✓        ✓            ⚠️¹      ✓         ✓
Structured Outputs   ✓        ✓            ⚠️¹      ✓         ✓
Cost Tracking        ✓        ✓            ✓        ✓         ✓
Fallback Models      ✓        ✓            ✓        ✓         ✓
Provider Routing     ✓        ✓            ✓        ✓         ✓

¹ Via Ollama's OpenAI-compatible endpoint — see the Providers guide.

Usage

Simple Chat

Application.put_env(:llm_composer, :open_ai, api_key: "<your api key>")

settings = %LlmComposer.Settings{
  providers: [
    {LlmComposer.Providers.OpenAI, [model: "gpt-4.1-mini"]}
  ],
  system_prompt: "You are a helpful assistant."
}

{:ok, res} = LlmComposer.simple_chat(settings, "hi")
IO.inspect(res.main_response)

Using Message History

For multi-turn conversations, use run_completion/2 with an explicit message list:

messages = [
  LlmComposer.Message.new(:user, "What is the Roman Empire?"),
  LlmComposer.Message.new(:assistant, "The Roman Empire was a period of ancient Roman civilization."),
  LlmComposer.Message.new(:user, "When did it begin?")
]

{:ok, res} = LlmComposer.run_completion(settings, messages)
IO.inspect(res.main_response)
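
To keep the conversation going, append the assistant reply and the next user message to the history and call run_completion/2 again. A minimal sketch, assuming main_response is a message that can be appended to the list as-is:

messages = messages ++ [
  res.main_response,
  LlmComposer.Message.new(:user, "And when did it end?")
]

{:ok, res} = LlmComposer.run_completion(settings, messages)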

Providers

All five providers share the same interface. Quick reference:

Provider      Setup
OpenAI        Application.put_env(:llm_composer, :open_ai, api_key: "...")
OpenRouter    Application.put_env(:llm_composer, :open_router, api_key: "...")
Ollama        No API key needed; start the Ollama server locally
AWS Bedrock   Configure credentials via ExAws
Google        Application.put_env(:llm_composer, :google, api_key: "...") or Goth/Vertex

See the Providers guide for full examples, Vertex AI setup, OpenAI-compatible servers, and provider-specific options.
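
For example, failover is declared by listing more than one provider in settings; entries are tried in order. A minimal sketch, assuming an OpenRouter provider module named by the same convention as LlmComposer.Providers.OpenAI:

settings = %LlmComposer.Settings{
  providers: [
    # primary provider
    {LlmComposer.Providers.OpenAI, [model: "gpt-4.1-mini"]},
    # tried next if the primary fails
    {LlmComposer.Providers.OpenRouter, [model: "openai/gpt-4o-mini"]}
  ],
  system_prompt: "You are a helpful assistant."
}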

Full Documentation

Complete reference documentation is available on HexDocs:

  • Providers — per-provider setup, Vertex AI, OpenAI-compatible servers, structured outputs, custom request params
  • Streaming — Finch setup, StreamChunk fields, token tracking
  • Function Calls — 3-step manual function call workflow, FunctionExecutor API
  • Provider Router — multi-provider failover with exponential backoff
  • Cost Tracking — automatic and manual pricing, ETS cache setup
  • Custom Providers — implementing the Provider behaviour
  • Configuration — all global options, retry configuration
