A production-ready TypeScript library that provides unified access to multiple LLM providers using their official SDKs. This library wraps the native SDKs while maintaining a consistent interface across providers.
- **Unified Interface**: Single API for multiple LLM providers
- **Official SDKs**: Uses native provider SDKs under the hood
- **TypeScript First**: Full type safety and excellent developer experience
- **Built-in Analytics**: Track usage, costs, and performance
- **Streaming Support**: Real-time response streaming
- **Tool Support**: Function calling across providers
- **Multimodal**: Support for text, images, audio, video, and documents
- **Cost Tracking**: Built-in pricing calculator and usage analytics
- **MCP Integration**: Connect to external tools via Model Context Protocol
| Provider | Models | Streaming | Tools | Vision | Audio | Embeddings |
|---|---|---|---|---|---|---|
| OpenAI | GPT-4.1, o3, o4-mini, TTS, etc. | ✅ | ✅ | ✅ | ✅ | ✅ |
| Anthropic | Claude 4, Claude 3.5, etc. | ✅ | ✅ | ✅ | ❌ | ❌ |
| Google Gemini | Gemini 2.5 Pro, Imagen, Gemini TTS, etc. | ✅ | ✅ | ✅ | ✅ | ✅ |
| Mistral | Mistral Large, Mistral Embed, etc. | ✅ | ✅ | ✅ | ❌ | ✅ |
| Cohere | Command A, Command R+, etc. | ✅ | ✅ | ❌ | ❌ | ✅ |
| OpenRouter | Access to 300+ models | ✅ | ✅ | ✅ | ❌ | ❌ |
| GitHub Copilot | GPT-4.1, Claude 4 Sonnet, etc. | ✅ | ✅ | ✅ | ❌ | ❌ |
```bash
bun add kepler-ai-sdk
```

You'll also need the official provider SDKs for the providers you want to use:

```bash
bun add openai @anthropic-ai/sdk @google/genai @mistralai/mistralai cohere-ai
```

```ts
import { Kepler, OpenAIProvider, AnthropicProvider, GeminiProvider } from 'kepler-ai-sdk';

// 1. Initialize Kepler with providers
const kepler = new Kepler({
  providers: [
    { provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY }) },
    { provider: new AnthropicProvider({ apiKey: process.env.ANTHROPIC_API_KEY }) },
    { provider: new GeminiProvider({ apiKey: process.env.GEMINI_API_KEY }) }
  ]
});

// 2. Generate a completion
const response = await kepler.generateCompletion({
  model: 'gpt-4o-mini',
  messages: [
    { role: 'user', content: 'Hello, world!' }
  ]
});

console.log(response.content);
```

Add external tool capabilities by connecting to MCP servers:
```ts
import { Kepler, AnthropicProvider } from 'kepler-ai-sdk';

const kepler = new Kepler({
  providers: [
    { provider: new AnthropicProvider({ apiKey: process.env.ANTHROPIC_API_KEY }) }
  ],
  mcpServers: [
    {
      id: "filesystem",
      name: "File System",
      command: "npx",
      args: ["@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
    }
  ]
});

// LLM automatically has access to filesystem tools
const response = await kepler.generateCompletion({
  model: "claude-3-5-sonnet-20240620",
  messages: [{ role: "user", content: "List files in the current directory" }]
});
```

Popular MCP servers: `@modelcontextprotocol/server-filesystem`, `@modelcontextprotocol/server-git`, `@modelcontextprotocol/server-sqlite`
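The `mcpServers` entries above follow a simple shape (`id`, `name`, `command`, `args`). As a sketch, a hypothetical helper (not part of kepler-ai-sdk) could sanity-check an entry before it is passed to the `Kepler` constructor:

```typescript
// Hypothetical helper: validates the shape of an MCP server entry.
// The McpServerConfig name is illustrative, not the SDK's exact type.
interface McpServerConfig {
  id: string;
  name: string;
  command: string;
  args: string[];
}

function validateMcpServerConfig(entry: Partial<McpServerConfig>): entry is McpServerConfig {
  return (
    typeof entry.id === "string" && entry.id.length > 0 &&
    typeof entry.name === "string" &&
    typeof entry.command === "string" && entry.command.length > 0 &&
    Array.isArray(entry.args)
  );
}

const filesystemServer = {
  id: "filesystem",
  name: "File System",
  command: "npx",
  args: ["@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
};

console.log(validateMcpServerConfig(filesystemServer)); // true
```

Catching a missing `command` or `args` here fails fast, instead of at server spawn time.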
Provider adapters wrap the official SDKs and are configured through Kepler:
```ts
import { Kepler, OpenAIProvider, AnthropicProvider } from 'kepler-ai-sdk';

const kepler = new Kepler({
  providers: [
    {
      provider: new OpenAIProvider({
        apiKey: 'your-api-key',
        organization: 'your-org-id', // optional
        baseURL: 'https://custom-proxy.com' // optional
      })
    },
    {
      provider: new AnthropicProvider({
        apiKey: 'your-api-key',
        baseURL: 'https://custom-proxy.com' // optional
      })
    }
  ]
});
```

All providers use the same message format:
```ts
const messages = [
  {
    role: 'system',
    content: 'You are a helpful assistant.'
  },
  {
    role: 'user',
    content: 'What is the capital of France?'
  },
  {
    role: 'assistant',
    content: 'The capital of France is Paris.'
  }
];
```

For images and other media:
```ts
const messages = [
  {
    role: 'user',
    content: [
      { type: 'text', text: 'What do you see in this image?' },
      {
        type: 'image',
        imageUrl: 'data:image/jpeg;base64,/9j/4AAQSkZJRg...',
        mimeType: 'image/jpeg'
      }
    ]
  }
];
```

Stream completions for real-time output:

```ts
for await (const chunk of kepler.streamCompletion({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a story' }]
})) {
  if (chunk.delta) {
    process.stdout.write(chunk.delta);
  }
  if (chunk.finished) {
    console.log('\nDone!');
    console.log('Tokens used:', chunk.usage?.totalTokens);
  }
}
```

Define tools once and use them across providers:

```ts
const response = await kepler.generateCompletion({
  model: 'gpt-4o',
  messages: [
    { role: 'user', content: 'What\'s the weather in New York?' }
  ],
  tools: [
    {
      name: 'get_weather',
      description: 'Get current weather for a city',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string', description: 'City name' },
          unit: { type: 'string', enum: ['celsius', 'fahrenheit'] }
        },
        required: ['city']
      }
    }
  ],
  toolChoice: 'auto'
});

if (response.toolCalls) {
  for (const call of response.toolCalls) {
    console.log(`Tool: ${call.name}`);
    console.log(`Args:`, call.arguments);
  }
}
```

Discover and filter available models:

```ts
// List all models
const models = await kepler.listModels();

// Access advanced model management features
const modelManager = kepler.getModelManager();
const visionModels = await modelManager.findModelsByCapability('vision');
const functionModels = await modelManager.findModelsByCapability('functionCalling');

// Get the cheapest model
const cheapest = await modelManager.getCheapestModel(['streaming']);

// Get the most capable model
const best = await modelManager.getMostCapableModel(['vision', 'functionCalling']);
```

Track usage and costs:

```ts
import { PricingCalculator, UsageTracker } from 'kepler-ai-sdk';

const pricing = new PricingCalculator();
const usage = new UsageTracker();

// Generate completion
const response = await kepler.generateCompletion({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }]
});

// Calculate cost
const cost = await pricing.calculateCost(response.usage, response.model);
console.log(`Cost: $${cost?.totalCost.toFixed(6)}`);

// Track usage
usage.trackUsage(response.model, response.usage, cost?.totalCost);

// Get statistics
const stats = usage.getUsage('gpt-4o');
if (stats && !Array.isArray(stats)) {
  console.log(`Total requests: ${stats.totalRequests}`);
  console.log(`Total cost: $${stats.totalCost.toFixed(4)}`);
}
```

Handle provider errors uniformly with `LLMError`:

```ts
import { LLMError } from 'kepler-ai-sdk';

try {
  const response = await kepler.generateCompletion({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }]
  });
} catch (error) {
  if (error instanceof LLMError) {
    console.log('Provider:', error.provider);
    console.log('Status:', error.statusCode);
    console.log('Retryable:', error.isRetryable());
    console.log('User message:', error.getUserMessage());
  }
}
```

```ts
// OpenAI DALL-E (access provider for specialized APIs)
const modelManager = kepler.getModelManager();
const openai = modelManager.getProvider('openai');

const images = await openai.generateImage({
  prompt: 'A futuristic city at sunset',
  model: 'dall-e-3',
  size: '1024x1024',
  quality: 'hd',
  n: 1
});

console.log('Generated image URL:', images.images[0].url);
```

```ts
// OpenAI TTS (access provider for specialized APIs)
const modelManager = kepler.getModelManager();
const openai = modelManager.getProvider('openai');

const audio = await openai.generateAudio({
  text: 'Hello, this is a test of text-to-speech.',
  model: 'tts-1',
  voice: 'alloy',
  format: 'mp3'
});

// audio.audio is an ArrayBuffer containing the MP3 data
```

```ts
// OpenAI embeddings (access provider for specialized APIs)
const modelManager = kepler.getModelManager();
const openai = modelManager.getProvider('openai');

const embeddings = await openai.generateEmbedding({
  model: 'text-embedding-3-small',
  input: ['Hello world', 'How are you?'],
  encodingFormat: 'float'
});

console.log('Embeddings:', embeddings.embeddings);
console.log('Dimensions:', embeddings.embeddings[0].length);
```

Set provider credentials via environment variables:

```bash
# Provider API Keys
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key

# Optional: Organization IDs
OPENAI_ORG_ID=your_org_id
```

For proxies or custom endpoints:
```ts
const kepler = new Kepler({
  providers: [
    {
      provider: new OpenAIProvider({
        apiKey: 'your-key',
        baseURL: 'https://your-proxy.com/v1'
      })
    },
    {
      provider: new AnthropicProvider({
        apiKey: 'your-key',
        baseURL: 'https://your-proxy.com'
      })
    }
  ]
});
```

```ts
interface CompletionRequest {
  model: string;
  messages: Message[];
  temperature?: number;
  maxTokens?: number;
  tools?: ToolDefinition[];
  toolChoice?: 'auto' | 'none' | 'required' | { type: 'function'; function: { name: string } };
  responseFormat?: ResponseFormat;
  stream?: boolean;
  stop?: string | string[];
}
```

```ts
interface CompletionResponse {
  id: string;
  content: string;
  model: string;
  usage: TokenUsage;
  finishReason: 'stop' | 'length' | 'tool_calls' | 'content_filter';
  toolCalls?: ToolCall[];
  reasoning?: string;
  metadata?: Record<string, unknown>;
}
```

```ts
interface ModelInfo {
  id: string;
  provider: string;
  name: string;
  description?: string;
  contextWindow: number;
  maxOutputTokens?: number;
  capabilities: ModelCapabilities;
  pricing?: ModelPricing;
  createdAt?: Date;
  type?: string;
}
```

All providers implement the `ProviderAdapter` interface:
```ts
generateCompletion(request: CompletionRequest): Promise<CompletionResponse>
streamCompletion(request: CompletionRequest): AsyncIterable<CompletionChunk>
listModels(): Promise<ModelInfo[]>
getModel(modelId: string): Promise<ModelInfo | null>
```
Optional methods (if supported):
```ts
generateEmbedding?(request: EmbeddingRequest): Promise<EmbeddingResponse>
generateImage?(request: ImageRequest): Promise<ImageResponse>
generateAudio?(request: AudioRequest): Promise<AudioResponse>
```
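To get a feel for the required methods, here is a minimal sketch of a custom adapter. The types are simplified stand-ins (the SDK's real `CompletionRequest`/`CompletionResponse` shapes are richer), so treat this as an illustration of the method contract, not the exact API:

```typescript
// Simplified stand-in types for the adapter contract.
interface Message { role: string; content: string }
interface CompletionRequest { model: string; messages: Message[] }
interface CompletionResponse { id: string; content: string; model: string }
interface ModelInfo { id: string; provider: string; name: string; contextWindow: number }

// A toy adapter that echoes the last user message back.
class EchoProvider {
  async generateCompletion(request: CompletionRequest): Promise<CompletionResponse> {
    const last = request.messages[request.messages.length - 1];
    return { id: "echo-1", content: `echo: ${last?.content ?? ""}`, model: request.model };
  }

  // Streaming is expressed as an async iterable of delta chunks.
  async *streamCompletion(request: CompletionRequest): AsyncIterable<{ delta: string; finished: boolean }> {
    const response = await this.generateCompletion(request);
    for (const word of response.content.split(" ")) {
      yield { delta: word + " ", finished: false };
    }
    yield { delta: "", finished: true };
  }

  async listModels(): Promise<ModelInfo[]> {
    return [{ id: "echo-1b", provider: "echo", name: "Echo 1B", contextWindow: 4096 }];
  }

  async getModel(modelId: string): Promise<ModelInfo | null> {
    return (await this.listModels()).find(m => m.id === modelId) ?? null;
  }
}
```

A real adapter would forward these calls to a provider SDK and map its responses into the shared types.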
Check out the examples/ directory for complete working examples. The examples are numbered to provide a clear learning path:

- `01-basic-usage.ts`: Demonstrates fundamental features like initializing the ModelManager, listing models, and generating simple completions.
- `02-streaming.ts`: Shows how to handle streaming responses for real-time applications.
- `03-tool-usage.ts`: Covers how to define and use tools with supported models.
- `04-multimodality.ts`: Provides an example of sending images to vision-capable models.
- `05-embeddings.ts`: Explains how to generate text embeddings.
- `06-cost-tracking.ts`: Demonstrates how to use the PricingCalculator and UsageTracker to monitor API costs.
- `07-oauth-and-custom-providers.ts`: Covers advanced topics like setting up OAuth and creating custom provider adapters.
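Once you have embeddings (see `05-embeddings.ts`), comparing them is a plain vector operation that needs no SDK. A minimal cosine-similarity sketch over the `number[]` vectors an embedding call returns:

```typescript
// Cosine similarity between two embedding vectors.
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("vectors must have the same dimension");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

This is the usual ranking primitive for semantic search: embed a query, then sort documents by similarity to it.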
```bash
# Install dependencies
bun install

# Build the library
bun run build

# Run examples
bun run examples/01-basic-usage.ts
```

We welcome contributions! Please see our Contributing Guide for details.
MIT License - see LICENSE for details.
- Documentation
- Issue Tracker
- Email Support