diff --git a/README.md b/README.md index 4c684ca..7713330 100644 --- a/README.md +++ b/README.md @@ -1,85 +1,96 @@ # llmapi -> Modern C++ LLM API client with openai-compatible support +> Modern C++23 LLM client built with modules [![C++23](https://img.shields.io/badge/C%2B%2B-23-blue.svg)](https://en.cppreference.com/w/cpp/23) [![Module](https://img.shields.io/badge/module-ok-green.svg)](https://en.cppreference.com/w/cpp/language/modules) [![License](https://img.shields.io/badge/license-Apache_2.0-blue.svg)](LICENSE) -[![OpenAI Compatible](https://img.shields.io/badge/OpenAI_API-Compatible-green.svg)](https://platform.openai.com/docs/api-reference) +[![OpenAI Compatible](https://img.shields.io/badge/OpenAI-Compatible-green.svg)](https://platform.openai.com/docs/api-reference) | English - [简体中文](README.zh.md) - [繁體中文](README.zh.hant.md) | |:---:| -| [Documentation](docs/) - [C++ API](docs/cpp-api.md) - [C API](docs/c-api.md) - [Examples](docs/examples.md) | +| [Documentation](docs/) - [C++ API](docs/cpp-api.md) - [Examples](docs/examples.md) | -Clean, type-safe LLM API client using C++23 modules. Fluent interface with zero-cost abstractions. Works with OpenAI, Poe, DeepSeek and compatible endpoints. +`llmapi` provides a typed `Client` API for chat, streaming, embeddings, tool calls, and conversation persistence. The default config alias `Config` maps to OpenAI-style providers, so the common case does not need an explicit `openai::OpenAI` wrapper. 
-## ✨ Features +## Features -- **C++23 Modules** - `import mcpplibs.llmapi` -- **Auto-Save History** - Conversation history managed automatically -- **Type-Safe Streaming** - Concept-constrained callbacks -- **Fluent Interface** - Chainable methods -- **Provider Agnostic** - OpenAI, Poe, and compatible endpoints +- `import mcpplibs.llmapi` with C++23 modules +- Strongly typed messages, tools, and response structs +- Sync, async, and streaming chat APIs +- Embeddings via the OpenAI provider +- Conversation save/load helpers +- OpenAI-compatible endpoint support through `openai::Config::baseUrl` ## Quick Start ```cpp -import std; import mcpplibs.llmapi; +import std; int main() { - using namespace mcpplibs; - - llmapi::Client client(std::getenv("OPENAI_API_KEY"), llmapi::URL::Poe); - - client.model("gpt-5") - .system("You are a helpful assistant.") - .user("In one sentence, introduce modern C++. 并给出中文翻译") - .request([](std::string_view chunk) { - std::print("{}", chunk); - std::cout.flush(); - }); + using namespace mcpplibs::llmapi; + + auto apiKey = std::getenv("OPENAI_API_KEY"); + if (!apiKey) { + std::cerr << "OPENAI_API_KEY not set\n"; + return 1; + } + + auto client = Client(Config{ + .apiKey = apiKey, + .model = "gpt-4o-mini", + }); + client.system("You are a concise assistant."); + auto resp = client.chat("Explain why C++23 modules are useful in two sentences."); + + std::cout << resp.text() << '\n'; return 0; } ``` -### Models / Providers +## Providers + +- `openai::OpenAI` for OpenAI chat, streaming, embeddings, and OpenAI-compatible endpoints +- `anthropic::Anthropic` for Anthropic chat and streaming +- `Config` as a convenient alias for `openai::Config` + +Compatible endpoints can reuse the OpenAI provider: ```cpp -llmapi::Client client(apiKey, llmapi::URL::OpenAI); // OpenAI -llmapi::Client client(apiKey, llmapi::URL::Poe); // Poe -llmapi::Client client(apiKey, llmapi::URL::DeepSeek); // Deepseek -llmapi::Client client(apiKey, "https://custom.com"); // 
Custom +auto provider = openai::OpenAI({ + .apiKey = std::getenv("DEEPSEEK_API_KEY"), + .baseUrl = std::string(URL::DeepSeek), + .model = "deepseek-chat", +}); ``` -## Building +## Build And Run ```bash -xmake # Build -xmake run basic # Run example(after cofig OPENAI_API_KEY) +xmake +xmake run hello_mcpp +xmake run basic +xmake run chat ``` -## Use in Build Tools - -### xmake +## Package Usage ```lua --- 0 - Add mcpplibs's index repos -add_repositories("mcpplibs-index https://github.com/mcpplibs/llmapi.git") - --- 1 - Add the libraries and versions you need +add_repositories("mcpplibs-index https://github.com/mcpplibs/mcpplibs-index.git") add_requires("llmapi 0.0.2") -``` - -> More: [mcpplibs-index](https://github.com/mcpplibs/mcpplibs-index) -### cmake - -``` -todo... +target("demo") + set_kind("binary") + set_languages("c++23") + set_policy("build.c++.modules", true) + add_files("src/*.cpp") + add_packages("llmapi") ``` -## 📄 License +See [docs/getting-started.md](docs/getting-started.md) and [docs/providers.md](docs/providers.md) for more setup detail. + +## License Apache-2.0 - see [LICENSE](LICENSE) diff --git a/docs/README.md b/docs/README.md index 52c267b..71ec80d 100644 --- a/docs/README.md +++ b/docs/README.md @@ -1,29 +1,23 @@ # llmapi Documentation -Complete documentation for the llmapi library - a modern C++ LLM API client. +Current documentation for the `llmapi` C++23 module library. -## 📚 Contents +## Contents -- [Getting Started](getting-started.md) - Installation and first steps -- [C++ API Guide](cpp-api.md) - Complete C++ API reference -- [Examples](examples.md) - Code examples and use cases -- [Providers](providers.md) - Supported LLM providers -- [Advanced Usage](advanced.md) - Advanced features and patterns +- [Getting Started](getting-started.md) - install, build, and first request +- [C++ API Guide](cpp-api.md) - types, providers, and `Client
<P>
` +- [Examples](examples.md) - chat, streaming, embeddings, and tool flows +- [Providers](providers.md) - OpenAI, Anthropic, and compatible endpoints +- [Advanced Usage](advanced.md) - persistence, async calls, and custom configuration -## Quick Links +## What The Library Provides -**For C++ Developers:** -- Start with [Getting Started](getting-started.md) -- See [C++ API Guide](cpp-api.md) for full API reference -- Check [Examples](examples.md) for common patterns - -## Features Overview - -- **C++23 Modules** - Modern module system -- **Auto-Save History** - Automatic conversation management -- **Type-Safe Streaming** - Concept-constrained callbacks -- **Fluent Interface** - Chainable API design -- **Provider Agnostic** - OpenAI, Poe, and custom endpoints +- C++23 modules via `import mcpplibs.llmapi` +- Typed chat messages and multimodal content structs +- Provider concepts for sync, async, streaming, and embeddings +- Built-in OpenAI and Anthropic providers +- OpenAI-compatible endpoint support through configurable base URLs +- Conversation save/load helpers for local session persistence ## License diff --git a/docs/advanced.md b/docs/advanced.md index 604bd72..c3170e9 100644 --- a/docs/advanced.md +++ b/docs/advanced.md @@ -1,292 +1,104 @@ # Advanced Usage -Advanced features and patterns. +Advanced patterns using the current API surface. -## Conversation History Management +## Conversation Persistence -### Auto-Save Behavior - -Responses are automatically saved to conversation history: +Each `chat()` and `chat_stream()` call appends the user message and the assistant response to the in-memory `Conversation`. 
```cpp -client.user("Question 1").request(); -// Assistant reply auto-saved ✅ - -client.user("Question 2").request([](auto chunk) { - std::print("{}", chunk); +auto client = Client(Config{ + .apiKey = std::getenv("OPENAI_API_KEY"), + .model = "gpt-4o-mini", }); -// Streamed reply auto-saved ✅ - -client.user("Question 3"); // Can reference previous context -``` - -### Manual History Manipulation - -```cpp -// Add few-shot examples -client.system("You are a translator.") - .user("Hello") - .assistant("你好") - .user("Goodbye") - .assistant("再见") - .user("Thank you"); // Will use examples - -// Clear history -client.clear(); // Start fresh conversation -``` -### Get Conversation History +client.system("You are helpful."); +client.chat("Remember that I prefer concise answers."); +client.save_conversation("session.json"); -```cpp -// Get last answer -std::string answer = client.getAnswer(); - -// Get all messages -auto messages = client.getMessages(); -for (const auto& msg : messages) { - std::println("{}: {}", msg["role"], msg["content"]); -} - -// Get message count -int count = client.getMessageCount(); -std::println("Total: {} messages", count); +auto restored = Client(Config{ + .apiKey = std::getenv("OPENAI_API_KEY"), + .model = "gpt-4o-mini", +}); +restored.load_conversation("session.json"); ``` -## Streaming Patterns +## Manual Message Control -### Basic Streaming +You can seed few-shot history or inject tool results directly: ```cpp -client.user("Tell a story").request([](std::string_view chunk) { - std::print("{}", chunk); - std::cout.flush(); -}); +client.clear(); +client.add_message(Message::system("Translate English to Chinese.")); +client.add_message(Message::user("hello")); +client.add_message(Message::assistant("你好")); +auto resp = client.chat("goodbye"); ``` -### Collecting Stream Data +## Streaming -```cpp -std::string full_response; -client.user("Question").request([&](std::string_view chunk) { - full_response += chunk; - std::print("{}", chunk); - 
std::cout.flush(); -}); - -// After streaming, can use full_response -// Or just use client.getAnswer() -``` - -### Stream with Statistics +Streaming returns a full `ChatResponse` after the callback finishes, so you can combine live output with post-processing. ```cpp -struct Stats { - int chunks = 0; - size_t bytes = 0; - auto start = std::chrono::steady_clock::now(); -}; - -Stats stats; -client.user("Generate text").request([&](std::string_view chunk) { - stats.chunks++; - stats.bytes += chunk.size(); - std::print("{}", chunk); +std::string collected; +auto resp = client.chat_stream("Tell a story about templates.", [&](std::string_view chunk) { + collected += chunk; + std::cout << chunk; }); -auto duration = std::chrono::steady_clock::now() - stats.start; -std::println("\nStats: {} chunks, {} bytes in {}ms", - stats.chunks, - stats.bytes, - std::chrono::duration_cast(duration).count() -); +std::cout << "\nstop reason=" << static_cast(resp.stopReason) << '\n'; ``` -### Conditional Stream Processing +## Async API ```cpp -bool first_chunk = true; -client.user("Question").request([&](std::string_view chunk) { - if (first_chunk) { - std::print("Answer: "); - first_chunk = false; - } - std::print("{}", chunk); - std::cout.flush(); -}); -std::println(""); +auto task = client.chat_async("Explain coroutines briefly."); +auto resp = task.get(); +std::cout << resp.text() << '\n'; ``` -## Non-Streaming with JSON +## Tool Calling Loop -### Access Full Response +The provider surfaces requested tools via `ChatResponse::tool_calls()`. You then append a tool result and continue the conversation. 
```cpp -auto response = client.user("Question").request(); - -// Model info -std::println("Model: {}", response["model"]); - -// Usage stats -auto usage = response["usage"]; -std::println("Tokens - prompt: {}, completion: {}, total: {}", - usage["prompt_tokens"], - usage["completion_tokens"], - usage["total_tokens"] -); - -// Content (or use getAnswer()) -std::string content = response["choices"][0]["message"]["content"]; -``` - -### Save Full Response - -```cpp -auto response = client.user("Question").request(); - -// Save to file -std::ofstream file("response.json"); -file << response.dump(2); // Pretty print with 2-space indent -``` - -## Error Handling Patterns - -### Try-Catch Pattern - -```cpp -try { - client.user("Question").request(); - std::println("{}", client.getAnswer()); - -} catch (const std::runtime_error& e) { - std::println("Runtime error: {}", e.what()); -} catch (const std::invalid_argument& e) { - std::println("Invalid argument: {}", e.what()); -} catch (const std::exception& e) { - std::println("Error: {}", e.what()); -} -``` - -### Graceful Degradation - -```cpp -std::optional safe_request(Client& client, std::string_view query) { - try { - client.user(query).request(); - return client.getAnswer(); - } catch (...) 
{ - return std::nullopt; - } -} - -// Usage -if (auto answer = safe_request(client, "Question")) { - std::println("{}", *answer); -} else { - std::println("Request failed, using fallback"); -} -``` - -### Retry Logic +auto params = ChatParams{ + .tools = std::vector{{ + .name = "get_weather", + .description = "Return weather for a city", + .inputSchema = R"({"type":"object","properties":{"city":{"type":"string"}},"required":["city"]})", + }}, + .toolChoice = ToolChoice::Auto, +}; -```cpp -std::string request_with_retry(Client& client, std::string_view query, int max_retries = 3) { - for (int i = 0; i < max_retries; i++) { - try { - client.user(query).request(); - return client.getAnswer(); - } catch (const std::exception& e) { - if (i == max_retries - 1) throw; - std::println("Retry {}/{}", i + 1, max_retries); - std::this_thread::sleep_for(std::chrono::seconds(1 << i)); // Exponential backoff - } - } - throw std::runtime_error("Max retries exceeded"); +auto first = client.chat("What's the weather in Tokyo?", params); +for (const auto& call : first.tool_calls()) { + client.add_message(Message{ + .role = Role::Tool, + .content = std::vector{ + ToolResultContent{ + .toolUseId = call.id, + .content = R"({"temperature":"22C","condition":"sunny"})", + }, + }, + }); } -``` - -## Multi-Client Pattern - -```cpp -// Different clients for different purposes -Client translator(api_key, URL::Poe); -translator.model("gpt-5").system("You are a translator."); -Client summarizer(api_key, URL::Poe); -summarizer.model("gpt-5").system("You are a text summarizer."); - -Client coder(api_key, URL::Poe); -coder.model("gpt-5").system("You are a coding assistant."); - -// Use each for specific tasks -translator.user("Translate 'hello' to Chinese").request(); -summarizer.user("Summarize this article...").request(); -coder.user("Write a function to...").request(); +auto final = client.provider().chat(client.conversation().messages, params); ``` -## Session Management +## Compatible Endpoints 
```cpp -struct Session { - Client client; - std::string session_id; - std::chrono::steady_clock::time_point created_at; - - Session(std::string_view key, std::string_view url) - : client(key, url), - session_id(generate_uuid()), - created_at(std::chrono::steady_clock::now()) {} - - void save_history(const std::filesystem::path& path) { - std::ofstream file(path); - file << client.getMessages().dump(2); - } -}; -``` - -## Callback Patterns - -### Lambda Capture - -```cpp -std::string buffer; -int line_count = 0; - -client.user("Generate list").request([&](std::string_view chunk) { - buffer += chunk; - if (chunk.find('\n') != std::string_view::npos) { - line_count++; - } - std::print("{}", chunk); +auto provider = openai::OpenAI({ + .apiKey = std::getenv("DEEPSEEK_API_KEY"), + .baseUrl = std::string(URL::DeepSeek), + .model = "deepseek-chat", }); - -std::println("\nGenerated {} lines", line_count); -``` - -### Function Objects - -```cpp -struct StreamPrinter { - std::ostream& out; - - StreamPrinter(std::ostream& o) : out(o) {} - - void operator()(std::string_view chunk) { - out << chunk; - out.flush(); - } -}; - -// Usage -StreamPrinter printer(std::cout); -client.user("Question").request(printer); - -// Or to file -std::ofstream file("output.txt"); -StreamPrinter file_printer(file); -client.user("Question").request(file_printer); ``` ## See Also - [C++ API Reference](cpp-api.md) - [Examples](examples.md) +- [Providers](providers.md) diff --git a/docs/cpp-api.md b/docs/cpp-api.md index 49017c9..cd60ef1 100644 --- a/docs/cpp-api.md +++ b/docs/cpp-api.md @@ -1,6 +1,6 @@ # C++ API Reference -Complete reference for the C++ API. +Reference for the current public module interface. 
## Namespace @@ -9,260 +9,226 @@ import mcpplibs.llmapi; using namespace mcpplibs::llmapi; ``` -## Client Class +## Exported Modules -### Constructor +- `mcpplibs.llmapi` +- `mcpplibs.llmapi:types` +- `mcpplibs.llmapi:url` +- `mcpplibs.llmapi:coro` +- `mcpplibs.llmapi:provider` +- `mcpplibs.llmapi:client` +- `mcpplibs.llmapi:openai` +- `mcpplibs.llmapi:anthropic` +- `mcpplibs.llmapi:errors` -```cpp -Client(std::string_view apiKey, std::string_view baseUrl = URL::OpenAI) -Client(const char* apiKey, std::string_view baseUrl = URL::OpenAI) -``` - -**Parameters:** -- `apiKey` - API key (can be from `std::getenv()`) -- `baseUrl` - Base URL (see [Providers](providers.md)) - -**Example:** -```cpp -Client client(std::getenv("OPENAI_API_KEY"), URL::Poe); -``` - -### Configuration Methods - -All configuration methods return `Client&` for chaining. +## Core Types -#### model() +Important exported structs and enums: -```cpp -Client& model(std::string_view model) -``` +- `Role` +- `Message` +- `TextContent`, `ImageContent`, `AudioContent` +- `ToolDef`, `ToolCall`, `ToolUseContent`, `ToolResultContent` +- `ChatParams` +- `ChatResponse` +- `EmbeddingResponse` +- `Conversation` +- `Usage` +- `ResponseFormat` -Set the model name. +## Provider Concepts -**Example:** ```cpp -client.model("gpt-5"); +template +concept Provider = requires(P p, const std::vector& messages, const ChatParams& params) { + { p.name() } -> std::convertible_to; + { p.chat(messages, params) } -> std::same_as; + { p.chat_async(messages, params) } -> std::same_as>; +}; ``` -### Message Methods - -All message methods return `Client&` for chaining. - -#### user() - ```cpp -Client& user(std::string_view content) +template +concept StreamableProvider = Provider
<P>
&& requires( + P p, + const std::vector& messages, + const ChatParams& params, + std::function cb +) { + { p.chat_stream(messages, params, cb) } -> std::same_as; + { p.chat_stream_async(messages, params, cb) } -> std::same_as>; +}; ``` -Add a user message. - -**Example:** ```cpp -client.user("What is C++?"); +template +concept EmbeddableProvider = Provider
<P>
&& requires( + P p, + const std::vector& inputs, + std::string_view model +) { + { p.embed(inputs, model) } -> std::same_as; +}; ``` -#### system() - -```cpp -Client& system(std::string_view content) -``` +## `Client
<P>
` -Add a system message (usually for initial instructions). +`Client` is a class template that owns a provider instance and a `Conversation`. -**Example:** ```cpp -client.system("You are a helpful assistant."); +template +class Client; ``` -#### assistant() +### Construction ```cpp -Client& assistant(std::string_view content) +auto client = Client(Config{ + .apiKey = std::getenv("OPENAI_API_KEY"), + .model = "gpt-4o-mini", +}); ``` -Add an assistant message (usually for few-shot examples or manual history). - -**Note:** Normally not needed - responses are auto-saved to history. - -**Example:** -```cpp -client.assistant("I understand."); -``` +`Config` is an exported alias for `openai::Config`, so the default path uses an OpenAI-style provider without explicitly writing `openai::OpenAI`. -#### add_message() +### Configuration ```cpp -Client& add_message(std::string_view role, std::string_view content) +Client& default_params(ChatParams params) ``` -Add a message with custom role. - -**Example:** -```cpp -client.add_message("user", "Hello"); -``` +Stores default parameters used by `chat()`, `chat_async()`, and `chat_stream()`. -#### clear() +### Conversation Management ```cpp +Client& system(std::string_view content) +Client& user(std::string_view content) +Client& add_message(Message msg) Client& clear() ``` -Clear all conversation history. - -**Example:** -```cpp -client.clear(); -``` - -### Request Methods - -#### request() - Non-Streaming +### Synchronous Chat ```cpp -Json request() +ChatResponse chat(std::string_view userMessage) +ChatResponse chat(std::string_view userMessage, ChatParams params) ``` -Execute a non-streaming request. Returns full JSON response. **Automatically saves assistant reply to history.** +`chat()` appends the user message, sends the full conversation, stores the assistant text response, and returns the parsed `ChatResponse`. 
-**Returns:** `nlohmann::json` object with full API response +### Async Chat -**Example:** ```cpp -auto response = client.user("Hello").request(); -std::println("{}", response["choices"][0]["message"]["content"]); +Task chat_async(std::string_view userMessage) ``` -#### request(callback) - Streaming +### Streaming Chat ```cpp -template -void request(Callback&& callback) +ChatResponse chat_stream( + std::string_view userMessage, + std::function callback +) ``` -Execute a streaming request. **Automatically saves complete assistant reply to history.** - -**Parameters:** -- `callback` - Function accepting `std::string_view` (each content chunk) +Available only when `P` satisfies `StreamableProvider`. -**Example:** ```cpp -client.user("Tell me a story").request([](std::string_view chunk) { - std::print("{}", chunk); - std::cout.flush(); -}); +Task chat_stream_async( + std::string_view userMessage, + std::function callback +) ``` -### Getter Methods - -#### getAnswer() +### Embeddings ```cpp -std::string getAnswer() const +EmbeddingResponse embed(const std::vector& inputs, std::string_view model) ``` -Get the last assistant reply from conversation history. +Available only when `P` satisfies `EmbeddableProvider`. -**Returns:** Last assistant message content, or empty string if none +### Accessors -**Example:** ```cpp -client.request(); -std::string answer = client.getAnswer(); -std::println("Last answer: {}", answer); +const Conversation& conversation() const +Conversation& conversation() +void save_conversation(std::string_view filePath) const +void load_conversation(std::string_view filePath) +const P& provider() const +P& provider() ``` -#### getMessages() +## Provider Config Types ```cpp -Json getMessages() const -``` - -Get full conversation history as JSON array. 
- -**Returns:** JSON array of all messages - -**Example:** -```cpp -auto history = client.getMessages(); -for (const auto& msg : history) { - std::println("{}: {}", msg["role"], msg["content"]); +openai::Config { + std::string apiKey; + std::string baseUrl { "https://api.openai.com/v1" }; + std::string model; + std::string organization; + std::optional proxy; + std::map customHeaders; } ``` -#### getMessageCount() - -```cpp -int getMessageCount() const -``` - -Get total number of messages in conversation history. - -**Returns:** Number of messages - -**Example:** -```cpp -std::println("Messages: {}", client.getMessageCount()); -``` - -#### getApiKey() - ```cpp -std::string_view getApiKey() const +anthropic::Config { + std::string apiKey; + std::string baseUrl { "https://api.anthropic.com/v1" }; + std::string model; + std::string version { "2023-06-01" }; + int defaultMaxTokens { 4096 }; + std::optional proxy; + std::map customHeaders; +} ``` -Get the API key. - -#### getBaseUrl() +## `ChatParams` ```cpp -std::string_view getBaseUrl() const +struct ChatParams { + std::optional temperature; + std::optional topP; + std::optional maxTokens; + std::optional> stop; + std::optional> tools; + std::optional toolChoice; + std::optional responseFormat; + std::optional extraJson; +}; ``` -Get the base URL. - -#### getModel() +## `ChatResponse` ```cpp -std::string_view getModel() const -``` +struct ChatResponse { + std::string id; + std::string model; + std::vector content; + StopReason stopReason; + Usage usage; -Get the current model name. - -## StreamCallback Concept - -```cpp -template -concept StreamCallback = std::invocable && - std::same_as, void>; + std::string text() const; + std::vector tool_calls() const; +}; ``` -Type constraint for streaming callbacks. 
Accepts any callable that: -- Takes `std::string_view` parameter -- Returns `void` +## `Conversation` -**Valid callbacks:** ```cpp -// Lambda -[](std::string_view chunk) { std::print("{}", chunk); } - -// Function -void my_callback(std::string_view chunk) { /* ... */ } +struct Conversation { + std::vector messages; -// Functor -struct Printer { - void operator()(std::string_view chunk) { /* ... */ } + void push(Message msg); + void clear(); + int size() const; + void save(std::string_view filePath) const; + static Conversation load(std::string_view filePath); }; ``` -## JSON Type - -```cpp -using Json = nlohmann::json; -``` - -The library uses [nlohmann/json](https://github.com/nlohmann/json) for JSON handling. - ## Complete Example ```cpp @@ -271,46 +237,36 @@ import std; int main() { using namespace mcpplibs::llmapi; - - Client client(std::getenv("OPENAI_API_KEY"), URL::Poe); - - // Configure - client.model("gpt-5") - .system("You are a helpful assistant."); - - // First question (non-streaming) - client.user("What is C++?"); - client.request(); - std::println("Answer 1: {}", client.getAnswer()); - - // Follow-up (streaming) - uses conversation history - client.user("Tell me more"); - std::print("Answer 2: "); - client.request([](std::string_view chunk) { - std::print("{}", chunk); - std::cout.flush(); + + auto client = Client(Config{ + .apiKey = std::getenv("OPENAI_API_KEY"), + .model = "gpt-4o-mini", + }); + + client.default_params(ChatParams{ + .temperature = 0.2, }); - std::println("\n"); - - // Check history - std::println("Total messages: {}", client.getMessageCount()); - + + client.system("Be concise."); + + auto resp1 = client.chat("What is C++?"); + std::cout << resp1.text() << '\n'; + + auto resp2 = client.chat_stream("Give me one example of a C++23 feature.", [](std::string_view chunk) { + std::cout << chunk; + }); + std::cout << "\nmessages=" << client.conversation().size() << '\n'; + return 0; } ``` ## Error Handling -All methods may throw 
exceptions: -- `std::runtime_error` - API errors, network errors -- `std::invalid_argument` - Invalid parameters -- `nlohmann::json::exception` - JSON parsing errors - -**Example:** ```cpp try { - client.user("Hello").request(); + auto resp = client.chat("Hello"); } catch (const std::runtime_error& e) { - std::println("Error: {}", e.what()); + std::cerr << "Error: " << e.what() << '\n'; } ``` diff --git a/docs/examples.md b/docs/examples.md index 493845e..8934db0 100644 --- a/docs/examples.md +++ b/docs/examples.md @@ -1,12 +1,8 @@ # Examples -Practical examples for common use cases. +Practical examples using the current `Client` API. -## C++ Examples - -### Hello World - -Minimal streaming example: +## Minimal Chat ```cpp import mcpplibs.llmapi; @@ -14,24 +10,19 @@ import std; int main() { using namespace mcpplibs::llmapi; - - Client client(std::getenv("OPENAI_API_KEY"), URL::Poe); - - client.model("gpt-5") - .system("You are a helpful assistant.") - .user("In one sentence, introduce modern C++.") - .request([](std::string_view chunk) { - std::print("{}", chunk); - std::cout.flush(); - }); - - return 0; + + auto client = Client(Config{ + .apiKey = std::getenv("OPENAI_API_KEY"), + .model = "gpt-4o-mini", + }); + + client.system("You are a helpful assistant."); + auto resp = client.chat("In one sentence, explain C++23 modules."); + std::cout << resp.text() << '\n'; } ``` -### Chat Application - -Interactive CLI chat: +## Streaming Response ```cpp import mcpplibs.llmapi; @@ -39,40 +30,22 @@ import std; int main() { using namespace mcpplibs::llmapi; - - Client client(std::getenv("OPENAI_API_KEY"), URL::Poe); - client.model("gpt-5").system("You are a helpful assistant."); - - std::println("AI Chat CLI - Type 'quit' to exit\n"); - - std::string input; - while (true) { - std::print("You: "); - if (!std::getline(std::cin, input) || input == "quit" || input == "q") { - std::println("\nBye!"); - break; - } - - if (input.empty()) continue; - - client.user(input); - 
std::print("\nAI: "); - - client.request([](std::string_view chunk) { - std::print("{}", chunk); - std::cout.flush(); - }); - - std::println("\n"); - } - return 0; + auto client = Client(Config{ + .apiKey = std::getenv("OPENAI_API_KEY"), + .model = "gpt-4o-mini", + }); + + std::string streamed; + client.chat_stream("Write a 3-line poem about templates.", [&](std::string_view chunk) { + streamed += chunk; + std::cout << chunk; + }); + std::cout << "\n\nCollected " << streamed.size() << " bytes\n"; } ``` -### Multi-Turn Conversation - -Using conversation history: +## Multi-Turn Conversation ```cpp import mcpplibs.llmapi; @@ -80,32 +53,24 @@ import std; int main() { using namespace mcpplibs::llmapi; - - Client client(std::getenv("OPENAI_API_KEY"), URL::Poe); - client.model("gpt-5").system("You are a helpful assistant."); - - // First question - client.user("What is the capital of France?"); - client.request(); - std::println("Q1 Answer: {}\n", client.getAnswer()); - - // Follow-up (uses history) - client.user("What's its population?"); - client.request(); - std::println("Q2 Answer: {}\n", client.getAnswer()); - - // Another follow-up - client.user("Translate the above to Chinese."); - client.request(); - std::println("Q3 Answer: {}\n", client.getAnswer()); - - std::println("Total messages: {}", client.getMessageCount()); - - return 0; + + auto client = Client(Config{ + .apiKey = std::getenv("OPENAI_API_KEY"), + .model = "gpt-4o-mini", + }); + + client.system("Reply briefly."); + + auto resp1 = client.chat("What is the capital of France?"); + auto resp2 = client.chat("What is its population roughly?"); + + std::cout << resp1.text() << '\n'; + std::cout << resp2.text() << '\n'; + std::cout << "Messages stored: " << client.conversation().size() << '\n'; } ``` -### Non-Streaming with JSON Response +## Save And Load Conversation ```cpp import mcpplibs.llmapi; @@ -113,24 +78,27 @@ import std; int main() { using namespace mcpplibs::llmapi; - - Client 
client(std::getenv("OPENAI_API_KEY"), URL::Poe); - client.model("gpt-5").user("What is 2+2?"); - - auto response = client.request(); - - // Access full JSON response - std::println("Model: {}", response["model"]); - std::println("Content: {}", response["choices"][0]["message"]["content"]); - - // Or use getAnswer() - std::println("Answer: {}", client.getAnswer()); - - return 0; + + auto client = Client(Config{ + .apiKey = std::getenv("OPENAI_API_KEY"), + .model = "gpt-4o-mini", + }); + + client.chat("Remember that my favorite language is C++."); + client.save_conversation("conversation.json"); + + auto restored = Client(Config{ + .apiKey = std::getenv("OPENAI_API_KEY"), + .model = "gpt-4o-mini", + }); + restored.load_conversation("conversation.json"); + + auto resp = restored.chat("What language do I like?"); + std::cout << resp.text() << '\n'; } ``` -### Translation Chain +## Tool Calling ```cpp import mcpplibs.llmapi; @@ -138,35 +106,29 @@ import std; int main() { using namespace mcpplibs::llmapi; - - Client client(std::getenv("OPENAI_API_KEY"), URL::Poe); - client.model("gpt-5"); - - // Generate English story - client.system("You are a creative writer.") - .user("Write a short story about C++ (50 words)"); - - std::print("Story: "); - client.request([](std::string_view chunk) { - std::print("{}", chunk); - std::cout.flush(); - }); - std::println("\n"); - - // Translate to Chinese (uses history) - client.user("请把上面的故事翻译成中文。"); - std::print("Translation: "); - client.request([](std::string_view chunk) { - std::print("{}", chunk); - std::cout.flush(); + + auto client = Client(Config{ + .apiKey = std::getenv("OPENAI_API_KEY"), + .model = "gpt-4o-mini", }); - std::println(""); - - return 0; + + auto params = ChatParams{ + .tools = std::vector{{ + .name = "get_temperature", + .description = "Get the temperature for a city", + .inputSchema = R"({"type":"object","properties":{"city":{"type":"string"}},"required":["city"]})", + }}, + .toolChoice = ToolChoice::Auto, + }; 
+
+    auto resp = client.chat("What's the temperature in Tokyo?", params);
+    for (const auto& call : resp.tool_calls()) {
+        std::cout << call.name << ": " << call.arguments << '\n';
+    }
 }
 ```
 
-### Error Handling
+## Embeddings
 
 ```cpp
 import mcpplibs.llmapi;
@@ -174,23 +136,19 @@ import std;
 
 int main() {
     using namespace mcpplibs::llmapi;
-
-    try {
-        Client client(std::getenv("OPENAI_API_KEY"), URL::Poe);
-
-        client.model("gpt-5")
-            .user("Hello")
-            .request();
-
-        std::println("{}", client.getAnswer());
-
-    } catch (const std::runtime_error& e) {
-        std::println("Runtime error: {}", e.what());
-    } catch (const std::exception& e) {
-        std::println("Error: {}", e.what());
-    }
-
-    return 0;
+
+    auto client = Client(Config{
+        .apiKey = std::getenv("OPENAI_API_KEY"),
+        .model = "gpt-4o-mini",
+    });
+
+    auto embedding = client.embed(
+        {"hello world", "modern c++"},
+        "text-embedding-3-small"
+    );
+
+    std::cout << "vectors: " << embedding.embeddings.size() << '\n';
+    std::cout << "dimension: " << embedding.embeddings[0].size() << '\n';
 }
 ```
diff --git a/docs/getting-started.md b/docs/getting-started.md
index 9e67d1c..098c4c8 100644
--- a/docs/getting-started.md
+++ b/docs/getting-started.md
@@ -4,7 +4,7 @@
 
 - **C++ Compiler**: GCC 14+, Clang 18+, or MSVC 2022+ with C++23 support
 - **Build System**: [xmake](https://xmake.io/) 3.0.0+
-- **Dependencies**: libcurl 8.11.0+ (automatically managed by xmake)
+- **Dependencies**: `mbedtls` is resolved automatically by xmake
 
 ## Installation
 
@@ -14,10 +14,12 @@ Add to your `xmake.lua`:
 
 ```lua
 add_repositories("mcpplibs-index git@github.com:mcpplibs/mcpplibs-index.git")
-add_requires("llmapi 0.0.1")
+add_requires("llmapi 0.0.2")
 
 target("myapp")
     set_kind("binary")
+    set_languages("c++23")
+    set_policy("build.c++.modules", true)
     add_files("src/*.cpp")
     add_packages("llmapi")
 ```
@@ -25,14 +27,10 @@ target("myapp")
 
 ### Building from Source
 
 ```bash
-# Clone repository
-git clone https://github.com/mcpplibs/openai.git
-cd openai
+git clone https://github.com/mcpplibs/llmapi.git
+cd llmapi
 
-# Build
 xmake
-
-# Run examples
 xmake run hello_mcpp
 xmake run basic
 xmake run chat
@@ -40,25 +38,30 @@
 
 ## First Example
 
-Create `hello.cpp`:
+Create `main.cpp`:
 
 ```cpp
 import mcpplibs.llmapi;
 import std;
 
 int main() {
-    using namespace mcpplibs;
-
-    llmapi::Client client(std::getenv("OPENAI_API_KEY"), llmapi::URL::Poe);
-
-    client.model("gpt-5")
-        .system("You are a helpful assistant.")
-        .user("Hello, introduce yourself in one sentence.")
-        .request([](std::string_view chunk) {
-            std::print("{}", chunk);
-            std::cout.flush();
-        });
+    using namespace mcpplibs::llmapi;
+
+    auto apiKey = std::getenv("OPENAI_API_KEY");
+    if (!apiKey) {
+        std::cerr << "OPENAI_API_KEY not set\n";
+        return 1;
+    }
+
+    auto client = Client(Config{
+        .apiKey = apiKey,
+        .model = "gpt-4o-mini",
+    });
+    client.system("You are a helpful assistant.");
+    auto resp = client.chat("Hello, introduce yourself in one sentence.");
+
+    std::cout << resp.text() << '\n';
 
     return 0;
 }
 ```
 
@@ -67,24 +70,51 @@ Build and run:
 
 ```bash
 xmake
-xmake run hello
+xmake run hello_mcpp
 ```
 
 ## Environment Setup
 
-Set your API key:
+Set the provider-specific API key you plan to use:
 
 ```bash
-# OpenAI
 export OPENAI_API_KEY="sk-..."
+export ANTHROPIC_API_KEY="sk-ant-..."
+export DEEPSEEK_API_KEY="..."
+```
+
+## Switching Providers
 
-# Poe
-export OPENAI_API_KEY="your-poe-api-key"
+OpenAI:
+
+```cpp
+auto client = Client(Config{
+    .apiKey = std::getenv("OPENAI_API_KEY"),
+    .model = "gpt-4o-mini",
+});
+```
+
+Anthropic:
+
+```cpp
+auto client = Client(anthropic::Anthropic({
+    .apiKey = std::getenv("ANTHROPIC_API_KEY"),
+    .model = "claude-sonnet-4-20250514",
+}));
+```
+
+Compatible endpoint through the OpenAI provider:
+
+```cpp
+auto client = Client(Config{
+    .apiKey = std::getenv("DEEPSEEK_API_KEY"),
+    .baseUrl = std::string(URL::DeepSeek),
+    .model = "deepseek-chat",
+});
 ```
 
 ## Next Steps
 
 - [C++ API Guide](cpp-api.md) - Learn the full C++ API
-- [C API Guide](c-api.md) - Learn the full C API
 - [Examples](examples.md) - See more examples
-- [Providers](providers.md) - Configure different LLM providers
+- [Providers](providers.md) - Configure different providers
diff --git a/docs/providers.md b/docs/providers.md
index 89bf6b1..c9e8352 100644
--- a/docs/providers.md
+++ b/docs/providers.md
@@ -1,152 +1,103 @@
 # Providers Configuration
 
-How to use different LLM providers.
+How to configure the providers that exist in the current codebase.
 
-## Built-in Provider URLs
+## URL Constants
 
-The library includes predefined constants for popular providers:
+`mcpplibs::llmapi::URL` exposes common base URLs:
 
 ```cpp
-// C++
 import mcpplibs.llmapi;
 using namespace mcpplibs::llmapi;
 
-URL::OpenAI   // "https://api.openai.com/v1"
-URL::Poe      // "https://api.poe.com/v1"
-URL::DeepSeek // Add more as needed
+URL::OpenAI
+URL::Anthropic
+URL::DeepSeek
+URL::OpenRouter
+URL::Poe
 ```
 
 ## OpenAI
 
-Official OpenAI API.
+Use `openai::OpenAI` for OpenAI chat, streaming, tool calls, and embeddings.
 
-**C++:**
 ```cpp
-Client client(std::getenv("OPENAI_API_KEY"), URL::OpenAI);
-client.model("gpt-4"); // or gpt-4-turbo, gpt-3.5-turbo, etc.
+auto client = Client(Config{
+    .apiKey = std::getenv("OPENAI_API_KEY"),
+    .model = "gpt-4o-mini",
+});
 ```
 
+- Get keys from [OpenAI Platform](https://platform.openai.com/api-keys)
+- Set `OPENAI_API_KEY`
 
-**Get API Key:**
-- Visit [OpenAI Platform](https://platform.openai.com/api-keys)
-- Create an API key
-- Set environment variable: `export OPENAI_API_KEY="sk-..."`
+## Anthropic
 
-## Poe
+Use `anthropic::Anthropic` for Anthropic chat and streaming.
 
-Poe platform (supports multiple models including GPT-4, Claude, etc.).
-
-**C++:**
 ```cpp
-Client client(std::getenv("OPENAI_API_KEY"), URL::Poe);
-client.model("gpt-5"); // or claude-3-opus, etc.
+auto client = Client(anthropic::Anthropic({
+    .apiKey = std::getenv("ANTHROPIC_API_KEY"),
+    .model = "claude-sonnet-4-20250514",
+}));
 ```
 
-**Get API Key:**
-- Visit [Poe API](https://poe.com/api_key)
-- Create an API key
-- Set environment variable: `export OPENAI_API_KEY="your-poe-key"`
-
-**Available Models:**
-- `gpt-5` - Latest GPT model
-- `claude-3-opus` - Anthropic Claude
-- `gemini-pro` - Google Gemini
-- And more...
+- Get keys from [Anthropic Console](https://console.anthropic.com/)
+- Set `ANTHROPIC_API_KEY`
 
-## DeepSeek
+## OpenAI-Compatible Endpoints
 
-DeepSeek AI platform.
+The OpenAI provider accepts a custom `baseUrl`, so DeepSeek, OpenRouter, Poe, local gateways, and self-hosted OpenAI-compatible services can all reuse `openai::OpenAI`.
 
-**C++:**
 ```cpp
-Client client(apiKey, URL::DeepSeek);
-client.model("deepseek-chat");
+auto client = Client(openai::OpenAI({
+    .apiKey = std::getenv("DEEPSEEK_API_KEY"),
+    .baseUrl = std::string(URL::DeepSeek),
+    .model = "deepseek-chat",
+}));
 ```
 
-## Custom Provider
-
-Any OpenAI-compatible endpoint.
+You can also use a literal endpoint:
 
-**C++:**
 ```cpp
-Client client(apiKey, "https://your-endpoint.com/v1");
-client.model("your-model");
+auto client = Client(openai::OpenAI({
+    .apiKey = std::getenv("OPENROUTER_API_KEY"),
+    .baseUrl = "https://openrouter.ai/api/v1",
+    .model = "openai/gpt-4o-mini",
+}));
 ```
 
-## Azure OpenAI
+## Custom Headers And Proxy
 
-**C++:**
 ```cpp
-// Azure endpoint format: https://{resource}.openai.azure.com/openai/deployments/{deployment}
-Client client(
-    azureApiKey,
-    "https://your-resource.openai.azure.com/openai/deployments/your-deployment"
-);
-```
-
-## Local Models (Ollama, LM Studio, etc.)
-
-Any local server with OpenAI-compatible API.
-
-**C++:**
-```cpp
-// Ollama example
-Client client("dummy-key", "http://localhost:11434/v1");
-client.model("llama2");
+auto provider = openai::OpenAI({
+    .apiKey = std::getenv("OPENAI_API_KEY"),
+    .baseUrl = std::string(URL::OpenAI),
+    .model = "gpt-4o-mini",
+    .proxy = "http://127.0.0.1:7890",
+    .customHeaders = {
+        {"X-Trace-Id", "demo-request"},
+    },
+});
 ```
 
 ## Environment Variables
 
-Recommended approach for API keys:
+Typical setup:
 
 ```bash
-# OpenAI
 export OPENAI_API_KEY="sk-..."
-
-# Poe
-export OPENAI_API_KEY="your-poe-key"
-
-# Azure
-export AZURE_OPENAI_KEY="..."
-
-# Custom
-export CUSTOM_API_KEY="..."
+export ANTHROPIC_API_KEY="sk-ant-..."
+export DEEPSEEK_API_KEY="..."
+export OPENROUTER_API_KEY="..."
 ```
 
-Then in code:
+## Provider Capabilities
 
-**C++:**
-```cpp
-Client client(std::getenv("OPENAI_API_KEY"), URL::OpenAI);
-```
-
-## Provider Comparison
-
-| Provider | Streaming | Models | Notes |
-|----------|-----------|--------|-------|
-| OpenAI | ✅ | GPT-4, GPT-3.5 | Official API |
-| Poe | ✅ | GPT, Claude, Gemini | Multiple models |
-| DeepSeek | ✅ | DeepSeek-Chat | Chinese AI |
-| Azure | ✅ | GPT models | Enterprise |
-| Ollama | ✅ | Local models | Free, local |
-
-## Switching Providers
-
-Easy to switch between providers:
-
-```cpp
-// Development - use Poe
-Client dev_client(std::getenv("POE_KEY"), URL::Poe);
-dev_client.model("gpt-5");
-
-// Production - use OpenAI
-Client prod_client(std::getenv("OPENAI_API_KEY"), URL::OpenAI);
-prod_client.model("gpt-4");
-
-// Local testing - use Ollama
-Client test_client("dummy", "http://localhost:11434/v1");
-test_client.model("llama2");
-```
+| Provider | Chat | Streaming | Embeddings | Notes |
+|----------|------|-----------|------------|-------|
+| `openai::OpenAI` | yes | yes | yes | Also works with compatible endpoints |
+| `anthropic::Anthropic` | yes | yes | no | Anthropic Messages API |
 
 ## Troubleshooting
 
@@ -154,30 +105,33 @@ test_client.model("llama2");
 
 ```cpp
 try {
-    client.user("test").request();
+    auto resp = client.chat("test");
 } catch (const std::runtime_error& e) {
-    std::println("Network error: {}", e.what());
-    // Check: internet connection, firewall, proxy settings
+    std::cerr << "Network error: " << e.what() << '\n';
 }
 ```
 
 ### Authentication Errors
 
 ```cpp
-// Check API key is set
 auto key = std::getenv("OPENAI_API_KEY");
 if (!key || std::string(key).empty()) {
-    std::println("Error: API key not set");
+    std::cerr << "OPENAI_API_KEY not set\n";
 }
 ```
 
 ### Model Not Found
 
 ```cpp
-// Verify model name is correct for your provider
-client.model("gpt-4");         // OpenAI
-client.model("gpt-5");         // Poe
-client.model("deepseek-chat"); // DeepSeek
+auto openaiClient = Client(openai::OpenAI({
+    .apiKey = std::getenv("OPENAI_API_KEY"),
+    .model = "gpt-4o-mini",
+}));
+
+auto anthropicClient = Client(anthropic::Anthropic({
+    .apiKey = std::getenv("ANTHROPIC_API_KEY"),
+    .model = "claude-sonnet-4-20250514",
+}));
 ```
 
 ## See Also
diff --git a/examples/basic.cpp b/examples/basic.cpp
index 2e9282a..3c58ae1 100644
--- a/examples/basic.cpp
+++ b/examples/basic.cpp
@@ -2,50 +2,60 @@
 import mcpplibs.llmapi;
 import std;
 
+#include "print.hpp"
+
 using namespace mcpplibs::llmapi;
 
 int main() {
     auto apiKey = std::getenv("OPENAI_API_KEY");
     if (!apiKey) {
-        std::println("Error: OPENAI_API_KEY not set");
+        println("Error: OPENAI_API_KEY not set");
         return 1;
     }
 
-    auto client = Client(openai::OpenAI({
+    auto client = Client(Config{
         .apiKey = apiKey,
         .model = "gpt-4o-mini",
-    }));
+    });
     client.system("You are a helpful assistant.");
 
-    std::println("=== llmapi Basic Usage Demo ===\n");
+    println("=== llmapi Basic Usage Demo ===");
+    println();
 
     try {
         // Example 1: Non-streaming request
-        std::println("[Example 1] Non-streaming mode:");
-        std::println("Question: What is the capital of China?\n");
+        println("[Example 1] Non-streaming mode:");
+        println("Question: What is the capital of China?");
+        println();
 
         auto resp = client.chat("What is the capital of China?");
-        std::println("Answer: {}\n", resp.text());
+        println("Answer: ", resp.text());
+        println();
 
         // Example 2: Streaming request
-        std::println("[Example 2] Streaming mode:");
-        std::println("Question: Convince me to use modern C++ (100 words)\n");
+        println("[Example 2] Streaming mode:");
+        println("Question: Convince me to use modern C++ (100 words)");
+        println();
 
         client.clear();
         client.system("You are a helpful assistant.");
 
-        std::print("Answer: ");
+        print("Answer: ");
         auto resp2 = client.chat_stream("Convince me to use modern C++ (100 words)", [](std::string_view chunk) {
-            std::print("{}", chunk);
+            print(chunk);
         });
-        std::println("\n");
-        std::println("[Verification] Answer length: {} chars\n", resp2.text().size());
+        println();
+        println();
+        println("[Verification] Answer length: ", resp2.text().size(), " chars");
+        println();
 
     } catch (const std::exception& e) {
-        std::println("\nError: {}\n", e.what());
+        println();
+        println("Error: ", e.what());
+        println();
         return 1;
     }
 
-    std::println("=== Demo Complete ===");
+    println("=== Demo Complete ===");
     return 0;
 }
diff --git a/examples/chat.cpp b/examples/chat.cpp
index 9081f12..4c1a97a 100644
--- a/examples/chat.cpp
+++ b/examples/chat.cpp
@@ -2,44 +2,51 @@
 import mcpplibs.llmapi;
 import std;
 
+#include "print.hpp"
+
 using namespace mcpplibs::llmapi;
 
 int main() {
     auto apiKey = std::getenv("OPENAI_API_KEY");
     if (!apiKey) {
-        std::println("Error: OPENAI_API_KEY not set");
+        println("Error: OPENAI_API_KEY not set");
         return 1;
     }
 
-    auto client = Client(openai::OpenAI({
+    auto client = Client(Config{
         .apiKey = apiKey,
         .model = "gpt-4o-mini",
-    }));
+    });
     client.system("You are a helpful assistant.");
 
-    std::println("AI Chat CLI - Type 'quit' to exit\n");
+    println("AI Chat CLI - Type 'quit' to exit");
+    println();
 
     while (true) {
-        std::print("You: ");
+        print("You: ");
         std::string input;
         std::getline(std::cin, input);
 
         if (input == "quit" || input == "q") {
-            std::println("\nBye!");
+            println();
+            println("Bye!");
            break;
        }
 
         if (input.empty()) continue;
 
         try {
-            std::print("\nAI: ");
+            print("\nAI: ");
             client.chat_stream(input, [](std::string_view chunk) {
-                std::print("{}", chunk);
+                print(chunk);
             });
-            std::println("\n");
+            println();
+            println();
         } catch (const std::exception& e) {
-            std::println("\nError: {}\n", e.what());
+            println();
+            println("Error: ", e.what());
+            println();
         }
     }
diff --git a/examples/hello_mcpp.cpp b/examples/hello_mcpp.cpp
index 33462f2..ed350c4 100644
--- a/examples/hello_mcpp.cpp
+++ b/examples/hello_mcpp.cpp
@@ -2,22 +2,24 @@
 import mcpplibs.llmapi;
 import std;
 
+#include "print.hpp"
+
 using namespace mcpplibs::llmapi;
 
 int main() {
     auto apiKey = std::getenv("OPENAI_API_KEY");
     if (!apiKey) {
-        std::println("Error: OPENAI_API_KEY not set");
+        println("Error: OPENAI_API_KEY not set");
         return 1;
     }
 
-    auto client = Client(openai::OpenAI({
+    auto client = Client(Config{
         .apiKey = apiKey,
         .model = "gpt-4o-mini",
-    }));
+    });
 
     auto resp = client.chat("Hello! In one sentence, introduce modern C++.");
-    std::println("{}", resp.text());
+    println(resp.text());
 
     return 0;
 }
diff --git a/examples/print.hpp b/examples/print.hpp
new file mode 100644
index 0000000..0e88031
--- /dev/null
+++ b/examples/print.hpp
@@ -0,0 +1,12 @@
+#pragma once
+
+template <typename... Args>
+inline void print(Args&&... args) {
+    (std::cout << ... << std::forward<Args>(args));
+}
+
+template <typename... Args>
+inline void println(Args&&... args) {
+    print(std::forward<Args>(args)...);
+    std::cout << '\n';
+}
diff --git a/examples/xmake.lua b/examples/xmake.lua
index 0dee7bf..523e8ff 100644
--- a/examples/xmake.lua
+++ b/examples/xmake.lua
@@ -1,20 +1,14 @@
--- TODO: fix build on macOS (12.3) libc++/println issue
-if not is_host("macosx") then
-    -- xmake f --toolchain=llvm --sdk=/opt/homebrew/opt/llvm@20
-    --add_ldflags("-L/opt/homebrew/opt/llvm@20/lib/c++ -L/opt/homebrew/opt/llvm@20/lib/unwind -lunwind")
+target("hello_mcpp")
+    set_kind("binary")
+    add_files("hello_mcpp.cpp")
+    add_deps("llmapi")
 
-    target("hello_mcpp")
-        set_kind("binary")
-        add_files("hello_mcpp.cpp")
-        add_deps("llmapi")
+target("basic")
+    set_kind("binary")
+    add_files("basic.cpp")
+    add_deps("llmapi")
 
-    target("basic")
-        set_kind("binary")
-        add_files("basic.cpp")
-        add_deps("llmapi")
-
-    target("chat")
-        set_kind("binary")
-        add_files("chat.cpp")
-        add_deps("llmapi")
-end
+target("chat")
+    set_kind("binary")
+    add_files("chat.cpp")
+    add_deps("llmapi")
diff --git a/src/client.cppm b/src/client.cppm
index c46e4ed..02f5f38 100644
--- a/src/client.cppm
+++ b/src/client.cppm
@@ -3,6 +3,8 @@ export module mcpplibs.llmapi:client;
 import :types;
 import :provider;
 import :coro;
+import :openai;
+import :anthropic;
 import std;
 
 export namespace mcpplibs::llmapi {
@@ -16,6 +18,12 @@ private:
 public:
     explicit Client(P provider) : provider_(std::move(provider)) {}
+    explicit Client(openai::Config config)
+        requires std::same_as<P, openai::OpenAI>
+        : provider_(openai::OpenAI(std::move(config))) {}
+    explicit Client(anthropic::Config config)
+        requires std::same_as<P, anthropic::Anthropic>
+        : provider_(anthropic::Anthropic(std::move(config))) {}
 
     // Config (chainable)
     Client& default_params(ChatParams params) {
@@ -107,4 +115,7 @@ public:
     P& provider() { return provider_; }
 };
 
+Client(openai::Config) -> Client<openai::OpenAI>;
+Client(anthropic::Config) -> Client<anthropic::Anthropic>;
+
 } // namespace mcpplibs::llmapi
diff --git a/src/llmapi.cppm b/src/llmapi.cppm
index 64d642b..a8766e4 100644
--- a/src/llmapi.cppm
+++ b/src/llmapi.cppm
@@ -15,6 +15,9 @@ import mcpplibs.llmapi.nlohmann.json;
 
 namespace mcpplibs::llmapi {
     export using OpenAI = openai::OpenAI;
+    export using Config = openai::Config;
+    export using Anthropic = anthropic::Anthropic;
+    export using AnthropicConfig = anthropic::Config;
     export using URL = llmapi::URL;
     export using Json = nlohmann::json;
-} // namespace mcpplibs::llmapi
\ No newline at end of file
+} // namespace mcpplibs::llmapi
diff --git a/tests/llmapi/test_anthropic_serialize.cpp b/tests/llmapi/test_anthropic_serialize.cpp
index fb310b1..6bca4ba 100644
--- a/tests/llmapi/test_anthropic_serialize.cpp
+++ b/tests/llmapi/test_anthropic_serialize.cpp
@@ -21,10 +21,10 @@ int main() {
     assert(provider.name() == "anthropic");
 
     // Test 3: Client compiles
-    auto client = Client(anthropic::Anthropic({
+    auto client = Client(AnthropicConfig{
         .apiKey = "test",
         .model = "claude-sonnet-4-20250514",
-    }));
+    });
     assert(client.provider().name() == "anthropic");
 
     println("test_anthropic_serialize: ALL PASSED");
diff --git a/tests/llmapi/test_openai_serialize.cpp b/tests/llmapi/test_openai_serialize.cpp
index 994323f..6accbe0 100644
--- a/tests/llmapi/test_openai_serialize.cpp
+++ b/tests/llmapi/test_openai_serialize.cpp
@@ -26,10 +26,10 @@ int main() {
     assert(provider.name() == "openai");
 
     // Test 3: Client compiles
-    auto client = Client(openai::OpenAI({
+    auto client = Client(Config{
         .apiKey = "test",
         .model = "gpt-4o",
-    }));
+    });
 
     // Test 4: provider() access
     assert(client.provider().name() == "openai");