# BlockRun LLM SDK

Pay-per-request access to GPT-5.2, Claude 4, Gemini 2.5, Grok, and more via x402 micropayments.

**BlockRun assumes Claude Code as the agent runtime.**

## Supported Chains

| Chain | Network | Payment | Status |
|-------|---------|---------|--------|
| **Base** | Base Mainnet (Chain ID: 8453) | USDC | ✅ Primary |
| **Base Testnet** | Base Sepolia (Chain ID: 84532) | Testnet USDC | ✅ Development |
| **XRPL** | XRP Ledger Mainnet | RLUSD | ✅ New |

**Protocol:** x402 v2
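
Choosing a chain amounts to pointing the client at the matching gateway URL. A minimal sketch (the Base mainnet URL and the `gateway_for` helper are illustrative assumptions; the testnet and XRPL URLs are the ones used elsewhere in this README):

```python
# Sketch: map each supported chain to its BlockRun gateway URL.
# The Base mainnet URL is an assumption; the testnet and XRPL URLs
# match the examples later in this README.
GATEWAYS = {
    "base": "https://blockrun.ai/api",  # assumed mainnet default
    "base-sepolia": "https://testnet.blockrun.ai/api",
    "xrpl": "https://xrpl.blockrun.ai/api",
}

def gateway_for(chain: str) -> str:
    """Return the API URL for a chain, raising on unknown names."""
    try:
        return GATEWAYS[chain]
    except KeyError:
        raise ValueError(f"unsupported chain: {chain}") from None
```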

## Installation

```python
client = LLMClient(api_url="https://testnet.blockrun.ai/api")
response = client.chat("openai/gpt-oss-20b", "Hello!")
```

|
| 273 | +## XRPL Chain (RLUSD Payments) |
| 274 | + |
| 275 | +BlockRun now supports payments with RLUSD on the XRP Ledger. Same models, same API - just a different payment rail. |
| 276 | + |
| 277 | +```python |
| 278 | +from blockrun_llm import xrpl_client |
| 279 | + |
| 280 | +# Create XRPL client (pays with RLUSD) |
| 281 | +client = xrpl_client() # Uses BLOCKRUN_WALLET_KEY |
| 282 | + |
| 283 | +# Chat with any model |
| 284 | +response = client.chat("openai/gpt-4o", "Hello!") |
| 285 | +print(response) |
| 286 | + |
| 287 | +# Check RLUSD balance |
| 288 | +balance = client.get_balance() |
| 289 | +print(f"RLUSD: ${balance:.4f}") |
| 290 | +``` |
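
Since paid requests fail without funds, it can help to check the balance before sending one. A minimal sketch using only the `chat` and `get_balance` methods shown above (the helper name and threshold are illustrative, not part of the SDK):

```python
# Illustrative helper, not part of the SDK: guard a paid request behind a
# balance check. Assumes get_balance() returns the RLUSD balance as a float,
# as in the example above.
MIN_BALANCE_RLUSD = 0.01  # illustrative threshold, not an SDK constant

def chat_if_funded(client, model: str, prompt: str) -> str:
    """Send a chat request only if the wallet can plausibly cover it."""
    if client.get_balance() < MIN_BALANCE_RLUSD:
        raise RuntimeError("RLUSD balance too low to cover a paid request")
    return client.chat(model, prompt)
```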

### Async XRPL Usage

```python
import asyncio
from blockrun_llm import async_xrpl_client

async def main():
    async with async_xrpl_client() as client:
        response = await client.chat("openai/gpt-4o", "Hello!")
        print(response)

asyncio.run(main())
```

### Manual XRPL Configuration

```python
from blockrun_llm import LLMClient

# Configure the standard client manually with the XRPL gateway URL
client = LLMClient(api_url="https://xrpl.blockrun.ai/api")
response = client.chat("openai/gpt-4o", "Hello!")
```

## Environment Variables

| Variable | Description | Required |