
ReliAPI

Reliability layer for API calls: retries, caching, request deduplication, and circuit breakers.


Features

  • Retries with Backoff - Automatic retries with exponential backoff
  • Circuit Breaker - Prevent cascading failures
  • Caching - TTL cache for GET requests and LLM responses
  • Idempotency - Request coalescing with idempotency keys
  • Rate Limiting - Built-in per-tier rate limiting
  • LLM Proxy - Unified interface for OpenAI, Anthropic, Mistral
  • Cost Control - Budget caps and cost estimation
  • Self-Service Onboarding - Automated API key generation
  • Paddle Payments - Subscription management
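To make the retry behavior concrete, here is a minimal sketch of an exponential backoff schedule. This is illustrative only; the function name, parameters, and defaults are assumptions, not ReliAPI's internals:

```python
import random

def backoff_delays(base_s=0.5, factor=2.0, max_retries=4, jitter=False):
    """Compute the wait before each retry attempt: base_s * factor**attempt."""
    delays = []
    for attempt in range(max_retries):
        delay = base_s * (factor ** attempt)
        if jitter:
            # Spread retries out to avoid a thundering herd against the upstream
            delay *= random.uniform(0.5, 1.5)
        delays.append(delay)
    return delays

# Without jitter the schedule is deterministic: 0.5s, 1s, 2s, 4s
print(backoff_delays())
```

In practice jitter is usually enabled so that many clients retrying at once do not hammer a recovering service in lockstep.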

Project Structure

reliapi/
├── reliapi/              # Importable Python package
│   ├── app/              # FastAPI application and routes
│   ├── core/             # Reliability primitives
│   ├── adapters/         # Provider adapters
│   ├── config/           # Configuration loader and schema
│   ├── integrations/     # RapidAPI, RouteLLM, framework adapters
│   └── metrics/          # Prometheus metrics
├── cli/                  # CLI package
├── action/               # GitHub Action
├── scripts/              # OpenAPI / SDK / release helpers
├── sdk/                  # SDK generation templates
├── examples/             # Code examples
├── openapi/              # OpenAPI specs
├── postman/              # Postman collection
└── tests/                # Test suite

Quick Start

Using RapidAPI (No Installation)

Try ReliAPI directly on RapidAPI.

Self-Hosting with Docker

docker run -d -p 8000:8000 \
  -e REDIS_URL="redis://localhost:6379/0" \
  -e RELIAPI_CONFIG_PATH=/app/config.yaml \
  kikudoc/reliapi:latest

Local Development

# Clone repository
git clone https://github.com/kiku-jw/reliapi.git
cd reliapi

# Create virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Start Redis
docker run -d -p 6379:6379 redis:7-alpine

# Run server
export REDIS_URL=redis://localhost:6379/0
export RELIAPI_CONFIG_PATH=config.yaml
uvicorn reliapi.app.main:app --host 0.0.0.0 --port 8000 --reload

Configuration

Create config.yaml:

targets:
  openai:
    base_url: https://api.openai.com/v1
    llm:
      provider: openai
      default_model: gpt-4o-mini
      soft_cost_cap_usd: 0.10
      hard_cost_cap_usd: 0.50
    cache:
      enabled: true
      ttl_s: 3600
    circuit:
      error_threshold: 5
      cooldown_s: 60
    auth:
      type: bearer_env
      env_var: OPENAI_API_KEY
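The `circuit` settings above read as: trip the breaker after `error_threshold` consecutive failures, reject calls for `cooldown_s` seconds, then let a probe request through. A minimal sketch of that idea (illustrative only, not ReliAPI's actual implementation):

```python
import time

class CircuitBreaker:
    """Trips after `error_threshold` consecutive failures; recovers after `cooldown_s`."""

    def __init__(self, error_threshold=5, cooldown_s=60, clock=time.monotonic):
        self.error_threshold = error_threshold
        self.cooldown_s = cooldown_s
        self.clock = clock          # injectable for testing
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True             # closed: traffic flows normally
        if self.clock() - self.opened_at >= self.cooldown_s:
            self.opened_at = None   # half-open: let one request probe the target
            self.failures = 0
            return True
        return False                # open: fail fast, protect the upstream

    def record(self, success):
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.error_threshold:
                self.opened_at = self.clock()

breaker = CircuitBreaker()
breaker.record(False)       # one failure: below the threshold, still closed
print(breaker.allow())      # → True
```

Failing fast while the breaker is open is what prevents one unhealthy upstream from tying up workers and cascading into the rest of the system.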

API Endpoints

Core Proxy

| Endpoint | Method | Description |
| --- | --- | --- |
| `/proxy/http` | POST | Proxy any HTTP API with reliability features |
| `/proxy/llm` | POST | Proxy LLM requests with cost control |
| `/healthz` | GET | Health check |
| `/metrics` | GET | Prometheus metrics |
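The authoritative request schema lives in the openapi/ directory; as a rough sketch, a `/proxy/http` request can be built from the same fields the SDK exposes. The `X-API-Key` header name and localhost URL are assumptions for illustration:

```python
import json
import urllib.request

# Body fields mirror the SDK parameters; check openapi/ for the real schema.
req = urllib.request.Request(
    "http://localhost:8000/proxy/http",
    data=json.dumps({
        "target": "my-api",
        "method": "GET",
        "path": "/users/123",
        "cache": 300,            # cache TTL in seconds, per the SDK example
    }).encode(),
    headers={
        "Content-Type": "application/json",
        "X-API-Key": "your-api-key",   # header name is an assumption
    },
    method="POST",
)
# urllib.request.urlopen(req)   # send once a ReliAPI instance is running
print(req.get_method(), req.full_url)
```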

Business Routes

| Endpoint | Method | Description |
| --- | --- | --- |
| `/paddle/plans` | GET | List subscription plans |
| `/paddle/checkout` | POST | Create checkout session |
| `/paddle/webhook` | POST | Handle Paddle webhooks |
| `/onboarding/start` | POST | Generate API key |
| `/onboarding/quick-start` | GET | Get quick start guide |
| `/onboarding/verify` | POST | Verify integration |
| `/calculators/pricing` | POST | Calculate pricing |
| `/calculators/roi` | POST | Calculate ROI |
| `/dashboard/metrics` | GET | Usage metrics |

Environment Variables

# Required
REDIS_URL=redis://localhost:6379/0

# Optional
RELIAPI_CONFIG_PATH=config.yaml
RELIAPI_API_KEY=your-api-key
CORS_ORIGINS=*
LOG_LEVEL=INFO

# LLM Providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
MISTRAL_API_KEY=...

# Paddle (for payments)
PADDLE_API_KEY=...
PADDLE_VENDOR_ID=...
PADDLE_WEBHOOK_SECRET=...
PADDLE_ENVIRONMENT=sandbox

SDK Usage

Python

from reliapi_sdk import ReliAPI

client = ReliAPI(
    base_url="https://reliapi.kikuai.dev",
    api_key="your-api-key"
)

# HTTP proxy
response = client.proxy_http(
    target="my-api",
    method="GET",
    path="/users/123",
    cache=300
)

# LLM proxy
llm_response = client.proxy_llm(
    target="openai",
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    idempotency_key="unique-key-123"
)
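The `idempotency_key` in the example above is what enables request coalescing: repeated calls with the same key should resolve to a single upstream request. A toy sketch of the idea (not the SDK's or server's internals):

```python
# Illustrative only: identical idempotency keys coalesce to one upstream call.
_responses = {}

def call_with_idempotency(key, make_request):
    if key in _responses:             # duplicate: return the stored response
        return _responses[key]
    _responses[key] = make_request()  # first call: hit the upstream once
    return _responses[key]

calls = []
def fake_request():
    calls.append(1)
    return {"answer": 42}

first = call_with_idempotency("unique-key-123", fake_request)
second = call_with_idempotency("unique-key-123", fake_request)
print(len(calls))  # → 1 (the second call is served from the dedup store)
```

This is why retried or double-submitted requests with a stable key are safe: the expensive LLM call runs once and both callers see the same result.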

JavaScript

import { ReliAPI } from 'reliapi-sdk';

const client = new ReliAPI({
  baseUrl: 'https://reliapi.kikuai.dev',
  apiKey: 'your-api-key'
});

const response = await client.proxyLlm({
  target: 'openai',
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }]
});

Testing

# Run tests
pytest

# With coverage
pytest --cov=reliapi --cov-report=html

Release Tooling

  • make openapi regenerates the OpenAPI schema from the FastAPI app
  • make postman rebuilds the Postman collection
  • make sdk-js and make sdk-py regenerate SDK packages
  • make release-patch|minor|major bumps version metadata and prepares a tagged release
  • make cli installs the local CLI package for smoke testing

See docs/release.md and docs/SECRETS_SETUP.md for release ops.

Documentation

Support

License

AGPL-3.0. Copyright (c) 2025 KikuAI Lab
