Reliability layer for API calls: retries, caching, dedup, circuit breakers.
- Retries with Backoff - Automatic retries with exponential backoff
- Circuit Breaker - Prevent cascading failures
- Caching - TTL cache for GET requests and LLM responses
- Idempotency - Request coalescing with idempotency keys
- Rate Limiting - Built-in rate limiting per tier
- LLM Proxy - Unified interface for OpenAI, Anthropic, Mistral
- Cost Control - Budget caps and cost estimation
- Self-Service Onboarding - Automated API key generation
- Paddle Payments - Subscription management
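The retry feature above follows the standard exponential-backoff-with-jitter pattern. A minimal, library-agnostic sketch of that pattern (illustrative only, not ReliAPI's internal implementation):

```python
import random
import time

def retry_with_backoff(fn, max_attempts=4, base_delay=0.5, max_delay=8.0):
    """Call fn(), retrying on exception with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # Delay doubles each attempt, capped, randomized to avoid thundering herds.
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))

# Example: a flaky call that succeeds on the third try
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky))  # "ok" after two retries
```

Jitter matters here: without it, many clients retrying a shared dependency would all come back at the same instants.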
```
reliapi/
├── reliapi/          # Importable Python package
│   ├── app/          # FastAPI application and routes
│   ├── core/         # Reliability primitives
│   ├── adapters/     # Provider adapters
│   ├── config/       # Configuration loader and schema
│   ├── integrations/ # RapidAPI, RouteLLM, framework adapters
│   └── metrics/      # Prometheus metrics
├── cli/              # CLI package
├── action/           # GitHub Action
├── scripts/          # OpenAPI / SDK / release helpers
├── sdk/              # SDK generation templates
├── examples/         # Code examples
├── openapi/          # OpenAPI specs
├── postman/          # Postman collection
└── tests/            # Test suite
```
Try ReliAPI directly on RapidAPI.
```shell
docker run -d -p 8000:8000 \
  -e REDIS_URL="redis://localhost:6379/0" \
  -e RELIAPI_CONFIG_PATH=/app/config.yaml \
  kikudoc/reliapi:latest
```

```shell
# Clone repository
git clone https://github.com/kiku-jw/reliapi.git
cd reliapi

# Create virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Start Redis
docker run -d -p 6379:6379 redis:7-alpine

# Run server
export REDIS_URL=redis://localhost:6379/0
export RELIAPI_CONFIG_PATH=config.yaml
uvicorn reliapi.app.main:app --host 0.0.0.0 --port 8000 --reload
```

Create `config.yaml`:
```yaml
targets:
  openai:
    base_url: https://api.openai.com/v1
    llm:
      provider: openai
      default_model: gpt-4o-mini
      soft_cost_cap_usd: 0.10
      hard_cost_cap_usd: 0.50
    cache:
      enabled: true
      ttl_s: 3600
    circuit:
      error_threshold: 5
      cooldown_s: 60
    auth:
      type: bearer_env
      env_var: OPENAI_API_KEY
```

| Endpoint | Method | Description |
|---|---|---|
| `/proxy/http` | POST | Proxy any HTTP API with reliability |
| `/proxy/llm` | POST | Proxy LLM requests with cost control |
| `/healthz` | GET | Health check |
| `/metrics` | GET | Prometheus metrics |
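The `circuit` settings in the config above (`error_threshold`, `cooldown_s`) map onto a conventional circuit-breaker state machine: open after N consecutive failures, then allow a probe request once the cooldown elapses. A simplified sketch of that behavior (not ReliAPI's actual internals):

```python
import time

class CircuitBreaker:
    """Open after `error_threshold` consecutive failures; probe again after `cooldown_s`."""

    def __init__(self, error_threshold=5, cooldown_s=60):
        self.error_threshold = error_threshold
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def allow(self):
        if self.opened_at is None:
            return True
        # Half-open: allow a probe request once the cooldown has elapsed.
        return time.monotonic() - self.opened_at >= self.cooldown_s

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.error_threshold:
            self.opened_at = time.monotonic()

breaker = CircuitBreaker(error_threshold=2, cooldown_s=0.1)
breaker.record_failure()
breaker.record_failure()  # threshold reached: circuit opens
print(breaker.allow())    # False while cooling down
time.sleep(0.15)
print(breaker.allow())    # True again after the cooldown
```

While the circuit is open, requests fail fast instead of piling onto an unhealthy upstream, which is what prevents the cascading failures mentioned in the feature list.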
| Endpoint | Method | Description |
|---|---|---|
| `/paddle/plans` | GET | List subscription plans |
| `/paddle/checkout` | POST | Create checkout session |
| `/paddle/webhook` | POST | Handle Paddle webhooks |
| `/onboarding/start` | POST | Generate API key |
| `/onboarding/quick-start` | GET | Get quick start guide |
| `/onboarding/verify` | POST | Verify integration |
| `/calculators/pricing` | POST | Calculate pricing |
| `/calculators/roi` | POST | Calculate ROI |
| `/dashboard/metrics` | GET | Usage metrics |
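The cost-control feature pairs an estimate with the `soft_cost_cap_usd` / `hard_cost_cap_usd` values from the sample config. A hedged sketch of how such gating could work — the per-1k-token prices are placeholders, not real provider pricing, and this is not ReliAPI's actual estimator:

```python
def estimate_cost_usd(prompt_tokens, completion_tokens,
                      price_in_per_1k=0.00015, price_out_per_1k=0.0006):
    """Rough cost estimate; the per-1k-token prices here are hypothetical."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

def check_cost_caps(estimated_usd, soft_cap=0.10, hard_cap=0.50):
    """Mirror the sample config's caps: flag at the soft cap, refuse at the hard cap."""
    if estimated_usd >= hard_cap:
        return "reject"  # hard cap: refuse the request outright
    if estimated_usd >= soft_cap:
        return "warn"    # soft cap: allow, but flag for budget review
    return "allow"

cost = estimate_cost_usd(2000, 1000)
print(round(cost, 6), check_cost_caps(cost))
```

Estimating before dispatch lets a hard cap reject a runaway request without spending anything, while the soft cap gives early warning as a budget fills up.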
```shell
# Required
REDIS_URL=redis://localhost:6379/0

# Optional
RELIAPI_CONFIG_PATH=config.yaml
RELIAPI_API_KEY=your-api-key
CORS_ORIGINS=*
LOG_LEVEL=INFO

# LLM Providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
MISTRAL_API_KEY=...

# Paddle (for payments)
PADDLE_API_KEY=...
PADDLE_VENDOR_ID=...
PADDLE_WEBHOOK_SECRET=...
PADDLE_ENVIRONMENT=sandbox
```

```python
from reliapi_sdk import ReliAPI

client = ReliAPI(
    base_url="https://reliapi.kikuai.dev",
    api_key="your-api-key",
)

# HTTP proxy
response = client.proxy_http(
    target="my-api",
    method="GET",
    path="/users/123",
    cache=300,
)

# LLM proxy
llm_response = client.proxy_llm(
    target="openai",
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    idempotency_key="unique-key-123",
)
```

```typescript
import { ReliAPI } from 'reliapi-sdk';

const client = new ReliAPI({
  baseUrl: 'https://reliapi.kikuai.dev',
  apiKey: 'your-api-key',
});

const response = await client.proxyLlm({
  target: 'openai',
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

```shell
# Run tests
pytest

# With coverage
pytest --cov=reliapi --cov-report=html
```

- `make openapi` regenerates the OpenAPI schema from the FastAPI app
- `make postman` rebuilds the Postman collection
- `make sdk-js` and `make sdk-py` regenerate SDK packages
- `make release-patch|minor|major` bumps version metadata and prepares a tagged release
- `make cli` installs the local CLI package for smoke testing
See docs/release.md and docs/SECRETS_SETUP.md for release ops.
- GitHub Issues: https://github.com/kiku-jw/reliapi/issues
- Email: dev@kikuai.dev
AGPL-3.0. Copyright (c) 2025 KikuAI Lab