| title | Errors |
|---|---|
| description | How Edgee API responds when errors occur. |
| icon | circle-x |
Edgee uses conventional HTTP response codes to indicate the success or failure of an API request. In general:
- Codes in the 2xx range indicate success.
- Codes in the 4xx range indicate an error caused by the information provided (e.g., a required parameter was omitted, authentication failed, etc.).
- Codes in the 5xx range indicate an error with Edgee's servers.
When an error occurs, the API returns a JSON object with an error field containing details about what went wrong.
Below is a summary of the HTTP status codes that Edgee API uses.
| HTTP Code | Status | Description |
|---|---|---|
| 200 | OK | Everything worked as expected. |
| 400 | Bad Request | The request was unacceptable, often due to missing a required parameter, invalid model ID, model not found, or provider not supported. |
| 401 | Unauthorized | No valid API key provided, or the Authorization header is missing or malformed. |
| 403 | Forbidden | The API key doesn't have permissions to perform the request. This can occur if the key is inactive, expired, or the requested model is not allowed for this key. |
| 404 | Not Found | The requested resource doesn't exist. |
| 429 | Too Many Requests | Too many requests hit the API too quickly, or a usage limit was exceeded. We recommend retrying with exponential backoff. |
| 500, 502, 503, 504 | Server Errors | Something went wrong on Edgee's end. (These are rare.) |
- `bad_model_id`: The model ID format is invalid
- `model_not_found`: The requested model does not exist or is not available
- `provider_not_supported`: The requested provider is not supported for the specified model
- `streaming_not_supported`: Streaming is only supported when using the Anthropic provider for the Messages API
```json Model Not Found
{
  "error": {
    "code": "model_not_found",
    "message": "Model 'openai/gpt-1' not found"
  }
}
```

```json Provider Not Supported
{
  "error": {
    "code": "provider_not_supported",
    "message": "Provider 'anthropic' is not supported for model 'openai/gpt-4o'"
  }
}
```

```json Streaming Not Supported
{
  "error": {
    "code": "streaming_not_supported",
    "message": "Streaming for Messages API is only supported when the model uses the Anthropic provider"
  }
}
```

```json Invalid Authorization Header
{
  "error": {
    "code": "unauthorized",
    "message": "Invalid Authorization header format"
  }
}
```

```json API Key Retrieval Failed
{
  "error": {
    "code": "unauthorized",
    "message": "Failed to retrieve API key: <error details>"
  }
}
```

```json Expired API Key
{
  "error": {
    "code": "forbidden",
    "message": "API key has expired"
  }
}
```

```json Model Not Allowed
{
  "error": {
    "code": "forbidden",
    "message": "Model 'openai/gpt-4o' is not allowed for this API key"
  }
}
```

```json Usage Limit Exceeded
{
  "error": {
    "code": "usage_limit_exceeded",
    "message": "Organization has no credits remaining"
  }
}
```

When a server error occurs, the API may return a generic error response. These errors are rare and typically indicate an issue on Edgee's side.
```json Server Error
{
  "error": {
    "code": "internal_error",
    "message": "An internal error occurred. Please try again later."
  }
}
```

When you receive an error response:
- Check the HTTP status code to understand the general category of the error
- Read the error code (`error.code`) to understand the specific issue
- Review the error message (`error.message`) for additional context
- Take appropriate action:
  - 400 errors: Fix the request parameters and retry
  - 401 errors: Check your API key and authentication headers
  - 403 errors: Verify your API key permissions and status
  - 429 errors: Implement exponential backoff and retry logic
  - 5xx errors: Retry after a delay, or contact support if the issue persists
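The steps above can be sketched as a small client-side helper. This is an illustrative example, not part of the Edgee SDK: `classify_error` and its action strings are assumptions; only the `{ "error": { "code", "message" } }` response shape comes from this document.

```python
# Sketch: map an HTTP status and parsed error body to a recommended action.
# The response shape { "error": { "code", "message" } } matches the examples
# in this document; the returned action strings are purely illustrative.

def classify_error(status: int, body: dict) -> str:
    """Recommend a client action for an Edgee API response."""
    error = body.get("error", {})
    code = error.get("code", "unknown")
    message = error.get("message", "")

    if 200 <= status < 300:
        return "success"
    if status == 400:
        # e.g. bad_model_id, model_not_found, provider_not_supported
        return f"fix request parameters ({code}: {message})"
    if status == 401:
        return "check API key and Authorization header"
    if status == 403:
        return "verify API key permissions and status"
    if status == 429:
        return "back off and retry"
    if status >= 500:
        return "retry after a delay"
    return f"unexpected status {status}"
```

For example, a 400 response with code `model_not_found` would be classified as a request-parameter problem rather than something worth retrying blindly.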
If you exceed the rate limits, you will receive a 429 Too Many Requests response. We recommend implementing exponential backoff when you encounter rate limit errors:
- Wait for the time specified in the `Retry-After` header (if present)
- Retry the request with exponential backoff
- Reduce the rate of requests to stay within limits
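A minimal retry loop following these rules might look like the sketch below. It is an assumption-laden example, not Edgee-provided code: `send_request` is a hypothetical callable returning `(status, headers, body)`, and the delay schedule (doubling with jitter, `Retry-After` taking precedence) is one common choice.

```python
import random
import time

def retry_with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Retry a request on 429/5xx responses with exponential backoff.

    `send_request` is any zero-argument callable returning
    (status_code, headers, body) — its shape is an assumption
    made for this sketch.
    """
    status, headers, body = send_request()
    for attempt in range(max_retries):
        # Only 429 and 5xx responses are worth retrying.
        if status != 429 and status < 500:
            return status, body
        retry_after = headers.get("Retry-After")
        if retry_after is not None:
            # Prefer the server-specified delay when present.
            delay = float(retry_after)
        else:
            # Exponential backoff with jitter: ~1s, ~2s, ~4s, ...
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
        time.sleep(delay)
        status, headers, body = send_request()
    return status, body
```

Adding jitter avoids synchronized retries from many clients hitting the rate limit again at the same instant.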