# Errors
Error responses follow standard formats for both the OpenAI-compatible and Anthropic-compatible APIs.
## OpenAI Error Format
```json
{
  "error": {
    "message": "Description of the error",
    "type": "error_type",
    "code": "error_code"
  }
}
```

### Error Fields
| Field | Type | Description |
|---|---|---|
| `error.message` | string | Human-readable error description |
| `error.type` | string | Error category (e.g., `invalid_request_error`, `authentication_error`) |
| `error.code` | string | Machine-readable error code |
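As a sketch of how a client might surface these three fields, assuming a raw JSON response body (the `parse_openai_error` helper name is illustrative, not part of any SDK):

```python
import json

def parse_openai_error(body: str) -> tuple[str, str, str]:
    """Pull (type, code, message) out of an OpenAI-format error body."""
    err = json.loads(body).get("error", {})
    return err.get("type", ""), err.get("code", ""), err.get("message", "")

body = (
    '{"error": {"message": "model not found: invalid-model", '
    '"type": "invalid_request_error", "code": "model_not_found"}}'
)
print(parse_openai_error(body))
# → ('invalid_request_error', 'model_not_found', 'model not found: invalid-model')
```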
## Anthropic Error Format
```json
{
  "type": "error",
  "error": {
    "type": "invalid_request_error",
    "message": "Description of the error"
  }
}
```

### Error Fields
| Field | Type | Description |
|---|---|---|
| `type` | string | Always `"error"` |
| `error.type` | string | Error category |
| `error.message` | string | Human-readable error description |
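A client that talks to both endpoints can tell the two formats apart by the top-level `type` marker, since both nest the human-readable message under `error.message`. A minimal sketch (the helper names are illustrative, not part of either API):

```python
import json

def detect_format(payload: dict) -> str:
    """Classify an error body: Anthropic responses carry a top-level
    "type": "error" marker; OpenAI responses only nest under "error"."""
    if payload.get("type") == "error":
        return "anthropic"
    return "openai"

def error_message(payload: dict) -> str:
    # Both formats expose the human-readable message at error.message.
    return payload.get("error", {}).get("message", "")

anthropic_body = json.loads(
    '{"type": "error", "error": {"type": "invalid_request_error", '
    '"message": "Description of the error"}}'
)
print(detect_format(anthropic_body))  # → anthropic
```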
## HTTP Status Codes
| Status Code | Meaning | Description |
|---|---|---|
| 400 | Bad Request | Invalid request body, missing required fields, validation errors |
| 401 | Unauthorized | Missing or invalid API key |
| 413 | Payload Too Large | Request body exceeds size limit |
| 429 | Too Many Requests | Rate limit exceeded (RPM or TPM) |
| 502 | Bad Gateway | Upstream provider error |
| 503 | Service Unavailable | Auth service or provider temporarily unavailable |
| 504 | Gateway Timeout | Request timed out waiting for upstream provider |
## Common Error Examples
### 400 — Invalid Model
```json
{
  "error": {
    "message": "model not found: invalid-model",
    "type": "invalid_request_error",
    "code": "model_not_found"
  }
}
```

### 400 — Missing Required Field
```json
{
  "error": {
    "message": "messages is required",
    "type": "invalid_request_error",
    "code": "invalid_request"
  }
}
```

### 401 — Invalid API Key
```json
{
  "error": {
    "message": "Invalid API key",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
```

### 429 — Rate Limited
```json
{
  "error": {
    "message": "Rate limit exceeded",
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded"
  }
}
```

### 502 — Provider Error
```json
{
  "error": {
    "message": "Upstream provider returned an error",
    "type": "server_error",
    "code": "provider_error"
  }
}
```

### 503 — Service Unavailable
```json
{
  "error": {
    "message": "Service temporarily unavailable",
    "type": "server_error",
    "code": "service_unavailable"
  }
}
```

## Streaming Errors
In the OpenAI format, mid-stream errors appear as a chunk with an `error` field:
```
data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","model":"anthropic/claude-sonnet-4","choices":[],"error":{"code":502,"message":"downstream call failed"}}

data: [DONE]
```

In the Anthropic format, errors appear as an `error` event:
```
event: error
data: {"type":"error","error":{"type":"server_error","message":"downstream call failed"}}
```

Always handle mid-stream errors gracefully. The connection may close unexpectedly, so implement reconnection logic.
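One way to handle this for the OpenAI format is to check each parsed chunk for an `error` field before using it. A sketch, assuming `lines` is an iterable of raw SSE lines (the `scan_stream` helper is illustrative):

```python
import json

def scan_stream(lines):
    """Yield parsed chunks from an OpenAI-format SSE stream.

    Stops on the [DONE] sentinel and raises if a chunk carries
    a mid-stream "error" field."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank lines and other SSE fields
        data = line[len("data: "):]
        if data == "[DONE]":
            return
        chunk = json.loads(data)
        if "error" in chunk:
            err = chunk["error"]
            raise RuntimeError(f"stream error {err.get('code')}: {err.get('message')}")
        yield chunk
```

In the Anthropic format the equivalent check is on the SSE event name: an `event: error` line means the next `data:` payload is an error object rather than a content delta.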
## Retry Guidance
| Status | Retryable | Recommended Action |
|---|---|---|
| 400 | No | Fix the request — check fields and validation |
| 401 | No | Check your API key |
| 413 | No | Reduce request size |
| 429 | Yes | Back off and retry after a delay (exponential backoff) |
| 502 | Yes | Retry with exponential backoff |
| 503 | Yes | Retry after a short delay |
| 504 | Yes | Retry with a longer timeout or reduce request complexity |
For retryable errors, use exponential backoff with jitter. Start with a 1-second delay and double on each retry, up to a maximum of 30 seconds.
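The schedule above can be sketched as follows, assuming the "full jitter" variant (a uniformly random delay up to the exponential cap) and a hypothetical `call()` that returns `(status, body)`:

```python
import random
import time

# Statuses the retry-guidance table marks as retryable.
RETRYABLE = {429, 502, 503, 504}

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter: a random delay in
    [0, min(cap, base * 2**attempt)]. attempt is 0-based."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def with_retries(call, max_attempts: int = 5, base: float = 1.0):
    """Invoke call() -> (status, body), retrying retryable statuses
    with backoff; return the first non-retryable (or final) result."""
    for attempt in range(max_attempts):
        status, body = call()
        if status not in RETRYABLE or attempt == max_attempts - 1:
            return status, body
        time.sleep(backoff_delay(attempt, base=base))
```

Full jitter spreads simultaneous retries apart, which matters most for 429s: if every client retried on the same doubling schedule, they would hit the rate limit again in lockstep.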