Rate Limits
Auva APIs enforce rate limits to ensure fair usage and system stability. When you exceed the limit, you'll receive a 429 Too Many Requests response.
Default Limits
| Endpoint Category | Rate Limit | Window |
|---|---|---|
| Auth (login, register) | 10 requests | per minute |
| Auth (refresh, token) | 30 requests | per minute |
| Auva Go (read) | 100 requests | per minute |
| Auva Go (write) | 30 requests | per minute |
| Auva Mail (create inbox) | 20 requests | per minute |
| Auva Mail (read messages) | 60 requests | per minute |
| User API (profile ops) | 30 requests | per minute |
Rate Limit Headers
Every API response includes rate limit information:
```text
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1710500400
```
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the current window resets |
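These headers can drive proactive throttling. As a sketch, the hypothetical helper below (not part of any Auva SDK; the name and logic are illustrative) computes how long to pause between requests by spreading the remaining budget over the time left in the window:

```typescript
// Hypothetical helper: compute a pause (in ms) from the rate limit headers.
// Illustrative sketch only — not part of the Auva API or any official SDK.
function throttleDelayMs(
  headers: Headers,
  nowSec: number = Math.floor(Date.now() / 1000)
): number {
  const remaining = parseInt(headers.get('X-RateLimit-Remaining') ?? '1', 10);
  const resetAt = parseInt(headers.get('X-RateLimit-Reset') ?? String(nowSec), 10);
  const windowLeftSec = Math.max(resetAt - nowSec, 0);
  if (remaining > 0) {
    // Spread the remaining request budget evenly across the rest of the window.
    return Math.floor((windowLeftSec * 1000) / remaining);
  }
  // Budget exhausted: wait until the window resets.
  return windowLeftSec * 1000;
}
```

Pacing requests this way keeps you under the limit instead of reacting to 429s after the fact.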
When You Hit the Limit
A 429 response includes a JSON body describing the limit:

```json
{
  "error": "Too Many Requests",
  "message": "Rate limit exceeded. Try again in 42 seconds.",
  "retryAfter": 42
}
```
The response also includes a Retry-After header with the number of seconds to wait.
Retry Strategy
Implement exponential backoff when you encounter rate limits:
```typescript
async function fetchWithRetry(url: string, options: RequestInit, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) {
      return response;
    }
    // Honor the server's Retry-After hint, falling back to 5 seconds.
    const retryAfter = parseInt(response.headers.get('Retry-After') || '5', 10);
    // Exponential backoff: double the wait on each successive attempt.
    const delay = retryAfter * 1000 * Math.pow(2, attempt);
    await new Promise(resolve => setTimeout(resolve, delay));
  }
  throw new Error('Max retries exceeded');
}
```
Never retry in a tight loop without a delay. Aggressive retry patterns may result in your account being temporarily suspended.
Best Practices
- Cache responses: Don't re-fetch data you already have
- Batch operations: Use bulk endpoints when available
- Monitor headers: Check X-RateLimit-Remaining to proactively slow down
- Use webhooks: For real-time data, prefer webhooks over polling
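The caching advice can be sketched with a small in-memory TTL cache. This is an illustrative pattern, not an Auva API; the class name and TTL value are assumptions:

```typescript
// Minimal in-memory TTL cache sketch — illustrative only, not an Auva API.
// Repeated reads within the TTL are served locally and cost no rate limit budget.
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry || entry.expiresAt <= Date.now()) {
      // Expired or missing: drop the stale entry and report a miss.
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

A caller would check the cache before issuing a request and store the parsed response body on a miss, so identical reads inside the TTL never touch the API.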