n8n Troubleshooting: API Rate Limiting
Retry logic, exponential backoff, batch processing for high-volume workflows.
Many n8n workflow failures at scale trace back to API rate limiting, not workflow logic.
This guide shows you exactly how to handle rate limiting in n8n. You'll learn retry configurations that actually work, batch processing patterns for high-volume operations, and monitoring setups that catch problems before they cascade.
Understanding Rate Limit Response Codes
Before you build retry logic, know what you're catching. APIs signal rate limiting with several different status codes:
HTTP 429 (Too Many Requests): Standard rate limit response. Most modern APIs use this code, often alongside a Retry-After header.
HTTP 503 (Service Unavailable): Some APIs return this when shedding load, which looks like an outage but is really throttling.
HTTP 403 (Forbidden): Occasionally used for rate limits, especially by older APIs.
Check the response headers. Look for:
- X-RateLimit-Limit: Total requests allowed per window
- X-RateLimit-Remaining: Requests left in current window
- X-RateLimit-Reset: Unix timestamp when the limit resets
- Retry-After: Seconds to wait before retrying
An example response:
X-RateLimit-Limit: 15000
X-RateLimit-Remaining: 142
X-RateLimit-Reset: 1704067200
You have 142 requests left before the limit resets at that timestamp.
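You can sanity-check these numbers in a Code node by parsing the headers into integers and computing the time left in the window. A sketch; the header names follow the convention above, and the function names are illustrative, not a fixed n8n API:

```javascript
// Parse rate limit headers into numbers (exact names vary by provider).
function parseRateLimitHeaders(headers) {
  return {
    limit: parseInt(headers['x-ratelimit-limit'] ?? '', 10),
    remaining: parseInt(headers['x-ratelimit-remaining'] ?? '', 10),
    resetAt: parseInt(headers['x-ratelimit-reset'] ?? '', 10), // Unix seconds
  };
}

// Seconds until the window resets, clamped at zero.
function secondsUntilReset(resetAt, nowMs = Date.now()) {
  return Math.max(0, resetAt - Math.floor(nowMs / 1000));
}

const info = parseRateLimitHeaders({
  'x-ratelimit-limit': '15000',
  'x-ratelimit-remaining': '142',
  'x-ratelimit-reset': '1704067200',
});
```

With the example headers, `info.remaining` is 142 and `secondsUntilReset(info.resetAt)` tells you how long until the full quota is available again.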
Configuring Retry Logic in n8n
n8n's built-in retry system handles transient failures. Here's the exact configuration that works for most APIs:
Step 1: Open the HTTP Request node hitting the rate limit.
Step 2: Click the gear icon, scroll to "Retry On Fail".
Step 3: Enable retry and configure:
- Max Tries: Set to 5 (initial attempt + 4 retries)
- Wait Between Tries (ms): Start with 2000
- Use Exponential Backoff: Enable this
- Backoff Multiplier: Set to 2
Step 4: Under "Continue On Fail", enable it and set "Error Output" to "Include Error Details".
This configuration produces the following retry pattern:
Attempt 1: Immediate
Attempt 2: 2 seconds wait
Attempt 3: 4 seconds wait
Attempt 4: 8 seconds wait
Attempt 5: 16 seconds wait
Total time before final failure: 30 seconds.
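The schedule above is just the base wait multiplied by the backoff factor raised to the retry number. A quick sketch of the arithmetic (the function name is illustrative):

```javascript
// Exponential backoff: the wait before retry n is base * multiplier^n.
// With maxTries attempts there are maxTries - 1 waits.
function backoffDelays(baseMs, multiplier, maxTries) {
  const delays = [];
  for (let retry = 0; retry < maxTries - 1; retry++) {
    delays.push(baseMs * Math.pow(multiplier, retry));
  }
  return delays;
}

const delays = backoffDelays(2000, 2, 5);          // [2000, 4000, 8000, 16000]
const totalMs = delays.reduce((a, b) => a + b, 0); // 30000 ms = 30 seconds
```

Doubling the base delay doubles every wait in the schedule, so total time before final failure scales linearly with it.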
Critical detail: n8n only retries on specific error codes. By default, it retries 429, 503, and network timeouts. If your API signals rate limits with a different status code, the built-in retry won't catch it and you'll need custom logic.
Custom Retry Logic with Code Nodes
When built-in retry isn't enough, build custom logic. This pattern works for APIs that use nonstandard status codes or expose quota headers worth inspecting.
Step 1: Add a Code node (mode: Run Once for All Items) after your HTTP Request node.
Step 2: Paste this code:
const maxRetries = 5;
const baseDelay = 2000;

for (let attempt = 0; attempt < maxRetries; attempt++) {
  try {
    // this.helpers.httpRequest is the documented way to make HTTP calls
    // from a Code node. "Credentials" here is whatever upstream node
    // holds your token.
    const response = await this.helpers.httpRequest({
      method: 'GET',
      url: 'https://api.example.com/data',
      headers: {
        'Authorization': `Bearer ${$node["Credentials"].json.token}`
      }
    });
    // Code nodes must return an array of items.
    return [{ json: response }];
  } catch (error) {
    const statusCode = error.response?.status;
    // Default to NaN (not 0) so a missing header can't trigger a retry
    // on unrelated errors.
    const remaining = parseInt(error.response?.headers?.['x-ratelimit-remaining'] ?? '', 10);
    if (statusCode === 429 || statusCode === 403 || remaining === 0) {
      if (attempt < maxRetries - 1) {
        const delay = baseDelay * Math.pow(2, attempt);
        console.log(`Rate limited. Retry ${attempt + 1}/${maxRetries} after ${delay}ms`);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
    }
    throw error;
  }
}
This code checks both status codes and the X-RateLimit-Remaining header. It implements exponential backoff manually and logs each retry attempt.
Step 3: Replace the URL, method, and headers with your API's endpoint details.
Step 4: Test with a deliberately low rate limit to verify retry behavior.
Batch Processing for High-Volume Workflows
Batching reduces API call volume by grouping many records into each request, which keeps you well under per-request rate limits.
Example scenario: Updating 500 contacts in HubSpot. HubSpot allows batch updates of 100 contacts per request.
Step 1: Add a Code node before your HTTP Request node.
Step 2: Use this batching logic:
const items = $input.all();
const batchSize = 100;
const batches = [];

for (let i = 0; i < items.length; i += batchSize) {
  batches.push(items.slice(i, i + batchSize));
}

return batches.map((batch, index) => ({
  json: {
    batchNumber: index + 1,
    totalBatches: batches.length,
    items: batch.map(item => item.json)
  }
}));
Step 3: Add a Loop Over Items node with Batch Size set to 1, so each batch item gets its own iteration.
Step 4: Inside the loop, add your HTTP Request node. Configure it to send the batch:
{
  "inputs": [
    {
      "properties": {
        "email": "={{ $json.items[0].email }}",
        "firstname": "={{ $json.items[0].firstname }}"
      }
    }
  ]
}
Map all items in $json.items to your API's expected payload shape.
Step 5: Add a Wait node after the HTTP Request with a 1-second delay between batches.
This pattern processes 500 items in 5 batches with 1-second pauses, spending about 5 seconds on deliberate throttling instead of triggering rate limits with 500 rapid-fire requests.
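The batch count and throttle time fall out of simple arithmetic. A sketch, assuming the Wait node fires after every batch, including the last:

```javascript
// Plan a batched run: number of requests and total deliberate delay.
// Assumes one pause after every batch (a Wait node inside the loop).
function batchPlan(totalItems, batchSize, pauseMs) {
  const batches = Math.ceil(totalItems / batchSize);
  return { batches, pauseTotalMs: batches * pauseMs };
}

const plan = batchPlan(500, 100, 1000); // { batches: 5, pauseTotalMs: 5000 }
```

Tuning either knob is a direct trade: halving the batch size doubles both the request count and the throttle time.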
Monitoring Rate Limit Headers
Build proactive monitoring to catch rate limit issues before they cause failures.
Step 1: After your HTTP Request node, add a Code node named "Check Rate Limits". On the HTTP Request node, enable "Include Response Headers and Status" so the headers reach the next node.
Step 2: Insert this monitoring code:
// Requires "Include Response Headers and Status" on the HTTP Request node,
// which puts body and headers side by side in the item.
const { body, headers } = $input.first().json;

const limit = parseInt(headers['x-ratelimit-limit'] ?? '', 10);
const remaining = parseInt(headers['x-ratelimit-remaining'] ?? '', 10);
const reset = parseInt(headers['x-ratelimit-reset'] ?? '', 10);

// Guard against a missing or zero limit header.
const percentUsed = limit > 0 ? ((limit - remaining) / limit) * 100 : 0;
const resetTime = Number.isFinite(reset) ? new Date(reset * 1000).toISOString() : null;

if (percentUsed > 80) {
  return [{
    json: {
      alert: true,
      message: `Rate limit at ${percentUsed.toFixed(1)}% capacity`,
      remaining,
      resetTime,
      data: body
    }
  }];
}

return [{ json: { alert: false, data: body } }];
Step 3: Add an IF node checking {{$json.alert}}.
Step 4: On the true branch, add a Slack or email notification node with this message:
⚠️ Rate Limit Warning
API: [Your API Name]
Usage: {{$json.message}}
Remaining: {{$json.remaining}} requests
Resets: {{$json.resetTime}}
This alerts you when you've used 80% of your rate limit, giving you time to throttle requests or wait for the reset.
Handling Retry-After Headers
Some APIs tell you exactly how long to wait by sending a Retry-After header. Honor it when it's present.
Step 1: In your Code node retry logic, check for the header before falling back to backoff:
// Retry-After may also be an HTTP date; this handles only the seconds form.
const retryAfter = parseInt(error.response?.headers?.['retry-after'] ?? '', 10);
if (Number.isFinite(retryAfter)) {
  const delay = retryAfter * 1000; // Convert seconds to milliseconds
  console.log(`API requested ${retryAfter}s wait. Pausing...`);
  await new Promise(resolve => setTimeout(resolve, delay));
  continue; // Back to the top of the retry loop
}
Step 2: If Retry-After is present, use that value instead of exponential backoff.
This respects the API's own guidance instead of guessing with a backoff formula.
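One way to combine both strategies is a small helper that prefers Retry-After when present and falls back to exponential backoff. A sketch; the function name is illustrative, and it handles only the seconds form of Retry-After, not the HTTP-date form:

```javascript
// Pick the retry delay: honor Retry-After (in seconds) when the API
// sends it, otherwise use exponential backoff from the attempt number.
function chooseDelayMs(retryAfterHeader, attempt, baseMs = 2000) {
  const retryAfter = parseInt(retryAfterHeader ?? '', 10);
  if (Number.isFinite(retryAfter) && retryAfter >= 0) {
    return retryAfter * 1000;
  }
  return baseMs * Math.pow(2, attempt);
}
```

Calling `chooseDelayMs('5', 0)` yields 5000 ms from the header, while `chooseDelayMs(undefined, 2)` falls back to 8000 ms of backoff.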
Queue-Based Rate Limiting
For workflows processing thousands of items daily, implement a queue system.
Step 1: Create a Google Sheet or Airtable base as your queue. Columns: ID, Status, Data, Retry_Count, Last_Attempt.
Step 2: Build a workflow that adds items to the queue instead of processing immediately.
Step 3: Create a second workflow triggered every 5 minutes:
- Fetch items with Status = Pending and Retry_Count < 5
- Process up to 50 items per run
- Update Status to Complete, or increment Retry_Count on failure
- Update the Last_Attempt timestamp
Step 4: Add rate limit checking in the processing workflow. If you hit a limit, stop processing and wait for the next scheduled run.
This pattern distributes load over time and prevents rate limit cascades.
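The selection and update rules from step 3 can be sketched as plain filtering over queue rows, using the column names from step 1 (the function names are illustrative):

```javascript
// Pick the next rows to process: Pending status, fewer than 5 retries,
// capped at 50 per scheduled run.
function nextQueueBatch(rows, maxPerRun = 50, maxRetries = 5) {
  return rows
    .filter(r => r.Status === 'Pending' && r.Retry_Count < maxRetries)
    .slice(0, maxPerRun);
}

// Record the outcome of one processing attempt on a queue row.
function recordAttempt(row, succeeded, nowIso) {
  return {
    ...row,
    Status: succeeded ? 'Complete' : row.Status,
    Retry_Count: succeeded ? row.Retry_Count : row.Retry_Count + 1,
    Last_Attempt: nowIso,
  };
}

const rows = [
  { ID: 1, Status: 'Pending', Retry_Count: 0 },
  { ID: 2, Status: 'Complete', Retry_Count: 0 },
  { ID: 3, Status: 'Pending', Retry_Count: 5 }, // exhausted its retries
];
const batch = nextQueueBatch(rows);
```

Here only row 1 is selected: row 2 is done and row 3 has exhausted its retries, so it stays parked for manual review.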
Testing Your Rate Limit Handling
Don't wait for production failures to test retry logic.
Method 1: Use a testing endpoint such as httpbin.org/status/429 to simulate 429 responses.
Method 2: Temporarily lower your API quota in a sandbox or test account, if your provider supports it, so throttling triggers quickly.
Method 3: Add artificial rate limit triggers in development:
const testRateLimit = true; // Set to false in production

if (testRateLimit && Math.random() > 0.7) {
  // Simulate a 429 with the same shape the retry logic inspects.
  throw {
    response: {
      status: 429,
      headers: { 'retry-after': '5' }
    }
  };
}
Run your workflow 20 times and verify retry behavior appears in execution logs.
Common Mistakes to Avoid
Mistake 1: Setting retry delays too short. A 100ms retry on a 60-second rate limit window wastes all retry attempts in seconds.
Mistake 2: Not logging retry attempts. Always log to execution data so you can diagnose patterns.
Mistake 3: Retrying non-rate-limit errors. Check status codes explicitly. Don't retry 401 (authentication) or 404 (not found) errors.
Mistake 4: Ignoring rate limit headers. If the API tells you how many requests remain, use that to throttle proactively instead of waiting for 429s.
Mistake 5: Processing items sequentially when you could batch. Check your API's documentation for batch endpoints before building per-item loops.
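The rule from Mistake 3 can be captured in a one-line classifier you reuse across workflows (a sketch; extend the set if your API uses other transient codes):

```javascript
// Retry only transient, rate-limit-style failures.
// Never retry auth (401) or not-found (404) errors.
const RETRYABLE_STATUS_CODES = new Set([429, 503]);

function shouldRetry(statusCode) {
  return RETRYABLE_STATUS_CODES.has(statusCode);
}
```

Routing every error through one `shouldRetry` check keeps the retry policy consistent and easy to audit.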
Rate Limit Specifications by Platform
Salesforce: 15,000 requests per 24 hours (varies by license). Use the composite API to bundle multiple operations into a single request.
HubSpot: 100 requests per 10 seconds. Batch endpoints accept 100 records per request.
Google Workspace: 1,500 requests per 100 seconds per user. Use batch requests for up to 1,000 operations.
Stripe: 100 read requests per second, 100 write requests per second. No official batch endpoint.
Airtable: 5 requests per second per base. No batch operations.
Always check current documentation. Rate limits change.
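Given a published limit, the minimum safe spacing between sequential requests is just the window divided by the quota. A sketch using the figures above (the function name is illustrative):

```javascript
// Minimum milliseconds between requests to stay under a published limit
// of `requests` per `perSeconds`-second window.
function minSpacingMs(requests, perSeconds) {
  return (perSeconds * 1000) / requests;
}

const airtableSpacing = minSpacingMs(5, 1);   // 200 ms between requests
const hubspotSpacing = minSpacingMs(100, 10); // 100 ms between requests
```

Use the result as the delay in a Wait node between sequential calls; add headroom (say 10-20%) since other workflows may share the same quota.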

Reviewed by Revenue Institute
This guide is actively maintained and reviewed by the implementation experts at Revenue Institute. As the creators of The AI Workforce Playbook, we test and deploy these exact frameworks for professional services firms scaling without new headcount.
Revenue Institute
Need help turning this guide into reality? Revenue Institute builds and implements the AI workforce for professional services firms.