n8n Troubleshooting: Webhook Timeout
Splitting long workflows into async processing chains.
The root cause: n8n's Webhook node, when configured to respond only after the workflow finishes, holds the HTTP connection open for the entire run. Long-running workflows exceed what the caller or the reverse proxy in front of n8n will wait for, so the request times out even though the workflow may still complete.
This guide shows you how to split long workflows into async processing chains that respond instantly while handling heavy work in the background.
When Webhook Timeouts Happen
You'll hit timeouts in three scenarios:
API rate limiting and retries. You're calling Clio, QuickBooks, or NetSuite APIs that throttle requests, so each call may wait and retry before it succeeds, adding seconds or minutes to the run.
Bulk data operations. Importing 500 client records from a CSV, enriching each with data from Clearbit, then writing to your CRM
PDF generation and document processing. Generating engagement letters with Docusign or PandaDoc, especially when merging data from multiple sources. A single complex PDF can take 15-20 seconds.
The Async Pattern: Respond First, Process Later
The solution: split your workflow into two parts. The webhook workflow responds immediately with a job ID; a separate processing workflow picks up the job and does the heavy work in the background.
Here's the architecture:
Webhook Workflow (responds in milliseconds):
- Receives the webhook request
- Validates the payload
- Writes the job to a queue (database row, Redis, or n8n's built-in queue)
- Returns a 200 OK response with a job ID
Processing Workflow (runs async, no timeout):
- Polls the queue or triggers on new queue items
- Processes the job
- Updates job status
- Sends completion notification
Step-by-Step Implementation
Step 1: Set Up Your Queue Table
Create a PostgreSQL table to track jobs. If you're using Supabase, Airtable, or another database, adapt accordingly.
CREATE TABLE workflow_jobs (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
status VARCHAR(20) DEFAULT 'pending',
payload JSONB NOT NULL,
result JSONB,
error_message TEXT,
created_at TIMESTAMP DEFAULT NOW(),
started_at TIMESTAMP,
completed_at TIMESTAMP,
retry_count INTEGER DEFAULT 0
);
CREATE INDEX idx_jobs_status ON workflow_jobs(status);
CREATE INDEX idx_jobs_created ON workflow_jobs(created_at);
Step 2: Build the Webhook Workflow
Node 1: Webhook
- Set HTTP Method to POST
- Path: /api/process-client-intake
- Authentication: Header Auth (set a secret token)
Node 2: Validate Input Add a Code node to validate the payload:
// Validate required fields
const required = ['client_name', 'email', 'matter_type'];
const missing = required.filter(field => !$input.item.json[field]);
if (missing.length > 0) {
throw new Error(`Missing required fields: ${missing.join(', ')}`);
}
// Return validated data
return {
json: {
client_name: $input.item.json.client_name,
email: $input.item.json.email,
matter_type: $input.item.json.matter_type,
metadata: $input.item.json.metadata || {}
}
};
Node 3: Insert Job to Queue Use a Postgres node (or your database of choice):
- Operation: Insert
- Table: workflow_jobs
- Columns to Send: payload
- Payload value: {{ $json }}
Node 4: Respond to Webhook
- Response Code: 200
- Response Body:
{
"status": "accepted",
"job_id": "{{ $('Insert Job').item.json.id }}",
"message": "Your request is being processed. You'll receive an email when complete."
}
This workflow completes in under 500ms. The webhook caller gets its response immediately, while the real work waits in the queue.
Step 3: Build the Processing Workflow
Node 1: Schedule Trigger
- Trigger Interval: Every 30 seconds
- Or use a Postgres Trigger node if your database supports it
Node 2: Fetch Pending Jobs Postgres node:
- Operation: Select
- Table: workflow_jobs
- WHERE clause: status = 'pending' AND retry_count < 3
- LIMIT: 10
- ORDER BY: created_at ASC
Node 3: Update Job Status to Processing For each job, update its status:
- Operation: Update
- WHERE: id = {{ $json.id }}
- SET: status = 'processing', started_at = NOW()
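One failure mode at this step: if two schedule runs overlap (or you run multiple n8n instances), both can fetch the same pending job. In Postgres this is usually prevented by claiming and updating in a single `UPDATE ... RETURNING` statement, often with `FOR UPDATE SKIP LOCKED` in the subquery. The claim logic itself looks like this in-memory sketch (illustrative names, plain JavaScript):

```javascript
// Claim up to `limit` eligible jobs, oldest first, and mark them 'processing'
// in the same pass so a second poller cannot pick them up again.
function claimJobs(jobs, limit) {
  const claimed = jobs
    .filter((j) => j.status === 'pending' && j.retry_count < 3)
    .sort((a, b) => a.created_at - b.created_at)
    .slice(0, limit);
  for (const job of claimed) {
    job.status = 'processing';
    job.started_at = new Date();
  }
  return claimed;
}
```

In SQL the filter, sort, limit, and update happen atomically in one statement; the sketch just makes the selection rules explicit.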
Node 4: Do the Actual Work This is where your long-running operations go. Example for client intake:
// Extract job payload
const payload = $input.item.json.payload;
// Call external APIs
const clioClient = await fetch('https://app.clio.com/api/v4/contacts.json', {
method: 'POST',
headers: {
'Authorization': 'Bearer YOUR_TOKEN',
'Content-Type': 'application/json'
},
body: JSON.stringify({
data: {
name: payload.client_name,
email: payload.email
}
})
});
const clioData = await clioClient.json();
// Return result
return {
json: {
job_id: $input.item.json.id,
clio_contact_id: clioData.data.id,
status: 'completed'
}
};
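Those external API calls are where rate limits bite, so it is worth wrapping them in a retry helper. Here is a sketch with exponential backoff; the retry count and delays are illustrative and should be tuned to the API you call:

```javascript
// Retry `fn` up to `maxRetries` extra times, doubling the delay after each failure.
async function withRetry(fn, maxRetries = 3, baseDelayMs = 1000) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      // Exponential backoff: 1s, 2s, 4s, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

In the Code node above you would wrap the Clio call, e.g. `const clioData = await withRetry(() => fetch(url, options).then((r) => r.json()));`.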
Node 5: Update Job Status to Completed Postgres node:
- Operation: Update
- WHERE: id = {{ $json.job_id }}
- SET: status = 'completed', completed_at = NOW(), result = {{ $json }}
Node 6: Send Notification Use an Email node or Slack node to notify the user:
- To: {{ $('Fetch Pending Jobs').item.json.payload.email }}
- Subject: "Your client intake is complete"
- Body: Include the job ID and any relevant results
Step 4: Add Error Handling
Wrap your processing nodes in an Error Trigger workflow.
Error Workflow:
Node 1: Error Trigger Catches errors from the processing workflow.
Node 2: Update Job Status to Failed Postgres node:
- Operation: Update
- WHERE: id = {{ $json.job_id }}
- SET: status = 'failed', error_message = {{ $json.error }}, retry_count = retry_count + 1
Node 3: Check Retry Count IF node:
- Condition: {{ $json.retry_count }} < 3
- True: Reset status to 'pending' for retry
- False: Send alert to operations team
Node 4: Alert on Permanent Failure Slack or email notification with full error details.
Monitoring Job Status
Build a simple status check endpoint:
Webhook Node:
- Path: /api/job-status/:job_id
- Method: GET
Postgres Query:
SELECT id, status, created_at, completed_at, error_message
FROM workflow_jobs
WHERE id = :job_id
Return the job status as JSON. Your frontend can poll this endpoint to show progress.
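A frontend poller against this endpoint might look like the following sketch. `getStatus` stands in for a fetch to the status URL, and the interval and attempt limits are illustrative:

```javascript
// Poll until the job reaches a terminal state, or give up after maxAttempts.
async function pollJobStatus(getStatus, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await getStatus();
    if (job.status === 'completed' || job.status === 'failed') return job;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error('Timed out waiting for job to finish');
}
```

In a browser, `getStatus` would be something like `() => fetch('/api/job-status/' + jobId).then((r) => r.json())`.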
Advanced: Priority Queues
Add a priority column to handle urgent jobs first:
ALTER TABLE workflow_jobs ADD COLUMN priority INTEGER DEFAULT 5;
CREATE INDEX idx_jobs_priority ON workflow_jobs(priority DESC, created_at ASC);
Update your fetch query:
SELECT * FROM workflow_jobs
WHERE status = 'pending'
ORDER BY priority DESC, created_at ASC
LIMIT 10
Set priority in the webhook workflow when you insert the job, for example based on the request payload or the client's service tier.
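If you ever need the same ordering client-side, for instance in a Code node, the ORDER BY above maps to this comparator (a sketch):

```javascript
// Highest priority first; ties broken by oldest created_at (FIFO within a priority).
function byPriority(a, b) {
  return b.priority - a.priority || a.created_at - b.created_at;
}
```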
Performance Tuning
Batch processing. Instead of processing one job at a time, fetch 10 jobs and use a Loop node to process them in parallel (set Max Parallel to 3-5 to avoid rate limits).
Separate workflows by job type. If you're processing both client intakes and document generation, create separate processing workflows. Use a job_type column to route jobs to the right workflow.
Scale the polling interval. If your queue is usually empty, poll every 60 seconds. If you're processing hundreds of jobs per hour, poll every 10 seconds or use database triggers for instant processing.
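The "Max Parallel" setting above corresponds to a bounded-concurrency loop. If you implement the batch in a Code node rather than a Loop node, a minimal sketch looks like this (the limit and names are illustrative):

```javascript
// Process items with at most `limit` workers in flight at once.
async function mapWithConcurrency(items, limit, worker) {
  const results = new Array(items.length);
  let next = 0;
  async function run() {
    while (next < items.length) {
      const i = next++;
      results[i] = await worker(items[i], i);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, run));
  return results;
}
```

With `limit` set to 3-5 you get the throughput of parallel processing without hammering a rate-limited API with ten simultaneous calls.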
Real-World Example: Engagement Letter Generation
A mid-sized law firm was timing out when generating engagement letters. The workflow called Clio for client data, merged it into a Docusign template, sent for signature, and logged the activity. Total time: 45-60 seconds.
After implementing async processing:
- Webhook workflow: 200ms (writes job to Postgres)
- Processing workflow: 50 seconds (runs in background)
- Client sees "Your engagement letter is being prepared" immediately
- Email arrives 60 seconds later with the Docusign link
Error rate dropped from 15% to under 1%. Support tickets related to "form submission failed" disappeared entirely.
Bottom Line
Stop fighting webhook timeouts by cranking up timeout limits. Split the work instead: acknowledge instantly, process in the background.
Set up the queue table today. Convert your slowest webhook workflow to the async pattern first, then migrate the rest as timeouts surface.

Reviewed by Revenue Institute
This guide is actively maintained and reviewed by the implementation experts at Revenue Institute. As the creators of The AI Workforce Playbook, we test and deploy these exact frameworks for professional services firms scaling without new headcount.
Need help turning this guide into reality? Revenue Institute builds and implements the AI workforce for professional services firms.