
n8n Troubleshooting: Webhook Timeout

Splitting long workflows into async processing chains.

Webhook timeouts kill n8n workflows. Your client submits a form, your workflow triggers, and 30 seconds later: timeout error. The workflow might still be running in the background, but the client sees a failure message. Data gets duplicated. Support tickets pile up.

The root cause: n8n's webhook nodes wait for the entire workflow to complete before sending a response. If your workflow runs longer than the connection timeout (around 29-30 seconds in most hosting environments and API gateways), the connection drops.

This guide shows you how to split long workflows into async processing chains that respond instantly while handling heavy work in the background.

When Webhook Timeouts Happen

You'll hit timeouts in three scenarios:

API rate limiting and retries. You're calling Clio, QuickBooks, or NetSuite APIs that throttle requests. Your workflow needs to wait 5 seconds between calls, process 20 records, and suddenly you're at 100+ seconds of total execution time.

Bulk data operations. Importing 500 client records from a CSV, enriching each with data from Clearbit, then writing to your CRM. Each record takes 2 seconds. That's 1,000 seconds for the full batch.

PDF generation and document processing. Generating engagement letters with Docusign or PandaDoc, especially when merging data from multiple sources. A single complex PDF can take 15-20 seconds.

The Async Pattern: Respond First, Process Later

The solution: split your workflow into two parts. The webhook workflow responds immediately (under 1 second). A separate workflow handles the actual processing.

Here's the architecture:

Webhook Workflow (responds in <1 second):

  1. Receives the webhook
  2. Validates the payload
  3. Writes the job to a queue (database row, Redis, or n8n's built-in queue)
  4. Returns a 200 OK response with a job ID

Processing Workflow (runs async, no timeout):

  1. Polls the queue or triggers on new queue items
  2. Processes the job
  3. Updates job status
  4. Sends completion notification

Step-by-Step Implementation

Step 1: Set Up Your Queue Table

Create a PostgreSQL table to track jobs. If you're using Supabase, Airtable, or another database, adapt accordingly.

CREATE TABLE workflow_jobs (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  status VARCHAR(20) DEFAULT 'pending',
  payload JSONB NOT NULL,
  result JSONB,
  error_message TEXT,
  created_at TIMESTAMP DEFAULT NOW(),
  started_at TIMESTAMP,
  completed_at TIMESTAMP,
  retry_count INTEGER DEFAULT 0
);

CREATE INDEX idx_jobs_status ON workflow_jobs(status);
CREATE INDEX idx_jobs_created ON workflow_jobs(created_at);

Step 2: Build the Webhook Workflow

Node 1: Webhook Trigger

  • Set HTTP Method to POST
  • Path: /api/process-client-intake
  • Authentication: Header Auth (set a secret token)

Node 2: Validate Input. Add a Code node to validate the payload:

// Validate required fields
const required = ['client_name', 'email', 'matter_type'];
const missing = required.filter(field => !$input.item.json[field]);

if (missing.length > 0) {
  throw new Error(`Missing required fields: ${missing.join(', ')}`);
}

// Return validated data
return {
  json: {
    client_name: $input.item.json.client_name,
    email: $input.item.json.email,
    matter_type: $input.item.json.matter_type,
    metadata: $input.item.json.metadata || {}
  }
};

Node 3: Insert Job to Queue. Use a Postgres node (or your database of choice):

  • Operation: Insert
  • Table: workflow_jobs
  • Columns to Send: payload
  • Payload value: {{ $json }}
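If you prefer an Execute Query operation over the Insert operation, the equivalent statement is a single insert with a `RETURNING` clause, which hands back the generated job ID for the response node (table and columns as defined in Step 1; the payload value here is illustrative):

```sql
-- Insert the validated payload and get the generated job ID back
INSERT INTO workflow_jobs (payload)
VALUES ('{"client_name": "Jane Doe", "email": "jane@example.com", "matter_type": "client_intake"}'::jsonb)
RETURNING id, status, created_at;
```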

Node 4: Respond to Webhook. Add a Respond to Webhook node:

  • Response Code: 200
  • Response Body:
{
  "status": "accepted",
  "job_id": "{{ $('Insert Job to Queue').item.json.id }}",
  "message": "Your request is being processed. You'll receive an email when complete."
}

This workflow completes in under 500ms. The webhook caller gets an immediate response.

Step 3: Build the Processing Workflow

Node 1: Schedule Trigger

  • Trigger Interval: Every 30 seconds
  • Or use a Postgres Trigger node if your database supports it

Node 2: Fetch Pending Jobs. Postgres node:

  • Operation: Select
  • Table: workflow_jobs
  • WHERE clause: status = 'pending' AND retry_count < 3
  • LIMIT: 10
  • ORDER BY: created_at ASC

Node 3: Update Job Status to Processing. For each job, update its status:

  • Operation: Update
  • WHERE: id = {{ $json.id }}
  • SET: status = 'processing', started_at = NOW()
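If you ever run more than one processing workflow (or schedule runs overlap), Nodes 2 and 3 can race and double-process the same job. A common fix, sketched here as a single Execute Query against the Step 1 table, is to claim and mark jobs in one atomic statement:

```sql
-- Claim up to 10 pending jobs and mark them 'processing' in one statement.
-- SKIP LOCKED lets concurrent pollers grab different rows instead of
-- blocking on (or duplicating) each other's work.
UPDATE workflow_jobs
SET status = 'processing', started_at = NOW()
WHERE id IN (
  SELECT id FROM workflow_jobs
  WHERE status = 'pending' AND retry_count < 3
  ORDER BY created_at ASC
  LIMIT 10
  FOR UPDATE SKIP LOCKED
)
RETURNING *;
```

This collapses Nodes 2 and 3 into one node and removes the race window entirely (PostgreSQL 9.5+).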

Node 4: Do the Actual Work. This is where your long-running operations go. Example for client intake:

// Extract job payload
const payload = $input.item.json.payload;

// Call the external API. Replace YOUR_TOKEN with a real credential --
// in production, use n8n's credential store instead of hardcoding it.
const clioResponse = await fetch('https://app.clio.com/api/v4/contacts.json', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_TOKEN',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    data: {
      name: payload.client_name,
      email: payload.email
    }
  })
});

// Fail loudly so the error workflow can record and retry this job
if (!clioResponse.ok) {
  throw new Error(`Clio API returned ${clioResponse.status}: ${await clioResponse.text()}`);
}

const clioData = await clioResponse.json();

// Return result
return {
  json: {
    job_id: $input.item.json.id,
    clio_contact_id: clioData.data.id,
    status: 'completed'
  }
};

Node 5: Update Job Status to Completed. Postgres node:

  • Operation: Update
  • WHERE: id = {{ $json.job_id }}
  • SET: status = 'completed', completed_at = NOW(), result = {{ $json }}

Node 6: Send Notification. Use an Email node or Slack node to notify the user:

  • To: {{ $('Fetch Pending Jobs').item.json.payload.email }}
  • Subject: "Your client intake is complete"
  • Body: Include the job ID and any relevant results

Step 4: Add Error Handling

Wrap your processing nodes in an Error Trigger workflow.

Error Workflow:

Node 1: Error Trigger. Catches errors from the processing workflow.

Node 2: Update Job Status to Failed. Postgres node:

  • Operation: Update
  • WHERE: id = {{ $json.job_id }}
  • SET: status = 'failed', error_message = {{ $json.error }}, retry_count = retry_count + 1

Note: the Error Trigger's payload nests the failure details under execution (e.g. {{ $json.execution.error.message }}), and the job ID is only available if the processing workflow carries it on every item, so keep job_id attached to each item as it moves through the workflow.

Node 3: Check Retry Count. IF node:

  • Condition: {{ $json.retry_count }} < 3
  • True: Reset status to 'pending' for retry
  • False: Send alert to operations team

Node 4: Alert on Permanent Failure. Slack or email notification with full error details.
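Nodes 2 and 3 can also be collapsed into one Execute Query: increment the retry counter and let the database decide whether the job goes back to 'pending' or stays 'failed'. A sketch, assuming the failed job's ID and error text are available in the error workflow's input (the literal values below are placeholders):

```sql
-- Record the failure, then retry up to 3 times before giving up
UPDATE workflow_jobs
SET retry_count = retry_count + 1,
    error_message = 'Clio API returned 429',   -- substitute the actual error text
    status = CASE WHEN retry_count + 1 < 3 THEN 'pending' ELSE 'failed' END
WHERE id = 'the-failed-job-id'                 -- substitute the actual job ID
RETURNING status, retry_count;
```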

Monitoring Job Status

Build a simple status check endpoint:

Webhook Workflow:

  • Path: /api/job-status/:job_id
  • Method: GET

Postgres Query:

SELECT id, status, created_at, completed_at, error_message
FROM workflow_jobs
WHERE id = :job_id

Return the job status as JSON. Your frontend can poll this endpoint to show progress.
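On the frontend, that polling loop can be as simple as the following sketch. The fetchStatus function is an assumed wrapper around a GET to the status endpoint (passed in as a parameter so the loop itself stays easy to test):

```javascript
// Poll the status endpoint until the job reaches a terminal state or we give up.
// `fetchStatus(jobId)` is assumed to resolve to the job row as an object,
// e.g. { id, status, completed_at, error_message }.
async function pollJobStatus(jobId, fetchStatus, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await fetchStatus(jobId);
    if (job.status === 'completed' || job.status === 'failed') {
      return job; // terminal state: stop polling
    }
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Job ${jobId} did not finish within ${maxAttempts} checks`);
}
```

In the browser, fetchStatus would typically be `id => fetch('/api/job-status/' + id).then(r => r.json())`.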

Advanced: Priority Queues

Add a priority column to handle urgent jobs first:

ALTER TABLE workflow_jobs ADD COLUMN priority INTEGER DEFAULT 5;
CREATE INDEX idx_jobs_priority ON workflow_jobs(priority DESC, created_at ASC);

Update your fetch query:

SELECT * FROM workflow_jobs
WHERE status = 'pending'
ORDER BY priority DESC, created_at ASC
LIMIT 10

Set priority in the webhook workflow based on matter type or client tier.

Performance Tuning

Batch processing. Instead of processing one job at a time, fetch 10 jobs and use a Loop node to process them in parallel (set Max Parallel to 3-5 to avoid rate limits).

Separate workflows by job type. If you're processing both client intakes and document generation, create separate processing workflows. Use a job_type column to route jobs to the right workflow.

Scale the polling interval. If your queue is usually empty, poll every 60 seconds. If you're processing hundreds of jobs per hour, poll every 10 seconds or use database triggers for instant processing.

Real-World Example: Engagement Letter Generation

A mid-sized law firm was timing out when generating engagement letters. The workflow called Clio for client data, merged it into a Docusign template, sent for signature, and logged the activity. Total time: 45-60 seconds.

After implementing async processing:

  • Webhook workflow: 200ms (writes job to Postgres)
  • Processing workflow: 50 seconds (runs in background)
  • Client sees "Your engagement letter is being prepared" immediately
  • Email arrives 60 seconds later with the Docusign link

Error rate dropped from 15% to under 1%. Support tickets related to "form submission failed" disappeared entirely.

Bottom Line

Stop fighting webhook timeouts. Respond instantly, process async. Your queue table becomes your source of truth. Your users get immediate feedback. Your workflows become bulletproof.

Set up the queue table today. Convert your slowest webhook workflow tomorrow. You'll never go back to synchronous processing.

Reviewed by Revenue Institute

This guide is actively maintained and reviewed by the implementation experts at Revenue Institute. As the creators of The AI Workforce Playbook, we test and deploy these exact frameworks for professional services firms scaling without new headcount.

Need help turning this guide into reality? Revenue Institute builds and implements the AI workforce for professional services firms.

RevenueInstitute.com