n8n Backup & Disaster Recovery Guide

How to export, back up, and restore n8n workflows and credentials, plus how to automate the backup schedule.

Your n8n instance contains the automation logic that runs critical business processes. Lose it, and you're rebuilding workflows from memory while partners wait for invoices and clients wonder where their reports went. This guide shows you exactly how to protect your n8n data with manual exports, automated backups, and tested recovery procedures.

Manual Workflow Export

Start with the basics. Manual exports work for small instances (under 20 workflows) or when you need a quick snapshot before major changes.

Step 1: Access the Workflows Panel

Log into your n8n instance. Click "Workflows" in the left sidebar. You'll see your complete workflow list.

Step 2: Select Workflows for Export

Click the checkbox next to each workflow you want to back up. For a full backup, click the top checkbox to select all workflows at once.

Step 3: Download the Export File

Click "Download" in the top toolbar. n8n generates a JSON file containing all selected workflows. The file downloads immediately to your browser's default location.

Step 4: Store the Backup Securely

Move the JSON file to your backup location within 24 hours. Options:

  • AWS S3 bucket with versioning enabled
  • Google Drive folder with restricted access
  • Network-attached storage with daily snapshots
  • Encrypted external drive stored off-site

Name the file with a timestamp: n8n-workflows-2024-01-15.json. This prevents accidental overwrites and makes point-in-time recovery straightforward.
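
A quick shell sketch of that habit, assuming the export was saved to your Downloads folder as workflows.json (both paths and the file name are examples to adjust):

```shell
# Example paths; point DOWNLOADS and BACKUP_DIR at your own locations.
DOWNLOADS="$HOME/Downloads"
BACKUP_DIR="/opt/backups/n8n/workflows"
STAMP=$(date +%Y-%m-%d)

mkdir -p "$BACKUP_DIR"

# Copy the fresh export under a timestamped name so older snapshots survive.
cp "$DOWNLOADS/workflows.json" "$BACKUP_DIR/n8n-workflows-$STAMP.json"
```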

Manual Credential Export

Credentials are separate from workflows. Export them independently and store them with extra security.

Step 1: Navigate to Credentials

Click "Credentials" in the left sidebar. You'll see all stored API keys, OAuth tokens, and database connections.

Step 2: Select Credentials to Export

Check the box next to each credential. Note: n8n encrypts credentials in the export file using your instance's encryption key. Without that key, the export is useless.

Step 3: Download Credential Export

Click "Download" in the toolbar. Save the resulting JSON file immediately.

Step 4: Secure the Credential File

Store credential exports separately from workflow exports. Use:

  • Password manager vault (1Password, Bitwarden)
  • Encrypted cloud storage (Tresorit, SpiderOak)
  • Hardware security module for enterprise deployments

Never store credential exports in the same location as workflow exports. If someone gains access to your backup location, they shouldn't get both pieces.
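
One way to add that extra layer is to encrypt the credential export itself before it goes anywhere. A sketch using GnuPG symmetric encryption (the file name is an example; gpg prompts for a passphrase):

```shell
# Encrypt the export with a passphrase before moving it to storage.
gpg --symmetric --cipher-algo AES256 n8n-credentials-2024-01-15.json

# Overwrite and delete the plaintext copy once the .gpg file exists.
shred -u n8n-credentials-2024-01-15.json
```

Store the passphrase in your password manager, separate from the encrypted file.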

Full Database Backup

Manual exports miss execution history, variables, and system settings. For production instances, back up the entire database.

Step 1: Identify Your Database Type

Check your n8n configuration. Most installations use PostgreSQL or SQLite. Find your database connection string in:

  • Docker: docker-compose.yml under DB_TYPE and DB_POSTGRESDB_* variables
  • Self-hosted: .env file in your n8n directory
  • Cloud: Provider dashboard settings
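
For a Docker setup, for example, you can confirm the active database settings from the running container (the service name "n8n" is an assumption based on a typical docker-compose.yml):

```shell
# Print the database-related environment variables n8n is actually using.
docker-compose exec n8n env | grep -E '^DB_(TYPE|POSTGRESDB)'
```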

Step 2: Create Database Backup Script

For PostgreSQL:

#!/bin/bash
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/opt/backups/n8n"
DB_NAME="n8n"
DB_USER="n8n_user"
DB_HOST="localhost"

mkdir -p "$BACKUP_DIR"

# Custom-format dump (-F c) is compressed and supports selective restores.
# Assumes a ~/.pgpass entry or trust auth; pg_dump prompts for a password otherwise.
pg_dump -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" -F c -f "$BACKUP_DIR/n8n_db_$TIMESTAMP.dump"

# Keep only the last 30 days of backups
find "$BACKUP_DIR" -name "n8n_db_*.dump" -mtime +30 -delete

echo "Database backup completed: n8n_db_$TIMESTAMP.dump"

For SQLite:

#!/bin/bash
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/opt/backups/n8n"
SQLITE_DB="/root/.n8n/database.sqlite"

mkdir -p "$BACKUP_DIR"

# .backup uses SQLite's online backup API, so it's safe while n8n is running.
sqlite3 "$SQLITE_DB" ".backup '$BACKUP_DIR/n8n_db_$TIMESTAMP.sqlite'"

# Keep only the last 30 days of backups
find "$BACKUP_DIR" -name "n8n_db_*.sqlite" -mtime +30 -delete

echo "SQLite backup completed: n8n_db_$TIMESTAMP.sqlite"

Step 3: Make Script Executable

chmod +x /opt/scripts/n8n_backup.sh

Step 4: Test the Backup

Run the script manually:

/opt/scripts/n8n_backup.sh

Check the backup directory. Verify the file size is reasonable; a custom-format PostgreSQL dump is compressed, so it will be somewhat smaller than the live database. For PostgreSQL, test the dump:

pg_restore --list /opt/backups/n8n/n8n_db_20240115_020000.dump

Automated Backup Schedule

Manual backups fail. You forget, you're busy, or you're on vacation when the system crashes. Automate it.

Step 1: Set Backup Frequency

Choose based on workflow change frequency:

  • High-change environments (daily workflow edits): Every 6 hours
  • Standard operations (weekly changes): Daily at 2 AM
  • Stable production (monthly changes): Daily at 2 AM with weekly full backups

Step 2: Configure Cron Job

Edit your crontab:

crontab -e

Add the backup schedule:

# Daily backup at 2 AM
0 2 * * * /opt/scripts/n8n_backup.sh >> /var/log/n8n_backup.log 2>&1

# Weekly full backup on Sunday at 3 AM
0 3 * * 0 /opt/scripts/n8n_full_backup.sh >> /var/log/n8n_backup.log 2>&1
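
The weekly entry assumes a full-backup script exists. A minimal sketch, reusing the daily database script and adding the .n8n data directory (all paths are examples to adjust):

```shell
#!/bin/bash
# Sketch of /opt/scripts/n8n_full_backup.sh -- adjust paths to your install.
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/opt/backups/n8n"
N8N_DATA="/root/.n8n"

mkdir -p "$BACKUP_DIR"

# Reuse the daily script for the database dump itself.
/opt/scripts/n8n_backup.sh

# Archive the data directory (settings, encryption key, binary data).
tar -czf "$BACKUP_DIR/n8n_data_$TIMESTAMP.tar.gz" \
  -C "$(dirname "$N8N_DATA")" "$(basename "$N8N_DATA")"
```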

Step 3: Set Up Off-Site Replication

Local backups don't protect against building fires or ransomware. Copy backups off-site within 4 hours of creation.

For AWS S3:

#!/bin/bash
BACKUP_DIR="/opt/backups/n8n"
S3_BUCKET="s3://your-company-n8n-backups"

# --delete mirrors local pruning to S3; enable bucket versioning so
# deleted objects remain recoverable.
aws s3 sync "$BACKUP_DIR" "$S3_BUCKET" --storage-class STANDARD_IA --delete

Schedule the sync in cron two hours after the backup runs:

0 4 * * * /opt/scripts/sync_to_s3.sh >> /var/log/n8n_s3_sync.log 2>&1

Step 4: Monitor Backup Success

Create a monitoring workflow in n8n itself:

  1. Schedule trigger: Daily at 5 AM
  2. Read file node: Check for today's backup file
  3. Conditional node: If file exists and size > 1MB, success path
  4. Slack/email notification: Alert if backup missing or suspiciously small
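
If you'd rather not have n8n monitor its own backups, the same check works from cron. A sketch, with a placeholder Slack webhook URL:

```shell
#!/bin/bash
# Alert when today's backup is missing or under 1 MB.
# The webhook URL is a placeholder; substitute your own.
BACKUP_DIR="/opt/backups/n8n"
TODAY=$(date +%Y%m%d)
SLACK_WEBHOOK="https://hooks.slack.com/services/XXX/YYY/ZZZ"

FILE=$(ls "$BACKUP_DIR"/n8n_db_"$TODAY"_*.dump 2>/dev/null | head -n 1)
SIZE=$(stat -c%s "$FILE" 2>/dev/null || echo 0)

if [ -z "$FILE" ] || [ "$SIZE" -lt 1048576 ]; then
  curl -s -X POST -H 'Content-Type: application/json' \
    -d '{"text":"n8n backup missing or suspiciously small"}' "$SLACK_WEBHOOK"
fi
```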

Disaster Recovery Procedure

Test your recovery process quarterly. Here's the exact procedure.

Step 1: Restore Database Backup

For PostgreSQL:

# Stop n8n
docker-compose down

# Drop existing database (if recovering from corruption)
psql -U postgres -c "DROP DATABASE n8n;"
psql -U postgres -c "CREATE DATABASE n8n OWNER n8n_user;"

# Restore from backup
pg_restore -h localhost -U n8n_user -d n8n /opt/backups/n8n/n8n_db_20240115_020000.dump

# Start n8n
docker-compose up -d

For SQLite:

# Stop n8n
docker-compose down

# Replace database file
cp /opt/backups/n8n/n8n_db_20240115_020000.sqlite /root/.n8n/database.sqlite

# Start n8n
docker-compose up -d

Step 2: Verify Workflow Functionality

Log into n8n. Check:

  • Workflow count matches pre-disaster count
  • Recent executions appear in history
  • Credentials are accessible (test one workflow that uses external APIs)

Step 3: Test Critical Workflows

Manually trigger your three most important workflows. Verify they execute successfully and produce expected outputs.
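
For workflows that start from a Webhook node, the trigger itself can be scripted; the URL below is a placeholder for your own instance and webhook path:

```shell
# -f makes curl exit non-zero on HTTP errors, so failures are visible in scripts.
curl -f -X POST "https://n8n.example.com/webhook/invoice-sync" \
  -H 'Content-Type: application/json' \
  -d '{"test": true}' \
  && echo "Workflow triggered"
```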

Step 4: Document Recovery Time

Record how long the recovery took. If it exceeded 1 hour, identify bottlenecks and update your procedure.

Encryption Key Backup

n8n encrypts credentials using an encryption key stored in your environment variables. Lose this key, and your credential backups are worthless.

Step 1: Locate Your Encryption Key

Check your n8n configuration:

# Docker
docker-compose exec n8n env | grep N8N_ENCRYPTION_KEY

# Self-hosted
grep N8N_ENCRYPTION_KEY /root/.n8n/.env

If the variable isn't set anywhere, n8n generated a key automatically on first start and saved it in ~/.n8n/config. Back up that file as well.

Step 2: Store Key Securely

Save the encryption key in your password manager as a secure note. Label it "n8n Production Encryption Key" with the instance URL.

Step 3: Test Key Recovery

On a test system, restore a credential backup using the stored encryption key. Verify you can decrypt and use the credentials.

Recovery Time Objective

Set a clear recovery target. For professional services firms, aim for:

  • Database restore: 15 minutes
  • Full system verification: 30 minutes
  • Critical workflows operational: 45 minutes

Document your actual recovery time during quarterly tests. If you consistently miss targets, simplify your backup architecture or add automation.

Reviewed by Revenue Institute

This guide is actively maintained and reviewed by the implementation experts at Revenue Institute. As the creators of The AI Workforce Playbook, we test and deploy these exact frameworks for professional services firms scaling without new headcount.

Need help turning this guide into reality? Revenue Institute builds and implements the AI workforce for professional services firms.

RevenueInstitute.com