n8n Self-Hosting Setup Guide (GCP)
This guide walks you through deploying n8n on Google Cloud Platform using Cloud SQL (PostgreSQL), Compute Engine, and proper security configurations. You'll have a production-ready automation server running in approximately 45 minutes.
What You Need Before Starting
GCP Account Requirements:
- Active GCP project with billing enabled
- Project Editor or Owner role
- Ability to create Compute Engine instances, Cloud SQL databases, and VPC firewall rules
Local Machine Requirements:
- gcloud CLI installed and authenticated
- SSH client (built into macOS, Linux, and Windows 10+; PuTTY also works on Windows)
- Text editor for configuration files
Knowledge Prerequisites:
- Basic Linux command line navigation
- Understanding of environment variables
- Familiarity with PostgreSQL connection strings
Step 1: Create and Configure Your GCP Project
1. Set up the project:
# Create a new project (or use existing)
gcloud projects create n8n-automation-prod --name="n8n Production"
# Set as active project
gcloud config set project n8n-automation-prod
# Enable billing (replace BILLING_ACCOUNT_ID with your actual ID)
gcloud beta billing projects link n8n-automation-prod --billing-account=BILLING_ACCOUNT_ID
2. Enable required APIs
gcloud services enable compute.googleapis.com
gcloud services enable sqladmin.googleapis.com
gcloud services enable storage-api.googleapis.com
gcloud services enable servicenetworking.googleapis.com
This takes 2-3 minutes. Verify with gcloud services list --enabled.
Step 2: Deploy Cloud SQL PostgreSQL Instance
1. Create the database instance:
gcloud sql instances create n8n-db \
--database-version=POSTGRES_15 \
--tier=db-custom-2-7680 \
--region=us-central1 \
--network=default \
--no-assign-ip \
--database-flags=max_connections=200
Configuration breakdown:
- db-custom-2-7680: 2 vCPUs, 7.5 GB RAM (handles 50-100 concurrent workflows)
- --no-assign-ip: private IP only, for security
- max_connections=200: supports multiple n8n worker processes
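Note that --no-assign-ip requires private services access to already be configured on the VPC. If this is the first private-IP Cloud SQL instance in the project, the create command will fail until you set that up. A sketch, assuming the default network (the address name google-managed-services-default is a common convention, not a requirement):

```shell
# Reserve an IP range for Google-managed services in the default VPC
gcloud compute addresses create google-managed-services-default \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=16 \
  --network=default

# Peer the VPC with the service networking service (one-time setup)
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-default \
  --network=default
```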
2. Create the database and user:
# Create database
gcloud sql databases create n8n --instance=n8n-db
# Create dedicated user (not root)
gcloud sql users create n8n_app \
--instance=n8n-db \
--password=YOUR_SECURE_PASSWORD_HERE
Replace YOUR_SECURE_PASSWORD_HERE with a 32+ character password. Store it in your password manager immediately.
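One way to generate a password of that strength from the command line (a sketch; any password manager's generator works equally well):

```shell
# 32 random bytes, base64-encoded: roughly 44 characters of output
openssl rand -base64 32
```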
3. Get the connection details:
gcloud sql instances describe n8n-db --format="value(connectionName)"
Save this output. It looks like: n8n-automation-prod:us-central1:n8n-db
Step 3: Create Storage Bucket for Backups
gcloud storage buckets create gs://n8n-backups-prod-12345 \
--location=us-central1 \
--uniform-bucket-level-access
# Enable versioning for backup protection
gcloud storage buckets update gs://n8n-backups-prod-12345 --versioning
Replace 12345 with random digits to ensure global uniqueness.
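A quick way to produce a random suffix for the bucket name (any unique digits work; this is just one option):

```shell
# Prints 6 random hex characters to use as a bucket-name suffix
openssl rand -hex 3
```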
Step 4: Deploy Compute Engine VM
1. Create a static IP address:
gcloud compute addresses create n8n-static-ip \
--region=us-central1
# Get the IP address
gcloud compute addresses describe n8n-static-ip \
--region=us-central1 \
--format="value(address)"
Note this IP address for DNS configuration.
2. Create firewall rules:
# Allow HTTPS traffic
gcloud compute firewall-rules create allow-n8n-https \
--allow=tcp:443 \
--source-ranges=0.0.0.0/0 \
--target-tags=n8n-server
# Allow SSH (restrict to your IP in production)
gcloud compute firewall-rules create allow-n8n-ssh \
--allow=tcp:22 \
--source-ranges=YOUR_IP_ADDRESS/32 \
--target-tags=n8n-server
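To fill in YOUR_IP_ADDRESS, look up your current public IP from the machine you will SSH from (checkip.amazonaws.com is one of several services that return it as plain text):

```shell
# Prints your public IPv4 address; use it as YOUR_IP_ADDRESS/32 above
curl -s https://checkip.amazonaws.com
```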
3. Create the VM instance:
gcloud compute instances create n8n-server \
--zone=us-central1-a \
--machine-type=e2-standard-2 \
--image-family=debian-11 \
--image-project=debian-cloud \
--boot-disk-size=50GB \
--boot-disk-type=pd-ssd \
--tags=n8n-server \
--address=n8n-static-ip \
--scopes=https://www.googleapis.com/auth/cloud-platform
Machine type rationale:
- e2-standard-2: 2 vCPUs, 8 GB RAM; handles 20-30 concurrent workflow executions
- Upgrade to e2-standard-4 if running 50+ workflows simultaneously
Step 5: Install Docker and Cloud SQL Proxy
1. SSH into the instance:
gcloud compute ssh n8n-server --zone=us-central1-a
2. Install Docker:
# Update package list
sudo apt-get update
# Install dependencies
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
# Add Docker GPG key
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Add Docker repository
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Add current user to docker group
sudo usermod -aG docker $USER
# Log out and back in for group changes to take effect
exit
SSH back in: gcloud compute ssh n8n-server --zone=us-central1-a
3. Install Cloud SQL Proxy:
# Download the Cloud SQL Auth Proxy (legacy v1 binary; flags in this guide use v1 syntax)
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
# Make executable
chmod +x cloud_sql_proxy
# Move to system path
sudo mv cloud_sql_proxy /usr/local/bin/
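Before wiring n8n to the database, it is worth testing the proxy path by hand. A sketch, assuming the connection name from Step 2 and the n8n_app password; it needs the PostgreSQL client installed:

```shell
# Install the psql client
sudo apt-get install -y postgresql-client

# Run the proxy in the background (v1 syntax; PRIVATE because the
# instance has no public IP)
cloud_sql_proxy \
  -instances=n8n-automation-prod:us-central1:n8n-db=tcp:5432 \
  -ip_address_types=PRIVATE &

sleep 3

# A single row containing 1 confirms the database is reachable
PGPASSWORD=YOUR_SECURE_PASSWORD_HERE psql -h 127.0.0.1 -U n8n_app -d n8n -c "SELECT 1;"

# Stop the background proxy
kill %1
```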
Step 6: Configure and Deploy n8n
1. Create directory structure:
sudo mkdir -p /opt/n8n/{data,config}
sudo chown -R $USER:$USER /opt/n8n
2. Create environment file:
nano /opt/n8n/config/n8n.env
Paste this configuration (replace placeholders):
# Database Configuration
DB_TYPE=postgresdb
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_HOST=localhost
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_USER=n8n_app
DB_POSTGRESDB_PASSWORD=YOUR_SECURE_PASSWORD_HERE
# n8n Configuration
N8N_HOST=YOUR_DOMAIN_OR_IP
N8N_PORT=5678
N8N_PROTOCOL=https
WEBHOOK_URL=https://YOUR_DOMAIN_OR_IP
N8N_ENCRYPTION_KEY=GENERATE_32_CHAR_RANDOM_STRING
# Execution Settings
EXECUTIONS_PROCESS=main
EXECUTIONS_MODE=regular
EXECUTIONS_TIMEOUT=300
EXECUTIONS_TIMEOUT_MAX=3600
# Timezone
GENERIC_TIMEZONE=America/New_York
Generate encryption key: openssl rand -hex 16
3. Create docker-compose.yml:
nano /opt/n8n/docker-compose.yml
version: '3.8'

services:
  cloud-sql-proxy:
    image: gcr.io/cloudsql-docker/gce-proxy:latest
    command:
      - "/cloud_sql_proxy"
      - "-instances=n8n-automation-prod:us-central1:n8n-db=tcp:0.0.0.0:5432"
      - "-ip_address_types=PRIVATE"
    restart: unless-stopped
    network_mode: host

  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    env_file:
      - /opt/n8n/config/n8n.env
    volumes:
      - /opt/n8n/data:/home/node/.n8n
    depends_on:
      - cloud-sql-proxy
    network_mode: host
Replace the Cloud SQL instance connection name with yours from Step 2.
4. Start the services:
cd /opt/n8n
docker compose up -d
5. Verify deployment:
# Check container status
docker compose ps
# View logs
docker compose logs -f n8n
You should see "Editor is now accessible via: http://localhost:5678/"
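You can also check n8n's health endpoint from the VM (recent n8n versions expose /healthz; if yours does not, a plain request to the editor URL serves the same purpose):

```shell
# A 200 response with a status of "ok" indicates the service is up
curl -s http://localhost:5678/healthz
```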
Step 7: Configure SSL with Caddy
1. Install Caddy:
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
2. Configure Caddy:
sudo nano /etc/caddy/Caddyfile
your-domain.com {
    reverse_proxy localhost:5678
    encode gzip
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Frame-Options "SAMEORIGIN"
        X-Content-Type-Options "nosniff"
    }
}
Replace your-domain.com with your actual domain.
3. Restart Caddy:
sudo systemctl restart caddy
sudo systemctl enable caddy
Caddy automatically provisions Let's Encrypt SSL certificates.
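Once DNS points at the static IP, you can confirm the certificate and security headers from any machine (replace the domain with yours):

```shell
# Expect a successful response plus the Strict-Transport-Security
# header configured in the Caddyfile
curl -sI https://your-domain.com | head -n 15
```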
Step 8: Set Up Automated Backups
1. Create backup script:
sudo nano /usr/local/bin/backup-n8n.sh
#!/bin/bash
set -euo pipefail

BACKUP_DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="/tmp/n8n_backup_${BACKUP_DATE}.sql"

# Export database (requires postgresql-client: sudo apt-get install -y postgresql-client)
PGPASSWORD=YOUR_SECURE_PASSWORD_HERE pg_dump -h localhost -U n8n_app -d n8n > "$BACKUP_FILE"

# Upload to Cloud Storage
gsutil cp "$BACKUP_FILE" gs://n8n-backups-prod-12345/

# Clean up local file
rm "$BACKUP_FILE"

# Delete the backup taken exactly 30 days ago (|| true so cron doesn't
# report an error on days when no matching file exists)
gsutil -m rm gs://n8n-backups-prod-12345/n8n_backup_$(date -d '30 days ago' +%Y%m%d)*.sql || true
2. Make executable and schedule:
sudo chmod +x /usr/local/bin/backup-n8n.sh
# Add to crontab (daily at 2 AM)
(crontab -l 2>/dev/null; echo "0 2 * * * /usr/local/bin/backup-n8n.sh") | crontab -
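Run the script once by hand before trusting cron with it (assumes the bucket name from Step 3):

```shell
# Execute a backup immediately
sudo /usr/local/bin/backup-n8n.sh

# The newest backup object should appear at the end of the listing
gsutil ls gs://n8n-backups-prod-12345/ | tail -n 5
```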
Step 9: Initial n8n Configuration
1. Access n8n:
Navigate to https://your-domain.com in your browser.
2. Create owner account:
Set a strong password (20+ characters). This is your primary admin account.
3. Configure credentials encryption:
Go to Settings > Security. Verify the encryption key matches your environment file.
4. Test database connection:
Create a simple workflow with a Schedule trigger and Execute Command node. Run it manually to confirm database writes.
Production Checklist
Before going live, verify:
- [ ] SSL certificate is active (check browser padlock)
- [ ] Database backups run successfully (check Cloud Storage bucket)
- [ ] Firewall rules restrict SSH to your IP only
- [ ] n8n encryption key is stored in password manager
- [ ] Cloud SQL instance has automated backups enabled
- [ ] VM instance has automatic restart enabled
- [ ] Monitoring alerts configured in GCP Console
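Two of the checklist items can be set directly from the CLI (a sketch using this guide's resource names):

```shell
# Enable Cloud SQL automated daily backups, starting at 03:00 UTC
gcloud sql instances patch n8n-db --backup-start-time=03:00

# Make the VM restart automatically after crashes or host maintenance
gcloud compute instances set-scheduling n8n-server \
  --zone=us-central1-a \
  --restart-on-failure
```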
Cost estimate: This setup runs approximately $120-150/month for moderate usage (2-3 million workflow executions).
Your n8n instance is now production-ready on GCP with enterprise-grade security, automated backups, and SSL encryption.

Reviewed by Revenue Institute
This guide is actively maintained and reviewed by the implementation experts at Revenue Institute. As the creators of The AI Workforce Playbook, we test and deploy these exact frameworks for professional services firms scaling without new headcount.
Need help turning this guide into reality? Revenue Institute builds and implements the AI workforce for professional services firms.