How to Set Up a Virtual Machine & AI Server
A step-by-step guide to provisioning a virtual machine, configuring a production server for n8n and AI workloads, and running local AI models - covering DigitalOcean, AWS, and GCP setup with NGINX, Docker, and Ollama.
Most AI workflow infrastructure for professional services firms runs on a single virtual machine - a rented server in the cloud that costs $12–20/month and handles n8n automation, AI model API calls, and (optionally) local model hosting.
Prerequisites
- An account on DigitalOcean, AWS (EC2), or Google Cloud (Compute Engine)
- A domain name (optional but recommended for HTTPS)
- SSH client (built into macOS/Linux terminal; Windows users use PuTTY or WSL)
Recommended minimum specs:
- 2 vCPUs, 4GB RAM - sufficient for n8n with standard workflow volume (up to ~50k executions/month)
- 4 vCPUs, 8GB RAM - for n8n plus small local LLMs (3B–7B parameter models)
- GPU instance - required for running 13B+ parameter models locally
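Once a server is running, you can confirm it actually has the specs you paid for using standard Linux tools:

```shell
# CPU cores visible to the OS
nproc

# Total and available memory
free -h

# Disk space on the root filesystem
df -h /
```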
Step 1: Provision the Virtual Machine
DigitalOcean (Recommended for Simplicity)
- Log in to DigitalOcean → Create → Droplets
- Choose Ubuntu 22.04 LTS as the OS
- Select plan: Basic, Regular, $18/month (2 vCPUs, 4GB RAM, 80GB SSD) - adequate for most professional services firms
- Choose a datacenter region closest to your primary office
- Under Authentication, select SSH Key and add your public key (safer than password auth)
- Click Create Droplet
Your server IP address will appear on the dashboard within 60 seconds.
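The same droplet can also be created from the command line with DigitalOcean's doctl CLI. This is a sketch, assuming doctl is installed and authenticated, and that an SSH key has already been uploaded to the account (its fingerprint goes in SSH_KEY_FINGERPRINT):

```shell
# Create a 2 vCPU / 4GB Ubuntu 22.04 droplet matching the plan above
doctl compute droplet create n8n-server \
  --region nyc1 \
  --size s-2vcpu-4gb \
  --image ubuntu-22-04-x64 \
  --ssh-keys "$SSH_KEY_FINGERPRINT" \
  --wait

# List droplets to get the public IP
doctl compute droplet list --format Name,PublicIPv4
```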
AWS EC2
- EC2 → Launch Instance
- AMI: Ubuntu Server 22.04 LTS
- Instance type: t3.medium (2 vCPU, 4GB RAM)
- Key pair: Create new or select existing
- Security group: Allow SSH (port 22), HTTP (port 80), HTTPS (port 443)
- Launch Instance
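The EC2 launch can likewise be scripted with the AWS CLI. A minimal sketch, assuming the CLI is configured; the AMI ID, key pair name, and security group ID below are placeholders you substitute with your own values:

```shell
# Placeholders: look up the Ubuntu 22.04 LTS AMI ID for your region,
# and use your own key pair and security group.
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t3.medium \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef0 \
  --count 1

# Fetch the public IP once the instance is running
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicIpAddress" \
  --output text
```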
Google Cloud Compute Engine
See the dedicated n8n GCP setup guide for GCP-specific provisioning steps.
Step 2: Initial Server Configuration
SSH into your server:
ssh root@YOUR_SERVER_IP
Update the system:
apt update && apt upgrade -y
Create a non-root user:
adduser deploy
usermod -aG sudo deploy
rsync --archive --chown=deploy:deploy ~/.ssh /home/deploy
Configure the firewall:
ufw allow OpenSSH
ufw allow 80
ufw allow 443
ufw enable
Switch to your new user:
su - deploy
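Before moving on, it is worth confirming the firewall and the new user are set up as expected. Run these as the deploy user:

```shell
# Confirm you are operating as the non-root user
whoami

# Confirm the user is in the sudo group
groups

# Confirm the firewall is active with only SSH, 80, and 443 open
sudo ufw status verbose
```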
Step 3: Install Docker
Docker is the recommended deployment method for n8n and most AI tools on a VPS.
# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Add your user to the docker group
sudo usermod -aG docker $USER
# Apply group change without logging out
newgrp docker
# Verify
docker --version
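To confirm the Docker daemon works end to end (not just that the binary is on the PATH), run the standard hello-world container:

```shell
# Pulls a tiny test image and runs it; prints "Hello from Docker!" on success
docker run --rm hello-world

# Print the running daemon's version
docker info --format '{{.ServerVersion}}'
```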
Step 4: Deploy n8n with Docker
# Create a directory for n8n data
mkdir ~/n8n-data
# Run n8n
docker run -d \
--name n8n \
--restart unless-stopped \
-p 5678:5678 \
-e N8N_BASIC_AUTH_ACTIVE=true \
-e N8N_BASIC_AUTH_USER=admin \
-e N8N_BASIC_AUTH_PASSWORD=your_secure_password \
-e N8N_HOST=your.domain.com \
-e N8N_PROTOCOL=https \
-e WEBHOOK_URL=https://your.domain.com/ \
-v ~/n8n-data:/home/node/.n8n \
n8nio/n8n
Replace your.domain.com with your domain (or the server IP for initial testing). Replace your_secure_password with a strong password.
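Before putting NGINX in front of it, check that the container actually started and is answering on port 5678:

```shell
# Container should show as "Up"
docker ps --filter name=n8n

# Tail the logs for startup errors
docker logs --tail 20 n8n

# Expect an HTTP status code back (possibly 401 from basic auth)
curl -sS -o /dev/null -w '%{http_code}\n' http://localhost:5678
```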
Step 5: Configure NGINX as a Reverse Proxy
NGINX sits in front of n8n and handles HTTPS termination.
sudo apt install nginx -y
Create the NGINX config:
sudo nano /etc/nginx/sites-available/n8n
Paste:
server {
listen 80;
server_name your.domain.com;
location / {
proxy_pass http://localhost:5678;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_cache_bypass $http_upgrade;
}
}
Enable the config and test:
sudo ln -s /etc/nginx/sites-available/n8n /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx
Add HTTPS with Let's Encrypt (requires a domain pointed to your server IP):
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d your.domain.com
Follow the prompts. Certbot automatically configures NGINX for HTTPS and sets up auto-renewal.
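After Certbot finishes, verify both the certificate and the renewal schedule (substitute your.domain.com):

```shell
# Expect an HTTP response over HTTPS (a 401 from n8n basic auth is fine)
curl -I https://your.domain.com

# Certbot schedules renewal via a systemd timer; confirm it is listed
systemctl list-timers | grep certbot

# Simulate a renewal without touching the real certificate
sudo certbot renew --dry-run
```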
Step 6: (Optional) Install Ollama for Local AI Models
Ollama runs open source LLMs locally on your server. Install it with the official script:
curl -fsSL https://ollama.com/install.sh | sh
Pull a model:
# Lightweight, fast (good for structured extraction tasks)
ollama pull llama3.1:8b
# More capable (needs roughly 40GB+ RAM; a GPU instance is strongly recommended)
ollama pull llama3.1:70b
Verify it runs:
ollama run llama3.1:8b "Summarize this in one sentence: The quick brown fox jumped over the lazy dog."
Connecting Ollama to n8n:
In n8n, when configuring an AI/LLM node to use Ollama, set the base URL to http://localhost:11434. n8n will then call your local Ollama instance rather than an external API.
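Under the hood, Ollama exposes an HTTP API on port 11434, which is the same endpoint n8n talks to. You can exercise it directly with curl; with stream set to false, the generate endpoint returns a single JSON object:

```shell
# POST a prompt to the local Ollama API
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Reply with the single word: pong",
  "stream": false
}'
```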
For model capability benchmarks and selection guidance, see The Best LLM Models: Proprietary vs. Open Source.
Maintenance Checklist
| Task | Frequency | Command |
|---|---|---|
| OS security updates | Monthly | sudo apt update && sudo apt upgrade -y |
| n8n version update | Monthly | docker pull n8nio/n8n, then docker stop n8n && docker rm n8n and re-run the docker run command from Step 4 (a restart alone does not pick up the new image) |
| SSL cert renewal | Automatic | sudo certbot renew --dry-run (verify) |
| Database backup | Weekly | See n8n Backup Guide |
| Disk space check | Monthly | df -h |
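The weekly database backup in the table above can be as simple as a dated tar of the n8n data directory, which holds the SQLite database and credentials in the default setup. A minimal sketch; the backup path and retention count are arbitrary choices:

```shell
# Data directory from Step 4; mkdir -p is a no-op if it already exists
mkdir -p "$HOME/n8n-data"

BACKUP_DIR="$HOME/n8n-backups"
mkdir -p "$BACKUP_DIR"

# Dated, compressed archive of the whole data directory
tar -czf "$BACKUP_DIR/n8n-data-$(date +%F).tar.gz" -C "$HOME" n8n-data

# Keep only the 8 most recent backups
ls -1t "$BACKUP_DIR"/n8n-data-*.tar.gz | tail -n +9 | xargs -r rm --
```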
For production hardening - rate limiting, IP allowlists, encryption at rest - see the n8n Security Hardening Guide.

Reviewed by Revenue Institute
This guide is actively maintained and reviewed by the implementation experts at Revenue Institute. As the creators of The AI Workforce Playbook, we test and deploy these exact frameworks for professional services firms scaling without new headcount.
Related Reading
- Getting Started: n8n Self-Hosting Setup Guide (DigitalOcean)
- Getting Started: n8n Self-Hosting Setup Guide (GCP)
- Getting Started: n8n Self-Hosting Setup Guide (Azure)
- Security & Compliance: n8n Security Hardening Guide
- Platform Comparisons: The Best LLM Models: Proprietary vs. Open Source