How to Use the AI/LLM Node in n8n (OpenAI)
Configuring the OpenAI node: API key, system message, user message, model selection, JSON output.
The OpenAI node in n8n connects your workflows directly to GPT models. This guide covers the exact configuration steps, model selection criteria, and three production-ready implementations you can deploy today.
Get Your OpenAI API Key
You need an OpenAI API key before the node will run:
- Go to platform.openai.com and create an account
- Navigate to API Keys in the left sidebar
- Click "Create new secret key"
- Name it "n8n-production" or similar
- Copy the key immediately (it only displays once)
- Store it in your password manager
Cost warning: OpenAI charges per token. Set a monthly spending limit at platform.openai.com/account/billing/limits before running any workflows. Start with $10 to avoid surprise bills.
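To see how per-token pricing translates into a bill, here is a minimal cost estimator in Python. The rates are the per-1k-token prices quoted later in this guide; always check OpenAI's current pricing page before relying on them.

```python
# Rough cost estimator for a single OpenAI call. Rates are the per-1k-token
# prices quoted in this guide and may be out of date -- verify before use.
PRICES_PER_1K = {
    "gpt-3.5-turbo": {"input": 0.0005, "output": 0.0015},
    "gpt-4-turbo-preview": {"input": 0.01, "output": 0.03},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one API call."""
    rates = PRICES_PER_1K[model]
    return (input_tokens / 1000) * rates["input"] + (output_tokens / 1000) * rates["output"]

# A 2,000-token prompt with a 500-token response:
cheap = estimate_cost("gpt-3.5-turbo", 2000, 500)
pricey = estimate_cost("gpt-4-turbo-preview", 2000, 500)
```

At these rates the same call costs 20x more on gpt-4-turbo-preview, which is why a spending limit matters even for small workflows.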
Configure the OpenAI Node
Add the OpenAI node to your workflow canvas. You'll configure five critical fields.
1. API Key Setup
In the OpenAI node, click the "Credential to connect with" dropdown and select "Create New Credential."
Enter your API key in the credential form and save. The node uses this credential for every request.
Security note: Never hardcode API keys in node parameters or expressions. Use n8n's credential system so keys stay encrypted and out of exported workflow files.
2. System Message (The Control Layer)
The system message defines the AI's role, constraints, and output format. This is where you control quality.
Bad system message (vague, no constraints):
You are a helpful assistant.
Good system message (specific role, clear constraints):
You are a legal document analyzer for mid-market law firms. Extract key contract terms and flag non-standard clauses. Output must be valid JSON with fields: contract_type, parties, term_length_months, termination_clauses, red_flags. Use null for missing data. Never add commentary outside the JSON structure.
The system message stays constant across all executions. It's your quality control mechanism.
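Under the hood, the node combines both messages into a single chat-completions request. A minimal Python sketch of that payload shape (the SYSTEM_MESSAGE here is abbreviated from the example above):

```python
# Sketch of the chat-completions payload the OpenAI node assembles.
# The system message is fixed across executions; only the user message varies.
SYSTEM_MESSAGE = (
    "You are a legal document analyzer for mid-market law firms. "
    "Output must be valid JSON."
)

def build_payload(user_message: str, model: str = "gpt-3.5-turbo") -> dict:
    """Combine the constant system message with a per-execution user message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_payload("Analyze this contract and extract terms: ...")
```

The "system" role carries your quality controls; the "user" role carries the per-run data from upstream nodes.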
3. User Message (The Variable Input)
The user message contains the specific request that changes with each workflow execution. Reference data from previous nodes using n8n expressions.
Example with dynamic data:
Analyze this contract and extract terms:
`{{ $json.contract_text }}`
Focus on payment terms, liability caps, and termination rights.
The {{ $json.contract_text }} expression pulls data from the previous node's output. You can reference any field from upstream nodes.
Pro tip: Keep user messages under 2,000 words for GPT-3.5-turbo. For longer documents, split them across multiple nodes or use GPT-4-turbo with its 128k token context window.
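If you do split a long document across multiple executions, a simple word-boundary chunker is enough for most cases. A sketch, assuming a 2,000-word budget per chunk:

```python
def chunk_words(text: str, max_words: int = 2000) -> list:
    """Split text into chunks of at most max_words, breaking on word boundaries."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

# A 4,500-word document becomes three chunks: 2000, 2000, and 500 words.
chunks = chunk_words("word " * 4500, max_words=2000)
```

Each chunk then becomes one user message, and a downstream node merges the results.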
4. Model Selection (Performance vs. Cost)
n8n's OpenAI node supports OpenAI's current model lineup. Here's when to use each of the two main options:
gpt-4-turbo-preview
- Cost: $0.01/1k input tokens, $0.03/1k output tokens
- Use for: Complex analysis, multi-step reasoning, code generation
- Speed: 20-40 seconds for 500-word outputs
- Context: 128k tokens (roughly 96,000 words)
gpt-3.5-turbo
- Cost: $0.0005/1k input tokens, $0.0015/1k output tokens
- Use for: Simple extraction, classification, formatting
- Speed: 3-8 seconds for 500-word outputs
- Context: 16k tokens (roughly 12,000 words)
Decision framework: Start with gpt-3.5-turbo. If output quality is inconsistent or the task requires multi-step reasoning, upgrade to gpt-4-turbo-preview. The 20x cost difference matters at scale.
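To make "matters at scale" concrete, here is the same comparison run at volume, assuming a typical run of roughly 1,500 input and 500 output tokens and the per-1k rates listed above:

```python
# Monthly cost comparison at 10,000 executions, using the per-1k rates
# quoted above (verify against current OpenAI pricing).
def monthly_cost(runs: int, in_tok: int, out_tok: int,
                 in_rate: float, out_rate: float) -> float:
    """Total USD cost for `runs` executions at the given per-1k-token rates."""
    return runs * ((in_tok / 1000) * in_rate + (out_tok / 1000) * out_rate)

gpt35 = monthly_cost(10_000, 1500, 500, 0.0005, 0.0015)
gpt4t = monthly_cost(10_000, 1500, 500, 0.01, 0.03)
```

At this volume the gap is $15 versus $300 per month for the identical workload, so reserve the larger model for tasks that actually need its reasoning.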
Model deprecation: OpenAI retires models regularly. Check platform.openai.com/docs/deprecations quarterly and update your workflows. The node will fail when a model is deprecated.
5. JSON Output Configuration
Enable "JSON Mode" in the node settings to force structured output. This prevents the model from returning plain text when you need parseable data.
Without JSON Mode:
The contract is a Master Services Agreement between Acme Corp and Widget Inc...
With JSON Mode enabled:
{
"contract_type": "Master Services Agreement",
"parties": ["Acme Corp", "Widget Inc"],
"term_length_months": 24,
"termination_clauses": ["30-day notice", "Material breach"],
"red_flags": ["Unlimited liability", "Auto-renewal without notice"]
}
Critical requirement: When JSON Mode is enabled, your system message MUST explicitly request JSON output. The model will error without this instruction.
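Even with JSON Mode on, treat the response as untrusted until you've parsed it and confirmed the expected fields. A hypothetical downstream check, using the field names from the system message above:

```python
import json

# Field names from the contract-analyzer system message in this guide.
REQUIRED_FIELDS = {"contract_type", "parties", "term_length_months",
                   "termination_clauses", "red_flags"}

def parse_model_output(raw: str) -> dict:
    """Parse JSON-mode output and verify the expected fields are present."""
    data = json.loads(raw)  # raises ValueError if the output is not valid JSON
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError("model output missing fields: %s" % sorted(missing))
    return data

result = parse_model_output(
    '{"contract_type": "MSA", "parties": ["Acme Corp", "Widget Inc"], '
    '"term_length_months": 24, "termination_clauses": [], "red_flags": []}'
)
```

In n8n you'd put equivalent logic in a Code node or an IF node between the OpenAI node and whatever consumes its output.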
Three Production Workflows
Workflow 1: Client Intake Form Processing
Use case: Extract structured data from unstructured client intake responses.
Nodes:
- Webhook (trigger) - receives form submission
- OpenAI node - extracts structured data
- Airtable node - writes to client database
OpenAI node configuration:
- Model: gpt-3.5-turbo
- System message:
Extract client information from intake form responses. Return valid JSON with fields: company_name, industry, employee_count (number), primary_contact_name, primary_contact_email, services_interested (array), estimated_budget_usd (number), urgency (low/medium/high). Use null for missing data.
- User message:
`{{ $json.form_response }}`
Expected output:
{
"company_name": "Riverside Manufacturing",
"industry": "Industrial Equipment",
"employee_count": 450,
"primary_contact_name": "Sarah Chen",
"primary_contact_email": "schen@riverside-mfg.com",
"services_interested": ["Tax Planning", "Audit Services"],
"estimated_budget_usd": 75000,
"urgency": "medium"
}
Cost per execution: $0.002-0.005 (under a penny)
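Before writing to Airtable, it's worth normalizing the extraction, since models occasionally return numbers as strings or shout the urgency value. A hypothetical cleanup step, matching the field names in the system message above:

```python
# Hypothetical normalization before the Airtable write: coerce numeric fields
# and lowercase urgency, tolerating the nulls the system message allows.
def normalize_intake(data: dict) -> dict:
    out = dict(data)
    for field in ("employee_count", "estimated_budget_usd"):
        out[field] = int(out[field]) if out.get(field) is not None else None
    urgency = (out.get("urgency") or "medium").lower()
    out["urgency"] = urgency if urgency in {"low", "medium", "high"} else "medium"
    return out

row = normalize_intake({"employee_count": "450", "estimated_budget_usd": None,
                        "urgency": "HIGH"})
```

This keeps a single malformed extraction from corrupting your client database.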
Workflow 2: Contract Clause Extraction
Use case: Pull specific clauses from 20-page service agreements for compliance review.
Nodes:
- Google Drive trigger - monitors "Contracts/New" folder
- Extract from File node - converts PDF to text
- OpenAI node - extracts clauses
- Google Sheets node - logs results
- Slack node - alerts legal team if red flags found
OpenAI node configuration:
- Model: gpt-4-turbo-preview (complex reasoning required)
- System message:
You are a contract analyst for professional services firms. Extract these specific clauses: limitation of liability, indemnification, termination rights, payment terms, confidentiality obligations. Return valid JSON with each clause type as a key and the exact contract language as the value. If a clause is missing, use "NOT FOUND". Add a red_flags array listing any unusual or unfavorable terms.
- User message:
`{{ $json.contract_text }}`
Expected output:
{
"limitation_of_liability": "Provider's total liability shall not exceed fees paid in the 12 months preceding the claim.",
"indemnification": "Client agrees to indemnify Provider against third-party claims arising from Client's use of deliverables.",
"termination_rights": "Either party may terminate with 60 days written notice. Client pays for work completed through termination date.",
"payment_terms": "Net 30 from invoice date. 1.5% monthly interest on overdue amounts.",
"confidentiality_obligations": "Both parties agree to 5-year confidentiality period for proprietary information.",
"red_flags": [
"Indemnification clause is one-sided (only client indemnifies provider)",
"No cap on indemnification liability"
]
}
Cost per execution: $0.15-0.30 for a 20-page contract
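The Slack alert step is just a branch on whether red_flags came back non-empty. The IF-node condition, sketched in Python:

```python
# Branch logic for the Slack alert: notify legal only when red_flags
# is present and non-empty in the extraction.
def should_alert(extraction: dict) -> bool:
    flags = extraction.get("red_flags") or []
    return len(flags) > 0
```

An empty array, a null, or a missing key all skip the alert, so the legal team only hears about contracts that need attention.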
Workflow 3: Meeting Notes to Action Items
Use case: Convert rambling meeting transcripts into structured action items with owners and deadlines.
Nodes:
- Webhook trigger - receives transcript from Otter.ai or similar
- OpenAI node - extracts action items
- ClickUp node - creates tasks
- Email node - sends summary to attendees
OpenAI node configuration:
- Model: gpt-3.5-turbo
- System message:
Extract action items from meeting transcripts. Return valid JSON array where each item has: task (string), owner (string, use "Unassigned" if unclear), deadline (YYYY-MM-DD format, use null if not mentioned), priority (high/medium/low based on context). Only include explicit action items, not general discussion points.
- User message:
Meeting date: `{{ $json.meeting_date }}`
Attendees: `{{ $json.attendees }}`
Transcript:
`{{ $json.transcript }}`
Expected output:
[
{
"task": "Draft Q4 budget proposal with 3 scenarios",
"owner": "Michael",
"deadline": "2024-03-15",
"priority": "high"
},
{
"task": "Schedule client review meetings for top 10 accounts",
"owner": "Jennifer",
"deadline": "2024-03-08",
"priority": "medium"
},
{
"task": "Research new project management tools and present options",
"owner": "Unassigned",
"deadline": null,
"priority": "low"
}
]
Cost per execution: $0.01-0.03 for a 1-hour meeting transcript
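Since the system message asks for strict YYYY-MM-DD deadlines (or null), a small validation pass before the ClickUp node catches items where the model improvised. A hypothetical filter:

```python
from datetime import date

def valid_deadline(value) -> bool:
    """Accept null or a well-formed YYYY-MM-DD date; reject anything else."""
    if value is None:
        return True
    try:
        date.fromisoformat(value)
        return True
    except (TypeError, ValueError):
        return False

# Drop items whose deadline the model formatted incorrectly.
items = [{"task": "Draft Q4 budget", "deadline": "2024-03-15"},
         {"task": "Research tools", "deadline": None},
         {"task": "Bad item", "deadline": "next Friday"}]
clean = [i for i in items if valid_deadline(i["deadline"])]
```

Rejected items can be routed to a review queue instead of silently becoming malformed tasks.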
Common Configuration Mistakes
Mistake 1: Not setting max tokens
Set "Max Tokens" to 1000-2000 for most tasks. Without a limit, the model may generate excessive output and spike your costs.
Mistake 2: Ignoring temperature settings
Temperature controls randomness. Use 0.1-0.3 for extraction tasks (consistent output). Use 0.7-0.9 for creative tasks (varied output). The default is 0.7.
Mistake 3: No error handling
Add an IF node after the OpenAI node to check for errors. Route failures to a Slack notification or error log. The OpenAI API fails intermittently from rate limits, timeouts, and outages; without handling, one failed call stops the whole workflow.
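The IF-node condition amounts to checking the response for an error object or a missing result, sketched here in Python (the response shapes mirror OpenAI's chat-completions format):

```python
# Route an OpenAI chat-completions response to the 'ok' or 'error' branch.
# An error object or an empty/missing choices array both count as failures.
def route(response: dict) -> str:
    if "error" in response or not response.get("choices"):
        return "error"
    return "ok"

route({"choices": [{"message": {"content": "{}"}}]})  # -> 'ok'
route({"error": {"type": "rate_limit_exceeded"}})     # -> 'error'
```

The 'error' branch feeds your Slack notification or error log; the 'ok' branch continues to the downstream nodes.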
Mistake 4: Sending PII without review
OpenAI's API processes whatever text you send it. Strip or mask client names, email addresses, and other personally identifiable information before sending documents, and confirm your engagement terms and data processing agreements permit third-party AI processing.
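A minimal email-masking pass illustrates the idea; real PII handling needs much more than this (names, phone numbers, account numbers), but even a simple regex in a Code node reduces exposure:

```python
import re

# Minimal email-masking pass before text leaves your workflow.
# This is a sketch, not a complete PII solution.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def mask_emails(text: str) -> str:
    return EMAIL.sub("[EMAIL REDACTED]", text)

masked = mask_emails("Contact schen@riverside-mfg.com about the audit.")
```

Run the masking step between the document source and the OpenAI node so raw identifiers never reach the API.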
Testing Your Configuration
Before deploying to production:
- Run the workflow manually with test data
- Check the OpenAI node's output tab for the raw JSON response
- Verify the downstream nodes receive correctly formatted data
- Test with edge cases (missing data, unusual formats, very long inputs)
- Monitor the execution time and cost in n8n's execution log
Set up a separate "test" workflow that mirrors your production workflow but uses a different OpenAI credential with a $5 spending limit. This prevents test runs from consuming your production budget.

Reviewed by Revenue Institute
This guide is actively maintained and reviewed by the implementation experts at Revenue Institute. As the creators of The AI Workforce Playbook, we test and deploy these exact frameworks for professional services firms scaling without new headcount.
Revenue Institute
Need help turning this guide into reality? Revenue Institute builds and implements the AI workforce for professional services firms.