n8n Fundamentals

How to Use the AI/LLM Node in n8n (OpenAI)

Configuring the OpenAI node: API key, system message, user message, model selection, JSON output.


The OpenAI node in n8n connects your workflows directly to GPT models. This guide covers the exact configuration steps, model selection criteria, and three production-ready implementations you can deploy today.

Get Your OpenAI API Key

You need an API key before configuring any OpenAI node.

  1. Go to platform.openai.com and create an account
  2. Navigate to API Keys in the left sidebar
  3. Click "Create new secret key"
  4. Name it "n8n-production" or similar
  5. Copy the key immediately (it only displays once)
  6. Store it in your password manager

Cost warning: OpenAI charges per token. Set a monthly spending limit at platform.openai.com/account/billing/limits before running any workflows. Start with $10 to avoid surprise bills.

Configure the OpenAI Node

Add the OpenAI node to your workflow canvas. You'll configure five critical fields.

1. API Key Setup

In the OpenAI node, click the "Credential to connect with" dropdown and select "Create New Credential."

Enter your API key in the "API Key" field. Click "Save" to store it securely in n8n's credential system.

Security note: Never hardcode API keys in workflow JSON exports. Always use n8n's credential system.

2. System Message (The Control Layer)

The system message defines the AI's role, constraints, and output format. This is where you control quality.

Bad system message (vague, no constraints):

You are a helpful assistant.

Good system message (specific role, clear constraints):

You are a legal document analyzer for mid-market law firms. Extract key contract terms and flag non-standard clauses. Output must be valid JSON with fields: contract_type, parties, term_length_months, termination_clauses, red_flags. Use null for missing data. Never add commentary outside the JSON structure.

The system message stays constant across all executions. It's your quality control mechanism.

3. User Message (The Variable Input)

The user message contains the specific request that changes with each workflow execution. Reference data from previous nodes using n8n expressions.

Example with dynamic data:

Analyze this contract and extract terms:

`{{ $json.contract_text }}`

Focus on payment terms, liability caps, and termination rights.

The {{ $json.contract_text }} expression pulls data from the previous node's output. You can reference any field from upstream nodes.

Pro tip: Keep user messages under 2,000 words for GPT-3.5-turbo. For longer documents, split them across multiple nodes or use GPT-4-turbo with its 128k token context window.
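One way to handle the splitting mentioned above is a Code node placed before the OpenAI node. This is a hypothetical sketch, not part of the original workflow; the 2,000-word ceiling comes from the tip above:

```javascript
// Hypothetical n8n Code node: split a long document into ~2,000-word
// chunks so each chunk fits comfortably in gpt-3.5-turbo's context.
function chunkByWords(text, maxWords = 2000) {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks = [];
  for (let i = 0; i < words.length; i += maxWords) {
    chunks.push(words.slice(i, i + maxWords).join(" "));
  }
  return chunks;
}

// In a Code node you would return one n8n item per chunk, e.g.:
// return chunkByWords($json.contract_text).map(chunk => ({ json: { chunk } }));
```

The OpenAI node then runs once per item, so each chunk is analyzed in its own API call.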

4. Model Selection (Performance vs. Cost)

n8n's OpenAI node supports current models. Here's when to use each:

gpt-4-turbo-preview

  • Cost: $0.01/1k input tokens, $0.03/1k output tokens
  • Use for: Complex analysis, multi-step reasoning, code generation
  • Speed: 20-40 seconds for 500-word outputs
  • Context: 128k tokens (roughly 96,000 words)

gpt-3.5-turbo

  • Cost: $0.0005/1k input tokens, $0.0015/1k output tokens
  • Use for: Simple extraction, classification, formatting
  • Speed: 3-8 seconds for 500-word outputs
  • Context: 16k tokens (roughly 12,000 words)

Decision framework: Start with gpt-3.5-turbo. If output quality is inconsistent or the task requires multi-step reasoning, upgrade to gpt-4-turbo-preview. The 20x cost difference matters at scale.
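To see how that 20x gap plays out per execution, here is a back-of-the-envelope calculation using the per-1k-token prices quoted above (illustrative only; check OpenAI's pricing page for current rates):

```javascript
// Estimate the USD cost of one execution from token counts and the
// per-1k-token prices listed in the model comparison above.
function estimateCost(inputTokens, outputTokens, inPer1k, outPer1k) {
  return (inputTokens / 1000) * inPer1k + (outputTokens / 1000) * outPer1k;
}

// A 3,000-token prompt with a 700-token response:
const gpt35 = estimateCost(3000, 700, 0.0005, 0.0015); // ≈ $0.00255
const gpt4  = estimateCost(3000, 700, 0.01, 0.03);     // ≈ $0.051
```

At one execution this is pennies either way; at 10,000 executions per month it is the difference between roughly $25 and $510.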

Model deprecation: OpenAI retires models regularly. Check platform.openai.com/docs/deprecations quarterly and update your workflows. The node will fail when a model is deprecated.

5. JSON Output Configuration

Enable "JSON Mode" in the node settings to force structured output. This prevents the model from returning plain text when you need parseable data.

Without JSON Mode:

The contract is a Master Services Agreement between Acme Corp and Widget Inc...

With JSON Mode enabled:

{
  "contract_type": "Master Services Agreement",
  "parties": ["Acme Corp", "Widget Inc"],
  "term_length_months": 24,
  "termination_clauses": ["30-day notice", "Material breach"],
  "red_flags": ["Unlimited liability", "Auto-renewal without notice"]
}

Critical requirement: When JSON Mode is enabled, your system message MUST explicitly request JSON output. The model will error without this instruction.
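Even with JSON Mode on, it is worth validating the response before downstream nodes consume it. A hypothetical Code node placed after the OpenAI node might do this (the field names match the contract example above; `parseContractJson` is an illustrative name):

```javascript
// Hypothetical Code node after the OpenAI node: parse the model's reply
// and verify that every field the system message asked for is present.
function parseContractJson(raw) {
  let data;
  try {
    data = JSON.parse(raw);
  } catch (e) {
    throw new Error("Model did not return valid JSON: " + e.message);
  }
  const required = ["contract_type", "parties", "term_length_months",
                    "termination_clauses", "red_flags"];
  for (const field of required) {
    if (!(field in data)) throw new Error("Missing field: " + field);
  }
  return data;
}
```

Throwing here stops the workflow at the point of failure, which is easier to debug than a malformed record landing in your database.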

Three Production Workflows

Workflow 1: Client Intake Form Processing

Use case: Extract structured data from unstructured client intake responses.

Nodes:

  1. Webhook (trigger) - receives form submission
  2. OpenAI node - extracts structured data
  3. Airtable node - writes to client database

OpenAI node configuration:

  • Model: gpt-3.5-turbo
  • System message:
Extract client information from intake form responses. Return valid JSON with fields: company_name, industry, employee_count (number), primary_contact_name, primary_contact_email, services_interested (array), estimated_budget_usd (number), urgency (low/medium/high). Use null for missing data.
  • User message:
`{{ $json.form_response }}`

Expected output:

{
  "company_name": "Riverside Manufacturing",
  "industry": "Industrial Equipment",
  "employee_count": 450,
  "primary_contact_name": "Sarah Chen",
  "primary_contact_email": "schen@riverside-mfg.com",
  "services_interested": ["Tax Planning", "Audit Services"],
  "estimated_budget_usd": 75000,
  "urgency": "medium"
}

Cost per execution: $0.002-0.005 (under a penny)
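Before the Airtable write, a small normalization step guards against type drift in the model's output. This is a hypothetical addition to the workflow above; defaulting an unrecognized urgency to "medium" is one design choice, not the only one:

```javascript
// Hypothetical Code node between the OpenAI and Airtable nodes: coerce
// the extracted intake fields into the types the database expects.
function normalizeIntake(data) {
  const urgencies = ["low", "medium", "high"];
  const toNumber = v => (v == null || v === "" ? null : Number(v));
  return {
    ...data,
    employee_count: toNumber(data.employee_count),
    estimated_budget_usd: toNumber(data.estimated_budget_usd),
    urgency: urgencies.includes(data.urgency) ? data.urgency : "medium",
  };
}
```

Models occasionally return numbers as strings ("450" instead of 450); this step keeps the Airtable schema consistent either way.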

Workflow 2: Contract Clause Extraction

Use case: Pull specific clauses from 20-page service agreements for compliance review.

Nodes:

  1. Google Drive trigger - monitors "Contracts/New" folder
  2. Extract from File node - converts PDF to text
  3. OpenAI node - extracts clauses
  4. Google Sheets node - logs results
  5. Slack node - alerts legal team if red flags found

OpenAI node configuration:

  • Model: gpt-4-turbo-preview (complex reasoning required)
  • System message:
You are a contract analyst for professional services firms. Extract these specific clauses: limitation of liability, indemnification, termination rights, payment terms, confidentiality obligations. Return valid JSON with each clause type as a key and the exact contract language as the value. If a clause is missing, use "NOT FOUND". Add a red_flags array listing any unusual or unfavorable terms.
  • User message:
`{{ $json.contract_text }}`

Expected output:

{
  "limitation_of_liability": "Provider's total liability shall not exceed fees paid in the 12 months preceding the claim.",
  "indemnification": "Client agrees to indemnify Provider against third-party claims arising from Client's use of deliverables.",
  "termination_rights": "Either party may terminate with 60 days written notice. Client pays for work completed through termination date.",
  "payment_terms": "Net 30 from invoice date. 1.5% monthly interest on overdue amounts.",
  "confidentiality_obligations": "Both parties agree to 5-year confidentiality period for proprietary information.",
  "red_flags": [
    "Indemnification clause is one-sided (only client indemnifies provider)",
    "No cap on indemnification liability"
  ]
}

Cost per execution: $0.15-0.30 for a 20-page contract
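The Slack alert in step 5 needs a condition to branch on. One hypothetical helper for that IF node, using the output shape above, could alert when the model flagged terms or a required clause came back "NOT FOUND":

```javascript
// Hypothetical condition for the IF node feeding the Slack alert:
// notify legal when red flags exist or a required clause is missing.
function shouldAlertLegal(result) {
  const flagged = Array.isArray(result.red_flags) && result.red_flags.length > 0;
  const missingClause = Object.entries(result)
    .some(([key, value]) => key !== "red_flags" && value === "NOT FOUND");
  return flagged || missingClause;
}
```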

Workflow 3: Meeting Notes to Action Items

Use case: Convert rambling meeting transcripts into structured action items with owners and deadlines.

Nodes:

  1. Webhook
    trigger - receives transcript from Otter.ai or similar
  2. OpenAI node - extracts action items
  3. ClickUp node - creates tasks
  4. Email node - sends summary to attendees

OpenAI node configuration:

  • Model: gpt-3.5-turbo
  • System message:
Extract action items from meeting transcripts. Return valid JSON array where each item has: task (string), owner (string, use "Unassigned" if unclear), deadline (YYYY-MM-DD format, use null if not mentioned), priority (high/medium/low based on context). Only include explicit action items, not general discussion points.
  • User message:
Meeting date: `{{ $json.meeting_date }}`
Attendees: `{{ $json.attendees }}`

Transcript:
`{{ $json.transcript }}`

Expected output:

[
  {
    "task": "Draft Q4 budget proposal with 3 scenarios",
    "owner": "Michael",
    "deadline": "2024-03-15",
    "priority": "high"
  },
  {
    "task": "Schedule client review meetings for top 10 accounts",
    "owner": "Jennifer",
    "deadline": "2024-03-08",
    "priority": "medium"
  },
  {
    "task": "Research new project management tools and present options",
    "owner": "Unassigned",
    "deadline": null,
    "priority": "low"
  }
]

Cost per execution: $0.01-0.03 for a 1-hour meeting transcript
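Because the model returns one JSON array, the ClickUp node needs one n8n item per task. A hypothetical Code node between the OpenAI and ClickUp nodes could do the fan-out, mapping the priority names onto ClickUp's numeric scale (1 = urgent, 2 = high, 3 = normal, 4 = low):

```javascript
// Hypothetical Code node between the OpenAI and ClickUp nodes: emit one
// n8n item per action item in the shape the ClickUp node expects.
function toClickUpTasks(actionItems) {
  const priorityMap = { high: 2, medium: 3, low: 4 };
  return actionItems.map(item => ({
    json: {
      name: item.task,
      assignee: item.owner === "Unassigned" ? null : item.owner,
      due_date: item.deadline, // YYYY-MM-DD string or null
      priority: priorityMap[item.priority] ?? 3, // default to normal
    },
  }));
}
```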

Common Configuration Mistakes

Mistake 1: Not setting max tokens
Set "Max Tokens" to 1000-2000 for most tasks. Without a limit, the model may generate excessive output and spike your costs.

Mistake 2: Ignoring temperature settings
Temperature controls randomness. Use 0.1-0.3 for extraction tasks (consistent output). Use 0.7-0.9 for creative tasks (varied output). Default is 0.7.

Mistake 3: No error handling
Add an IF node after the OpenAI node to check for errors. Route failures to a Slack notification or error log. The OpenAI API fails occasionally due to rate limits or service issues.
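One sketch of that check, assuming the OpenAI node runs with "Continue On Fail" enabled so errors reach the next node as data (the error shape here is an assumption; inspect your own failed executions to confirm the fields):

```javascript
// Hypothetical Code node after the OpenAI node: classify failures so the
// IF node can route transient errors to a retry and the rest to Slack.
function classifyResponse(item) {
  if (!item.error) return { ok: true, retryable: false };
  const status = Number(item.error.httpCode);
  // 429 = rate limit, 5xx = OpenAI service issue; both are transient.
  return { ok: false, retryable: status === 429 || status >= 500 };
}
```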

Mistake 4: Sending PII without review
OpenAI's API terms allow them to use your data for model training unless you opt out. For client data, complete the opt-out form at platform.openai.com/docs/models/data-usage-policies or use Azure OpenAI Service (enterprise agreement required).

Testing Your Configuration

Before deploying to production:

  1. Run the workflow manually with test data
  2. Check the OpenAI node's output tab for the raw JSON response
  3. Verify the downstream nodes receive correctly formatted data
  4. Test with edge cases (missing data, unusual formats, very long inputs)
  5. Monitor the execution time and cost in n8n's execution log

Set up a separate "test" workflow that mirrors your production workflow but uses a credential from a separate OpenAI account or project with its own $5 spending limit (limits apply at the account or project level, not per key). This prevents test runs from consuming your production budget.


Reviewed by Revenue Institute

This guide is actively maintained and reviewed by the implementation experts at Revenue Institute. As the creators of The AI Workforce Playbook, we test and deploy these exact frameworks for professional services firms scaling without new headcount.


Need help turning this guide into reality? Revenue Institute builds and implements the AI workforce for professional services firms.

RevenueInstitute.com