How to Use the AI/LLM Node in n8n (Claude/Anthropic)
The Anthropic node in n8n gives you direct access to Claude models inside your workflows. This means you can automate client intake forms, generate case summaries, draft engagement letters, or analyze contracts without leaving your automation stack.
This guide shows you exactly how to configure the node, which parameters matter, and how to build production-ready workflows that won't waste tokens or produce garbage output.
What You Need Before Starting
Required:
- Active n8n instance (cloud or self-hosted version 1.0+)
- Anthropic API key from console.anthropic.com
- Basic understanding of n8n workflow canvas
Get your API key:
- Sign up at console.anthropic.com
- Navigate to the API Keys section
- Click "Create Key"
- Copy the key immediately (it only displays once)
- Set usage limits under Settings > Billing to avoid surprise charges
Step 1: Add and Configure the Anthropic Node
Add the node to your workflow:
- Open your n8n workflow canvas
- Click the + button to add a node
- Search for "Anthropic" or "Claude"
- Select "Anthropic Chat Model" (not the legacy "Anthropic" node)
Connect your API key:
- Click the "Credential to connect with" dropdown
- Select "Create New Credential"
- Paste your Anthropic API key into the API Key field
- Name it something memorable like "Anthropic Production Key"
- Click "Save"
The credential is now available across all workflows in your n8n instance.
Step 2: Configure Core Node Parameters
The Anthropic node has six parameters that control output quality and cost. Here's what each one does and when to adjust it.
Model Selection:
- claude-3-5-sonnet-20241022: Best balance of speed, cost, and quality. Use this for 90% of tasks.
- claude-3-opus-20240229: Highest quality, slowest, most expensive. Use for complex legal analysis or high-stakes client communications.
- claude-3-haiku-20240307: Fastest and cheapest. Use for simple classification, data extraction, or high-volume tasks.
Prompt (required): This is your instruction to Claude. Be specific. Bad: "Summarize this." Good: "Extract client name, matter type, and key deadlines from this intake form. Return as JSON."
You can reference data from previous nodes using expressions: `{{ $json.email_body }}`
Max Tokens:
- Controls maximum response length
- 1 token ≈ 4 characters in English
- Default is 1024 (about 750 words)
- Set to 4096 for long-form content
- Set to 256 for short classifications or extractions
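To pick a sensible max_tokens budget before deploying, the 4-characters-per-token rule of thumb can be sketched in plain JavaScript (the same language n8n Code nodes use). The helper names and the 2x safety margin below are illustrative assumptions, not part of n8n or the Anthropic API:

```javascript
// Rough token estimate: ~4 characters per token for English text.
// This is a heuristic, not Anthropic's real tokenizer.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Pick a max_tokens budget with headroom over the expected reply length.
function suggestMaxTokens(expectedWords) {
  const tokens = Math.ceil(expectedWords * 1.4); // ~1.4 tokens per English word
  return Math.min(4096, Math.max(256, tokens * 2)); // 2x safety margin, clamped
}

const email = "Please review the attached contract before Friday.";
console.log(estimateTokens(email));  // 13
console.log(suggestMaxTokens(750));  // 2100
```

Running input text through a check like this before the Anthropic node helps you spot prompts that will blow past your token budget.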
Temperature (0.0 to 1.0):
- Controls randomness and creativity
- 0.0 = deterministic, consistent output (use for data extraction, classification)
- 0.7 = balanced creativity (use for drafting emails, summaries)
- 1.0 = maximum variation (use for brainstorming, creative content)
Top P (0.0 to 1.0):
- Alternative to temperature for controlling randomness
- 0.9 is a safe default
- Don't adjust both temperature and top_p simultaneously
Stop Sequences:
- Optional array of strings that halt generation
- Example: `["---END---", "\n\n\n"]`, useful for structured output or preventing runaway responses
Step 3: Build Your First Working Workflow
Here's a complete workflow that processes client intake emails and extracts structured data.
Workflow structure:
- Email Trigger (Gmail, Outlook, or IMAP)
- Anthropic Chat Model node
- Set node (to structure the output)
- Airtable/Google Sheets node (to store results)
Configure the Anthropic node:
Prompt:
Extract the following information from this client intake email:
- Client full name
- Company name (if mentioned)
- Matter type (litigation, M&A, employment, real estate, other)
- Urgency level (high, medium, low)
- Key dates mentioned
- Budget mentioned (if any)
Email content:
`{{ $json.body }}`
Return your response as valid JSON with these exact keys: client_name, company, matter_type, urgency, dates, budget. If information is not present, use null.
Settings:
- Model: claude-3-5-sonnet-20241022
- Max Tokens: 512
- Temperature: 0.2
- Top P: 0.9
Expected output:
{
  "client_name": "Sarah Chen",
  "company": "TechStart Inc",
  "matter_type": "M&A",
  "urgency": "high",
  "dates": ["2024-03-15 board meeting", "2024-03-30 closing deadline"],
  "budget": "$50,000-75,000"
}
Step 4: Handle Common Output Issues
Problem: Claude returns markdown formatting instead of clean JSON
Solution: Add this to your prompt:
Return ONLY the JSON object. Do not include markdown code blocks, explanations, or any text outside the JSON structure.
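Belt and braces: even with that instruction, a Code node after Anthropic can strip any markdown fence before parsing. A minimal sketch (the `extractJson` helper is illustrative, not an n8n built-in):

```javascript
// Strip a markdown code fence (```json ... ```) if Claude wraps its JSON anyway,
// then parse the remaining body.
function extractJson(text) {
  const fenced = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  const body = fenced ? fenced[1] : text;
  return JSON.parse(body.trim());
}

const wrapped = '```json\n{"urgency": "high"}\n```';
console.log(extractJson(wrapped).urgency);            // "high"
console.log(extractJson('{"urgency": "low"}').urgency); // "low"
```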
Problem: Inconsistent field names or structure
Solution: Provide an example in your prompt:
Example output format:
{
"client_name": "John Smith",
"matter_type": "litigation",
"urgency": "medium"
}
Problem: Response gets cut off mid-sentence
Solution: Increase max_tokens or add a completion check. Insert a Code node after Anthropic:
// Anthropic reports why generation stopped in stop_reason;
// 'end_turn' means the model finished naturally, 'max_tokens' means it was cut off.
if ($json.stop_reason !== 'end_turn') {
  throw new Error('Response truncated - increase max_tokens');
}
return $input.all();
Production-Ready Use Cases
Use Case 1: Contract Clause Extraction
Workflow: PDF → Extract Text → Anthropic → Database
Anthropic Configuration:
- Model: claude-3-5-sonnet-20241022
- Max Tokens: 2048
- Temperature: 0.1
Prompt:
Analyze this contract and extract:
1. Termination clauses (section and exact text)
2. Liability caps (amounts and conditions)
3. Indemnification provisions
4. Governing law and jurisdiction
5. Notice requirements
Contract text:
`{{ $json.contract_text }}`
Format as JSON with arrays for each category. Include section references.
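Downstream, the per-category arrays in that JSON can be flattened into one database row per clause before the final storage node. A sketch of that Code node step (the `section` and `text` field names are assumptions about the JSON shape you asked Claude for):

```javascript
// Flatten Claude's per-category clause arrays into one flat row per clause,
// ready for a database or spreadsheet node.
function flattenClauses(result) {
  const rows = [];
  for (const [category, clauses] of Object.entries(result)) {
    for (const clause of clauses) {
      rows.push({ category, section: clause.section, text: clause.text });
    }
  }
  return rows;
}

const sample = {
  termination: [{ section: '12.1', text: 'Either party may terminate...' }],
  liability_caps: [{ section: '9.3', text: 'Liability shall not exceed...' }],
};
console.log(flattenClauses(sample).length); // 2
```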
Use Case 2: Client Email Triage and Routing
Workflow: Email Trigger → Anthropic → Switch Node → Route to Teams
Anthropic Configuration:
- Model: claude-3-haiku-20240307 (fast and cheap for classification)
- Max Tokens: 128
- Temperature: 0.0
Prompt:
Classify this email into exactly one category:
- URGENT_LITIGATION (active lawsuit, court deadline, emergency motion)
- URGENT_COMPLIANCE (regulatory deadline, audit request)
- NEW_MATTER (new client, new engagement)
- EXISTING_MATTER (ongoing work, routine update)
- BILLING (invoice question, payment issue)
- ADMINISTRATIVE (scheduling, general inquiry)
Email subject: `{{ $json.subject }}`
Email body: `{{ $json.body }}`
Return only the category name, nothing else.
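Even at temperature 0.0, the returned label can carry stray whitespace, casing, or punctuation that breaks exact-match routing. A Code node between Anthropic and the Switch node can normalize it; the ADMINISTRATIVE fallback below is a design choice of this sketch, not required:

```javascript
// Normalize Claude's classification label before exact-match routing in a Switch node.
const VALID = [
  'URGENT_LITIGATION', 'URGENT_COMPLIANCE', 'NEW_MATTER',
  'EXISTING_MATTER', 'BILLING', 'ADMINISTRATIVE',
];

function normalizeCategory(raw) {
  const label = String(raw).trim().toUpperCase().replace(/[^A-Z_]/g, '');
  // Fall back to ADMINISTRATIVE so unexpected labels still get routed somewhere.
  return VALID.includes(label) ? label : 'ADMINISTRATIVE';
}

console.log(normalizeCategory('  urgent_litigation\n')); // "URGENT_LITIGATION"
console.log(normalizeCategory('Billing.'));              // "BILLING"
console.log(normalizeCategory('something else'));        // "ADMINISTRATIVE"
```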
Use Case 3: Engagement Letter Generator
Workflow: Form Submission → Anthropic → Google Docs → Email
Anthropic Configuration:
- Model: claude-3-5-sonnet-20241022
- Max Tokens: 3072
- Temperature: 0.4
Prompt:
Draft an engagement letter for a law firm with these details:
Client: `{{ $json.client_name }}`
Matter: `{{ $json.matter_description }}`
Scope: `{{ $json.scope_of_work }}`
Fee Structure: `{{ $json.fee_arrangement }}`
Key Team Members: `{{ $json.team_members }}`
Include:
1. Scope of representation (specific and limited)
2. Fee arrangement and billing terms
3. Client responsibilities
4. Conflicts disclosure
5. Termination provisions
6. Standard disclaimers
Use professional but accessible language. Format with clear section headers.
Cost Management and Token Optimization
Estimate costs before deploying:
- Claude 3.5 Sonnet: $3 per million input tokens, $15 per million output tokens
- Average client email (500 words) = ~650 tokens input
- Average extraction response = ~200 tokens output
- Cost per email processed: ~$0.005
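Those per-email numbers can be verified with a few lines of arithmetic using the Sonnet prices above:

```javascript
// Claude 3.5 Sonnet pricing from above: $3 per 1M input tokens, $15 per 1M output.
const INPUT_PER_TOKEN = 3 / 1e6;
const OUTPUT_PER_TOKEN = 15 / 1e6;

function costPerRun(inputTokens, outputTokens) {
  return inputTokens * INPUT_PER_TOKEN + outputTokens * OUTPUT_PER_TOKEN;
}

// Typical intake email: ~650 input tokens, ~200 output tokens.
console.log(costPerRun(650, 200));         // ≈ 0.005 dollars per email
console.log(costPerRun(650, 200) * 10000); // monthly spend at 10,000 emails
```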
Reduce token usage:
- Truncate input text to relevant sections only
- Use Haiku for simple tasks (roughly 12x cheaper than Sonnet per input token, and far cheaper still than Opus)
- Set conservative max_tokens limits
- Cache system prompts when using the same instructions repeatedly
Add usage monitoring: Insert a Code node after Anthropic to log token usage:
// Token usage comes back on the node output; the multipliers are the per-token
// equivalents of $3/M input and $15/M output (Claude 3.5 Sonnet).
const usage = $json.usage;
const cost = (usage.input_tokens * 0.000003) + (usage.output_tokens * 0.000015);

return [{
  json: {
    workflow_id: $workflow.id,
    tokens_used: usage.input_tokens + usage.output_tokens,
    estimated_cost: cost,
    timestamp: new Date().toISOString()
  }
}];
Send this data to a Google Sheet or database for monthly cost tracking.
Error Handling and Reliability
Add retry logic for API failures:
- Click the Anthropic node settings (gear icon)
- Enable "Retry On Fail"
- Set "Max Tries" to 3
- Set "Wait Between Tries" to 5000ms
Handle rate limits: Anthropic enforces rate limits based on your tier. If you hit limits, add a Wait node before the Anthropic node:
Wait Time: 1000ms (1 second between requests)
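If a fixed wait isn't enough, exponential backoff spreads retries out further each attempt. A sketch of the delay schedule (the base delay and cap are assumptions; n8n's built-in Retry On Fail uses a fixed wait between tries):

```javascript
// Exponential backoff delays for retrying after a 429 rate-limit response.
// The delay doubles per attempt and is capped at 30 seconds.
function backoffDelays(maxTries, baseMs = 1000, capMs = 30000) {
  const delays = [];
  for (let attempt = 0; attempt < maxTries; attempt++) {
    delays.push(Math.min(capMs, baseMs * 2 ** attempt));
  }
  return delays;
}

console.log(backoffDelays(5)); // [1000, 2000, 4000, 8000, 16000]

// Inside an n8n Code node you would sleep between tries, e.g.:
// await new Promise(resolve => setTimeout(resolve, delayMs));
```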
Validate output structure: Add a Code node after Anthropic to verify JSON structure:
const response = $json.response;

let parsed;
try {
  parsed = JSON.parse(response);
} catch (e) {
  throw new Error('Invalid JSON response from Claude');
}

const required = ['client_name', 'matter_type', 'urgency'];
for (const field of required) {
  if (!parsed[field]) {
    throw new Error(`Missing required field: ${field}`);
  }
}

return [{ json: parsed }];
Bottom Line
The Anthropic node transforms n8n from a simple automation tool into an intelligent document processor. Start with the Sonnet model for general tasks, use Haiku for high-volume classification, and reserve Opus for complex analysis where accuracy is critical.
Your first workflow should be simple: email in, structured data out. Once that works reliably, expand to contract analysis, document generation, and client communication drafting.
Monitor your token usage religiously. A poorly configured workflow can burn through your API budget fast.

Reviewed by Revenue Institute
This guide is actively maintained and reviewed by the implementation experts at Revenue Institute. As the creators of The AI Workforce Playbook, we test and deploy these exact frameworks for professional services firms scaling without new headcount.