The 12 Plays
Play 7 · Intermediate · ~20 min read

On-Demand Email Assistant

Generate a CRM-aware first draft of any client email in 30 seconds, with automated flagging of errors and tone issues before you send.

The business case

High-stakes client emails are easy to get wrong. The wrong tone with the wrong client at the wrong moment can damage a relationship that took years to build. A factual mistake in a proposal thread can cost you a deal. An internal note accidentally forwarded to the client it was about has ended more than one long-standing relationship. But the bigger issue is less dramatic and more constant: drafting careful replies to difficult client situations takes mental energy and time that your most senior people genuinely don't have. At 20 meaningful client emails per week per partner, drafting alone consumes 3 - 4 hours per week. This Play cuts that by 85% while simultaneously catching the high-risk errors before they leave the outbox.

What this play does

On demand, a partner triggers a draft request on any inbound email thread - typically via an email add-in button or a monitored draft-folder workflow. The system pulls the full CRM record for the recipient: relationship status, open issues, billing history, communication preferences, any sensitivity flags. That context plus the thread content goes to the AI, which returns a complete draft reply within 30 seconds - not a generic professional reply, but one calibrated to this specific client. A second pass runs a flag check: tone alignment against the client's documented preferences, factual consistency against the CRM, and a scan for red-flag patterns (internal pricing language, placeholder text, another client's name in the thread). The partner reviews the draft and flags, adjusts what they want, and sends.

Before and after

Before

An important client sends a complicated email about a billing dispute. The partner is in back-to-back meetings until 4 PM, finally sits down at the end of the day, tired and under pressure. The reply goes out in 12 minutes. It's technically accurate but the tone is slightly off for this particular client, and it references an invoice number that's since been superseded. The client notices both things. The conversation gets harder than it needed to be.

After

The same email arrives. When the partner sits down at 4 PM, they click the draft button. Within 30 seconds, they have a complete draft that addresses the billing concern, references the correct invoice number from the CRM, and is calibrated to this client's formal communication preference. The flag report surfaces one note: a previous thread had an internal comment about a billing adjustment that was under discussion. The partner removes that sentence, adjusts one paragraph, and sends. Total time: 2.5 minutes.

Business impact

The impact falls into two categories. First, mistake prevention: a single wrong email to the right client at the wrong moment can cost more than a year's worth of operating this system. The insurance value is real even if it doesn't appear in a spreadsheet. Second, time recovery: senior partners sending 20 meaningful client emails per week, each taking 10 minutes of real focus, spend over 3 hours per week per partner on drafting. Cut that by 85% across 5 key partners at $400/hour, and you recover roughly $295,000 per year in senior capacity (about 2.8 hours saved per partner per week × $400 × 5 partners × 52 weeks).

Prerequisites

Complete these before opening n8n. Skipping prerequisites is how you end up rebuilding workflows.

1

Define which email threads warrant a draft assist

Starting too broad creates noise. Start narrow - active client threads above a certain deal value, or any email related to billing, proposals, or service issues. Expand based on usage patterns after the first 30 days.

2

Map the CRM context you track

The draft quality is a direct function of the context quality. Know what communication preferences, relationship health signals, open issues, and sensitivity flags exist in your CRM before building. If your CRM records are sparse, fix the most important fields first.

3

Define your red flag list

Internal pricing language, placeholder text, another client's name in a forwarded thread - write your firm's specific red flag list before building the detection prompts. These become the patterns the system watches for.

4

Decide how partners will trigger the draft request

An Outlook or Gmail add-in is the most seamless option. A monitored draft-folder workflow (create a folder called 'Draft Request' - emails moved there trigger n8n) is a simpler alternative. Pick one before building.

5

Name the person who will tune flags in the first 30 days

Flags that are consistently wrong or irrelevant will be ignored. Assign someone to review flag accuracy for the first month and adjust detection prompts based on what they find.

Step-by-step implementation

The steps below are the full build guide. Each step includes configuration notes and exact AI prompts where applicable.

1

Set up the trigger mechanism

The simplest implementation uses a monitored draft folder in Gmail or Outlook. Create a folder named "Draft Assist" or "Draft Request." When a partner moves an email thread to this folder, n8n's email polling detects it and triggers the draft workflow.

For a more seamless experience, build or install an email add-in. Gmail add-ins can be built in Google Apps Script and published for your organization. Outlook add-ins use the Office Add-ins framework. Both display a "Draft Assist" button in the email compose window that calls an n8n webhook URL with the thread content.

For either approach, the trigger sends the same payload to n8n: the full email thread, the sender's email address, and any additional context the partner provides (optional instructions like "keep this formal" or "focus on the pricing objection").
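Whichever trigger you choose, normalize both sources into one payload shape before the CRM lookup so the rest of the workflow doesn't care where the request came from. A minimal sketch of that normalization step for an n8n Function node - the field names (`thread`, `sender`, `instructions`) are illustrative assumptions, not an n8n requirement:

```javascript
// Normalize either trigger source (add-in webhook or draft-folder poll)
// into the single payload shape the rest of the workflow expects.
// Field names here are illustrative, not an n8n or Gmail standard.
function buildTriggerPayload(email, partnerInstructions) {
  return {
    thread: email.body,                      // full thread text
    sender: email.from.toLowerCase().trim(), // used for the CRM lookup
    instructions: partnerInstructions || "", // e.g. "keep this formal"
    receivedAt: email.receivedAt,
  };
}

const payload = buildTriggerPayload(
  {
    from: " Client@Example.com ",
    body: "Re: Q3 invoice...",
    receivedAt: "2024-05-01T16:02:00Z",
  },
  "focus on the pricing objection"
);
// payload.sender → "client@example.com"
```

Lowercasing and trimming the sender address up front avoids silent CRM lookup misses on case or whitespace differences.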

2

Build the CRM context pull

After the trigger, n8n looks up the CRM record for the email recipient. Pull: contact record (communication preferences, relationship status, account tier), company record (account summary, active opportunities, recent activity), last 5 logged activities with summaries, any open service issues or complaints, and any sensitivity flags on the account. Compile this into a structured context block that the AI can reference. The more specific and current the CRM data, the more calibrated the draft will be. A draft generated from a CRM with "preferred communication: formal, no jargon; billing concern logged 3 weeks ago; relationship rated 'at risk'" will be dramatically different from one generated from a sparse record.
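One way to compile that structured context block in an n8n Function node is a simple field-by-field formatter. A sketch under assumed field names - your CRM's actual schema (Salesforce, HubSpot, or otherwise) will differ, and missing fields are marked explicitly rather than dropped so the AI never guesses at a gap:

```javascript
// Compile CRM fields into the structured context block passed to the AI.
// The field names are illustrative; map your CRM's real schema onto them.
// Missing fields are labeled "not recorded" so gaps stay visible.
function buildCrmContext(record) {
  const lines = [
    `Contact: ${record.contactName}, ${record.contactTitle} at ${record.companyName}`,
    `Communication preferences: ${record.communicationPreferences || "not recorded"}`,
    `Relationship status: ${record.relationshipStatus || "not recorded"}`,
    `Open issues: ${(record.openIssues || []).join("; ") || "none logged"}`,
    `Sensitivity flags: ${(record.sensitivityFlags || []).join("; ") || "none"}`,
  ];
  return lines.join("\n");
}

const ctx = buildCrmContext({
  contactName: "Dana Reyes",
  contactTitle: "CFO",
  companyName: "Acme Co",
  relationshipStatus: "at risk",
  openIssues: ["billing dispute logged 3 weeks ago"],
});
```

A sparse record then produces a context block full of "not recorded" lines - a useful early signal that the CRM fields need fixing before the drafts will be any good.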

3

Build the draft generation prompt

Pass the thread content, CRM context, and any partner instructions to the AI. Use the draft prompt below. The AI returns a complete reply draft - not a template with blanks, but a finished draft that the partner can edit or send as-is. After the draft is generated, run a second AI call for the flag check. This is a separate prompt that focuses specifically on catching problems: tone mismatches, factual inconsistencies, and red-flag patterns. Return the flag report as a separate, structured output alongside the draft. Deliver both to the partner: the draft in an editable format (inline reply in the email client if using an add-in, or forwarded to a draft-delivery email address), and the flag report as a brief summary below the draft.

AI Prompt

You are a senior client communications specialist for a professional services firm. Your job is to write a first-draft reply to a client email, calibrated to the specific client relationship.

Email thread to reply to:
{{email_thread}}

Client CRM context:
- Contact: {{contact_name}}, {{contact_title}} at {{company_name}}
- Communication preferences: {{communication_preferences}}
- Relationship status: {{relationship_status}}
- Recent activity summary: {{recent_activity_summary}}
- Open issues: {{open_issues}}
- Sensitivity flags: {{sensitivity_flags}}

Partner instructions (if any): {{partner_instructions}}

Firm tone guidelines: {{firm_tone_guidelines}}

Write a complete reply that:
1. Opens appropriately for this client's communication style (formal or conversational based on preferences)
2. Addresses every question or point raised in the last email
3. Is accurate - only references information that is in the thread or CRM context provided
4. Uses the client's name naturally (not repeatedly)
5. Closes with a clear next step or expectation
6. Matches the relationship status - if it's "at risk," be especially careful and warm; if it's "strong," you can be more direct

Return ONLY the email body text (no subject line). Do not add any explanation or meta-commentary.
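The {{variable}} placeholders in the prompt above need to be filled from the trigger payload and CRM context before the AI call. A minimal stand-in for that substitution step, if you assemble the prompt in a Function node (the `renderPrompt` helper is illustrative, not an n8n built-in):

```javascript
// Fill {{variable}} placeholders in a prompt template with values.
// Unmatched placeholders are left intact so a missing CRM field shows
// up visibly in the final prompt instead of being silently dropped.
function renderPrompt(template, vars) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in vars ? String(vars[key]) : match
  );
}

const filled = renderPrompt("Contact: {{contact_name}} at {{company_name}}", {
  contact_name: "Dana Reyes",
});
// filled === "Contact: Dana Reyes at {{company_name}}"
```

Leaving unmatched placeholders in place is a deliberate choice: a literal `{{company_name}}` in the prompt is easy to spot during flag tuning, whereas a blank is not.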

4

Build the flag check

Run this as a separate AI call after draft generation. The flag check looks for specific problems, not general quality. It checks three categories:

1. Tone flags: does the draft match the client's documented communication preferences? If the client is documented as "formal" and the draft uses casual language, flag it.

2. Factual flags: does the draft reference any specific numbers, dates, invoice numbers, or client claims that can be checked against the CRM? Cross-reference each one against the CRM data provided and flag any discrepancy.

3. Red-flag patterns: scan the draft for your predefined red-flag list - internal pricing language, placeholder text that wasn't filled in, another client's name appearing in the draft (this happens when pulling from past email templates), overly promotional language that sounds like marketing copy, and any language that could create legal exposure.

Return flag results as: flag type, severity (low/medium/high), the specific problematic text, and a suggested correction or note to the reviewer.

AI Prompt

You are a quality control specialist reviewing a draft email before it's sent to a client. Check for specific problems only.

Draft to review:
{{draft_text}}

Client CRM data for fact-checking:
{{crm_context}}

Red flag patterns to check for:
{{red_flag_list}}

Client's documented communication preferences:
{{communication_preferences}}

Perform three checks:

1. TONE CHECK: Does the draft's tone, formality level, and word choice match the documented communication preferences? 

2. FACTUAL CHECK: Do any specific claims, numbers, dates, names, or invoice references in the draft conflict with the CRM data provided?

3. RED FLAG CHECK: Does the draft contain any of the defined red flag patterns?

Return ONLY a JSON object:
{
  "flags": [
    {
      "type": "tone" | "factual" | "red_flag",
      "severity": "low" | "medium" | "high",
      "description": "What the issue is",
      "problematic_text": "The exact text in the draft that's flagged",
      "suggestion": "How to fix it or what to double-check"
    }
  ],
  "overall_assessment": "clean" | "minor_flags" | "review_required",
  "flag_count": number
}

If there are no flags, return an empty flags array with overall_assessment: "clean".
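Once the model returns that JSON, a small formatting step turns it into the brief summary delivered below the draft. A sketch that assumes the output parses cleanly - in production, wrap the parse in a try/catch that falls back to "review_required" rather than letting a malformed response pass silently:

```javascript
// Turn the flag-check JSON into the brief summary shown under the draft.
// Flags are sorted by severity so high-risk items appear first.
function summarizeFlags(reportJson) {
  const report = JSON.parse(reportJson);
  if (report.overall_assessment === "clean") return "Flag check: clean.";
  const order = { high: 0, medium: 1, low: 2 };
  const lines = [...report.flags]
    .sort((a, b) => order[a.severity] - order[b.severity])
    .map(
      (f) =>
        `[${f.severity.toUpperCase()}] ${f.type}: ${f.description} -> ${f.suggestion}`
    );
  return `Flag check: ${report.flag_count} flag(s)\n` + lines.join("\n");
}

const summary = summarizeFlags(
  JSON.stringify({
    flags: [
      {
        type: "factual",
        severity: "high",
        description: "Invoice number differs from CRM",
        problematic_text: "invoice #1042",
        suggestion: "Confirm the current invoice number",
      },
    ],
    overall_assessment: "review_required",
    flag_count: 1,
  })
);
```

Sorting by severity matters for adoption: partners skim this summary in seconds, so the high-severity line has to be the first thing they see.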

Week-by-week rollout plan

Week 1: Setup and Trigger
  • Set up trigger mechanism (draft folder or add-in).
  • Build CRM context pull. Test that it's returning accurate, useful data.
  • Define red flag list and communication preference tracking in CRM.
Week 2: Draft Generation
  • Build draft generation prompt. Test against 10 real email threads.
  • Have 2 - 3 partners review the drafts. Collect honest feedback on quality and calibration.
  • Refine prompt based on feedback.
Week 3: Flag Check and Delivery
  • Build flag check. Test with threads that have known tone issues and factual discrepancies.
  • Build draft delivery mechanism (formatted email reply or add-in display).
  • Pilot with 2 - 3 partners for 2 weeks before broader rollout.
Weeks 4 - 6: Pilot and Refine
  • Pilot partners use daily. Flag quality reviewer actively tunes prompts.
  • Track: how often are drafts used as-is vs. edited significantly?
  • Week 6: Roll out to all partners.

Success benchmarks

These are the specific, measurable signals that confirm the play is working. Check against each benchmark at the 30-, 60-, and 90-day mark.

Average time to send a meaningful client email reduced by 75%+ for pilot partners
Draft acceptance rate (used with minor edits or less) above 70% after 30 days
Zero factual errors in sent emails attributable to the AI draft
Flag acceptance rate (flags that turn out to be valid) above 60%
Partner satisfaction with system rated 4/5 or higher at 30-day check-in

Common mistakes

Building a generic draft prompt

If the AI doesn't have access to CRM context and firm tone guidelines, it produces generic professional replies that sound like every other AI-written email. The quality of the draft is entirely dependent on the quality of the context you feed the system.

Deploying to everyone at launch

Start with 2 - 3 partners who will give honest, direct feedback on draft quality and flag accuracy. Tune the system for 4 - 6 weeks before rolling out firm-wide. A partner who uses a bad draft once and has to rewrite it from scratch will not use the system again.

Skipping flag tuning

Flags that are consistently wrong or irrelevant create noise that partners learn to ignore - and once they start ignoring flags, the safety layer is gone. Someone must own flag accuracy in the first 30 days and actively adjust the detection prompts.

Not logging draft activity to the CRM

The draft request log tells you which accounts generate the most complex correspondence, which flag types trigger most often, and where your communication patterns might have systematic issues. Build activity logging into the system from day one.

Revenue Institute

Want someone to build this play for your firm? Revenue Institute implements the full AI Workforce Playbook system as part of every engagement.

RevenueInstitute.com