The 12 Plays
Play 4 · Intermediate · ~22 min read

RFP First Draft Generator

Turn a 40-hour RFP response process into a 5-hour one by generating 70 - 80% complete first drafts from your wins library.

The business case

RFP and RFQ responses are expensive in the most literal sense: they consume time from the people in your firm whose time is worth the most. A serious response to a complex RFP can run 20 - 40 hours of leadership and senior staff time before it goes out the door. Conversion rates range from 5 - 15% without an existing relationship, up to 45 - 50% with one. That means even the best-case scenario burns 40 - 80 hours for one won engagement. The uncomfortable truth: most of that time goes to work that is nearly identical across every proposal - describing your firm's approach, explaining your methodology, listing relevant experience, assembling team credentials. This Play builds a system that produces a 70 - 80% complete first draft in under an hour, drawing from a structured library of your past proposals.

What this play does

When a new RFP comes in, it's uploaded or submitted through an intake form connected to n8n. The system reads the RFP, identifies key requirements, scope signals, and evaluation criteria, then queries a wins library of past proposals for the most relevant matches. It assembles a structured first draft - cover letter, firm overview, relevant experience, methodology section, team credentials, and fee structure placeholders - within an hour. Placeholders are clearly marked where custom content is required. Your team's job is to sharpen the specific differentiation, tailor the case examples, and make sure the voice is right for this particular client. The system removes the 15 hours of assembly work so the team can spend their time on the 5 hours of thinking that actually wins deals.

Before and after

Before

Each new RFP response starts from scratch or is copied and pasted from a prior template. The process takes 3 - 5 days and pulls in multiple senior people. Usually the first two days are purely retrieval and formatting: finding the right past case studies, assembling credentials, pulling methodology language. Partners spend hours on content that is essentially the same as what they wrote for the last RFP.

After

A new RFP is submitted through the intake form. Within an hour, a 70 - 80% complete draft arrives in the team's inbox, with clear notes on what needs customization. The team spends one to two focused days on the parts that require real thought: the differentiation argument, the specific case examples, and the fee strategy. Firms using this approach report RFP response time dropping from 30 - 40 hours to 3 - 5.

Business impact

If your average RFP takes 35 hours and this system cuts that to 7, you've freed 28 hours per RFP. At a fully-loaded cost of $300 per hour for the people doing that work, that's $8,400 per proposal. At 20 RFPs per year, that's $168,000 in recovered capacity annually. There's also a quality argument: a well-assembled first draft that draws from your actual winning proposals is consistently better than a rushed draft assembled from memory at 11 PM. The system is both faster and more consistent - and consistency in proposal quality compounds over time into a stronger track record.
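The arithmetic above can be sketched as a small back-of-envelope model. The inputs (35 hours before, 7 after, a $300 loaded rate, 20 RFPs per year) are the illustrative figures from this section, not benchmarks; swap in your firm's actual numbers.

```javascript
// Back-of-envelope ROI model for the figures in this section.
// All inputs are assumptions to be replaced with your firm's data.
function rfpSavings({ hoursBefore, hoursAfter, loadedRate, rfpsPerYear }) {
  const hoursFreedPerRfp = hoursBefore - hoursAfter;
  const savingsPerRfp = hoursFreedPerRfp * loadedRate;
  return {
    hoursFreedPerRfp,
    savingsPerRfp,
    annualSavings: savingsPerRfp * rfpsPerYear,
  };
}

const result = rfpSavings({
  hoursBefore: 35,   // current average hours per RFP response
  hoursAfter: 7,     // hours with a 70-80% complete first draft
  loadedRate: 300,   // fully loaded cost per hour of senior time
  rfpsPerYear: 20,
});
// result.savingsPerRfp === 8400, result.annualSavings === 168000
```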

Prerequisites

Complete these before opening n8n. Skipping prerequisites is how you end up rebuilding workflows.

1

Build your wins library first

This is the hard prerequisite most firms want to skip. The quality of your drafts will directly reflect the quality of what you feed the system. You need 20 - 30 well-documented past proposals, structured consistently, before building this workflow. Pull your best work from the last 2 - 3 years and organize it in a single folder in SharePoint, Google Drive, or a comparable system. Templates give structure but not substance - the AI needs real content.

2

Standardize your wins library schema

Each proposal in the library should include: client type (industry, size), scope description, key differentiators that carried the pitch, outcome (won/lost and why), methodology language used, team credentials referenced, and any case study content. Inconsistent records produce inconsistent drafts. Standardize the schema before loading more than a handful of records.

3

Map your proposal structure

Document the sections you use consistently: cover letter, firm overview, relevant experience, methodology, team credentials, fee structure. This structure is what the system assembles the draft into. Know it before you build.

4

Identify your five most common RFP categories

Most professional services firms see recurring RFP types. Categorize your existing proposals by type and note which categories appear most frequently. Build example structures and winning language for the top two or three categories - this dramatically improves draft quality for common RFP types.

5

Assign a partner-level reviewer

The person reviewing drafts needs enough context to know what's accurate, what's on-brand, and what this particular client relationship requires. This is not a proofreading task - it's a judgment task. Name the reviewer before you build.

Step-by-step implementation

The steps below are the full build guide. Each step includes configuration notes and exact AI prompts where applicable.

1

Build and structure the wins library

Create a consistent folder structure in your document repository (SharePoint, Google Drive, or Notion). Store each proposal as a single document or folder with a standardized naming convention: [Year]-[Client Type]-[Scope Category]-[Outcome].

Create a metadata spreadsheet or database that indexes every proposal in the library with these fields:
  • File path
  • Client industry
  • Scope category
  • Service type
  • Engagement size range
  • Key differentiators used
  • Outcome (won/lost) and reason won/lost
  • Methodology language classification (which methodology section this proposal uses)
  • Team credentials referenced

This metadata index is what n8n queries when matching a new RFP to relevant past proposals - without it, the system has to read every proposal from scratch, which is slow and imprecise. Build the index alongside the document library.

For each proposal, also extract the reusable content chunks into a separate document: methodology description, firm overview paragraph, relevant case studies (anonymized if necessary), and team bios. Label each chunk with its category and date. These chunks are what the AI assembles into the draft - clean, labeled, and separated from the full proposal context.
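One possible shape for a single row of the metadata index is sketched below. The field names mirror the list in this step; the values and the camelCase naming are illustrative assumptions, so map them onto whatever columns your spreadsheet or database actually uses.

```javascript
// Hypothetical metadata-index record for one past proposal.
// Field names follow the schema described in this step; values are examples.
const exampleRecord = {
  filePath: "WinsLibrary/2023-Healthcare-OpsAssessment-Won",
  clientIndustry: "Healthcare",
  scopeCategory: "Operational Assessment",
  serviceType: "Advisory",
  engagementSizeRange: "$100k-$250k",
  keyDifferentiators: ["Clinical ops bench strength", "90-day diagnostic"],
  outcome: "won",
  reasonWonLost: "Fixed-fee diagnostic beat hourly competitors",
  methodologyClass: "diagnostic-first",
  teamCredentials: ["RN on staff", "Lean Six Sigma"],
};
```

Keeping outcome and reasonWonLost as separate fields lets the matching step prefer winning proposals while still surfacing losses worth learning from.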

2

Build the RFP intake workflow

Create an intake form (using Typeform, Google Forms, or a simple HTML form) with these fields: RFP title, client name, submission deadline, scope description, RFP document upload, and category (from your predefined list). Connect the form to n8n via webhook. When n8n receives the form submission, it downloads the RFP document from the upload URL, extracts the text content, and passes it to an AI node for initial analysis. The AI extraction prompt identifies: the key requirements and scope signals in the RFP, the evaluation criteria (if listed), any mandatory sections, client-specific signals about their priorities or concerns, and the category match from your predefined list. This structured summary is what the system uses to query the wins library.
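As a minimal sketch of the first n8n Code node after the webhook, the function below normalizes the form payload before anything else runs. The field names (rfp_title, file_url, and so on) are assumptions; match them to your actual form, and let a subsequent HTTP Request node handle the document download.

```javascript
// Sketch of a validation/normalization step for the intake webhook payload.
// Field names are assumed examples - align them with your form's fields.
function normalizeIntake(payload) {
  const required = ["rfp_title", "client_name", "deadline", "category", "file_url"];
  const missing = required.filter((f) => !payload[f]);
  if (missing.length) {
    throw new Error(`Intake missing fields: ${missing.join(", ")}`);
  }
  return {
    rfpTitle: payload.rfp_title.trim(),
    clientName: payload.client_name.trim(),
    deadline: new Date(payload.deadline).toISOString().slice(0, 10),
    category: payload.category,
    fileUrl: payload.file_url, // downloaded by a later HTTP Request node
  };
}

const intake = normalizeIntake({
  rfp_title: " ERP Selection RFP ",
  client_name: "Acme Health ",
  deadline: "2025-03-01",
  category: "Technology Advisory",
  file_url: "https://example.com/rfp.pdf",
});
```

Failing loudly on missing fields here keeps garbage out of the AI analysis node, where a half-empty payload would otherwise produce a plausible-looking but useless summary.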

AI Prompt

You are an RFP analyst for a professional services firm. Your job is to read a Request for Proposal and extract the key information needed to find relevant past proposals and begin drafting a response.

Here is the RFP content: {{$json.rfp_text}}

Return ONLY a valid JSON object with these fields:

{
  "rfp_title": "The title or description of this RFP",
  "client_type": "Industry/sector of the issuing organization",
  "scope_summary": "2-3 sentence summary of the core scope of work requested",
  "key_requirements": ["Array of the most important specific requirements"],
  "evaluation_criteria": ["Array of stated evaluation criteria, if listed"],
  "mandatory_sections": ["Array of required sections explicitly stated in the RFP"],
  "client_signals": ["Any signals about client priorities, concerns, or preferences based on how the RFP is written"],
  "scope_category": "The most applicable category from: [list your firm's categories]",
  "engagement_complexity": "simple, moderate, or complex",
  "estimated_response_sections": ["Cover letter", "Firm Overview", "Relevant Experience", "Methodology", "Team Credentials", "Fee Structure"]
}
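The prompt asks for "ONLY a valid JSON object," but models sometimes wrap the JSON in commentary anyway, so a defensive parse in the next node is worth the few lines. The sketch below (an assumption, not part of the workflow file) strips to the outermost braces and checks the keys downstream nodes depend on.

```javascript
// Defensive parsing of the analysis node's output: trim any surrounding
// commentary to the outermost JSON object, then verify required keys.
function parseRfpAnalysis(raw) {
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end === -1) {
    throw new Error("No JSON object found in model output");
  }
  const parsed = JSON.parse(raw.slice(start, end + 1));
  for (const key of ["rfp_title", "scope_category", "key_requirements"]) {
    if (!(key in parsed)) throw new Error(`Analysis missing field: ${key}`);
  }
  return parsed;
}

const parsed = parseRfpAnalysis(
  'Sure - here is the analysis:\n{"rfp_title":"ERP Selection","scope_category":"Technology Advisory","key_requirements":["Vendor shortlist"]}'
);
```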
3

Query the wins library and assemble the draft

Using the structured RFP summary, query the metadata index to find the 3 - 5 most relevant past proposals. Match on scope category, client type, and service type. Retrieve the full content of those proposals plus the relevant reusable content chunks.

Pass the RFP summary, matched proposals, and content chunks to the AI assembly prompt. The AI maps relevant content from past proposals to the sections of the current RFP, inserts placeholders where custom content is required, and flags sections where the wins library doesn't have strong matches. The assembled draft should follow your standard proposal structure. Every placeholder should be clearly labeled: [CUSTOM: Insert specific differentiator for this client's regulatory context] - not just [INSERT TEXT HERE]. The more specific the placeholder, the easier it is for the reviewer to complete the draft.

After draft generation, save the document to a designated folder in your document system and notify the responsible partner with a link to the draft and a summary: what was matched from the wins library, what needs to be written from scratch, and which sections the AI flagged as weak matches.
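One simple way to implement the matching step is weighted exact matches on the three fields named above, plus a small bonus for winning proposals. The weights below are illustrative assumptions to tune against real RFPs, not tested values.

```javascript
// Rank metadata-index records against the RFP analysis output.
// Weights are illustrative: scope category matters most, then client
// industry, then service type; "won" acts as a tie-breaker.
function rankMatches(rfp, records, topN = 5) {
  const score = (r) =>
    (r.scopeCategory === rfp.scope_category ? 3 : 0) +
    (r.clientIndustry === rfp.client_type ? 2 : 0) +
    (r.serviceType === rfp.service_type ? 1 : 0) +
    (r.outcome === "won" ? 1 : 0);
  return records
    .map((r) => ({ record: r, score: score(r) }))
    .filter((m) => m.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, topN);
}

const matches = rankMatches(
  { scope_category: "Operational Assessment", client_type: "Healthcare", service_type: "Advisory" },
  [
    { id: "A", scopeCategory: "Operational Assessment", clientIndustry: "Healthcare", serviceType: "Advisory", outcome: "won" },
    { id: "B", scopeCategory: "Operational Assessment", clientIndustry: "Retail", serviceType: "Advisory", outcome: "lost" },
    { id: "C", scopeCategory: "Tax", clientIndustry: "Manufacturing", serviceType: "Compliance", outcome: "won" },
  ]
);
```

Exact-match scoring only works if the schema in the prerequisites is enforced; if your categories drift ("Ops Assessment" vs. "Operational Assessment"), normalize them in the index rather than fuzzing the matcher.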

AI Prompt

You are a senior proposal writer for a professional services firm. Your job is to assemble a first-draft RFP response by drawing on past proposals and firm content.

RFP Summary: {{rfp_summary}}

Matched past proposals and content library: {{matched_content}}

Firm's standard proposal structure: {{proposal_structure}}

Instructions:
1. Assemble a complete first-draft response following the standard structure
2. Pull directly from the matched past proposals where content is highly relevant - quote and adapt, don't start from scratch
3. For sections where matched content is only partially relevant, note the gap and write a bridge paragraph that can be customized
4. For sections where no good match exists, insert a clearly labeled placeholder: [CUSTOM REQUIRED: Description of what needs to be written here and what information is needed]
5. Maintain consistent voice throughout - if the tone shifts because you're drawing from multiple proposals, smooth it
6. Flag any factual claims (client names, case outcomes, credentials) that need verification in a "Fact Check Required" section at the end

Format as a properly structured document with clear section headers. This draft should be 70-80% complete - enough that the reviewer's job is refinement and customization, not assembly.
4

Set up the review and iteration process

The draft review process needs to include a specific accuracy check - not just a quality review. AI will occasionally generate specific claims that aren't in your wins library. Build a fact-check checklist into your review process:
  • Every client name mentioned: is it in your approved client reference list?
  • Every case outcome (dollar amounts, percentage improvements): does it match documented results?
  • Every credential or certification: does it match current team bios?
  • Every methodology claim: is it consistent with how your firm actually works?

Require the reviewing partner to sign off on factual claims explicitly before the proposal goes out. A one-line addition to your submission email - "Factual claims reviewed and confirmed by [Partner Name]" - creates accountability.

After each submission (won or lost), update the wins library. Log the key content used, what was customized, what the client response was, and whether you won. Firms that treat the library as a living document see consistent improvement in draft quality over 6 - 12 months as the library grows.
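Parts of the checklist above can be pre-populated mechanically. The sketch below (an assumed helper, not a substitute for the partner review) pulls every dollar figure and percentage out of a draft so the reviewer gets a ready-made list of claims to verify; client names and credentials still need the human pass.

```javascript
// Crude pre-review pass: surface the numeric claim types from the
// fact-check checklist. This narrows the reviewer's search; it does not
// replace partner sign-off.
function flagFactualClaims(draft) {
  const patterns = {
    dollarAmounts: /\$[\d,]+(?:\.\d+)?(?:\s?(?:k|K|M|million|billion))?/g,
    percentages: /\b\d+(?:\.\d+)?%/g,
  };
  const flags = {};
  for (const [label, re] of Object.entries(patterns)) {
    flags[label] = draft.match(re) || [];
  }
  return flags;
}

const flags = flagFactualClaims(
  "We reduced claims processing costs by 23% and delivered $1.2M in annual savings."
);
// flags.percentages -> ["23%"], flags.dollarAmounts -> ["$1.2M"]
```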

Week-by-week rollout plan

Weeks 1 - 2: Wins Library Build
  • Collect 20 - 30 past proposals. Organize into standardized folder structure.
  • Extract reusable content chunks (methodology, firm overview, case studies, team bios) into labeled documents.
  • Build metadata index spreadsheet for all library documents.
Week 3: Intake and Analysis
  • Build intake form and n8n webhook connection.
  • Build RFP analysis AI node. Test against 3 past RFPs - verify the summary is accurate and useful.
Week 4: Draft Assembly
  • Build library query and matching logic.
  • Build AI assembly node. Test against 2 real past RFPs - compare AI draft to actual submitted proposal.
  • Build notification and document delivery flow.
Week 5: Review Process and Launch
  • Define fact-check checklist. Build it into the review workflow.
  • Process first real incoming RFP through the system with the reviewer watching.
  • Debrief after first real use. Adjust prompts based on draft quality.

Success benchmarks

These are the specific, measurable signals that confirm the play is working. Check against each benchmark at the 30-, 60-, and 90-day marks.

RFP response time reduced from 30+ hours to under 8 hours within 90 days
Draft quality rated 'useful starting point or better' by reviewers on 80%+ of RFPs
Zero factual errors in submitted proposals
Wins library growing by at least 2 - 3 new well-documented entries per month
Reviewer completing draft review within 24 hours of notification

Common mistakes

Skipping the wins library build

Templates give structure, not substance. The AI needs real content - actual case descriptions, actual methodology language, actual differentiators - to produce drafts worth editing. Templates produce generic output. Past proposals produce useful output. Build the library first.

Not reviewing for hallucinations

The AI will sometimes generate specific claims - client names, case outcomes, dollar amounts - that are not in your wins library. Your review process must include a specific factual accuracy check. Read every factual claim before the proposal goes out. This is non-negotiable.

Assigning review to someone too junior

The person reviewing the draft needs enough context to know what's accurate, what's on-brand, and what this particular client relationship requires. This is a partner-level review task, not a proofreading task.

Not updating the wins library after each submission

The system improves as the library grows. After every proposal - won or lost - log the key content into the wins library. Firms that treat the library as a living document see consistent improvement in draft quality over 6 - 12 months.

Exception rule

Read before going live

Every draft generated must be reviewed for factual accuracy before submission. The AI will occasionally hallucinate specific details - case outcomes, client names, dollar amounts, credentials - that are not in your wins library. Build a specific accuracy checklist into your review process and require the reviewing partner to sign off on factual claims explicitly.

Downloads & Templates

Downloadable Template

Wins Library Template

Pre-structured template for logging past proposals.

n8n Workflow

Play 4 RFP Generator Workflow (n8n JSON)

Ready-to-import n8n workflow file that extracts requirements and drafts an initial RFP response.


Revenue Institute

Want someone to build this play for your firm? Revenue Institute implements the full AI Workforce Playbook system as part of every engagement.

RevenueInstitute.com