Play 4: RFP Generator

Hallucination Accuracy Checklist

Factual accuracy review checklist for AI-generated proposal drafts. Partner sign-off included.


AI-generated proposals fail when they contain plausible-sounding fiction. A client asks about your "award-winning cybersecurity practice" that doesn't exist. Your proposal claims 15 years of experience in a market you entered 18 months ago. You reference a case study from a competitor's website, not your own work.

These aren't edge cases. They're predictable failures when you deploy LLMs without verification protocols.

This checklist gives you a repeatable process to catch fabrications before they reach clients. Use it on every AI-drafted proposal section. No exceptions.

What Hallucination Actually Looks Like

LLMs don't "lie" intentionally. They predict plausible next tokens based on training data. When asked to describe your firm's capabilities, the model generates what a typical professional services firm might say, not what your firm can actually deliver.

Common fabrications in proposal drafts:

  • Inflated credentials: "Our team includes 12 CPAs with Big Four experience" (actual count: 7, only 3 from Big Four)
  • Invented projects: Detailed case studies for clients you've never served
  • Fake statistics: "98% client retention rate" when you don't track this metric
  • Borrowed expertise: Claiming capabilities from firms the model saw in training data
  • Outdated information: Referencing partnerships, certifications, or team members no longer current

The pattern: AI fills gaps with statistically likely content, not verified facts.

Pre-Review Setup

Before you start checking, gather your source-of-truth documents:

  1. Client relationship database (CRM export with project dates, revenue, scope)
  2. Team credentials spreadsheet (certifications, tenure, education, prior employers)
  3. Marketing-approved case studies (only projects cleared for external use)
  4. Current service offerings list (updated within last 90 days)
  5. Partnership and certification records (with expiration dates)

Store these in a shared folder. Every reviewer needs instant access.

Section 1: Claims Verification

Review every factual assertion in the draft. Start with numbers, credentials, and client references.

Step 1: Extract all verifiable claims

Read through the draft once. Highlight or copy every statement that contains:

  • Numbers (client counts, project volumes, success rates, team size)
  • Credentials (certifications, awards, rankings, accreditations)
  • Client names or identifiable project details
  • Time-based claims (years of experience, project timelines)
  • Competitive positioning (market share, unique capabilities)
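A script can surface candidate claims as a rough first pass before the manual read-through. This is a minimal sketch; the regex patterns are illustrative, not exhaustive, and will need tuning for your own drafts:

```python
import re

# Illustrative patterns for claim-bearing sentences; extend for your drafts.
CLAIM_PATTERNS = [
    r"\b\d+%",                                                # percentages ("98% retention")
    r"\b\d+\+?\s*(clients?|CPAs?|consultants?|projects?)\b",  # counts
    r"\b\d+\s*years?\b",                                      # experience claims
    r"\b(award|certified|ranked|accredited)\w*\b",            # credentials
]

def extract_candidate_claims(draft: str) -> list[str]:
    """Return sentences that contain a verifiable-looking claim."""
    sentences = re.split(r"(?<=[.!?])\s+", draft)
    return [
        s.strip()
        for s in sentences
        if any(re.search(p, s, re.IGNORECASE) for p in CLAIM_PATTERNS)
    ]

draft = ("Our team includes 12 CPAs with Big Four experience. "
         "We value long-term relationships. "
         "We maintain a 98% client retention rate.")
for claim in extract_candidate_claims(draft):
    print(claim)
```

This flags the two factual sentences and skips the relationship-building filler. Treat the output as a starting highlight list, not a replacement for the full read-through.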

Step 2: Verify against source documents

For each claim, find the supporting evidence:

  • Client counts: Pull CRM report filtered by relevant criteria. Count manually if needed.
  • Success metrics: Check project close-out reports or client satisfaction surveys. If the metric doesn't exist in your records, delete the claim.
  • Team credentials: Cross-reference against HR records or LinkedIn profiles. Verify current employment status.
  • Case study details: Confirm the project exists in your approved case study library. Check that scope, timeline, and results match exactly.
  • Certifications: Verify current status on issuing organization's website. Check expiration dates.

Step 3: Document your verification

Create a simple tracking table:

| Claim in Draft | Source Document | Verified Value | Action Needed |
|----------------|-----------------|----------------|---------------|
| "Advised 50+ financial institutions" | CRM export 2019-2024 | 47 clients | Revise to "45+" |
| "95% audit pass rate" | No tracking system exists | N/A | Delete claim |
| "3 former SEC regulators on team" | HR records | 2 current employees | Revise to "2 former regulators" |

Flag anything you cannot verify within 15 minutes. Escalate to the practice leader.

Step 4: Fix or remove unverified claims

Apply this decision tree:

  • Claim verified exactly: Keep as written
  • Claim close but overstated: Revise to conservative number (47 becomes "45+", not "nearly 50")
  • Claim cannot be verified: Delete entirely
  • Claim contradicts records: Delete and flag for partner review

Never round up. Never use "approximately" to paper over gaps. If you can't prove it, cut it.

Section 2: Reference and Citation Audit

AI models frequently generate citations that look real but link to non-existent sources.

Step 1: List all external references

Extract every citation, statistic source, or third-party reference:

  • Industry reports ("According to Gartner...")
  • Regulatory citations ("Under SOX Section 404...")
  • Market data ("The accounting services market grew 12%...")
  • News articles or press releases

Step 2: Verify each source exists

For each reference:

  1. Search for the exact title and publication
  2. Confirm the publication date matches
  3. Verify the cited statistic or quote appears in the source
  4. Check that the source is current (industry reports older than 2 years need replacement)

Common fabrication patterns:

  • Real publication, fake article title
  • Real organization, invented statistic
  • Outdated data presented as current
  • Paywalled sources the AI "read" in training but you cannot access

Step 3: Replace or remove bad references

  • Source doesn't exist: Delete the claim or find a real source that supports it
  • Source exists but doesn't support the claim: Delete or revise the claim
  • Source is outdated: Find current data or remove
  • Source is competitor content: Replace with your own research or neutral third-party data

Section 3: Internal Consistency Check

Read the full proposal draft in one sitting. Look for contradictions.

Common consistency failures:

  • Executive summary claims 20 years of industry experience; team bios show 12-year tenure
  • Methodology section describes a 6-phase process; project timeline shows 4 phases
  • Pricing assumes 3 senior consultants; staffing plan lists 2
  • Case study describes outcome achieved in 2019; earlier section claims capability launched in 2020

Consistency review process:

  1. Create a fact sheet from the first read-through (team size, project phases, timeline, key capabilities)
  2. Compare every subsequent section against this fact sheet
  3. Flag discrepancies immediately
  4. Resolve by checking source documents, not by choosing the "better sounding" version
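Steps 1-3 above can be mechanized once the fact sheet and section claims are captured as data. The field names and values below are hypothetical, drawn from the failure examples earlier in this section:

```python
# Fact sheet built on the first read-through (illustrative values).
fact_sheet = {
    "years_experience": 12,
    "project_phases": 4,
    "senior_consultants": 2,
}

# Values asserted in later sections, keyed by section name.
section_claims = {
    "executive_summary": {"years_experience": 20},
    "methodology": {"project_phases": 6},
    "pricing": {"senior_consultants": 3},
}

def find_discrepancies(facts: dict, claims_by_section: dict) -> list[tuple]:
    """Return (section, field, claimed, fact) for every mismatch."""
    issues = []
    for section, claims in claims_by_section.items():
        for field, claimed in claims.items():
            if field in facts and claimed != facts[field]:
                issues.append((section, field, claimed, facts[field]))
    return issues

for section, field, claimed, fact in find_discrepancies(fact_sheet, section_claims):
    print(f"{section}: claims {field} = {claimed}, fact sheet says {fact}")
```

Resolving each flagged mismatch still means going back to source documents, per step 4, not picking whichever number reads better.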

Section 4: Partner Sign-Off Protocol

Partners are the final verification layer. They catch context the checklist misses.

Prepare the sign-off package:

  1. Clean draft with all corrections applied
  2. Verification log showing what you checked and what you changed
  3. Flagged items list for anything you couldn't verify
  4. Comparison document (optional) showing original AI output vs. corrected version

Sign-off meeting agenda:

  • Review flagged items first (5-10 minutes)
  • Partner spot-checks 3-5 claims from verification log (5 minutes)
  • Partner reviews client-specific sections for context accuracy (10 minutes)
  • Final approval or revision requests (5 minutes)

Partner approval checklist:

☐ All client names and project details are accurate and approved for external use
☐ Proposed team members are available and qualified for this engagement
☐ Pricing aligns with current rate card and scope assumptions
☐ No claims about capabilities the firm cannot currently deliver
☐ Methodology and timeline are realistic for this client's situation
☐ Competitive positioning is defensible and accurate

Get written approval (email confirmation is sufficient). Never submit without it.

Red Flag Patterns

Stop and escalate immediately if you see:

  • Detailed case study for a client you don't recognize: AI likely borrowed from another firm's marketing
  • Specific statistics without attribution: Model generated plausible numbers
  • Team member names you don't recognize: Fabricated experts or borrowed from training data
  • Capabilities that sound aspirational: "We plan to offer" became "We offer"
  • Oddly specific timelines for future work: AI doesn't understand proposal vs. project plan

Implementation Notes

First-time setup (30 minutes):

  • Gather source-of-truth documents
  • Create verification log template
  • Brief partners on sign-off process

Per-proposal time investment (45-90 minutes for typical 10-15 page proposal):

  • Claims verification: 20-30 minutes
  • Reference audit: 10-15 minutes
  • Consistency check: 10-15 minutes
  • Partner sign-off prep: 5-10 minutes

Efficiency tips:

  • Verify the executive summary and team credentials first (highest hallucination risk)
  • Build a "verified claims library" of pre-checked statistics you reuse across proposals
  • Create templates with locked sections for standard credentials and case studies
  • Train AI on your approved case study library to reduce fabrication rates

This checklist prevents embarrassment. Use it every time.


Reviewed by Revenue Institute

This guide is actively maintained and reviewed by the implementation experts at Revenue Institute. As the creators of The AI Workforce Playbook, we test and deploy these exact frameworks for professional services firms scaling without new headcount.


Need help turning this guide into reality? Revenue Institute builds and implements the AI workforce for professional services firms.

RevenueInstitute.com