HR/Legal Compliance Review Checklist for AI Screening
Checklist for reviewing screening criteria against employment discrimination laws before deployment.
You're about to deploy an AI screening tool that will touch every candidate who applies to your firm. One misconfigured criterion, one biased training dataset, one undocumented decision rule, and you're facing an EEOC complaint or a class-action lawsuit that costs seven figures to defend.
This checklist walks you through the exact compliance review process before you flip the switch. Use it as a gate review. If you can't check every box with documentation to back it up, don't deploy.
Pre-Deployment Legal Framework Review
Before you audit a single screening criterion, confirm you understand the four federal statutes that will define your liability exposure.
Title VII of the Civil Rights Act of 1964: Prohibits discrimination based on race, color, religion, sex, or national origin. Applies to screening criteria, knockout questions, and ranking algorithms. The "disparate impact" doctrine means even neutral-seeming criteria (like requiring a bachelor's degree) can be unlawful if they disproportionately exclude protected groups and aren't job-related.
Age Discrimination in Employment Act (ADEA): Protects applicants 40+. Your AI cannot use age as a direct input. It also cannot use proxies like "graduation year" or "years of experience" in ways that systematically disadvantage older candidates.
Americans with Disabilities Act (ADA): Prohibits pre-offer medical inquiries and requires reasonable accommodations. Your AI cannot ask about disabilities, medical history, or workers' compensation claims. It cannot screen out candidates based on gaps in employment that may relate to medical leave.
Genetic Information Nondiscrimination Act (GINA): Bars use of genetic information in hiring. Your AI cannot ingest data from health apps, family medical history, or genetic testing services.
State and local laws: New York City's Local Law 144 requires annual bias audits for automated employment decision tools. Illinois' Artificial Intelligence Video Interview Act mandates candidate consent and explanations. California's CCPA gives candidates rights to access the data you're using. Map your compliance obligations by jurisdiction before deployment.
Screening Criteria Audit (Complete This Section First)
Work through each criterion your AI uses. Document your findings in a compliance matrix with columns for: Criterion Name, Data Source, Job-Relatedness Justification, Adverse Impact Test Result, Mitigation Plan.
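Some teams keep this matrix as structured data rather than a spreadsheet so it can be versioned and audited. A minimal sketch in Python; the class name, field names, and example values are all invented for illustration, not a required schema:

```python
from dataclasses import dataclass

@dataclass
class CriterionAuditRow:
    """One row of the screening-criteria compliance matrix (hypothetical schema)."""
    criterion_name: str         # the screening criterion being audited
    data_source: str            # where the AI gets this input
    job_relatedness: str        # citation to the validation evidence
    adverse_impact_result: str  # measured ratio and pass/fail
    mitigation_plan: str        # empty if no impact was found

# Invented example row for illustration only
example = CriterionAuditRow(
    criterion_name="Python skills assessment",
    data_source="Timed coding test in the ATS",
    job_relatedness="Criterion validity study (hypothetical: r = 0.41 vs. performance ratings)",
    adverse_impact_result="Pass (ratio 0.92)",
    mitigation_plan="",
)
```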
Job-Relatedness Test
For every data point your AI ingests, answer this question: "Can I prove in court that this criterion predicts success in this specific role?"
Pass: You're screening software engineers for proficiency in Python. You test for Python skills. You have validation data showing Python test scores correlate with on-the-job performance ratings.
Fail: You're screening software engineers and your AI downgrades candidates who didn't attend a four-year university. You have no validation data. You're excluding qualified candidates and creating disparate impact against Black and Hispanic applicants.
Action: Remove any criterion you cannot defend with a validation study. If the criterion is essential, commission a validation study before deployment. Use criterion-related validity studies (correlate the criterion with job performance metrics) or content validity studies (show subject matter experts agree the criterion measures essential job functions).
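For the criterion-related approach, the core statistic is a simple correlation between criterion scores and performance ratings for current employees. A rough sketch with invented placeholder data; a real validation study should be designed with an industrial-organizational psychologist, not lifted from this snippet:

```python
# Criterion-related validity sketch: correlate criterion scores with job
# performance ratings for incumbents. All values below are invented.
from scipy.stats import pearsonr

criterion_scores    = [72, 85, 60, 90, 78, 66, 88, 74]       # e.g., Python test scores
performance_ratings = [3.1, 4.2, 2.8, 4.5, 3.6, 3.0, 4.0, 3.4]  # manager ratings

r, p_value = pearsonr(criterion_scores, performance_ratings)
print(f"r = {r:.2f}, p = {p_value:.3f}")
# A meaningful positive r with an acceptable p-value supports job-relatedness;
# sample size, rating quality, and study design all matter in litigation.
```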
Adverse Impact Analysis (The Four-Fifths Rule)
Run selection rate calculations by protected class. The EEOC's four-fifths rule provides a practical threshold: if the selection rate for any protected group is less than 80% of the rate for the highest-selected group, you have adverse impact that requires justification.
Calculation example: Your AI screens 1,500 applicants: 1,000 white and 500 Black. It advances 400 white candidates (400 ÷ 1,000 = 40% selection rate) and 140 Black candidates (140 ÷ 500 = 28% selection rate). The ratio is 28% ÷ 40% = 0.70, which is below the 0.80 threshold. You have adverse impact.
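The arithmetic generalizes to any number of groups. A minimal Python sketch of the check, using the counts from the example above:

```python
# Four-fifths rule check: each group's selection rate divided by the
# highest group's rate. Counts mirror the example above.
applicants = {"white": 1000, "black": 500}
advanced   = {"white": 400,  "black": 140}

rates = {g: advanced[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    status = "ADVERSE IMPACT" if ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio vs. highest {ratio:.2f} -> {status}")
```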
Required data cuts: Run this analysis separately for race, sex, age (40+), and any other protected class with sufficient sample size. If you're screening fewer than 30 applicants in a protected class, note the limitation but still calculate the rate.
What to do if you find adverse impact: Document it. Determine which specific criteria are driving the disparity. Assess whether those criteria are job-related and consistent with business necessity. If not, remove them. If yes, explore less discriminatory alternatives (can you use a different test or threshold that achieves the same business goal with less impact?).
Prohibited Data Inputs Check
Your AI cannot use these data points, even indirectly:
- Race, color, national origin, ethnicity
- Sex, gender identity, pregnancy status
- Religion or religious affiliation
- Age or date of birth (you can verify someone is over 18 for legal work eligibility)
- Disability status, medical conditions, prescription drug use
- Genetic information or family medical history
- Arrest records without conviction (some states prohibit considering conviction records too)
- Credit history (banned for employment in 11 states unless the role involves financial responsibility)
- Salary history (banned in 21+ states and cities)
Proxy variable risk: Your AI might not directly ingest "race," but if it uses ZIP code, high school name, or first name, it's effectively using race. Audit for proxies. Test whether removing the suspected proxy variable changes outcomes by protected class.
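One way to run that ablation test, assuming you can produce scores from a model trained with the suspected proxy and one retrained without it (simply dropping the column at inference time is not a valid test). The column names here are placeholders for your own pipeline:

```python
# Proxy ablation sketch: compare selection rates by protected class from a
# model trained WITH the suspected proxy (e.g., ZIP code) vs. one retrained
# WITHOUT it. Column names are assumptions about your data layout.
import pandas as pd

def ablation_report(df: pd.DataFrame, cutoff: float = 0.5) -> pd.DataFrame:
    df = df.assign(
        advance_with=df["score_with_zip"] >= cutoff,
        advance_without=df["score_without_zip"] >= cutoff,
    )
    rates = df.groupby("protected_class")[["advance_with", "advance_without"]].mean()
    rates["shift"] = rates["advance_with"] - rates["advance_without"]
    return rates  # a large per-group shift suggests the feature carries protected-class signal
```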
Transparency and Explainability Requirements
You must be able to explain to a rejected candidate, an EEOC investigator, or a plaintiff's attorney exactly why your AI made its decision.
Minimum documentation standard: For each candidate decision, you should be able to produce a report showing: (1) which criteria the AI evaluated, (2) the candidate's score or status on each criterion, (3) the weight assigned to each criterion, (4) the threshold or cutoff applied, (5) the final decision and reasoning.
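A sketch of what such a per-candidate report might look like as structured data; the field names and values are illustrative, not a regulatory schema:

```python
# Hypothetical decision report covering the five elements above:
# (1) criteria evaluated, (2) per-criterion scores, (3) weights,
# (4) threshold applied, (5) final decision and reasoning.
decision_report = {
    "candidate_id": "C-10482",  # internal identifier (invented)
    "decision": "rejected",
    "timestamp": "2025-01-15T14:02:00Z",
    "threshold": 0.65,  # (4) cutoff applied to the weighted score
    "criteria": [
        {"name": "Python skills assessment", "score": 0.42, "weight": 0.50},
        {"name": "Relevant project experience", "score": 0.71, "weight": 0.50},
    ],
    "overall_score": 0.42 * 0.50 + 0.71 * 0.50,  # = 0.565
    "reasoning": "Weighted score 0.565 fell below the 0.65 advance threshold.",
}
```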
Black box problem: If your vendor says "our proprietary algorithm is too complex to explain," that's a red flag. You own the legal liability. Demand explainability or choose a different tool.
Practical test: Pick three rejected candidates at random. Ask your vendor or internal team to produce the explanation report. If it takes more than 10 minutes or the explanation is vague ("the algorithm determined the candidate wasn't a strong fit"), you don't have adequate explainability.
Human Review and Override Protocol
Your AI should assist human decision-makers, not replace them. Build these safeguards into your workflow.
Mandatory human review triggers: Require human review when: (1) the AI's confidence score is below a defined threshold (e.g., 70%), (2) the candidate is flagged for a protected class characteristic, (3) the candidate requests review, (4) the AI's decision contradicts the recruiter's initial assessment.
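Expressed as logic, the triggers might look like the function below. The 70% threshold and the flag names come from the example above, not from any statute:

```python
# Sketch of the mandatory-review triggers as a pure function.
def requires_human_review(
    confidence: float,
    protected_class_flag: bool,
    candidate_requested_review: bool,
    contradicts_recruiter: bool,
    confidence_threshold: float = 0.70,  # assumed threshold from the example
) -> bool:
    """Return True when any of the four mandatory review triggers fires."""
    return (
        confidence < confidence_threshold
        or protected_class_flag
        or candidate_requested_review
        or contradicts_recruiter
    )
```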
Override authority: Designate who can override the AI (typically the hiring manager or senior recruiter). Document every override with a written justification. Track override rates by protected class to identify whether humans are introducing bias the AI didn't have.
Sample override policy language: "Hiring managers may override AI screening recommendations when they have documented, job-related reasons to believe the AI's assessment is incorrect. All overrides must be recorded in [ATS system] with a written explanation referencing specific job qualifications or business needs."
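To support the override-rate tracking described above, a small pandas sketch; the column names are assumptions about your override log format:

```python
# Override rates by protected class from an override log with a boolean
# 'overridden' column and a categorical 'protected_class' column (assumed).
import pandas as pd

def override_rates(log: pd.DataFrame) -> pd.DataFrame:
    out = log.groupby("protected_class").agg(
        decisions=("overridden", "size"),
        overrides=("overridden", "sum"),
    )
    out["override_rate"] = out["overrides"] / out["decisions"]
    return out  # large gaps between groups warrant investigation
```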
Governance and Monitoring Framework
Set up these processes before deployment, not after you receive your first complaint.
Establish an AI Screening Review Committee
Convene a standing committee with representatives from: HR/Talent Acquisition, Legal/Compliance, IT/Data Science, Diversity & Inclusion, and Business Unit Leadership.
Meeting cadence: Quarterly at minimum. Monthly during the first six months post-deployment.
Committee responsibilities: Review adverse impact reports. Approve changes to screening criteria. Investigate complaints. Authorize bias audits. Escalate issues to executive leadership.
Document Your Compliance Review
Create a "Pre-Deployment Compliance Audit Report" that includes:
- List of all screening criteria with job-relatedness justification for each
- Adverse impact analysis results (selection rates by protected class)
- Bias testing methodology and results
- List of prohibited data inputs confirmed absent from the model
- Explainability testing results (sample candidate decision reports)
- Human review protocol and override policy
- Vendor due diligence documentation (if using third-party AI)
- Sign-off from Legal and HR leadership
Store this report for at least four years (the statute of limitations for most employment discrimination claims). Update it annually or whenever you modify the AI's criteria.
Ongoing Monitoring Requirements
Monthly: Pull selection rate data by protected class. Flag any month where the four-fifths rule is violated.
Quarterly: Conduct full adverse impact analysis. Review override logs. Analyze candidate complaints. Update the Review Committee.
Annually: Commission an independent bias audit (required in some jurisdictions, best practice everywhere). Re-validate job-relatedness of screening criteria. Update documentation.
Continuous: Log every AI decision with timestamp, criteria evaluated, scores, and outcome. Retain logs for four years minimum.
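A minimal append-only logger for the continuous requirement, assuming a JSON Lines file and illustrative field names; in production you would likely log to your ATS or a write-once store instead:

```python
# Append one screening decision per line to a JSONL file (hypothetical path).
import json
from datetime import datetime, timezone

def log_decision(candidate_id: str, criteria_scores: dict, outcome: str,
                 path: str = "ai_screening_decisions.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "criteria_scores": criteria_scores,  # e.g., {"python_test": 0.42}
        "outcome": outcome,                  # "advanced" / "rejected" / "review"
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```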
Vendor Due Diligence (If Using Third-Party AI)
If you're buying an AI screening tool rather than building it in-house, you're still liable for its discriminatory impact. Require your vendor to provide:
Bias audit reports: Independent third-party testing for adverse impact by race, sex, and age. Conducted within the last 12 months. Includes methodology, sample size, and results.
Training data transparency: Description of the datasets used to train the model. Demographic composition of training data. Steps taken to mitigate bias in training data.
Explainability capabilities: Technical documentation of how the AI generates decisions. Sample candidate decision reports. Confirmation that you can produce explanations on demand.
Contractual protections: Indemnification for discrimination claims arising from the AI's decisions. Right to audit the vendor's compliance practices. Termination rights if the vendor fails to meet compliance standards.
Red flags: Vendor refuses to share bias audit results. Vendor claims "proprietary algorithm" prevents transparency. Vendor has no process for investigating discrimination complaints. Walk away.
Pre-Deployment Checklist (Gate Review)
Do not deploy your AI screening tool until you can check every box:
- [ ] Completed adverse impact analysis showing no disparate impact, or documented justification for any impact found
- [ ] Verified all screening criteria are job-related with validation evidence
- [ ] Confirmed no prohibited data inputs (race, age, disability, etc.) are used directly or via proxies
- [ ] Tested explainability by generating decision reports for sample candidates
- [ ] Established human review protocol with defined triggers and override authority
- [ ] Formed AI Screening Review Committee with quarterly meeting schedule
- [ ] Created Pre-Deployment Compliance Audit Report signed by Legal and HR
- [ ] Set up monitoring dashboards to track selection rates by protected class
- [ ] Trained all recruiters and hiring managers on proper use of the AI tool and override procedures
- [ ] If using vendor tool: obtained bias audit report, training data documentation, and contractual protections
- [ ] Confirmed compliance with state/local AI hiring laws in all jurisdictions where you're recruiting
- [ ] Established complaint investigation process for candidates who believe they were unfairly screened out
If you can't check a box, that's your deployment blocker. Fix it before you go live.
Post-Deployment: What to Do When You Find a Problem
You will find problems. Your quarterly adverse impact analysis will eventually show a violation. A candidate will file a complaint. Your override logs will reveal a pattern.
Immediate actions: Stop using the problematic criterion while you investigate. Notify Legal. Pull the data to understand scope (how many candidates were affected, what protected classes, over what time period).
Investigation protocol: Determine root cause (biased training data, proxy variable, flawed validation study, human override pattern). Assess legal exposure. Develop remediation plan.
Remediation options: Remove or modify the problematic criterion. Re-screen affected candidates using compliant criteria. Offer to reconsider rejected candidates. Update training for hiring managers. Commission new validation study.
Documentation: Memorialize the issue, investigation, and remediation in a written report. Provide to the Review Committee and Legal. Retain for litigation defense if needed.
The goal isn't perfection on day one. The goal is a documented, good-faith effort to identify and fix discrimination before it becomes a pattern or practice.
