Best practices to prevent AI fraud in your hiring process

Learn how Sherlock fights AI fraud in hiring and protects your process

59% of managers have already suspected candidate AI fraud

Generative AI tools like ChatGPT are changing how we hire, from resume writing to interview prep. But this change has a dark side. According to a 2025 Checkr survey of 3,000 hiring managers, 59% have already suspected candidates of using AI to misrepresent themselves, and per a 2025 CNBC report, nearly 65% of job seekers use AI in some part of their application process. The result: costly bad hires and real financial losses. In this blog, we’ll break down the scale of hiring fraud, why it’s so hard to detect, and the best practices every employer needs to stay protected.

AI-generated resumes: 59% of hiring managers cite misrepresentation concerns

With the rise of AI-powered resume builders like Kickresume (which uses GPT-4), candidates can create polished documents in seconds. This has led to a surge in concerns about inflated or fabricated credentials. According to Checkr’s 2025 survey of 3,000 U.S. hiring managers, 59% suspect candidates are using AI to misrepresent themselves, and 62% believe applicants are already outpacing recruiters in the art of deception.

Deepfake video interviews: 17% of managers have encountered synthetic candidates

Deepfake technology is now being used in interviews themselves, with AI face-swapping and lip-syncing deployed to impersonate job seekers. According to a 2025 Resume Genius survey of 1,000 U.S. hiring managers, 17% have already encountered synthetic video candidates.(3) This risk is amplified in remote-first hiring, where in-person identity checks are often skipped.(3) Pindrop CEO Vijay Balasubramaniyan warns that creating convincing deepfake interviews is "very, very simple."(3)

Financial stakes: 24% of companies lost >$50K to fraudulent hires last year

According to Checkr’s 2025 study of 3,000 U.S. managers, 24% of businesses lost more than $50,000 to fraudulent hires last year, and 10% lost six figures.(4) This is largely due to misrepresented identities and qualifications, which lead to expensive re-hiring and productivity losses.(4)

Looking ahead: Gartner predicts 1 in 4 candidates will be fake by 2028

According to Gartner, 1 in 4 candidate profiles could be fake by 2028.(5) This prediction is not just a futuristic warning; it's a reflection of current trends. A 2025 Gartner survey of 3,000 job seekers revealed that 6% had already engaged in interview fraud.(5) With the rise of generative AI and remote hiring, the barriers to impersonation are only getting lower, making robust fraud-mitigation strategies more critical than ever.(3)

Why Detecting AI Fraud Is So Complicated

Remote-first hiring opens the door to impersonation - 31% of managers have interviewed someone later revealed to be fake

The rise of remote-first hiring has fundamentally changed how companies conduct interviews. With 62% of firms now running fully online hiring processes, the traditional face-to-face identity check is often removed, creating new vulnerabilities for impersonation.(6) The scale of this risk is already evident: 31% of hiring managers have discovered they interviewed candidates using fake identities.(7)

Biometric deepfakes could render standalone ID checks unreliable by 2026 (30% of enterprises)

According to Gartner, deepfake attacks on face biometrics will make 30% of enterprises view standalone ID checks as unreliable by 2026.(8) Biometric deepfakes are synthetic images or videos that mimic real candidates to fool liveness detection.(8) With injection attacks up 200% in 2023, HR teams need to layer on more than just a single face-ID snapshot.(9)
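
To make that layering concrete, here is a minimal sketch of one common countermeasure: randomized challenge-response prompts during the live check, which pre-recorded or injected deepfakes struggle to follow in real time. The prompt list, nonce format, and number of challenges are our own illustrative assumptions, not a specific product's behavior.

```python
# Sketch: randomized challenge-response prompts for a live video identity
# check. Pre-rendered deepfakes struggle to follow unpredictable instructions
# in real time. Prompts and nonce format are illustrative assumptions.

import random

CHALLENGES = [
    "Turn your head slowly to the left",
    "Hold your government ID next to your face",
    "Cover one eye with your hand",
    "Read this one-time phrase aloud: {nonce}",
]

def issue_challenges(n: int = 2) -> list[str]:
    """Pick n random prompts and bind a fresh per-session nonce to them."""
    nonce = f"{random.randrange(10**6):06d}"  # unpredictable spoken phrase
    return [c.format(nonce=nonce) for c in random.sample(CHALLENGES, n)]

for prompt in issue_challenges():
    print(prompt)
```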

Low candidate trust: only 26% believe AI will evaluate them fairly

According to Gartner's Q1 2025 survey of 2,918 job seekers, only 26% believe AI will evaluate them fairly.(10) Additionally, 25% trust employers less when AI is used in the hiring process.(10) These fears of bias and lack of transparency can deter authentic candidates from participating.(10)

Regulatory and bias landmines in AI video assessments

AI video assessments are under increasing regulatory scrutiny. NYC Local Law 144 has required annual bias audits since July 5, 2023, and Illinois’s AI Video Interview Act (effective 2020) mandates advance notice, consent, and deletion rights.(11) The EU’s 2024 AI Act bans workplace emotion recognition and classifies hiring AI as “high risk,” and the EEOC’s May 2023 guidance warns that discriminatory outcomes from algorithmic tools can violate Title VII.(12)

Best Practices to Prevent Hiring Fraud

Verify identity early with multi-factor checks - 36% of firms prioritize in-person or biometric ID

Identity verification should start at the application stage with multi-factor checks that combine government ID scans, biometric liveness detection, and live video confirmation.(4) This creates a trusted foundation and blocks imposters and deepfake proxies before they ever reach the interview stage.(8) Gartner predicts that by 2026, 30% of enterprises will deem standalone ID solutions unreliable due to AI-generated deepfakes, and Checkr’s 2025 survey shows that 36% of firms already prioritize in-person or biometric verification as their top anti-fraud investment.(8)
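
As a rough sketch of what multi-factor gating can look like in practice, the example below requires at least two of three independent identity signals before a candidate advances. The field names, the 0.85 face-match threshold, and the two-of-three rule are illustrative assumptions; the actual signals would come from your verification vendor.

```python
# Sketch: gating interview scheduling on multiple independent identity
# factors. Field names, threshold, and the two-of-three rule are
# illustrative assumptions, not a vendor's API.

from dataclasses import dataclass

@dataclass
class IdentityEvidence:
    document_verified: bool   # government ID scanned, untampered, data matches
    liveness_passed: bool     # live capture, not a replayed or injected video
    face_match_score: float   # ID photo vs. live selfie similarity, 0.0-1.0

def admit_to_interview(ev: IdentityEvidence, face_threshold: float = 0.85) -> bool:
    """Require at least two of three factors before a candidate advances."""
    factors = [
        ev.document_verified,
        ev.liveness_passed,
        ev.face_match_score >= face_threshold,
    ]
    return sum(factors) >= 2

print(admit_to_interview(IdentityEvidence(True, True, 0.91)))   # True
print(admit_to_interview(IdentityEvidence(True, False, 0.60)))  # False
```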

Cross-check digital footprints to spot AI-generated resumes

According to ResumeBuilder, 46% of job seekers are using ChatGPT to write their resumes.(13) Cross-checking LinkedIn, GitHub, and other public profiles can quickly reveal inconsistencies. In fact, 70% of employers already check social profiles before hiring, and any discrepancies can lead to further investigation or even outright rejection.(14)
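
Parts of this cross-check are easy to automate. The sketch below queries GitHub's public REST API and flags a couple of weak but useful signals; the claimed-experience input, the slack allowed, and the flag wording are our own illustrative assumptions.

```python
# Sketch: cross-checking a resume claim against a public GitHub profile.
# The endpoint is GitHub's real public REST API; the claimed_years input,
# the 5-year slack, and the flag wording are illustrative assumptions.

from datetime import datetime, timezone
import requests

def github_footprint_flags(username: str, claimed_years: float) -> list[str]:
    resp = requests.get(f"https://api.github.com/users/{username}", timeout=10)
    resp.raise_for_status()
    profile = resp.json()

    created = datetime.fromisoformat(profile["created_at"].replace("Z", "+00:00"))
    account_years = (datetime.now(timezone.utc) - created).days / 365.25

    flags = []
    if claimed_years - account_years > 5:  # weak signal, worth a human look
        flags.append(
            f"account is {account_years:.1f} years old vs. "
            f"{claimed_years:.0f} claimed years of experience"
        )
    if profile.get("public_repos", 0) == 0:
        flags.append("no public repositories to corroborate claimed work")
    return flags

print(github_footprint_flags("octocat", claimed_years=8))  # [] for a consistent profile
```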

Adopt skills-based assessments and live technical demos to outsmart cheats

Adding skills-based tests and live screen-shared demos forces candidates to prove their capabilities in real time, making AI-assisted cheating much harder. According to a recent study, 76% of employers now use skills tests, up from 55% in 2022, and 88% report fewer bad hires after switching to skills-first hiring.(15) Real-time keystroke monitoring and webcam eye-tracking can expose hidden prompts or off-camera assistants.
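
On the monitoring point, one lightweight signal is paste-burst detection: a large block of text appearing in a single editor event, with no typing cadence, deserves a human look. The event format and 120-character threshold below are assumptions about what a proctoring tool might log, not any specific product's API.

```python
# Sketch: flagging paste-bursts from editor change events during a live
# assessment. Event format and threshold are illustrative assumptions.

def paste_bursts(events: list[tuple[float, int]], min_chars: int = 120) -> list[int]:
    """events: (timestamp_seconds, chars_inserted) per editor change.
    Returns indices of events that inserted a large block all at once."""
    return [i for i, (_ts, chars) in enumerate(events) if chars >= min_chars]

# Human typing cadence (1-2 chars per event), then a 340-character instant paste:
log = [(0.0, 1), (0.2, 1), (0.4, 2), (0.5, 1), (3.1, 340)]
print(paste_bursts(log))  # [4] -> event 4 deserves review
```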

Require at least one “live-only” interview—65% of HR pros back the move

According to Software Finder data, 65% of HR professionals want at least one mandatory “live-only” interview to verify identity and prevent deepfakes.(16) Google’s Sundar Pichai also supports this, advocating for “at least one in-person round” to expose AI-assisted proxies and pre-written answers in tech interviews.(17)

Layer AI fraud-detection software and robust background checks—31% and 24% adoption rates

Combining AI fraud-detection software with comprehensive background checks creates a layered defense that addresses the shortcomings of each method when used in isolation. Currently, only 31% of employers use AI fraud tools, and 24% have enhanced background checks, leaving a significant adoption gap.(4) This synergy complements early multi-factor identity verification by flagging real-time anomalies like deepfake interviews and historical credential/identity mismatches.
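
A minimal sketch of how such layering can work: fuse the independent signals into one weighted risk score, so no single detector accepts or rejects a candidate on its own. The signal names, weights, and review threshold are illustrative assumptions, not a vendor scoring model.

```python
# Sketch: fusing independent fraud signals into one risk score so that no
# single detector decides alone. Names, weights, and threshold are
# illustrative assumptions.

RISK_WEIGHTS = {
    "deepfake_video_score": 0.35,    # real-time interview video analysis
    "id_mismatch": 0.30,             # background check vs. application identity
    "credential_mismatch": 0.20,     # degrees or employment failed verification
    "footprint_inconsistency": 0.15, # resume vs. public-profile discrepancies
}

def risk_score(signals: dict[str, float]) -> float:
    """Each signal is normalized to [0, 1]; missing signals count as 0."""
    return sum(w * signals.get(name, 0.0) for name, w in RISK_WEIGHTS.items())

candidate = {"deepfake_video_score": 0.8, "id_mismatch": 1.0}
score = risk_score(candidate)
print(f"risk={score:.2f}:", "manual review" if score >= 0.5 else "proceed")
# risk=0.58: manual review
```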

Set clear AI-use policies and communicate consequences to maintain candidate trust

According to a 2025 Greenhouse survey, 27% of candidates have never seen an employer’s AI-use policy. And according to Gartner, only 26% trust algorithms to evaluate them fairly. Publishing clear, plain-language rules—and stated penalties for misuse—can help rebuild that trust.(18)

Train recruiters—65% of HR teams already receive anti-fraud training

According to a 2025 Checkr survey of 3,000 managers, 65% of HR teams have already completed formal anti-fraud training.(1) This training focuses on developing recruiters' abilities to identify behavioral and technical red flags that automated systems might miss. It also includes ongoing education to keep interviewers aware of new AI-powered deception tactics and how to use technical defenses effectively.

Implement continuous verification aligned with zero-trust identity frameworks

Continuous identity verification should extend throughout the employee lifecycle, aligned with zero-trust principles (“never trust, always verify,” as defined in NIST SP 800-207).(19) Only 10% of large organizations are expected to have mature zero-trust programs by 2026, making early adopters more competitive.(20) Currently, 63% of employers conduct ongoing background checks, and this always-on verification layer complements the multi-factor ID checks discussed earlier, creating defense in depth.(21)
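
In practice, “always verify” means re-verification becomes a decision made on a schedule and on trigger events, not once at hire. Here is a minimal sketch under those assumptions; SP 800-207 defines the principles, not these specific intervals or triggers.

```python
# Sketch: "never trust, always verify" as a re-verification decision that is
# both scheduled and event-driven. The 180-day interval and trigger events
# are illustrative assumptions, not prescribed by NIST SP 800-207.

from datetime import date, timedelta

REVERIFY_INTERVAL = timedelta(days=180)  # routine periodic re-check
STEP_UP_EVENTS = {"role_change", "new_device", "privileged_access_request"}

def needs_reverification(last_verified: date, today: date,
                         recent_events: set[str]) -> bool:
    overdue = today - last_verified >= REVERIFY_INTERVAL
    triggered = bool(recent_events & STEP_UP_EVENTS)
    return overdue or triggered

print(needs_reverification(date(2025, 1, 2), date(2025, 9, 1), set()))           # True (overdue)
print(needs_reverification(date(2025, 8, 1), date(2025, 9, 1), {"new_device"}))  # True (step-up)
```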

Conclusion: Building a Fraud-Resilient Hiring Process in the Age of AI

According to recent studies, 59% of hiring managers already suspect AI-driven candidate misrepresentation, and 24% of companies suffered losses exceeding $50,000 from fraudulent hires last year.(1) With Gartner predicting one in four candidate profiles will be fake by 2028, the need for action is more urgent than ever.(22) The best practices outlined above—early identity verification, cross-checking digital footprints, skills assessments, live interviews, AI fraud detection, and continuous verification—form a powerful, multi-layered defense strategy.(23) Organizations that embrace these zero-trust hiring practices today will be best positioned to navigate tomorrow's increasingly sophisticated fraud landscape.