Why Your Hiring Process Needs an Urgent Audit

There’s a case working its way through federal court in California that should have every TA leader and CHRO paying close attention. And if you’re a Workday customer using their screening tools, you should be watching even more closely.

On May 16th, 2025, Judge Rita Lin granted conditional certification for Mobley v. Workday to proceed as a nationwide collective action under the Age Discrimination in Employment Act (ADEA). What does that mean in plain English?

It means potentially millions of job applicants aged 40 and over who were screened through Workday’s AI-powered tools since September 2020 can now join the lawsuit. And to make matters worse, the judge has ordered Workday to provide a list of customers using their AI screening features so these applicants can be notified and given the opportunity to opt in.

If you’re using Workday’s applicant screening tools, your company’s name is about to be on a list that goes out to potentially millions of people who might believe they were discriminated against. Let that sink in.

What Actually Happened in the Mobley Case

Derek Mobley, an African American man over 40 with a disability, applied to hundreds of jobs through companies using Workday’s screening system. Despite his qualifications and experience, he was consistently rejected – often before a human ever reviewed his application.

His claim? That Workday’s algorithm-based screening tools systematically discriminated against applicants based on age, race, and disability. The court found his claims plausible enough to not only survive Workday’s motion to dismiss, but to allow the case to proceed as a collective action.

Workday argued they’re just a software provider – they don’t make hiring decisions, their customers do. The court wasn’t buying it. Judge Lin found that Workday was sufficiently involved in the hiring process to be held potentially liable as an ‘agent’ of the employers.

This is huge. It’s not just about one company or one plaintiff anymore. It’s about whether AI screening tools – used by thousands of companies – are systematically filtering out protected classes of workers.

The Perfect Storm: Why This Matters More Now Than Ever

Here’s what makes this moment particularly dangerous for employers:

1. The White-Collar Recession is Real

Experienced workers – especially those over 40 – are finding it harder to land their next role, and taking longer to do it. We’re seeing qualified candidates with strong track records spending six, nine, even twelve months in job searches. When people are frustrated, rejected repeatedly, and suspect bias? They’re more motivated than ever to join a class action lawsuit.

2. The Evidence Problem You Didn’t Know You Had

Here’s something that should terrify you: candidates are recording their interviews. On their phones. On their laptops. For all kinds of reasons – some developmental (they want coaching and feedback), some protective (they suspect discrimination).

Scroll TikTok for five minutes and you’ll find videos of actual job interviews where hiring managers are asking blatantly illegal questions. Age. Marital status. Plans to have children. Health conditions. All captured on video. All potential evidence in discrimination claims.

The question isn’t whether candidates are recording interviews. It’s: if they’re recording, why aren’t you?

3. AI Screening Creates Invisible Risk

Most companies have embraced algorithmic screening for good reasons: efficiency, consistency, reducing human bias. But here’s the problem – these tools can create discriminatory outcomes even when there’s no discriminatory intent.

Resume screening algorithms trained on historical hiring data can perpetuate historical biases. “Culture fit” assessments can systematically disadvantage certain demographic groups. “Predictive” tools can discriminate based on proxies for protected characteristics.

Most companies using these tools have no idea if they’re creating discriminatory impacts. They’re optimizing for efficiency without auditing for fairness.

4. “High Risk” Designation Changes the Game

The EU AI Act explicitly classifies AI used in hiring and employment as “high risk,” and U.S. regulators are moving in the same direction: New York City’s Local Law 144 already requires annual bias audits of automated employment decision tools, and Colorado’s AI Act treats employment decisions as consequential. That’s not just bureaucratic categorization – it means heightened scrutiny, compliance requirements, and legal exposure.

Companies that treated AI screening as a purely technical decision now need to treat it as a legal and compliance decision.

What You Need to Do Right Now

1. Audit Your Screening Process Immediately

If you’re using AI or algorithmic tools to screen candidates:

  • Document exactly how they work and what data they use
  • Test for adverse impact across protected classes (age, race, gender, disability) – see the sketch after this list
  • Understand what “knockout” criteria automatically eliminate candidates
  • Ensure humans are making final decisions, not algorithms
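
One concrete way to run that adverse-impact test is the EEOC’s “four-fifths” rule of thumb: compare each group’s selection rate to the most-selected group’s rate. Below is a minimal sketch in Python, assuming you can export per-applicant screening outcomes with self-reported demographics. The file name, column names, and group labels are all illustrative, and a ratio below 0.8 is a signal to dig deeper, not a legal conclusion.

```python
# Minimal adverse-impact check using the EEOC "four-fifths" rule of thumb.
# Assumes a hypothetical CSV export of screening outcomes: one row per
# applicant, with a 'group' column (e.g., 'under_40' vs '40_and_over') and
# an 'advanced' column (1 = passed the screen, 0 = rejected).
import pandas as pd

def adverse_impact_ratios(df, group_col="group", outcome_col="advanced"):
    """Each group's selection rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

df = pd.read_csv("screening_outcomes.csv")  # hypothetical export
ratios = adverse_impact_ratios(df)
print(ratios.round(3))

# Under the four-fifths rule, a ratio below 0.8 is a common red flag for
# adverse impact. It's a screening signal, not proof of discrimination.
flagged = ratios[ratios < 0.8]
if not flagged.empty:
    print("Investigate further:", ", ".join(flagged.index))
```

Run the same check at every stage where an automated tool can reject someone, not just at the final decision.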

Critical point: AI can assist. It cannot decide. The moment you let an algorithm make the hiring decision, you’ve created legal exposure you can’t explain away.

2. Shore Up Interview Compliance Fast

Most companies focus interview training on “what not to ask” without teaching people how to actually conduct great, legally defensible interviews. That’s backwards.

Your interviewers need:

  • Training on how to evaluate and assess job-relevant skills systematically
  • Clear frameworks for behavioral interviewing and evidence gathering
  • Real-time support to catch and correct mistakes before they become legal problems
  • Documentation of what was actually asked and answered in each interview

This is where interview intelligence technology becomes critical. If candidates are recording interviews, you should be too – but with proper consent, governance, and the ability to identify and address compliance issues before they become lawsuits.

3. Build Oversight Into Every Stage of the Process

You need visibility into:

  • Disposition management: Why are candidates being rejected at each stage? Are there patterns that suggest bias? (A sketch of this analysis follows the list.)
  • Interview quality: Are interviewers staying on script? Asking legal, job-relevant questions? Evaluating consistently?
  • Decision-making: What evidence is actually driving hiring decisions? Can you defend them?
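
The disposition analysis boils down to comparing pass-through rates at each funnel stage across groups, and looking for stages where one group falls far behind. Here’s a hedged sketch, again assuming a hypothetical per-applicant ATS export; the stage names and fields are illustrative.

```python
# Stage-by-stage disposition analysis from a hypothetical ATS export.
# Each row: one applicant at one stage, with 'stage' ('screen', 'interview',
# 'offer'), 'group' (demographic segment), and 'passed' (1 = advanced).
import pandas as pd

df = pd.read_csv("dispositions.csv")  # hypothetical export

# Pass-through rate per stage, broken out by group. A large gap between
# groups at one stage shows where bias may be entering the funnel.
pass_through = df.pivot_table(index="stage", columns="group",
                              values="passed", aggfunc="mean")
print(pass_through.round(3))

# Flag stages where any group's rate is below 80% of the best group's
# rate at that same stage (the four-fifths heuristic, applied per stage).
ratios = pass_through.div(pass_through.max(axis=1), axis=0)
print(ratios[ratios.lt(0.8).any(axis=1)].round(3))
```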

This isn’t about surveillance. It’s about accountability. When (not if) you face a discrimination claim, you need to be able to show your process was fair, consistent, and job-related.

4. Accept That Humans Will Make Mistakes – And Build Systems to Catch Them

Perfect interviewing doesn’t exist. People will ask inappropriate questions. They’ll make snap judgments. They’ll let bias creep in.

The difference between companies that survive legal challenges and those that don’t? Systems that catch mistakes quickly and take corrective action:

  • Flagging problematic interview content for review (a simple sketch follows this list)
  • Providing coaching and retraining where needed
  • Taking disciplinary action when warranted
  • Documenting all of it
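
Mechanically, “flagging problematic interview content” can start as something very simple: scan consented interview transcripts for legally sensitive topics and route matches to a human reviewer. The sketch below is deliberately naive – the keyword patterns are illustrative only, real systems use far richer detection, and nothing here substitutes for legal review.

```python
# Naive transcript flagger: surfaces interview lines touching legally
# sensitive topics so a human reviewer can assess them in context.
# Patterns are illustrative; production systems need far richer detection.
import re

SENSITIVE_TOPICS = {
    "age": r"\b(how old are you|your age|what year were you born)\b",
    "family status": r"\b(married|children|kids|pregnan\w*|start a family)\b",
    "health": r"\b(disability|health condition|medical history|medication)\b",
}

def flag_transcript(lines):
    """Return (line_no, topic, text) tuples that need human review."""
    hits = []
    for i, line in enumerate(lines, start=1):
        for topic, pattern in SENSITIVE_TOPICS.items():
            if re.search(pattern, line, re.IGNORECASE):
                hits.append((i, topic, line.strip()))
    return hits

transcript = [
    "Tell me about a project you led end to end.",
    "Do you have kids, or plans to start a family?",  # should be flagged
]
for line_no, topic, text in flag_transcript(transcript):
    print(f"Review line {line_no} ({topic}): {text}")
```

The point isn’t automation replacing judgment; it’s making sure a human sees the risky moments quickly enough to coach, correct, and document.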

5. Prioritize Transparency and Fairness Over Pure Efficiency

Yes, AI can screen thousands of resumes in seconds. Yes, it saves time and money. But if it’s creating discriminatory outcomes – even unintentionally – the lawsuit costs will dwarf any efficiency gains.

The regulatory environment is clear: hiring is high-risk. That means:

  • Slower is sometimes better if it’s more defensible
  • Human oversight is non-negotiable
  • Transparency matters more than speed
  • Fairness must be measurable, not assumed

The Workday Ripple Effect

Even if you’re not a Workday customer, this case matters. A court has now allowed the theory that an AI vendor can be held liable, as an employer’s agent, for discriminatory outcomes to proceed. Every screening tool provider is on notice – and so is every company using them.

The questions coming from legal, compliance, and the C-suite will be:

  • How do we know our tools aren’t discriminating?
  • Can we prove our hiring process is fair?
  • What happens if we get added to a class action?

If you don’t have good answers, now is the time to get them.

Final Thought

The Mobley case is a wake-up call. But it’s not the only one coming.

We’re entering an era where AI-powered hiring will face increasing legal scrutiny. The companies that survive and thrive will be those that build transparency, fairness, and human oversight into their processes from the start.

This isn’t about fear. It’s about responsibility. You have powerful tools at your disposal. Use them wisely. Use them fairly. And make sure you can prove it.

Because the next Derek Mobley might have already applied to work at your company. The question is: can you defend what happened to their application?

Is your organization auditing AI screening tools for bias? How are you ensuring interview compliance?