A groundbreaking lawsuit is casting a shadow over the use of AI in hiring, with potentially far-reaching implications for companies that rely on Workday’s popular HR software. At the center of the legal storm is a case that accuses Workday’s AI-driven hiring tools of systematically excluding older job applicants, raising alarms about algorithmic bias and discrimination in the workplace.
The lawsuit, *Mobley v. Workday, Inc.*, was filed by Derek Mobley, a Black job applicant over the age of 40, who alleges that Workday’s AI-powered screening tools violated federal antidiscrimination laws. Specifically, Mobley claims that the company’s systems disproportionately reject older candidates, pointing to biased training data and algorithmic design as the root cause.
Workday, a leader in HR software, provides tools that help employers automate hiring processes, from applicant screening to onboarding. Its AI systems score, sort, and recommend candidates based on data-driven models. While Workday itself isn’t an employer, the lawsuit argues that the company can be held liable as an “agent” of its clients, given the influential role its technology plays in hiring decisions.
The case has already made history. In May 2025, a federal court in California granted conditional certification to a nationwide collective of job applicants over 40 who were denied recommendations through Workday’s platform. This collective could potentially include millions of individuals, making it one of the largest collective actions involving AI-driven hiring tools in U.S. employment law history.
The court’s decision underscores a critical issue: the potential for AI systems to perpetuate bias, even when deployed by third-party vendors. Workday’s tools, the lawsuit claims, reflect and amplify biases present in the data used to train its algorithms, resulting in the systematic exclusion of older candidates in violation of the Age Discrimination in Employment Act (ADEA).
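To see how that mechanism works in miniature, here is a hypothetical sketch in Python (using NumPy and scikit-learn): a model trained on historically skewed hiring decisions learns a negative weight on age, even though the synthetic "skill" signal is distributed identically across ages. All data, features, and coefficients are invented for illustration and do not describe Workday’s systems.

```python
# Hypothetical illustration of training-data bias propagating into a model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

age = rng.integers(22, 65, size=n).astype(float)
skill = rng.normal(0.0, 1.0, size=n)  # identical distribution at every age

# Historical outcomes skewed toward younger applicants: the label
# depends on age even though skill is the only legitimate signal.
hired = (skill + 0.06 * (40 - age) + rng.normal(0.0, 1.0, size=n)) > 0

model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([age, skill]), hired
)

print("learned age coefficient:", round(model.coef_[0][0], 3))  # negative
p30 = model.predict_proba([[30.0, 0.0]])[0, 1]  # P(recommend) at 30, avg skill
p55 = model.predict_proba([[55.0, 0.0]])[0, 1]  # P(recommend) at 55, avg skill
print(f"age 30: {p30:.2f}  vs.  age 55: {p55:.2f}")
```

Trained this way, the model recommends the 30-year-old far more often than the equally skilled 55-year-old, reproducing the bias baked into its training labels.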
For employers, the implications are stark. Companies that rely on Workday’s hiring tools could face legal consequences for alleged age discrimination, even if the software is provided by a third party. The court has made it clear that outsourcing hiring functions to AI doesn’t absolve employers of responsibility for ensuring fair and equitable practices.
As the case progresses, it’s shining a light on a broader trend: the growing scrutiny of AI and algorithmic bias in employment. With new regulations in California requiring oversight of automated decision systems, the legal landscape for AI-driven HR technology is rapidly evolving. Employers are being forced to confront the risks of algorithmic exclusion and take proactive steps to ensure compliance.
At a mechanical level, Workday’s hiring tools work like this: employers subscribe to the software, which uses artificial intelligence to analyze applicant profiles and generate scores, rankings, and recommendations. Those outputs are pivotal in determining which candidates move forward in the hiring process, often acting as a gatekeeper that decides whether an application is ever reviewed by a human decision-maker.
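To make the gatekeeping pattern concrete, here is a simplified, hypothetical sketch of a score-and-cutoff screen. The `Applicant` fields, the hand-written scoring weights, and the cutoff are all invented for illustration; a production system would rely on a trained model, and nothing here represents Workday’s actual algorithm.

```python
# Hypothetical score-and-cutoff screen: only applicants above the
# cutoff are ever seen by a human reviewer.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    years_experience: int
    skills_matched: int  # how many posted requirements the resume matches

def score(a: Applicant) -> float:
    # Stand-in scoring formula; a real system would use a trained model.
    return 0.6 * a.skills_matched + 0.4 * min(a.years_experience, 10)

def screen(applicants: list[Applicant], cutoff: float):
    """Split the pool: only 'advanced' ever reaches a human decision-maker."""
    ranked = sorted(applicants, key=score, reverse=True)
    advanced = [a for a in ranked if score(a) >= cutoff]
    rejected = [a for a in ranked if score(a) < cutoff]
    return advanced, rejected

pool = [Applicant("A", 3, 5), Applicant("B", 15, 2), Applicant("C", 8, 4)]
advanced, rejected = screen(pool, cutoff=5.0)
print("advanced:", [a.name for a in advanced])  # ['C', 'B']
print("rejected:", [a.name for a in rejected])  # ['A']
```

The point of the sketch is structural: whatever the scoring function learns or encodes, applicants below the cutoff are filtered out before any human sees them, which is why bias in the score propagates directly into hiring outcomes.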
The lawsuit highlights the broader regulatory context surrounding AI in hiring. In California, rules effective October 1, 2025 make explicit that the Fair Employment and Housing Act (FEHA) applies to employers’ use of automated-decision systems (ADS), mandating ongoing oversight and assessment of algorithmic tools to identify potential discriminatory impacts in hiring, promotions, and evaluations. Similar requirements are likely to emerge in other jurisdictions, both in the U.S. and abroad.
The evidentiary record is also taking shape: Workday’s marketing materials and internal responses have been cited as evidence that its systems generate AI-based recommendations, rather than merely reflecting employer input, a point that bears directly on the claim that the vendor acts as an agent in hiring decisions.
Experts recommend that employers take immediate steps to mitigate risks. This includes auditing all AI or algorithmic tools used in hiring and HR to assess their potential disparate impact on protected groups. Employers must also ensure robust compliance procedures and stay informed about the progress of cases like *Mobley v. Workday*, as they continue to shape the legal landscape for AI-driven HR technology.
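As a concrete first step in such an audit, the sketch below applies the EEOC’s "four-fifths" (80%) rule of thumb for adverse impact to screening outcomes bucketed by age group. The data and column names are hypothetical, and in practice a ratio like this would be paired with statistical tests and legal review rather than used alone.

```python
# Four-fifths (80%) rule check on hypothetical screening outcomes.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str,
                         selected_col: str) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates / rates.max()

# Hypothetical outcomes: did each applicant advance past the AI screen?
outcomes = pd.DataFrame({
    "age_group": ["under_40"] * 200 + ["40_and_over"] * 200,
    "advanced":  [1] * 120 + [0] * 80 + [1] * 60 + [0] * 140,
})

print(adverse_impact_ratio(outcomes, "age_group", "advanced"))
# under_40 advances at 60%, 40_and_over at 30%: a ratio of 0.50,
# well below the conventional 0.80 red-flag threshold.
```

Any group whose ratio falls below 0.80 warrants investigation under this conventional rule of thumb; here the over-40 group advances at half the rate of the under-40 group.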
Conclusion
The *Mobley v. Workday* lawsuit has brought critical attention to the role of AI in hiring and the potential for algorithmic bias to perpetuate discrimination. As the case progresses, it underscores the need for employers and HR technology providers to prioritize fairness, transparency, and compliance in their use of automated decision-making tools. With growing regulatory scrutiny and the potential for collective actions on a massive scale, the legal and ethical implications of AI-driven hiring practices are more pressing than ever. Employers must take proactive steps to audit their AI tools, ensure compliance with evolving regulations, and address the risks of algorithmic exclusion before they result in legal and reputational damage.