Your Hiring Algorithm Might Be the Most Biased Person in the Room
- Feb 24
- 4 min read
For years, HR leaders have been sold a compelling story: replace subjective human judgment with objective algorithms, and bias disappears. It's a seductive idea. And it's only half true.
AI has genuinely transformed how companies hire. Résumé screening that once took weeks now takes seconds. Candidate pools that used to be limited by a recruiter's network are now global. Organizations like Hilton report cutting time-to-fill by 90%. The efficiency gains are real, measurable, and hard to argue with.
But a growing body of research — and a trail of very public failures — suggests that in our rush to automate bias out of hiring, many organizations have accidentally automated it in.

The Problem With Training on the Past
Amazon discovered this the hard way. In 2018, the company quietly scrapped an AI recruiting tool it had spent years developing after engineers found it was systematically penalizing résumés that included the word "women's" — as in "women's chess club" or "women's college." The system had been trained on a decade of Amazon's own hiring data. That data reflected a workforce that was historically male. So the algorithm did exactly what it was designed to do: it learned from the past. The past just happened to be biased.
This is the central irony of algorithmic hiring. These systems don't invent new forms of discrimination — they inherit and then scale existing ones. A human recruiter's bias affects one candidate at a time. An algorithm's bias affects everyone who applies.
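The mechanism is easy to reproduce. Here is a deliberately tiny sketch (toy data and scikit-learn; nothing resembling Amazon's actual system) of how a classifier trained on biased hire/no-hire labels learns to penalize a proxy term:

```python
# Toy illustration only: a text classifier trained on historically
# biased hire/no-hire labels learns to penalize a proxy term.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data in which past hiring favored one group.
resumes = [
    "chess club captain software engineer",
    "software engineer hackathon winner",
    "women's chess club captain software engineer",
    "women's college software engineer hackathon winner",
]
hired = [1, 1, 0, 0]  # biased historical outcomes, not true ability

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect which words the model now treats as signal.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1]))
```

On data like this, the weight on "women" comes out negative, not because anyone wrote a sexist rule, but because the label history made the word look predictive. Scale the same dynamic up to millions of real résumés and you get Amazon's problem.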
Researchers at Northeastern University found that AI-driven job advertising algorithms on major platforms were directing STEM job postings predominantly toward men and lower-wage listings predominantly toward women — not because any employer asked for that, but because the AI was optimizing for historical engagement patterns. Nobody programmed sexism in. The data just carried it forward.
"Objective" Is a Design Choice
Here's what the research increasingly makes clear: there is no such thing as a neutral algorithm. Every AI hiring system encodes a definition of what a "good" candidate looks like, based on who got hired and succeeded in the past. That definition is then applied, at scale and with apparent authority, to everyone who applies.
Researchers Seppälä and Małecka, writing in Big Data & Society, put it directly: the promise that AI removes human bias rests on assumptions that don't hold up under scrutiny. What the algorithm calls "objective" is really just one particular version of fairness — baked in by whoever built it, and invisible to everyone using it.
That invisibility is the real risk. When a recruiter passes on a candidate unfairly, there's at least a human in the chain who might catch it, question it, or be held accountable. When an algorithm does it, the decision can look clean, defensible, and data-driven — right up until someone looks closely enough to see the pattern.
Candidates Already Know Something Is Off
Employees and job seekers are not oblivious to this dynamic. A 2023 Pew Research Center survey of more than 11,000 Americans found that 71% oppose AI making final hiring decisions. Two-thirds say they would simply not apply for a job if they knew an algorithm had the final say.
The skepticism cuts across demographic lines, though with different textures. Some respondents — particularly those without traditional career histories — expressed cautious hope that an algorithm might evaluate their potential more fairly than a human with embedded assumptions. Others were blunt: "AI is as biased as the people who designed it, allowing structural biases based on race or socioeconomic status to persist unchallenged."
Both things can be true at once. AI can reduce some forms of human bias while amplifying others. The outcome depends almost entirely on how the system was built — and how much anyone is paying attention to what it's actually doing.
The Regulatory Window Is Closing
Governments aren't waiting for companies to self-regulate. The EU's AI Act now classifies hiring algorithms as high-risk applications, requiring transparency, bias testing, and candidate complaint rights. New York City requires employers using automated hiring tools to commission independent bias audits and disclose the practice to candidates. The EEOC has confirmed that civil rights law applies to algorithmic hiring decisions — meaning employers can be held liable for discriminatory outcomes even if they never intended them.
In 2024, a U.S. federal court allowed a discrimination claim to proceed against an applicant tracking system vendor itself, on the theory that a screening provider can be held liable under Title VII as an agent of the employers that rely on it. That case alone should be a wake-up call for legal and HR teams across the country.
The window for voluntary, proactive action is still open. But it's closing.
What Good Actually Looks Like
None of this means organizations should abandon AI in hiring. The efficiency gains are too significant, and the alternative — inconsistent, gut-driven, deeply human bias — isn't obviously better. What it means is that AI needs to be treated like any other high-stakes business tool: with scrutiny, governance, and accountability.
The organizations getting this right share a few common practices. They don't take vendor claims about fairness at face value — they require independent audits before deployment. They don't train systems on historical hiring data without first interrogating what that history reflects. And critically, they keep humans accountable for consequential decisions rather than outsourcing accountability to the algorithm.
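What does a useful audit actually compute? One common starting point, and the core of the impact ratios New York's rule asks employers to disclose, is the EEOC's four-fifths rule: compare each group's selection rate against the most-selected group's, and flag anything below 0.8. A minimal sketch in Python (the group names, counts, and bare-bones implementation here are illustrative; real audits cover far more):

```python
# Minimal sketch of one audit metric: the impact ratio behind the
# EEOC "four-fifths rule". All numbers below are made up.
from collections import Counter

def impact_ratios(outcomes):
    """outcomes: iterable of (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, ok in outcomes:
        totals[group] += 1
        selected[group] += int(ok)
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening results: 40% of group_a advanced, 24% of group_b.
screened = ([("group_a", True)] * 40 + [("group_a", False)] * 60 +
            [("group_b", True)] * 24 + [("group_b", False)] * 76)

for group, ratio in impact_ratios(screened).items():
    flag = "below four-fifths threshold" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A ratio below 0.8 doesn't prove discrimination, but it tells you exactly where to start looking, and for whom.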
Most importantly, they've stopped asking "does our AI have bias?" — a question that almost always gets answered with "no" — and started asking "what kind of bias does our AI have, and who does it affect?"
That's a harder question. It's also the right one.