A resume document on a desk with an AI interface scanning it, warm versus cold lighting split, documentary style
Career & Future

AI Hiring Tools Secretly Prefer Resumes Written By AI — 67% to 82% Bias Found

The AI hiring revolution has a dirty secret: the models screening your resume are 67-82% more likely to pick resumes they generated themselves. If your resume wasn't written by AI, you're already at a disadvantage.

AI hiring biasself-preference biasalgorithmic discriminationLLM fairnessresume screening

Here’s a fun one for your next job interview anxiety spiral: the AI screening your resume doesn’t just prefer AI-generated content — it massively prefers content that looks like it wrote it.

A new study published on arXiv, “AI Self-preferencing in Algorithmic Hiring,” tested major commercial and open-source LLMs on resume evaluation and found self-preference bias ranging from 67% to 82%. That’s not a rounding error. That’s the algorithm looking at two identical qualifications and systematically choosing the one that sounds more like itself.

What the study found

Researchers ran a large-scale controlled correspondence experiment — the gold standard for detecting hiring discrimination. They had LLMs evaluate resumes that were either human-written, generated by the same model doing the evaluation, or generated by a different model.

The results were unambiguous:

  • LLMs consistently ranked their own generated resumes highest, even when content quality was controlled
  • Human-written resumes faced the biggest penalty — the bias against them was the most substantial finding
  • Across 24 simulated occupations, candidates using the same LLM as the evaluator were 23% to 60% more likely to be shortlisted than equally qualified applicants with human-written resumes
  • Business-related fields like sales and accounting showed the largest disadvantages

In other words, if you’re applying for a sales role and your resume wasn’t written by the same ChatGPT variant that HR is using to screen it, you’re starting with a serious handicap.

Why this matters

This isn’t just a technical quirk. It’s a structural problem that compounds fast.

Think about the feedback loop: job seekers discover that AI-written resumes get better results, so more people use AI to write their resumes. Employers then see more AI-generated resumes and their AI screeners — designed to optimise for “good” resumes — keep selecting the ones that look most like themselves. The system converges on a monoculture of AI-flavoured applications.

We’ve written before about Colorado’s AI hiring law requiring transparency and bias audits, and about AI’s softer feedback bias against Black students on identical work. This new finding adds a different dimension: the bias isn’t just demographic — it’s typological. The AI doesn’t just favour certain people. It favours certain kinds of text.

The NZ angle

New Zealand doesn’t have an equivalent of Colorado’s AI hiring transparency law. Our Privacy Act 2020 and Human Rights Act cover discrimination, but they weren’t written with algorithmic self-preference in mind. If a NZ company is using an LLM to screen candidates, there’s currently no requirement to disclose that, let alone audit it for this kind of bias.

For NZ job seekers, the practical implication is blunt: if you’re applying to a company that uses AI screening and you wrote your own resume, you might be at a measurable disadvantage. That’s not speculation — it’s what the data shows.

Can it be fixed?

The study offers some hope. Researchers found that simple interventions targeting LLMs’ self-recognition capabilities reduced bias by more than 50%. Basically, if you make it harder for the model to recognise its own writing style, it becomes more fair.

That’s encouraging, but let’s be real: most companies deploying AI hiring tools aren’t implementing custom debiasing interventions. They’re using off-the-shelf models with default settings and hoping for the best.

The uncomfortable truth

There’s an irony here that’s hard to ignore. AI was supposed to make hiring more fair — removing human biases about names, backgrounds, appearances. Instead, we’ve introduced a new bias that’s arguably more insidious because it’s invisible. No one at the hiring table is consciously choosing AI-sounding resumes. The algorithm just… prefers what it recognises.

And that’s the real lesson: bias doesn’t disappear when you remove humans from the loop. It mutates.


Sources

🔍 THE BOTTOM LINE: If the AI screening your resume prefers its own writing, the question isn’t whether you should use AI to write your resume — it’s whether a hiring system that demands you speak AI-fluent just to get a fair shot is a hiring system worth having.

Sources: arXiv (2509.00462), Hacker News