What did the study actually find?

Researchers Jiannan Xu, Gujie Li and Jane Yi Jiang ran a large-scale controlled resume correspondence experiment testing whether AI hiring tools systematically favor resumes that look like AI-generated output.

They found they do — consistently, across multiple major AI models. The term for this is "self-preference bias": the documented tendency of large language models to rank content that resembles their own generated output more favorably than content written by humans or generated by a different model.

The study, described in a paper accepted at the AIES 2025 conference and revised into its final form in February 2026, simulated realistic hiring pipelines across 24 occupations. Candidates whose resumes were generated by the same AI model the screener used were 23% to 60% more likely to be shortlisted than equally qualified candidates who wrote their own resumes. Content quality was controlled; the only variable that changed outcomes was whether the resume looked like the screener's own output.
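To make the experimental setup concrete, here is a minimal Python sketch of a correspondence-style audit under assumed interfaces. The screen callback stands in for whatever model the ATS calls, and the resume pairs, job title and toy screener are invented for illustration; none of this is the authors' actual pipeline.

```python
# Minimal sketch of a correspondence-style audit for self-preference bias.
# Assumption: `screen` wraps whatever LLM the ATS uses and returns True
# if the resume is shortlisted for the given job description.
from typing import Callable

def shortlist_rate(resumes: list[str], job: str,
                   screen: Callable[[str, str], bool]) -> float:
    """Fraction of resumes the screener shortlists for one job posting."""
    return sum(screen(resume, job) for resume in resumes) / len(resumes)

def self_preference_gap(pairs: list[tuple[str, str]], job: str,
                        screen: Callable[[str, str], bool]) -> float:
    """Shortlisting-rate gap between matched LLM-written and human-written
    resumes. Each pair holds the same candidate's qualifications, written
    once by the candidate and once by the screener's own model."""
    human, llm = zip(*pairs)
    return (shortlist_rate(list(llm), job, screen)
            - shortlist_rate(list(human), job, screen))

if __name__ == "__main__":
    # Stand-in screener for demonstration; replace with a real model call.
    def toy_screener(resume: str, job: str) -> bool:
        return job.lower() in resume.lower()

    pairs = [
        ("CPA, 6 yrs audit, NetSuite admin.",
         "Staff Accountant with extensive audit experience..."),
        ("Controller, ASC 842 adoption lead.",
         "Staff Accountant and controller background..."),
    ]
    print(self_preference_gap(pairs, "staff accountant", toy_screener))
```

A real audit would swap the toy screener for the production ranking model and hold every other resume attribute constant, which is what makes the measured gap attributable to style alone.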

Why are accounting and finance candidates hit hardest?

Accounting and finance resumes follow strong professional conventions — credentials, software names and dense formatting — that AI screeners read as stylistically unlike their own output.

The study simulated hiring across 24 occupations and found the largest shortlisting disadvantages in business-related fields, specifically naming sales and accounting. The likely reason is format. These professions have their own conventions: specific software names, credentialing formats, industry terminology and tightly structured experience sections. Human-written accounting resumes follow those conventions in ways that don't resemble AI-generated text.

AI screeners read a human-formatted accounting resume — dense with numbers, abbreviations and field-specific shorthand — as stylistically different from what they'd produce. That gap in style, not qualifications, appears to drive the disadvantage.
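One way to make "stylistic distance" concrete is a rough overlap measure between a resume and a sample of the model's own writing. The sketch below uses word-bigram Jaccard similarity purely as an illustrative proxy; the study does not specify this metric, and the example text is invented.

```python
# Toy proxy for stylistic distance: Jaccard similarity over word bigrams.
# Illustration only; not a metric from the study.
def bigrams(text: str) -> set[tuple[str, str]]:
    words = text.lower().split()
    return set(zip(words, words[1:]))

def style_similarity(resume: str, model_sample: str) -> float:
    """Overlap between a resume and a sample of the screener model's own
    writing; lower values suggest a larger stylistic gap."""
    a, b = bigrams(resume), bigrams(model_sample)
    return len(a & b) / max(len(a | b), 1)

human_resume = "CPA. Led PCAOB-aligned audits; NetSuite, CCH Axcess; ASC 842 adoption."
llm_sample = ("Detail-oriented accounting professional with a proven track record "
              "of delivering accurate financial reporting and collaboration.")
print(style_similarity(human_resume, llm_sample))  # near 0: dense shorthand vs. generic prose
```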

For accounting candidates, this isn't abstract. A professionally written resume emphasizing CPA credentials, audit experience and software competencies could rank lower than a less-experienced candidate who had ChatGPT polish their application.

How widespread is AI resume screening in accounting and finance hiring?

Most enterprise hiring for accounting and finance roles now passes through an AI screening layer before any human reviewer sees the application.

Major applicant tracking systems — Greenhouse, Workday, iCIMS and others — added AI ranking and shortlisting features starting around 2023. Large employers and staffing firms use these rankings to cut candidate pools before any human gets involved in the review.

Accounting and finance roles at mid-market and enterprise companies are squarely in scope — staff accountant, financial analyst, controller, audit associate. Application volumes are high enough that AI pre-screening is standard practice. A candidate who doesn't clear the AI layer never reaches a human recruiter, regardless of their qualifications.

AI Hiring Self-Preference Bias — Key Findings (arXiv:2509.00462)
Self-preference bias range: 67% to 82% against human-written resumes across major models
Shortlisting advantage: candidates using the same LLM as the screener are 23%–60% more likely to be shortlisted
Occupations most affected: business-related fields; largest disadvantages in sales and accounting
Occupations tested: 24 occupations in a simulated hiring pipeline
Bias reduction possible: more than 50% through targeted model interventions
Study type: large-scale controlled resume correspondence experiment
Authors: Jiannan Xu, Gujie Li, Jane Yi Jiang
Publication: accepted at AIES 2025; final revision February 9, 2026

What should accounting and finance job seekers do differently?

The AI model you use to prepare your resume affects how screeners rank it — because AI screeners systematically prefer output that resembles their own.

That's genuinely uncomfortable. But it has practical implications. Here's what the research points toward:

Check the employer's ATS — many large employers disclose their applicant tracking system in job postings or on careers pages. If the company uses Greenhouse, Workday or a system known to embed a specific AI model for ranking, that's useful information going in.

Use AI to assist, not author — the study found bias against human-written resumes, but submitting raw AI output isn't the fix. Write your resume, then use an AI tool to refine language and formatting. You get the polish without losing the substance a human reviewer needs in round two.

Keep the credentials and field-specific language — CPA, CMA, NetSuite, CCH Axcess, ASC 842, PCAOB. These are signals to human reviewers that matter. Don't trade professional credibility for AI score optimization. A screener's preference for AI-shaped text shouldn't determine which professional terms you include.

Diversify where you apply — smaller firms, boutique practices and roles posted through professional networks like the AICPA's job board are more likely to reach a human recruiter first. Larger employers aren't off the table, but spreading across channels reduces dependence on any single AI system's preferences.

If you're an accounting or finance professional thinking through what your career path looks like given the pace of AI change, our Ikigai Wayfinder tool can help you map the intersection of what you're skilled at, what the market values and where AI is most and least likely to reshape your role.

What should hiring teams and firms know?

Hiring teams relying on AI screening tools may be excluding qualified accounting candidates without realizing self-preference bias is shaping their shortlists.

The researchers found the bias is reducible — by more than 50% through targeted interventions on the model's self-recognition capabilities. Most employers aren't applying those interventions because they don't know the problem exists. If your firm or department uses AI screening tools, ask vendors directly whether self-preference bias testing has been conducted and what mitigation steps are in place.

There's also a liability angle. Hiring on AI-screened shortlists without understanding this dynamic can produce less diverse and less qualified candidate pools — and the legal picture around AI-assisted hiring is still developing at both the federal and state level.

The deeper problem: AI is now both sides of the hiring conversation

What the study describes isn't just a bias problem — it's a structural one. LLMs are now used by job seekers to write resumes and by employers to screen them. When both sides use AI, the screening tool is effectively evaluating how much a resume resembles its own output, not how qualified the candidate is. That's a feedback loop with no obvious self-correcting mechanism.
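A toy simulation makes the structural point visible. All numbers below are invented: candidates get independent qualification and style-similarity scores, and a screener that ranks purely on similarity to its own style produces a shortlist whose average qualification is no better than the pool at large.

```python
# Toy simulation of the feedback loop: a screener that ranks by stylistic
# similarity to its own output shortlists on style, not qualification.
# All values are synthetic and for illustration only.
import random

random.seed(0)
POOL, SHORTLIST = 200, 20

# Each candidate: a true qualification score and a style-similarity score
# (how much the resume resembles the screener model's own writing).
candidates = [
    {"qualification": random.random(), "style_similarity": random.random()}
    for _ in range(POOL)
]

# Screener with self-preference bias: ranks purely on style similarity.
biased = sorted(candidates, key=lambda c: c["style_similarity"], reverse=True)[:SHORTLIST]
# Screener without the bias: ranks on qualification.
merit = sorted(candidates, key=lambda c: c["qualification"], reverse=True)[:SHORTLIST]

def mean_qualification(group):
    return sum(c["qualification"] for c in group) / len(group)

print("biased shortlist mean qualification:", round(mean_qualification(biased), 2))
print("merit shortlist mean qualification: ", round(mean_qualification(merit), 2))
# With independent scores, the biased shortlist averages roughly the pool mean,
# while the merit-ranked shortlist averages far higher.
```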

For accounting and finance specifically, the profession's emphasis on precision, credentials and field-specific terminology may work against candidates in AI-screened systems — despite being exactly what makes someone effective in the role. The traditional resume conventions of the profession are now a potential liability in a market where the first reviewer isn't human.

The regulatory response is moving, but slowly. New York City's AI hiring bias law has been in effect since 2023. Similar legislation has been proposed at the federal level. The study's authors argue that AI fairness frameworks need to expand beyond demographic bias to include AI-to-AI interaction bias — a category of discrimination that current employment law wasn't designed to address.


Fact-checked by Sydney Smart