Key Takeaways
- The IRS formalized AI governance in IRM 10.24.1 (February 10, 2026), authorizing AI-assisted audit selection, document generation, and exam support
- Practitioners have the same audit defense tools for AI-selected returns as traditionally-selected ones, but response strategy should emphasize data that explains pattern anomalies
- When a client's return is flagged by algorithm, the IRM requires human oversight and documentation — there's a paper trail you can request
- Client communication matters: AI-selected audits are now routine and don't indicate wrongdoing, fraud, or criminal scrutiny
- Track how your AI-flagged audit responses resolve — this is emerging practice and your firm's success patterns are valuable
What is IRM 10.24.1 and what did it authorize?
On February 10, 2026, the IRS formalized its internal manual policy for using artificial intelligence in audit selection and exam support. This wasn't a new announcement of AI adoption — the IRS has been using AI-assisted models since 2025. IRM 10.24.1 established the first official governance framework: what AI can do, how human review fits into the process, and what documentation requirements apply.
The policy authorizes three uses: audit selection (deciding which returns to examine), document generation (creating exam request letters and reports), and exam support (analyzing taxpayer data during the audit). The critical provision in IRM 10.24.1 is the human oversight mandate. Every AI-flagged return must be reviewed by a human revenue agent or manager before an exam notice is issued. The AI doesn't decide alone. It flags patterns; a person makes the final call.
For practitioners, this means one thing: when your client gets an audit notice, you can ask whether the return was initially flagged by an AI model and, if so, what documentation supports the human override decision. That paper trail exists under IRM 10.24.1.
How is the IRS actually using AI in the audit selection process today?
The IRS is deploying Agentforce (Salesforce's agentic AI platform) for taxpayer-facing customer service. Separately, it's using pattern-matching AI models in the exam selection function. The two are different systems serving different purposes, but both are covered under IRM 10.24.1.
In audit selection specifically, the AI models identify returns that deviate from expected patterns. "Expected patterns" are built from historical data: how much revenue typically appears for a business of a certain size and industry, what percentage of revenue is usually expensed, how often deductions cluster by category. A return that falls outside statistical norms gets flagged. A human revenue agent then reviews the flagged return and decides whether to proceed with an exam.
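At its core, the selection logic described above is anomaly detection against peer statistics. The sketch below is a simplified, hypothetical illustration of that idea, not the IRS's actual model: the field names, peer figures, and the three-standard-deviation threshold are all assumptions chosen for illustration. It flags a return when a ratio such as deductions-to-revenue falls too far from the peer mean.

```python
from statistics import mean, stdev

# Hypothetical peer data: deduction-to-revenue ratios for similar-sized
# businesses in the same industry (illustrative numbers only).
peer_ratios = [0.41, 0.38, 0.44, 0.40, 0.39, 0.43, 0.42, 0.37, 0.45, 0.40]

def flag_return(deductions: float, revenue: float, peers: list[float],
                threshold: float = 3.0) -> tuple[bool, float]:
    """Return (flagged, z_score) comparing this return's ratio to its peers."""
    ratio = deductions / revenue
    mu, sigma = mean(peers), stdev(peers)
    z = (ratio - mu) / sigma
    return abs(z) > threshold, z

# A return with deductions well above the peer norm gets flagged for
# human review; one inside the normal range does not.
print(flag_return(deductions=620_000, revenue=1_000_000, peers=peer_ratios))
print(flag_return(deductions=410_000, revenue=1_000_000, peers=peer_ratios))
```

The real models presumably use far richer features than a single ratio, but the structure of the flag is the point: a deviation from a peer baseline, followed by human review. That structure is what the response strategy below is built around.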
According to the GAO's early 2026 report on IRS AI skills gaps, the IRS has more returns to examine than staff to handle them. AI is being used to triage: identify which returns are most likely to have discrepancies worth pursuing, so limited auditor time goes to the highest-risk cases. This is defensible policy. It's also a permanent shift in how the IRS operates.
The practical impact: algorithmic audit selection will become the majority of audit triggers by 2027. Practitioners who understand how the models work can adjust response strategy accordingly.
How does an AI-selected audit differ from a traditionally-selected one?
In process, not much. The exam notice looks the same. The documentation requests are the same. Taxpayer rights are identical. The difference is in response strategy and what evidence matters most.
A traditionally-selected audit is typically triggered by patterns an experienced revenue agent noticed: unusually high deductions for a business type, or inconsistencies flagged during routine compliance checks. When you respond, you're explaining anomalies to a human who already has a hypothesis about what's unusual.
An AI-selected audit is triggered by statistical deviation from large historical datasets. The algorithm has no intuition or context. It identifies something like: "This return's meal and entertainment expenses are 3.2 standard deviations above the mean for this industry and revenue level." Your response should lead with quantified data explaining why the deviation is legitimate: "Our client's business model changed in 2024. They shifted from local operations to franchise sales, which involved substantially more client entertainment. Here is the revenue growth and the corresponding industry peer data showing that the expense ratio is now normal."
With algorithmic flagging, context and data explanation matter more than narrative. The revenue agent reviewing the AI flag will be looking at: Does the taxpayer's financial profile make sense given their business? If yes, the audit is closed quickly. If no, it escalates.
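To make the data-first response concrete, here is a hypothetical worked example. All figures are invented for illustration: the same standard-deviation arithmetic that produced the flag shows a large deviation against the broad industry baseline but a small one against a peer cohort that matches the client's new franchise-sales business model.

```python
def z_score(client_ratio: float, peer_mean: float, peer_std: float) -> float:
    """How many standard deviations the client's expense ratio sits from a peer baseline."""
    return (client_ratio - peer_mean) / peer_std

# Client's meals-and-entertainment expense as a share of revenue (hypothetical).
client_ratio = 0.065

# Against the broad industry baseline, the return looks anomalous...
print(z_score(client_ratio, peer_mean=0.025, peer_std=0.0125))  # ~3.2 standard deviations

# ...but against peers that share the client's franchise-sales model,
# the same ratio is unremarkable. This is the comparison to put in
# front of the reviewing revenue agent.
print(z_score(client_ratio, peer_mean=0.060, peer_std=0.010))   # ~0.5 standard deviations
```

The persuasive move isn't disputing the arithmetic behind the flag; it's showing that the baseline the algorithm used no longer fits the client's business.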
What rights do taxpayers and practitioners have when AI was involved in the selection?
This is the frontier question in tax practice right now, and the honest answer is that the IRM 10.24.1 framework exists, but the detailed taxpayer rights specific to AI-flagged audits are still emerging. Here's what is established.
First, under IRM 10.24.1, human oversight is mandatory. You can ask the IRS whether a return was flagged by AI and request documentation of the human review step. The IRS is required to have that documentation if the policy was followed. If they don't have it, that's a process failure worth challenging.
Second, you can request the logic behind the AI flag. Because the IRS uses statistical models, it can explain in general terms why a return was selected: "Deductions fell outside normal ranges for your industry." The IRS can't (and won't) disclose the exact algorithmic formula — that's protected as government methodology — but a general explanation of what was unusual is within your rights to request.
Third, taxpayer rights under Circular 230 and the standards for tax practice apply regardless of audit origin. If the revenue agent's examination is unreasonable or lacks adequate supporting documentation, you have the same remedies whether the audit was selected by AI or by human judgment. The audit defense doesn't change. The selection method doesn't alter substantive law.
How should practitioners adjust their response and client communication?
Adjust your internal documentation and response protocol. When a client receives an exam notice, ask immediately: "Do you have records showing how the IRS selected your return or any indication this was an automated flag?" Request the examination file before the first meeting with the revenue agent. If the AI flag is documented in the file, it will be obvious.
If AI selection is documented, your response brief to the revenue agent should acknowledge it: "We understand this return was selected through pattern-matching analysis. We're providing data context that explains the deviations noted and demonstrates consistency with industry and peer financial profiles." Lead with quantitative explanation. Data-first response is more persuasive when the initial flag came from data analysis.
In client communication, framing matters. The framing that scares a client: "You're being audited because the IRS suspects fraud." The framing that informs: "Your return was flagged by an algorithm that identified deductions slightly outside normal ranges for your industry. This is now routine procedure, not a sign of wrongdoing." Use the second version. It's accurate.
Communicate that AI-selected audits are now standard procedure. The IRS has more returns than staff. They use AI to prioritize. This is policy, not suspicion. Audit defense proceeds the same way as always: organized records, clear explanations, substantiation of claimed deductions. The only difference is leading with data explanation when responding to the revenue agent.
What this means for your practice
First, track your AI-flagged audit resolutions over the next 12 months. Document: How many exam notices involved AI-flagged returns (you can ask)? Of those, how many were closed without adjustment? How many resulted in small adjustments? Which response strategies were most effective? This is emerging knowledge in tax practice. Your firm's data is valuable.
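As a minimal sketch of that tracking habit (the field names and outcome categories are assumptions, not a prescribed format), even a simple case log and a tally is enough to see which response strategies are closing AI-flagged exams:

```python
from collections import Counter

# Hypothetical case log: one record per exam notice the firm handled.
cases = [
    {"ai_flagged": True,  "outcome": "no_change",        "strategy": "peer_data_brief"},
    {"ai_flagged": True,  "outcome": "small_adjustment", "strategy": "narrative_only"},
    {"ai_flagged": False, "outcome": "no_change",        "strategy": "standard_response"},
]

ai_cases = [c for c in cases if c["ai_flagged"]]
print(f"AI-flagged exams: {len(ai_cases)} of {len(cases)}")
print("Outcomes:", Counter(c["outcome"] for c in ai_cases))
print("Strategy vs. outcome:", Counter((c["strategy"], c["outcome"]) for c in ai_cases))
```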
Second, IRM 10.24.1 enforcement and scope will evolve. By 2027, we'll have clearer case law on whether taxpayer rights differ for AI-selected audits. Watch IRS guidance, Tax Court decisions, and AICPA guidance for updates. Practitioners who understand the frontier now have a competitive advantage when taxpayers need representation on AI-flagged returns.
Third, continue to push back on overreach. If the IRS deviates from IRM 10.24.1 (human review skipped, documentation absent, AI logic not disclosed), that's worth fighting. The policy exists to protect taxpayers. Compliance with it is a practitioner's responsibility to verify and enforce.
Where Uncertainty Remains
The IRM 10.24.1 framework is clear on process. What's still uncertain is whether courts will recognize different evidentiary standards for AI-flagged audits, or whether the IRS's use of AI models will eventually face constitutional or statutory challenges on grounds of taxpayer due process or transparency. Until those questions are resolved in case law, practitioners should treat AI-flagged audits as strategically similar to any other audit — prepare comprehensive documentation, explain any anomalies with data — while staying alert for regulatory changes.
