Key Takeaways
- A small firm AI policy should define approved tools, prohibited data, review standards, client communication rules, and incident escalation.
- The highest-risk employee behavior is not "using AI." It is pasting identifiable client data into a tool the firm has not reviewed.
- AI outputs should be treated as drafts until a qualified person checks source support, math, citations, tax logic, and client-specific assumptions.
- The policy should be short enough to use: a three-page policy, employee acknowledgement, and 30-minute rollout can beat a long document nobody follows.
Why does a small firm need an AI policy now?
A small firm needs an AI policy because staff may already be using AI before partners approve tools, data rules, or review standards.
The risk is practical. A preparer asks ChatGPT to rewrite a client email. A bookkeeper uses Copilot to summarize messy transaction notes. A CAS manager asks an AI tool to draft a payroll memo. Nobody thinks they are creating a governance issue. But if client names, payroll data, tax positions, bank details, or workpaper evidence have entered an unapproved system, the firm has a problem.
CPA.com/AICPA risk-control guidance warns that CPA firms handle financial and personal information that must be protected, and that entering data into a generative AI tool means sharing it with the tool owner. The same guidance flags reliability risk: AI models can produce answers that sound usable but still require professional review.
What should the policy say about approved AI tools?
The policy should list approved AI tools, approved users, allowed workflows, blocked workflows, and the partner responsible for review.
Do not start with a broad sentence like "employees may use AI responsibly." That is too vague to enforce. Start with a tool list. For example: Microsoft Copilot may be used inside firm-controlled accounts for internal drafting; ChatGPT Team may be used for generic research and non-client templates; public free tools may not receive firm or client confidential information.
Put 100% of approved tools on firm-owned accounts, and set zero tolerance for client data in personal or public accounts. That bright line is easier to enforce than a judgment call buried in a long policy.
The cost test is practical too: the value of a $1,000 AI subscription vanishes against a $5,000 remediation scramble if staff accidentally upload client payroll or tax data to the wrong system.
The policy should also explain how new tools get approved. A staff member should not be able to install a browser extension, upload trial-balance data, and call it a pilot. Require a short request: tool name, vendor, use case, data involved, reviewer, cost, and whether the tool can retain or train on firm data.
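To keep those requests consistent, the policy can attach a short intake form. A minimal sketch using the fields listed above; the layout and wording are illustrative, not prescriptive:

```
AI Tool Request
Tool name:
Vendor:
Use case:
Data involved (any client-identifiable data? yes/no):
Reviewer:
Cost:
Can the vendor retain or train on firm data? (yes/no/unknown)
Partner decision:                              Date:
```

A request that answers "unknown" on retention is itself useful information: if the vendor's data terms are unclear, the tool is not ready for client work.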
What client data should never go into public AI?
Public AI tools should not receive client names, tax IDs, payroll data, bank records, financial statements, contracts, or identifiable workpapers.
This is the center of the policy. Employees need concrete examples, not abstract warnings. "Do not paste confidential data" is easy to agree with and hard to apply under deadline pressure. Spell it out.
Prohibited inputs should include Social Security numbers, EINs, account numbers, payroll registers, W-2s, 1099 detail, bank statements, customer lists, vendor lists, financial statements, board materials, audit evidence, tax returns, engagement letters, and client email threads. Even if an employee removes the client name, a combination of location, transaction detail, and unusual facts may still identify the client.
How should staff review AI-generated work?
AI-generated work should be reviewed as a draft, with source checks, math checks, client-specific verification, and documented signoff when risk is high.
A small firm AI policy should say that AI output cannot be copied directly into client deliverables without review. That includes emails, memos, financial summaries, tax research summaries, advisory reports, spreadsheet formulas, and explanations sent through portals.
Use a tiered rule. Low-risk internal wording can be reviewed by the employee using it. Client-facing explanations need review by the engagement owner. Tax, audit, attestation, payroll compliance, and financial statement language need review by a qualified professional before delivery. If an AI output affects a client recommendation, the source support belongs in the file.
A practical firm rule can be this direct: "AI may help prepare the first draft, but the engagement owner remains responsible for the final work product." That sentence is easy to train, easy to remember, and hard to misinterpret during busy season.
| Policy Area | Practical Rule | Example |
|---|---|---|
| Approved use | Allow generic drafting and internal templates | Rewrite a non-client checklist for clarity |
| Prohibited data | Block identifiable client information in public tools | No tax return, payroll, bank, or workpaper uploads |
| Human review | Treat output as a draft until checked | Verify tax citations before sending a memo |
| Client communication | Require professional review before delivery | Partner reviews AI-assisted advisory language |
Should clients be told when the firm uses AI?
Client disclosure depends on law, contract terms, professional duties, and firm judgment, but the policy should create a review process.
CPA.com/AICPA disclosure guidance tells firms to assess whether legal or ethical requirements apply and to consider the trust value of transparent disclosure. The policy does not need one universal answer for every engagement. It does need a decision path.
For example, the firm may decide that generic AI-assisted drafting does not require engagement-level disclosure, while AI tools that process client records, support advisory analysis, or connect to client systems require engagement-letter language or client consent. The key is that staff should not make that call alone.
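A minimal sketch of that decision path, built from the examples in this section. The branch points are illustrative; the firm's own counsel and engagement terms control:

```
Does the tool process client records, support advisory
analysis, or connect to client systems?
  Yes -> partner review; add engagement-letter language
         or obtain client consent before use
  No  -> does a legal, regulatory, or contractual
         disclosure requirement apply?
           Yes -> disclose as required
           No  -> treat as generic AI-assisted drafting;
                  no engagement-level disclosure needed
```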
What incident rules belong in the policy?
The policy should tell employees exactly what to do if they paste client data into an unapproved tool or receive unsafe output.
Employees hide mistakes when the policy sounds punitive and vague. Make the escalation path boring and fast. If client data enters an unapproved tool, the employee should notify the partner or operations lead the same day, preserve the prompt and output if available, stop further use, and help determine what information was exposed.
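On the escalation page itself, those steps can appear as a same-day checklist. This is a restatement of the paragraph above, not a new requirement:

```
Same-day incident checklist
1. Stop using the tool for that matter.
2. Notify the responsible partner or operations lead today.
3. Preserve the prompt and the output if available.
4. Help identify exactly what client information was exposed.
5. Do not delete the conversation or try to fix it quietly.
```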
The FTC's privacy and security guidance is a useful baseline for small businesses: collect only what you need, keep sensitive information safe, and honor privacy promises. A firm AI policy should turn that into day-to-day behavior.
How do you roll out the policy without slowing the firm?
Roll out the policy with a short document, employee acknowledgement, approved-tool list, examples, and a 30-minute training session.
The policy should fit the firm. A six-person bookkeeping shop does not need a 40-page AI governance manual. It needs clear defaults. A good first version can include one page of allowed and prohibited uses, one page of client data rules, one page of review and escalation rules, and one signature page.
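Laid out as a skeleton, that first version might look like this (the section names are illustrative):

```
Page 1  Allowed and prohibited uses
        Approved tools, approved users, blocked workflows
Page 2  Client data rules
        Prohibited inputs, with concrete examples
Page 3  Review and escalation
        Draft-until-reviewed rule, tiered review, same-day incident steps
Page 4  Acknowledgement
        Employee signature and date
```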
Microsoft's responsible AI principles are broader than a small firm policy, but the themes translate well: reliability, safety, privacy, transparency, and accountability. For a small accounting firm, accountability means a partner owns the tool list, engagement owners review client-facing work, and employees know where the line is before the busy season starts.
The firm rule
If an employee would not email the data to an unknown vendor, they should not paste it into an unapproved AI tool. Start there, then build better approved workflows.
