Key Takeaways
- CPA AI vendor due diligence should happen before client files, tax data, payroll records, or workpapers enter the tool.
- The first questions are about confidentiality: retention, model training, subprocessors, access, deletion, and client consent.
- Accuracy controls matter as much as security controls because a wrong AI answer can still look professional.
- Partners should require a one-page scorecard and approval record before staff use a vendor with client data.
Why should CPA firms question AI vendors first?
CPA firms should question AI vendors first because client data obligations do not pause when a tool is labeled artificial intelligence.
AI vendor demos are built to show speed. The partner decision should start somewhere else: what happens when the firm uploads client data? CPA.com/AICPA risk-control guidance warns that entering data into a generative AI tool means sharing that data with the tool's owner. For CPA firms, that can include tax returns, payroll details, bank records, owner compensation, financial statements, audit support, and advisory workpapers.
The AICPA Code of Professional Conduct includes a Confidential Client Information Rule. Firms also have engagement terms, privacy promises, state requirements, IRS rules for tax information, and client expectations. A vendor's AI feature does not remove those duties.
What data questions should every vendor answer?
Every vendor should answer what data it collects, stores, trains on, shares, deletes, logs, and exposes to administrators or subprocessors.
Start with the data path. What exactly leaves the firm's system? Are files copied into the vendor platform? Are prompts stored? Are outputs retained? Are embeddings created? Can vendor employees access uploaded documents? Can the vendor use client data to train, fine-tune, evaluate, or improve models?
Ask for answers in writing. If the vendor says "your data is secure" but cannot explain retention periods, subprocessors, encryption, access controls, deletion rights, or model-training exclusions, the firm does not have enough information to approve client-data use.
Partners can use a simple standard in the approval file: "No client data enters an AI vendor until the firm can explain where the data goes, who can access it, and how it leaves." That turns a vague security concern into an approval condition.
What accuracy and review questions matter most?
The key accuracy questions cover source support, hallucination controls, confidence signals, exception routing, reviewer workflow, and error reporting.
CPA firms should not only ask whether the tool is secure. They should ask what happens when it is wrong. Can the AI cite the document page it used? Can it distinguish source data from generated explanation? Does it show confidence levels? Does it flag incomplete documents? Does it keep a reviewer trail?
CPA.com/AICPA guidance notes that AI models are not infallible. That matters in tax, CAS, audit, and advisory work because the output may look polished even when it misreads a document or invents a conclusion. A vendor that cannot support human review is not ready for high-risk client workflows.
| Due-Diligence Area | Question to Ask | Pause If Vendor Says |
|---|---|---|
| Model training | Can client data train or improve any model? | "It depends on your settings" |
| Retention | How long are prompts, files, outputs, and logs kept? | "We retain what is needed" |
| Subprocessors | Who can process or access client data? | "See our general vendor list" |
| Accuracy | How does the tool support review and source tracing? | "The model is highly accurate" |
| Audit trail | Can we see who used AI on which client record? | "We do not expose that log" |
How should firms handle client consent and disclosure?
Client consent and disclosure should be reviewed before AI processes confidential client information or changes engagement delivery.
CPA.com/AICPA disclosure guidance says firms should assess whether compulsory disclosure requirements exist and consider whether transparent voluntary disclosure would build trust. That does not mean every use of AI requires the same client notice. It does mean firms should make an intentional decision.
A firm might allow AI for internal non-client templates without disclosure, require engagement-letter language for tools that process client records, and require specific consent for tools involving tax return information, payroll data, or sensitive advisory material. The wrong answer is letting each staff member decide in the moment.
What security evidence should partners request?
Partners should request SOC reports, security policies, encryption details, access controls, incident response terms, and deletion procedures.
Traditional vendor due diligence still applies. AI does not replace SOC 2 reports, penetration testing, encryption, single sign-on, role-based access, data residency, backup procedures, or breach notification terms. If anything, AI makes those questions more important because more unstructured client data may move through the system.
Set two bright-line numbers in policy: 100% of approved AI vendors must support firm-managed access, and 0% of client data may enter a tool that cannot answer retention and deletion questions. The percentages are simple, but they force partners to decide before staff experiment.
A $2,000 pilot can become a $20,000 cleanup if the firm later discovers that client files were retained, inaccessible for audit, or processed under terms nobody reviewed.
The FTC's privacy and security guidance gives a simple baseline: businesses should protect sensitive information and keep security current. For CPA firms, that baseline should become a vendor file with evidence, not just a sales call note.
How should a CPA firm score an AI vendor?
A CPA firm should score AI vendors by data risk, workflow risk, security evidence, accuracy controls, review support, and contract terms.
Use a one-page scorecard. Give the vendor a red, yellow, or green rating in six areas: client data exposure, model-training risk, confidentiality and disclosure fit, accuracy and review controls, audit trail quality, and contract/security evidence. A vendor should not move to client-data use if any critical area is red.
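The scorecard's gating rule can be expressed as a simple check. The sketch below is illustrative, not a prescribed tool: the six area names follow the list above, and the "no red in any critical area" threshold is the approval condition described there.

```python
# Hypothetical one-page AI vendor scorecard gate.
# Area names mirror the six scorecard areas described above;
# the ratings in the sample vendor are illustrative.

CRITICAL_AREAS = {
    "client_data_exposure",
    "model_training_risk",
    "confidentiality_disclosure_fit",
    "accuracy_review_controls",
    "audit_trail_quality",
    "contract_security_evidence",
}

def cleared_for_client_data(scorecard: dict) -> bool:
    """Return True only if every critical area is rated and none is 'red'."""
    missing = CRITICAL_AREAS - scorecard.keys()
    if missing:
        raise ValueError(f"scorecard incomplete: {sorted(missing)}")
    return all(scorecard[area] != "red" for area in CRITICAL_AREAS)

vendor = {
    "client_data_exposure": "green",
    "model_training_risk": "red",  # vendor may train on client data
    "confidentiality_disclosure_fit": "green",
    "accuracy_review_controls": "yellow",
    "audit_trail_quality": "green",
    "contract_security_evidence": "green",
}

print(cleared_for_client_data(vendor))  # False: one critical area is red
```

A yellow rating passes the gate but should still carry a documented follow-up item in the approval file; only red blocks client-data use outright.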
NIST's AI Risk Management Framework is useful as a broad structure: govern, map, measure, and manage. CPA firms can translate that into a practical approval flow: assign an owner, map the data, test the output, document review, and manage incidents.
What should partners do before staff upload client files?
Partners should approve the vendor, document the use case, restrict data, train staff, and keep evidence before client files are uploaded.
The first approved use should be narrow. Pick one workflow, one owner, one client-data category, one review rule, and one evidence location. Do not turn on a vendor across the whole firm because the demo looked useful.
For example, a firm might approve one AI workpaper tool for accounts payable testing on three low-risk clients, with no payroll data, no tax return data, and a manager review log. That is a pilot. Letting every department upload whatever files they want is not a pilot; it is uncontrolled adoption.
The goal is not to block AI. It is to make adoption defensible. When a client, regulator, carrier, or partner asks why the firm trusted a tool with client data, the firm should be able to show the questions it asked, the risks it accepted, and the controls it put in place.
The partner rule
If the vendor cannot explain what happens to client data, the firm should not upload client data. Utility comes after confidentiality.
