Key Takeaways
- Finance AI security starts with a use inventory: which tools are being used, what data enters them, what outputs they create, and who owns review.
- The highest-risk workflows are the financial close, external reporting, tax, treasury, board materials, payment approval, and any process that touches confidential customer, employee, vendor, or banking data.
- AI can draft, summarize, classify, reconcile, and flag exceptions, but CFOs should define where AI cannot approve, release, file, pay, or communicate without human signoff.
- The board or audit committee does not need a model lecture. It needs a clear status memo: current AI uses, high-risk workflows, incidents, control gaps, and the next 30-day cleanup plan.
Why does finance AI need controls before it scales?
Finance AI needs controls before scaling because its outputs can affect reporting, cash movement, confidential data, tax support, and board materials.
Finance AI security is becoming a CFO control problem, not just an IT policy problem. The finance function now uses AI to summarize contracts, draft variance commentary, classify transactions, forecast cash, read invoices, answer management questions, and prepare board materials. Those are useful workflows. They also touch the records investors, lenders, auditors, employees, customers, and vendors rely on.
The adoption pressure is real. Deloitte's April 2025 CFO Signals survey found that 79% of surveyed finance chiefs expected generative AI to help bridge finance skills gaps over the next 24 months. KPMG's U.S. AI in finance report said 88% of surveyed finance functions were already using AI, with 62% using it to a moderate or large degree.
That does not mean every finance team has matching controls. A company can move from one sanctioned AI pilot to ten hidden workflows in a quarter. One analyst uses a public chatbot to summarize customer contracts. A controller tests an AI close tool. AP turns on invoice coding suggestions. FP&A uses a model to draft board commentary. Each use may look small alone. Together, they create a control surface.
The control principle
AI should not get a shortcut around the finance control environment. If an output affects reporting, cash movement, tax, audit evidence, credit decisions, customer communication, or board materials, it needs ownership, review, and documentation.
What should be in the finance AI use inventory?
A finance AI inventory should identify each active tool, owner, data type, output, review requirement, and system connection before approval.
The first control is not a policy. It is a list. CFOs need an inventory of AI use across finance before they can decide what to approve, restrict, or shut down. The inventory should cover sanctioned tools and shadow AI.
At minimum, capture six fields for every AI use case:
- Tool: ChatGPT, Copilot, Claude, ERP-native AI, AP automation, forecasting software, close management software, spreadsheet add-in, or custom workflow.
- Owner: The finance leader accountable for the process, not only the person testing the tool.
- Data entered: Public data, internal-only data, confidential customer/vendor data, payroll data, banking data, tax data, board materials, or financial reporting support.
- Output created: Draft, classification, reconciliation match, anomaly flag, forecast, journal entry suggestion, payment recommendation, memo, or external communication.
- Review required: Who reviews the output, what evidence is retained, and what threshold triggers escalation.
- System connection: Whether the AI tool only reads data, writes back to a source system, or can trigger an action.
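As a minimal sketch, each use case could be captured as a structured record so the inventory can be filtered, sorted, and reported on. The field names and example values below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

# Illustrative schema only; field names and allowed values are assumptions,
# not a standard. Adapt to the company's own inventory template.
@dataclass
class AIUseCase:
    tool: str                 # e.g., "AP automation invoice coding"
    owner: str                # accountable finance leader, not only the tester
    data_entered: list[str]   # e.g., ["confidential vendor data", "banking data"]
    output_created: str       # e.g., "payment recommendation"
    review_required: str      # named reviewer, evidence retained, escalation threshold
    system_connection: str    # "read-only", "writes back", or "can trigger action"

example = AIUseCase(
    tool="AP automation invoice coding",
    owner="AP manager",
    data_entered=["confidential vendor data"],
    output_created="invoice coding suggestion",
    review_required="AP supervisor approves coding before posting; approval retained",
    system_connection="writes back",
)
```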
This mirrors the practical intent behind the NIST AI Risk Management Framework. NIST's AI RMF and Playbook organize AI risk work around governance, mapping, measurement, and management. For a finance team, mapping starts with knowing where AI is already present.
| Inventory Field | Why It Matters | CFO Control Question |
|---|---|---|
| Tool and owner | Prevents orphaned pilots and unclear accountability | Who signs off if this output is wrong? |
| Data entered | Identifies confidentiality, privacy, and contractual risk | Can this data leave our controlled systems? |
| Output created | Separates low-risk drafting from higher-risk finance decisions | Does this output influence reporting, cash, tax, or board decisions? |
| Review required | Turns human oversight into a documented control | What evidence proves review happened? |
| System connection | Shows whether AI can only advise or can change records | Can this tool write, approve, pay, file, or notify? |
Which finance workflows need the strictest review?
The strictest finance AI review belongs wherever outputs influence financial statements, tax positions, treasury actions, payment approvals, or board decisions.
Not every AI use case needs the same level of control. A model that drafts an internal meeting agenda does not carry the same risk as a model that suggests journal entries or generates board reporting commentary. CFOs should tier workflows by the consequence of a wrong output.
Low-risk uses include first drafts of internal training notes, generic policy summaries, meeting agenda cleanup, and non-confidential brainstorming. These still need data rules, but they usually do not need formal approval evidence.
Moderate-risk uses include invoice coding suggestions, expense policy triage, contract summaries, forecast drafts, variance commentary drafts, and management reporting summaries. These need named reviewers and retained support.
High-risk uses include financial reporting controls, month-end close conclusions, tax positions, treasury decisions, payment approvals, customer credit decisions, board materials, covenant reporting, external investor communication, and any workflow that can change the general ledger or release money.
For high-risk workflows, AI should be treated as an assistant, not an approver. It can draft a variance explanation. It should not certify the explanation. It can flag a reconciliation exception. It should not clear the exception without human review. It can suggest a payment priority. It should not release funds.
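A minimal sketch of that tiering logic follows, assuming illustrative keyword lists; the actual criteria should come from the company's own risk assessment, not from this example:

```python
# Illustrative tiering rule; the signal lists are assumptions and should be
# replaced with the company's own consequence-based criteria.
HIGH_RISK_SIGNALS = {
    "financial reporting", "close", "tax", "treasury", "payment",
    "credit decision", "board", "covenant", "investor", "general ledger",
}
MODERATE_RISK_SIGNALS = {
    "invoice coding", "expense", "contract summary", "forecast",
    "variance commentary", "management reporting",
}

def risk_tier(workflow_description: str) -> str:
    """Assign low / moderate / high based on the consequence signals present."""
    text = workflow_description.lower()
    if any(signal in text for signal in HIGH_RISK_SIGNALS):
        return "high"
    if any(signal in text for signal in MODERATE_RISK_SIGNALS):
        return "moderate"
    return "low"

print(risk_tier("AI suggests journal entries during the month-end close"))  # high
```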
What human review rules should CFOs require?
Human review rules should name the reviewer, review standard, retained evidence, escalation threshold, and actions AI may never take alone.
"Human review" is too vague to be a control unless the company defines what review means. A useful rule includes the reviewer, the review standard, the evidence retained, and the escalation threshold.
For example, if AI drafts monthly variance commentary, the control should not say "finance reviews the output." It should say the FP&A manager compares the narrative to the approved variance report, checks the top three drivers against source schedules, edits unsupported explanations, and stores the final approved commentary with the reporting package.
That level of detail matters because AI failures often look plausible. The model may produce a confident explanation that sounds right but does not match the underlying account detail. It may merge old context with new results. It may summarize a customer contract but miss a nonstandard termination clause. It may classify an invoice based on vendor history when the current invoice is an exception.
COSO's 2026 generative AI guidance frames this as an internal-control issue. COSO specifically flags risks such as cyber exposure, prompt-based manipulation, opaque reasoning, model drift, and frequent configuration changes. Those risks do not disappear because a finance employee clicked "review."
| Risk Tier | Example Uses | Minimum Human Review | Evidence to Keep |
|---|---|---|---|
| Low | Internal drafts, meeting notes, generic summaries | User review before sharing | Usually none beyond normal file history |
| Moderate | AP coding, contract summaries, forecast narratives | Process owner review before use | Approved output and source support |
| High | Close conclusions, tax, treasury, board reporting, payment release | Named finance approver and documented signoff | Reviewer, date, source records, exception log, final version |
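A minimal sketch of what a documented review rule could look like as a structured record, using the variance commentary example above; the reviewer, steps, and escalation threshold are illustrative assumptions to adapt:

```python
# Illustrative review rule for the variance commentary example; all values
# are placeholders, not a prescribed control design.
variance_commentary_rule = {
    "ai_output": "draft monthly variance commentary",
    "reviewer": "FP&A manager",
    "review_standard": [
        "compare narrative to the approved variance report",
        "check top three drivers against source schedules",
        "edit or remove unsupported explanations",
    ],
    "evidence_retained": "final approved commentary stored with the reporting package",
    "escalation_threshold": "any unexplained driver above reporting materiality",
    "ai_may_not": ["certify the explanation", "send to the board without signoff"],
}
```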
How should vendor risk management change for finance AI?
Finance AI vendor review should test data retention, model training, subprocessors, audit logs, write access, security testing, and termination rights.
Finance AI vendor due diligence should go beyond SOC reports and security questionnaires. Those still matter, but AI adds different questions: what data is retained, whether customer data trains models, which subprocessors receive data, how prompts and outputs are logged, and whether the company can retrieve evidence later.
Ask each vendor for clear answers to these questions before the tool touches finance data:
- Can our prompts, uploaded files, outputs, or corrections be used to train any model?
- How long are prompts, files, embeddings, outputs, and logs retained?
- Which subprocessors can access finance data, and in which jurisdictions?
- Can administrators restrict uploads by data type or user group?
- Can the tool produce audit logs showing who used AI, what record was affected, and when?
- Can the model write back to the ERP, close system, AP platform, bank portal, or reporting tool?
- What happens to our data and logs if we terminate the contract?
- How does the vendor test model changes, drift, security incidents, and prompt injection risk?
Vendor risk management should also include an exit plan. If the company cannot export prompts, outputs, approvals, or audit history, it may lose the evidence trail needed for future audits or internal investigations.
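As a minimal sketch, the evidence trail the audit log question and the exit plan both point to could be as simple as one exportable record per AI-assisted action. Field names and example values here are illustrative, not a specific vendor's log format:

```python
import json
from datetime import datetime, timezone

# Illustrative log entry; field names and identifiers are assumptions,
# not any vendor's actual export schema.
log_entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "user": "jdoe@company.example",
    "tool": "close management AI assistant",
    "record_affected": "reconciliation REC-0117",
    "action": "exception flagged",
    "output_retained": True,
    "reviewer": "controller",
    "review_date": "2025-11-05",
}
print(json.dumps(log_entry, indent=2))
```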
What should CFOs tell the board or audit committee?
A board memo should show current AI uses, high-risk workflows, control gaps, incidents, owners, and the next cleanup actions and deadlines.
The board or audit committee does not need every prompt, model setting, or vendor demo. It needs enough information to understand whether AI is being governed where it affects reporting, cash, compliance, and risk.
A useful finance AI risk memo has five sections:
- Current use: Approved AI tools, active pilots, and known shadow AI concerns.
- High-risk workflows: Any AI touching close, reporting, treasury, tax, AP approvals, AR credit, board materials, or sensitive finance documents.
- Control status: Inventory completeness, data restrictions, review rules, audit logs, and vendor due diligence status.
- Incidents and exceptions: Data exposure events, inaccurate outputs, unauthorized tool use, missing evidence, or failed reviews.
- Next actions: The 30-day cleanup plan, owners, deadlines, and unresolved decisions requiring leadership support.
The SEC's cybersecurity disclosure rules are not AI-specific, but they are a useful reminder of the governance standard public-company boards now face around technology risk. The SEC requires annual disclosure about cybersecurity risk management, strategy, governance, board oversight, and management's role. CFOs should assume AI risk oversight will need the same discipline: documented processes, named owners, and a clear incident path.
What is the 30-day control cleanup plan?
The 30-day plan should inventory AI, classify data, tier workflows, close vendor gaps, define review rules, and report status.
The practical move is not to freeze all finance AI. The practical move is to pause risky expansion until the company has a minimum control baseline.
Use this 30-day sequence:
Days 1-5: Inventory. Ask every finance subfunction to list AI tools and AI-enabled vendor features in use. Include browser tools, spreadsheet add-ins, ERP features, AP systems, close platforms, and custom automations.
Days 6-10: Classify data. Mark each workflow by data type: public, internal, confidential, payroll, customer, vendor, banking, tax, financial reporting, or board-level.
Days 11-15: Tier workflows. Assign low, moderate, or high risk. High-risk workflows should not expand until review, signoff, and evidence rules are documented.
Days 16-20: Fix vendor gaps. Confirm data retention, training use, subprocessors, audit logs, admin controls, and exit rights for each tool handling confidential finance data.
Days 21-25: Write review rules. Define what AI may draft, suggest, flag, or summarize, and what it may not approve, release, file, pay, or communicate without human signoff.
Days 26-30: Report status. Send the CFO, controller, CIO, legal lead, and audit committee chair a one-page status memo with the inventory, red flags, owner list, and next 60-day control work.
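A minimal sketch of how the sequence could be tracked as structured data, so the day-30 status memo is assembled from the tracker rather than reconstructed from memory; phases mirror the plan above, and the owners are placeholders:

```python
# Illustrative tracker for the 30-day sequence; owners and statuses are placeholders.
cleanup_plan = [
    {"days": "1-5",   "phase": "Inventory",          "owner": "Controller",  "status": "not started"},
    {"days": "6-10",  "phase": "Classify data",      "owner": "Controller",  "status": "not started"},
    {"days": "11-15", "phase": "Tier workflows",     "owner": "CFO",         "status": "not started"},
    {"days": "16-20", "phase": "Fix vendor gaps",    "owner": "Procurement", "status": "not started"},
    {"days": "21-25", "phase": "Write review rules", "owner": "FP&A lead",   "status": "not started"},
    {"days": "26-30", "phase": "Report status",      "owner": "CFO",         "status": "not started"},
]

open_items = [step["phase"] for step in cleanup_plan if step["status"] != "complete"]
print(f"{len(open_items)} of {len(cleanup_plan)} phases still open")
```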
The CFO's decision rule
If AI touches confidential finance data, changes a finance record, influences cash movement, supports a filing, or appears in board materials, it needs a control owner. If nobody can name the owner, the workflow is not ready to scale.