AI Readiness
The firm had an AI policy. It was three pages long, approved by the managing partner, and distributed to all staff. It stated that employees should "use AI tools responsibly and in compliance with professional standards." When a staff accountant used ChatGPT to draft a client advisory memo — uploading the client's financial data in the process — the firm discovered that its policy covered this scenario with a single sentence: "Do not share confidential client information with unauthorized third parties." The policy said nothing about which AI tools were authorized, what data could enter which systems, or who determined what constituted an authorized tool. The policy existed. The governance did not.
Most firm AI policies are generic documents that state broad principles without operational specificity. An adequate AI policy defines approved tools by service line and use case, specifies data classification rules for AI systems, establishes output review requirements, creates incident response procedures, and assigns enforcement accountability. The gap between having a policy and having adequate governance is where firms are most exposed — because the policy's existence creates a false confidence that AI usage is managed.
Why generic AI policies fail to govern actual AI usage — and what adequate policies must cover to create real accountability.
Founders, partners, compliance officers, and operations leaders responsible for AI governance and risk management.
A policy that exists but does not govern creates more risk than no policy at all — because the firm believes it is protected when it is not.
A generic AI policy typically contains statements like: "Employees should use AI tools responsibly." "Client data should be protected when using AI." "AI output should be reviewed for accuracy." These statements are true. They are also useless as governance instruments because they do not define what "responsibly" means in specific contexts, what "protected" requires in practice, or what "reviewed" looks like operationally.
Generic policies fail because they delegate every decision to individual judgment. The staff accountant deciding whether to use an AI tool for client work has no guidance on which tools are approved, what data can enter the tool, or what review is required before the output enters a deliverable. They make a reasonable judgment based on incomplete information, and the firm discovers the gap only when something goes wrong.
This is the same pattern that makes AI governance fail without operating discipline. Principles without operational specifics leave every interaction to ad hoc decision-making.
The policy says "use approved tools" but does not list which tools are approved. Staff members interpret this to mean tools the firm has purchased are approved and free tools used for personal productivity are a gray area. Shadow AI proliferates — staff use ChatGPT, Gemini, Claude, and other tools for client work because the policy does not explicitly prohibit specific tools. The security implications of unapproved tools are significant: data leaves the firm's control through channels the firm does not monitor.
The policy says "protect client data" but does not specify which data categories can enter which AI systems. Can anonymized financial data be used? Can client names appear in prompts? Can engagement details be included for context? Without explicit data classification rules mapped to specific AI tools, every staff member makes these decisions independently — and makes them differently. This creates the data flow visibility problems that compound across the firm.
Tax preparation, audit, advisory, and bookkeeping have fundamentally different risk profiles. Using AI to draft a marketing email is different from using AI to analyze a tax position. Using AI to summarize meeting notes is different from using AI to prepare workpapers. A policy that applies identical rules to all service lines either over-restricts low-risk activities — creating workarounds — or under-restricts high-risk activities — creating liability exposure.
The policy does not address what happens when AI output causes an error in a client deliverable. Who is notified? What is the investigation process? How is the client informed? What changes to prevent recurrence? Without incident response procedures, AI-related errors are handled ad hoc — which means they may be minimized, unreported, or repeated. The liability exposure from unreported AI errors compounds over time.
The policy states rules but does not assign anyone to monitor compliance or define consequences for violations. This is the most common and most critical gap. A policy without enforcement relies on voluntary compliance — which works until convenience conflicts with policy, at which point convenience wins. Every time.
Approved tool registry. A maintained list of approved AI tools, what each is approved for, what data each can process, and who approves additions. Updated when tools are added, removed, or change their data handling practices. Staff know exactly which tools they can use for which purposes.
Data classification matrix. A matrix mapping data categories (public, internal, confidential, restricted) to AI tool categories (firm-managed, vendor-hosted, free/consumer). The matrix specifies what data can enter which tools. "Confidential client data may only be processed through firm-managed AI tools with executed data processing agreements." Clear. Specific. Enforceable.
Service-line-specific use cases. Approved AI use cases by service line, with specific guidance for each. Tax: AI may assist with research and draft preparation; all positions require partner review before filing. Audit: AI may assist with data analysis and workpaper preparation; all conclusions require manager review before inclusion. Advisory: AI may assist with research and draft preparation; all client-facing deliverables require director review.
Output review requirements. Specific review requirements based on output destination. Internal use: self-review sufficient. Client communication: manager review required. Regulatory filing: partner review required. The review standard: verify substance, not just format. Document the review with who, when, and what changes were made.
Incident response procedures. What constitutes an AI-related incident, who to notify, investigation timeline, client communication requirements, and corrective action process. This connects to the broader compliance framework that firms should approach proactively.
Enforcement and accountability. Who monitors compliance, how monitoring is conducted, what constitutes a violation, and what consequences follow. The enforcement mechanism transforms a policy from a document into a governance instrument.
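Several of the components above (the approved tool registry, the data classification matrix, and the output review requirements) are concrete enough to encode as policy-as-data that systems can check automatically. The sketch below is a minimal illustration under assumed tool names, categories, and rules; nothing in it reflects any firm's actual approvals.

```python
# Illustrative policy-as-data sketch. All tool names, data classes, and
# permission rules are hypothetical placeholders, not real approvals.

APPROVED_TOOLS = {
    # tool name -> tool category (who hosts it, what agreements exist)
    "firm-copilot": "firm-managed",         # hypothetical firm-managed deployment
    "vendor-research-ai": "vendor-hosted",  # hypothetical vendor tool with a DPA
}

# Data classification matrix: which data classes may enter which tool category.
DATA_MATRIX = {
    "firm-managed": {"public", "internal", "confidential"},
    "vendor-hosted": {"public", "internal"},
    "free-consumer": {"public"},
}

# Output review requirements by destination, per the policy component above.
REVIEW_REQUIRED = {
    "internal": "self",
    "client-communication": "manager",
    "regulatory-filing": "partner",
}

def may_use(tool: str, data_class: str) -> bool:
    """Return True only if the tool is approved AND the matrix permits the data."""
    if tool not in APPROVED_TOOLS:
        return False  # unapproved tools are prohibited for client work outright
    return data_class in DATA_MATRIX[APPROVED_TOOLS[tool]]

# An unapproved tool fails regardless of data class; an approved vendor tool
# still cannot take confidential data; the firm-managed tool can.
print(may_use("chatgpt-free", "public"))              # False
print(may_use("vendor-research-ai", "confidential"))  # False
print(may_use("firm-copilot", "confidential"))        # True
```

Encoding the rules this way is what makes "Clear. Specific. Enforceable." literal: the same tables can drive staff guidance, tooling, and compliance reports.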
The enforcement mechanism is what separates governance from aspiration. Three enforcement layers make policies operational:
Technical enforcement. Network-level controls that restrict access to unapproved AI tools. Data loss prevention tools that monitor for client data in AI interactions. Access controls that limit AI tool access to trained personnel. Technical enforcement removes the compliance decision from individuals — the tool is either accessible or it is not.
Process enforcement. Workflow requirements that embed policy compliance into normal operations. AI output cannot enter a deliverable without documented review. AI tool usage is logged automatically. Engagement setup includes AI tool selection as a standard step. Process enforcement makes compliance the path of least resistance.
Accountability enforcement. A designated owner who reviews compliance data, investigates violations, and reports to leadership. Regular compliance reporting that tracks policy adherence by service line, team, and tool. The accountability mechanism ensures someone is watching — and that non-compliance has consequences.
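The process-enforcement layer can be sketched as a hard gate in workflow code: there is simply no code path that attaches AI output to a deliverable without a documented review. All names and fields below are hypothetical, a sketch of the pattern rather than a real system.

```python
# Illustrative process-enforcement gate: AI output cannot enter a deliverable
# without a documented review record. Names and fields are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ReviewRecord:
    reviewer: str
    role: str          # e.g. "manager", "partner"
    changes_made: str  # substance of the review, not just sign-off
    reviewed_at: datetime

def attach_to_deliverable(ai_output: str, review: Optional[ReviewRecord]) -> dict:
    """Refuse to attach AI output unless a documented review exists.

    The compliance decision is removed from the individual: the workflow
    has no path that skips the review record.
    """
    if review is None:
        raise PermissionError("AI output requires a documented review before use")
    return {
        "content": ai_output,
        "review_log": {  # who, when, and what changed, per the policy above
            "reviewer": review.reviewer,
            "role": review.role,
            "changes": review.changes_made,
            "at": review.reviewed_at.isoformat(),
        },
    }

# Unreviewed output is rejected; reviewed output carries its audit trail.
try:
    attach_to_deliverable("draft memo", review=None)
except PermissionError as err:
    print(err)

record = ReviewRecord("A. Manager", "manager", "corrected tax citation",
                      datetime.now(timezone.utc))
print(attach_to_deliverable("draft memo", record)["review_log"]["role"])  # manager
```

The design choice is that the review log is produced as a side effect of the only allowed path, so the audit trail exists by construction rather than by voluntary compliance.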
Strong firms write policies that operational staff can follow. The test of an adequate policy is whether a staff accountant encountering a new AI-related decision can find specific guidance in the policy. If the answer requires interpretation, the policy is too generic.
They update policies quarterly. AI capabilities and risks evolve faster than traditional technology. Strong firms review their AI policy every quarter, with immediate updates when new tools are adopted or existing tools change their practices.
They test policy awareness. Having a policy in a shared drive is different from having a policy that staff understand and follow. Strong firms test awareness through scenario questions: "A client asks you to use their preferred AI tool for their engagement. What do you do?" If staff cannot answer correctly, the policy distribution has failed.
They separate policy from governance. The policy document defines rules. The governance program ensures rules are followed. Strong firms maintain both — understanding that a policy without a governance program is a document, not a management system.
The gap between having an AI policy and having adequate AI governance is where most firms are currently exposed. A generic policy creates false confidence — leadership believes AI usage is managed because a document exists. The document is checked in a compliance exercise. The audit trail shows a policy was distributed. But the actual AI usage at the firm is unmanaged because the policy does not provide operational guidance, and no one is monitoring compliance.
Adequate AI governance requires a policy that staff can follow, enforcement mechanisms that make compliance operational, and accountability that ensures the system works. The policy is the starting point, not the destination.
Firms working with Mayank Wadhera through DigiComply Solutions Private Limited or, where relevant, CA4CPA Global LLC, develop AI policies that are operationally specific, enforceable, and connected to governance programs that ensure policies translate into actual AI management.
The most common mistake is writing a generic policy with broad principles instead of operational specifics. Staff cannot follow guidance that requires interpretation for every decision.
Strong firms write workflow-specific policies, assign enforcement accountability, update quarterly, and test staff awareness through scenario questions.
Policy without enforcement is decoration. The enforcement mechanism is what transforms a document into a governance instrument.
What is wrong with most firm AI policies? They are generic documents stating broad principles without operational specifics. They do not define approved tools, data classification rules, service-line requirements, or enforcement mechanisms, leaving every AI decision to individual judgment.
What must an adequate AI policy cover? Five areas: approved tool registry, data classification matrix, service-line-specific use cases, output review requirements, and enforcement mechanisms with accountability.
How often should an AI policy be updated? At minimum quarterly, with immediate updates when tools change, regulations shift, or incidents reveal gaps. Annual reviews are insufficient given the pace of AI evolution.
Who should own AI policy enforcement? A designated person with authority to enforce across service lines, coordinating with IT, compliance, operations, and leadership. Shared ownership without a single accountable person leads to gaps.
Should AI policies differ by service line? Yes. Tax, audit, advisory, and bookkeeping have different risk profiles. Uniform policies either over-restrict low-risk uses or under-restrict high-risk ones.
What is the difference between a policy and governance? A policy defines rules. Governance ensures rules are followed through monitoring, enforcement, training, and continuous improvement. Both are required.
What enforcement mechanisms make a policy operational? Three layers: technical controls restricting access, process controls embedding compliance into workflows, and accountability controls assigning monitoring responsibility with consequences for violations.