One firm assigned AI governance to its IT director. The IT director did what IT directors do well: secured the tools, managed access controls, ensured data encryption, and maintained vendor contracts. What the IT director did not do — because it was outside IT's visibility — was assess whether AI-generated tax positions met professional standards, evaluate whether compliance requirements for AI usage were being met, determine whether AI workflow integrations created operational risks, or verify that AI output review processes were actually being followed by service delivery teams. The governance was structurally incomplete because it was functionally isolated. AI governance requires the perspective of every function it touches.
AI governance isolated in one department — IT, compliance, or operations — creates structural blind spots because AI risks and decisions span all functions. Effective AI governance requires a cross-functional structure with representation from IT (tools and infrastructure), compliance (regulatory alignment), operations (workflow integration), and service delivery (client work quality). A governance lead coordinates across functions with authority to enforce decisions. This structure captures the full risk landscape, ensures policies reflect operational reality, and creates accountability that no single-function approach can achieve.
This article explains why single-function AI governance fails and how cross-functional ownership creates the structural completeness that effective governance requires. It is written for founders, partners, and senior leaders designing or evaluating their firm's AI governance structure. The central point: governance blind spots create invisible risk, and cross-functional ownership eliminates the structural gaps that single-function governance cannot see.
IT sees: tool security, data encryption, access controls, infrastructure stability. IT misses: professional quality standards, regulatory compliance nuances, workflow integration problems, client-facing risks. The result: technically secure governance that does not address professional liability or operational effectiveness. IT knows the tools are locked down but cannot assess whether AI output meets professional standards.
Compliance sees: regulatory requirements, documentation obligations, audit trails, policy adherence. Compliance misses: technical implementation gaps, workflow practicality, shadow AI that emerges from impractical policies, real-world tool usage patterns. The result: documented governance that looks complete on paper but does not reflect actual practice. Policies exist but staff work around them because the policies are operationally inadequate.
Operations sees: workflow efficiency, process integration, productivity impact, change management needs. Operations misses: technical security vulnerabilities, regulatory compliance gaps, professional liability implications. The result: governance that optimizes for efficiency without adequate risk management. Workflows are smooth but data flows may be uncontrolled.
Each function's blind spots are the other functions' core competencies. IT's blind spots are compliance and operations strengths. Compliance's blind spots are IT and operations strengths. The only way to eliminate structural blind spots is to combine perspectives. This is not collaboration as an aspirational value — it is a structural requirement for governance completeness.
IT perspective: tools and infrastructure. Which AI tools are technically viable? How is data protected in transit and at rest? What access controls are appropriate? How are tools maintained and updated? What happens when a tool fails? IT provides the technical foundation that governance stands on.
Compliance perspective: regulatory alignment. What regulations apply to the firm's AI usage? What documentation is required? How does AI usage affect the firm's professional obligations? What reporting may be required? Compliance ensures governance meets external requirements — the compliance framework that builds ahead of regulation.
Operations perspective: workflow integration. How do AI tools integrate into actual workflows? What training do staff need? Where do workflow gaps create risks? How are processes monitored for compliance? Operations ensures governance is operationally practical — that policies can actually be followed in daily work.
Service delivery perspective: client work quality. Does AI output meet professional quality standards? Are review processes effective? Do clients understand and accept AI-assisted delivery? Is professional judgment adequately applied? Service delivery ensures governance protects the core of the firm's value proposition — the quality of client work.
The governance committee is a small group of 4–6 people representing IT, compliance, operations, and service delivery. The committee is not a bureaucracy; it is a coordination mechanism that ensures decisions reflect all relevant perspectives. Members bring their function's expertise and carry governance decisions back to their teams for implementation.
Committee responsibilities: Review and approve AI tool adoption decisions. Assess and monitor AI risk exposure using the AI Risk Maturity Framework. Review AI policy effectiveness and approve updates. Investigate AI-related incidents and approve corrective actions. Report governance status to firm leadership.
The governance lead is one person with designated authority and accountability. The lead coordinates committee activities, ensures decisions are implemented, monitors compliance, and escalates issues to leadership. This person does not need to be a technology expert; they need to understand the intersection of technology, risk, compliance, and service delivery, and they need credibility across functions and the authority to enforce decisions.
The lead is accountable for governance effectiveness. If governance fails — if a policy gap creates an incident, if shadow AI proliferates, if compliance requirements are missed — there is one person who is responsible. This accountability is what transforms governance from a committee exercise into a management system.
Tool adoption: IT evaluates technical viability and security. Compliance evaluates regulatory implications. Operations evaluates workflow integration. Service delivery evaluates quality impact. The governance committee reviews all four assessments and makes a coordinated decision. This is the vendor assessment process embedded in governance structure.
Policy development: Compliance drafts based on regulatory requirements. Operations reviews for operational practicality. IT reviews for technical enforceability. Service delivery reviews for professional standards alignment. The committee approves the integrated policy.
Incident response: IT manages technical response. Compliance manages regulatory reporting. Operations manages workflow impact. Service delivery manages client communication. The governance lead coordinates the cross-functional response and ensures the incident feeds back into risk management and policy improvement.
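The sign-off pattern behind these workflows can be sketched as a simple data structure. This is a hypothetical illustration, not a prescribed implementation — the function names and fields are invented — but it makes the structural rule concrete: a decision is ready for the committee only when every governance function has assessed it.

```python
from dataclasses import dataclass, field

# Illustrative only: the four governance functions named in this article.
REQUIRED_FUNCTIONS = {"it", "compliance", "operations", "service_delivery"}

@dataclass
class ToolDecision:
    """A tool-adoption decision awaiting cross-functional assessment."""
    tool_name: str
    signoffs: dict = field(default_factory=dict)  # function -> assessment note

    def record_assessment(self, function: str, note: str) -> None:
        if function not in REQUIRED_FUNCTIONS:
            raise ValueError(f"Unknown governance function: {function}")
        self.signoffs[function] = note

    def missing_functions(self) -> set:
        # Any function that has not assessed the tool is a structural blind spot.
        return REQUIRED_FUNCTIONS - self.signoffs.keys()

    def ready_for_committee(self) -> bool:
        return not self.missing_functions()

decision = ToolDecision("Example AI drafting tool")
decision.record_assessment("it", "Encryption and access controls verified")
decision.record_assessment("compliance", "No regulatory blockers identified")
print(decision.ready_for_committee())       # False: two perspectives still missing
print(sorted(decision.missing_functions())) # ['operations', 'service_delivery']
```

The point of the sketch is the invariant, not the code: approval is blocked until all four perspectives are on record, which is exactly the structural completeness single-function governance cannot provide.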
This article is the capstone of the AI Security, Risk, and Compliance cluster. The eleven articles in this cluster connect into an integrated governance system:
Security as an operating discipline establishes the foundation: security is not an IT project but a firm-wide operating practice. Data privacy and data flow mapping address the most immediate risk category — client data entering AI systems without adequate controls.
Agent autonomy risk and prompt injection risk address emerging risk categories that most firms have not yet considered. Vendor assessment addresses the supply chain risk that AI tools introduce.
Compliance requirements and liability exposure address the regulatory and professional responsibility dimensions. Policy adequacy addresses the documentation foundation that governance requires.
The AI Risk Maturity Framework provides the structured progression model. And this article — cross-functional governance — provides the organizational structure that makes all of these elements work together.
The synthesis: AI governance is not any single element. It is the integrated operation of security practices, risk frameworks, compliance processes, adequate policies, and cross-functional coordination. Each element supports the others. Remove any one, and the governance system has a structural gap.
Strong firms make governance a standing function, not a project. AI governance is not built once and then left alone; it is a continuous function that evolves with AI adoption, regulatory changes, and shifts in the risk landscape, and it should be resourced as an ongoing function rather than a one-time initiative.
They measure governance effectiveness. Metrics include: shadow AI incident rate, policy compliance rate by service line, time to complete vendor assessments, incident response time, and risk indicator trends. Metrics that improve indicate governance is working. Metrics that stagnate or worsen indicate governance gaps.
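Tracking metrics like these does not require specialized tooling; a spreadsheet or a few lines of script is enough. A minimal sketch, assuming spot-check review records with invented field names, computing policy compliance rate by service line:

```python
# Hypothetical sketch: policy compliance rate by service line from
# spot-check review records. Field names are illustrative only.
reviews = [
    {"service_line": "tax",   "policy_followed": True},
    {"service_line": "tax",   "policy_followed": False},
    {"service_line": "audit", "policy_followed": True},
    {"service_line": "audit", "policy_followed": True},
]

def compliance_rate_by_line(reviews):
    totals, passed = {}, {}
    for r in reviews:
        line = r["service_line"]
        totals[line] = totals.get(line, 0) + 1
        passed[line] = passed.get(line, 0) + int(r["policy_followed"])
    # Rate per service line: reviews where policy was followed / total reviews.
    return {line: passed[line] / totals[line] for line in totals}

print(compliance_rate_by_line(reviews))  # {'tax': 0.5, 'audit': 1.0}
```

Computed per service line and trended over time, a rate like this shows whether governance is working (improving) or has gaps (stagnating or worsening), as described above.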
They connect AI governance to firm strategy. AI governance is not just about risk prevention — it enables confident AI adoption. When governance is strong, the firm can adopt new AI tools with confidence because the risk management system will identify and manage associated risks. Governance becomes an enabler, not just a constraint.
They review governance structure annually. As AI usage evolves, governance structure may need to adapt. Annual reviews assess whether the right functions are represented, whether the governance lead has adequate authority, whether committee meeting frequency matches the pace of change, and whether the governance program addresses emerging risk categories.
AI governance is a structural problem, not a personnel problem. The most capable individual assigned to AI governance will still produce incomplete governance if their function provides only one perspective. Cross-functional ownership is not a management philosophy preference — it is a structural requirement that reflects the cross-functional nature of AI risk.
The governance structure determines the governance ceiling. A single-function structure can only achieve single-function governance, regardless of effort or expertise. Cross-functional structure is the precondition for comprehensive governance.
Firms working with Mayank Wadhera through DigiComply Solutions Private Limited or, where relevant, CA4CPA Global LLC, design cross-functional AI governance structures that capture the full risk landscape and enable confident AI adoption through systematic risk management.
AI governance is a structural problem. Single-function ownership creates blind spots that no amount of effort can overcome.
A common mistake is assigning AI governance entirely to IT because AI involves technology. IT sees technical risks but misses professional, regulatory, and operational dimensions.
Strong firms build cross-functional governance with IT, compliance, operations, and service delivery, coordinated by a governance lead with enforcement authority.
The governance structure determines the governance ceiling. Cross-functional structure is the precondition for comprehensive AI risk management.
AI risks span IT, compliance, operations, and service delivery. Governance isolated in one function creates blind spots that only other functions can see. Cross-functional ownership provides structural completeness.
4–6 people representing IT, compliance, operations, and service delivery. Meets at least monthly. One governance lead with enforcement authority. Reviews tool decisions, risk assessments, incidents, and policy updates.
A senior leader with cross-functional credibility and authority. Does not need to be a technology expert but must understand the intersection of technology, risk, compliance, and service delivery.
When governance includes service delivery teams, approved tools reflect actual workflow needs. Shadow AI emerges when approved tools do not address real problems. Cross-functional input closes that gap.
Governance becomes technically sound but operationally disconnected. Policies are enforceable from a technology perspective but impractical for daily work, leading to workarounds that undermine governance.
Monthly for regular reviews, with ad hoc meetings for urgent decisions. Meeting cadence should match the pace of AI adoption.
AI governance is a subset of firm-wide risk management. It should integrate with existing frameworks using the AI Risk Maturity Framework structure, not operate as a standalone program.