
Board AI Governance and Oversight

Not long ago, I sat in a board meeting where the conversation turned to a new AI-driven product. The excitement was palpable as management made the case for its potential to transform our business model. However, as a director with more than 20 years of experience testing AI security, I felt compelled to ask tough questions: How are we securing it against adversarial attacks? How would this AI avoid biased outcomes? Were we prepared if regulators scrutinized its data usage? I realized then that this is a tension nearly every boardroom faces today: embracing AI's capacity for innovation while guarding against its risks.


AI is reshaping businesses across industries, and corporate boards are increasingly expected to oversee AI strategy, ethics, and risk management. In fact, the number of S&P 500 companies formally assigning AI oversight to a board committee more than tripled in 2025, and nearly half of Fortune 100 companies now highlight AI expertise in their directors’ qualifications. This surge underlines an urgent need for governance frameworks that keep pace with AI-driven innovation. Boards can no longer afford to treat AI as just an IT issue; it has become a strategic imperative – one that demands informed oversight at the highest level.

The Board’s Evolving Role in an AI-Driven World

Boards of directors are recognizing that overseeing AI is now part of their mandate. Many companies have started elevating AI governance in the boardroom by creating new technology or risk committees, updating charters, and recruiting tech-savvy directors. The prevalence of dedicated technology committees in large companies rose from 8% in 2019 to 13% in 2025, reflecting this shift in priorities. Nearly half of Fortune 100 boards include directors with AI experience – whether that means a CEO leading AI initiatives, someone certified in AI ethics, or a director from an AI-focused company. This infusion of expertise helps boards ask the right questions about complex algorithms and data strategies.

Yet, despite growing awareness, many organizations lag in implementing formal AI governance programs. Surveys show that only about 25% of companies have fully implemented AI governance frameworks, despite broad recognition of AI risks and looming regulations. In other words, AI adoption is racing ahead of oversight. Common hurdles include unclear ownership of AI oversight (cited by 44% of organizations), insufficient in-house expertise (39%), and resource constraints (34%) – all cited far more often than technical barriers. These gaps suggest that boards need to push management to assign clear responsibility for AI risk management and to invest in the necessary talent and tools. In my experience, a key first step is designating an executive (or committee) explicitly in charge of AI ethics and compliance, so the board has a clear point of accountability.

As boards take on this evolving role, they must strike a balance. Directors have a fiduciary duty – the duty of care – to oversee "mission critical" risks facing the company. I previously wrote about boards' responsibility to oversee cyber risk; it is now clear that AI squarely fits that category as well. Under landmark case law (e.g. Delaware's Caremark standard), boards can be found in breach of duty if they fail to make a good-faith effort to monitor major risks and legal compliance. Courts have signaled a willingness to hold boards liable for turning a blind eye to critical areas like cybersecurity, and AI-related failures could expose directors similarly if proper oversight isn't in place.

Put simply, ensuring AI is used responsibly isn’t just best practice – it’s part of the board’s job of risk oversight. At the same time, board members must be careful to maintain “noses in, fingers out,” providing guidance and asking probing questions without micromanaging technical decisions. In one meeting, I remember our board debating whether to halt an AI project due to potential bias. We ultimately let management proceed – after requiring additional bias testing and reporting. This kind of measured oversight respects management’s role in execution while asserting the board’s responsibility to demand appropriate safeguards.

Innovation vs. Risk

AI offers tremendous opportunities for efficiency and innovation. Forward-looking boards encourage management to leverage AI for competitive advantage – whether by using machine learning to detect fraud, deploying chatbots for customer service, analyzing big data to inform strategy, or pursuing any of countless other use cases. In fact, many boards are exploring how AI can improve their own decision-making, from AI-assisted data analysis to identifying emerging market trends. As one governance colleague told me, "AI is becoming a boardroom tool as much as a boardroom topic."

However, these opportunities come intertwined with significant risks. AI systems can behave in unpredictable ways or produce unintended consequences that pose legal, financial, and reputational threats. My statement on key AI risks covers some of them.

There is a growing list of cautionary tales: for instance, a healthcare algorithm used across U.S. hospitals was found to be far less likely to recommend Black patients for high-risk care programs, demonstrating how AI can inadvertently amplify bias. In another case, an online real estate company had to write off over $300 million and lay off staff after its AI-driven home pricing model went awry. Even tech giants have stumbled – Amazon famously scrapped an AI recruiting tool after discovering it systematically discriminated against women candidates. These examples highlight why oversight is critical: unchecked algorithms might violate anti-discrimination laws, breach customer privacy, or simply make disastrous errors.

Beyond individual incidents, directors need to recognize AI as a broad risk factor. In the last year, the majority of Fortune 500 companies began flagging AI as a potential risk in their annual reports, an increase of over 470% from the prior year. This reflects concerns that range from cybersecurity vulnerabilities in AI systems, to regulatory non-compliance, to ethical pitfalls.

Indeed, AI’s “black box” complexity and rapid evolution make it challenging to govern. It’s hard to oversee what one doesn’t fully understand, and many boards are on a learning curve with AI technology. I’ve seen directors who are veterans of finance or marketing admit that AI feels like “a different language.” This is why continuous education is so important – boards must invest in raising their digital and AI literacy. Lack of AI knowledge on the board can lead to ineffective oversight or blind trust in management, neither of which is acceptable in today’s environment.

Some boards are addressing this by scheduling AI teach-ins, hiring external advisors, or including AI briefings as a regular agenda item. Others are bringing younger directors or advisory council members with AI backgrounds into discussions. The goal is not to turn directors into data scientists, but to ensure they can ask the right questions and recognize red flags.

Critically, effective AI governance allows a company to innovate with confidence. When the board insists on proper risk assessments, bias testing, security checks, and compliance reviews for AI initiatives, it creates a foundation of trust. Products can be launched knowing they align with the company’s values and legal obligations. As one of my fellow directors put it, “We want to be bold in using AI, but we can’t be cavalier.” Getting that balance right can unlock AI’s benefits while protecting the company from existential risks. Moreover, strong AI governance builds trust with external stakeholders – investors, customers, regulators – demonstrating that the company is proactive and responsible in its AI use. This trust is especially vital because public confidence in government’s ability to manage AI is lukewarm; a recent global survey found 59% of people believe regulators don’t understand emerging tech well enough to regulate it effectively, and they place more trust in businesses to integrate technology responsibly.

In other words, people are looking to companies (and by extension their boards) to “do the right thing” with AI. Directors who embrace this role can turn good governance into a competitive advantage, assuring stakeholders that their company’s AI innovations are not only cutting-edge but also principled and safe.

Ensuring AI Initiatives Align with Corporate Values and Compliance

One of the board’s most important duties is to ensure that AI initiatives align with the company’s core values, ethical standards, and compliance requirements. This means proactively addressing issues of fairness, privacy, security, and legality as part of AI project oversight. In practice, directors should consider (at least) the following key areas and questions:

Algorithmic Bias and Ethical AI

AI systems must be fair and unbiased, reflecting the company’s values of integrity and inclusion. Boards should insist on processes to detect and mitigate algorithmic bias in AI models. This could include regular bias audits, diverse training data, and human oversight of AI decisions.

We’ve seen how biased AI can cause both ethical and legal crises – for example, Amazon’s hiring algorithm learned to favor male applicants and downgrade resumes that even mentioned “women’s” activities. Likewise, facial recognition and credit scoring AIs have been found to discriminate against minorities or other groups, undermining equality.

Directors can ask management: Have we tested our AI for disparate impacts on protected classes? What steps are we taking to ensure fairness and transparency? Ensuring AI ethics isn’t just altruistic; it reduces the risk of discrimination lawsuits and regulatory penalties. (Notably, some jurisdictions now require bias audits for certain AI tools – New York City, for instance, mandates bias audits for AI hiring software.)
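To make the disparate-impact question concrete, here is a minimal illustrative sketch of the kind of selection-rate comparison a bias audit might begin with, using the commonly cited four-fifths (80%) rule of thumb. The group labels, sample data, and threshold are assumptions for illustration only; a real audit would be far more rigorous and legally informed.

```python
# Minimal illustrative sketch: compare selection rates across groups
# using the commonly cited "four-fifths" (80%) rule of thumb.
# Group labels, outcomes, and the 0.8 threshold are illustrative assumptions,
# not a complete or legally sufficient bias audit.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) tuples, where selected is True/False."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose selection rate falls below threshold x the highest rate."""
    top = max(rates.values())
    return {g: (rate, rate / top >= threshold) for g, rate in rates.items()}

# Hypothetical screening outcomes from an AI resume-screening tool
sample = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
       + [("group_b", True)] * 25 + [("group_b", False)] * 75

for group, (rate, passes) in four_fifths_check(selection_rates(sample)).items():
    print(f"{group}: selection rate {rate:.0%}, within 4/5 rule: {passes}")
```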

By championing ethical AI principles, boards promote technology use that aligns with corporate culture and social responsibility.

Data Privacy and Protection

AI runs on data – often personal, sensitive data about customers, employees, or the public. Boards must verify that AI initiatives comply with privacy laws and uphold individuals’ rights. In an era of stringent data protection regulations like Europe’s GDPR, this is non-negotiable. The EU General Data Protection Regulation (and similar laws) impose obligations on automated decision-making and profiling, meaning companies must be careful how AI uses personal data.

Directors should ask: Where is our AI getting its data? Do we have user consent? Are we anonymizing or encrypting data appropriately?
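As one illustration of the anonymization question, the sketch below pseudonymizes direct identifiers with a keyed hash before records enter an AI pipeline. The field names, key handling, and scope are assumptions; a genuine privacy program also requires key management, retention limits, and re-identification risk reviews.

```python
# Minimal sketch: pseudonymize direct identifiers with a keyed hash (HMAC)
# before records enter an AI training or analytics pipeline.
# Field names and key handling are illustrative assumptions; real systems
# also need key management, retention limits, and re-identification review.

import hmac, hashlib, os

SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "demo-key-only").encode()

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def scrub_record(record: dict, identifier_fields=("name", "email", "ssn")) -> dict:
    """Return a copy of the record with direct identifiers pseudonymized."""
    return {
        key: pseudonymize(value) if key in identifier_fields else value
        for key, value in record.items()
    }

print(scrub_record({"name": "Jane Doe", "email": "jane@example.com", "age": 42}))
```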

Privacy lapses can result in hefty fines and reputational damage, so oversight here is key. I’ve found it useful for boards to receive periodic reports on data governance – for instance, summaries of data privacy impact assessments for new AI projects.

Additionally, boards should ensure there are clear data retention and access policies for AI systems. If an AI is drawing on consumer data, does it comply with laws in all jurisdictions we operate in (e.g. the EU, California, etc.)? Proactively addressing privacy builds trust with users and avoids the “creepy factor” that can alienate customers if AI overreaches.

Ultimately, protecting data privacy is a baseline for responsible AI use and must be baked into every project from the start.

AI Security and Cyber Resilience

As companies integrate AI, they must also guard against new security vulnerabilities. AI systems themselves can be targets of attack or sources of threat – think of adversarial attacks that manipulate AI outputs, or generative AI being used to produce convincing phishing scams.

Boards should ensure that cybersecurity teams are extending their practices to cover AI models and data. Questions to management: Have we assessed the security of our AI supply chain (including third-party AI tools)? How do we protect against data poisoning or model tampering? It’s wise for directors to treat AI security on par with overall cyber risk, since compromised AI can cause physical harm (in the case of industrial AI), financial loss, or misinformation.

Some companies are implementing “red team” exercises for AI, where experts attempt to trick or subvert the model to find weaknesses. The board doesn’t need to get technical, but it should demand that management has robust controls, testing, and incident response plans for AI failures. For instance, if an AI system critical to operations goes down or starts making bad decisions, is there a human fallback or kill-switch?
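To illustrate what a human fallback and kill-switch can look like in practice, here is a minimal sketch of a wrapper that routes low-confidence AI outputs to a human reviewer and lets operators disable automated decisions entirely. The model interface, confidence threshold, and flag are assumptions, not a reference design.

```python
# Minimal sketch: wrap an AI model so low-confidence outputs fall back to a
# human reviewer, and a kill switch can disable automated decisions entirely.
# The model interface, 0.9 threshold, and kill-switch flag are assumptions.

AI_KILL_SWITCH = False        # in practice, an ops-controlled feature flag
CONFIDENCE_THRESHOLD = 0.9

def mock_model(request):
    """Stand-in for a real model: returns (decision, confidence)."""
    return ("approve", 0.72)

def route_to_human(request):
    """Stand-in for queuing the case for manual review."""
    return {"decision": "pending_human_review", "automated": False}

def decide(request):
    if AI_KILL_SWITCH:
        return route_to_human(request)          # automation disabled entirely
    decision, confidence = mock_model(request)
    if confidence < CONFIDENCE_THRESHOLD:
        return route_to_human(request)          # low confidence: human-in-the-loop
    return {"decision": decision, "confidence": confidence, "automated": True}

print(decide({"customer_id": "12345", "amount": 9800}))
```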

Ensuring AI is secure and resilient is part of the board’s broader mandate to safeguard the company’s assets and continuity. In my own board work, we’ve started integrating AI scenarios into our cybersecurity tabletop drills – a sign of the times that AI risk is cyber risk.

Regulatory Compliance and Oversight

The legal landscape around AI is rapidly evolving, and companies must keep abreast of new regulations to remain compliant. Boards should confirm that management is monitoring relevant AI-related laws and guidance in all jurisdictions where they operate. This includes sector-specific rules (like FDA guidelines for AI in medical devices or FTC rules on AI in consumer protection) as well as broad AI governance laws.

Notably, the European Union is leading the way with its Artificial Intelligence Act – the world’s first comprehensive AI law, which introduces transparency, oversight, and accountability requirements for AI systems. The EU AI Act uses a risk-based approach, classifying AI applications by risk level (unacceptable, high, limited, minimal) and imposing strict obligations on “high-risk” AI, such as rigorous risk assessments, documentation, human oversight, and quality control. If our company deploys AI in, say, recruiting or biometric identification, and operates in Europe, the board must ensure those systems meet the Act’s standards once it takes effect.

Meanwhile in the US, there is no single AI law yet, but regulators have made it clear that existing laws do apply to AI. In 2023, agencies like the FTC, DOJ, CFPB, and EEOC issued a joint statement that they will enforce anti-discrimination, consumer protection, and other regulations in the AI context. The SEC has also warned companies against “AI washing” – making exaggerated or false claims about AI in products or investor disclosures – and took enforcement action against firms misleading investors about their AI use.

The message for boards is that this is not an unregulated space: failing to comply with applicable laws can lead to investigations, fines, or liability. Directors should ask management: Are we auditing our AI for legal compliance? Do we have internal controls and reporting systems for AI-related risks?

It may be prudent to implement an AI compliance framework – for example, ensuring any high-risk AI gets legal review, or setting up an AI ethics committee to review new use cases. Some boards are even adding AI oversight to an existing committee’s charter (common choices are the Audit/Risk Committee or a Governance Committee) to formalize this responsibility.

Keeping a finger on the pulse of regulation – whether it’s Europe’s robust rules, U.S. agency guidance, or even emerging laws in other markets – is now part of a board’s due diligence. As one director colleague quipped, “AI might be artificial, but the regulatory risks are very real.”

The Global Regulatory Push: A Patchwork of Rules and Expectations

AI governance is a global concern, and different jurisdictions are taking notably different approaches to regulating AI. Boards overseeing multinational companies need a panoramic view of these developments, as they can significantly impact strategy and compliance.

In the European Union, as mentioned, regulators have adopted a proactive and strict stance. The EU’s AI Act (expected to fully apply in the next few years) will require companies to conduct risk assessments, ensure human oversight, and register or even ban certain AI systems deemed too risky. Europe’s focus on fundamental rights, transparency, and accountability in AI sets a high bar – and not just for EU firms. If your company markets AI-enabled products or services in the EU, you’ll likely be subject to these rules regardless of where you’re based. European regulators are also discussing AI liability (how to hold companies accountable for AI-caused harm) and updating data laws – all signals that Europe’s strong regulatory push on AI will continue. For boards, this means anticipating compliance requirements (for example, ensuring an AI system can explain its decisions to users, as the EU law may demand for certain use cases). On the positive side, clear rules can provide certainty – one European board director told me that having an AI compliance checklist helped her company structure its AI development more systematically.

The US, by contrast, is taking a more fragmented approach. There is currently no omnibus federal AI law, but federal agencies are actively using their powers to oversee AI under existing laws. The White House issued an Executive Order on AI (late 2023) that promotes AI safety standards, equity, and innovation, and it put agencies to work on AI guidelines (for example, NIST released an AI Risk Management Framework to guide organizations in responsible AI use). Meanwhile, regulators like the EEOC have released guidance on preventing algorithmic bias in hiring, the FTC has warned that unfair or deceptive AI practices will be prosecuted, and the FDA is evaluating AI in healthcare products. At the state level, we’re seeing a patchwork of laws – such as New York City’s bias audit law for AI hiring tools, and proposals in states like California to regulate AI output (deepfakes, etc.). For boards of U.S. companies, this means staying alert to a moving target. It may fall to the board’s Risk or Compliance Committee to receive regular updates on emerging AI legislation and coordinate the company’s response. The board should also ensure that management engages with industry groups or legal counsel to track these developments. I’ve observed that many forward-thinking boards in the U.S. aren’t waiting for laws to pass – they’re already voluntarily adopting frameworks (like the NIST AI framework) and documenting their AI oversight activities, so they can demonstrate good-faith efforts if regulators come knocking.

Other jurisdictions add to the mosaic. The UK has chosen a “light touch” principles-based approach for now. Rather than a new law, the UK government published an AI Regulation White Paper (2023) advocating five core principles – safety, transparency, fairness, accountability, and contestability – and is asking existing regulators to apply these in their sectors. The idea is to encourage innovation and avoid over-regulation, while still guiding responsible AI use. For boards of UK companies (or those operating there), this means working with industry regulators (like the Information Commissioner’s Office for data/privacy or the Financial Conduct Authority for financial AI) to implement these principles. It’s a more flexible regime, but also one that could evolve – the board should be prepared if the UK later shifts to a harder law or if sector regulators start issuing binding rules under these principles.

Meanwhile, China has moved swiftly to control AI development in line with government priorities. In 2023, China introduced the Interim Measures for the Management of Generative AI Services, its first regulation specifically targeting generative AI (like AI chatbots and content-creation tools). These measures, which took effect in August 2023, require providers of generative AI to adhere to state content controls, register their algorithms with authorities, and ensure outputs reflect “core socialist values.” China also earlier implemented rules on recommendation algorithms (aimed at things like news feed and video app algorithms) that mandate algorithmic transparency and user controls. For global companies, China’s approach means that if you deploy AI products in the Chinese market, you face a markedly different set of compliance steps – often centered on censorship, security reviews, and government access to algorithms. Boards need to weigh these requirements when considering entering or expanding in China with AI-driven services. More broadly, China’s aggressive governance of AI indicates how governments can see AI as both a strategic asset and a potential societal risk, warranting tight oversight.

The bottom line is that AI regulation is emerging unevenly across the world. Most governments are trying to balance encouraging AI innovation with protecting the public from harm. However, the lack of a unified approach means businesses face a fragmented regulatory landscape.

For a board guiding a multinational enterprise, this fragmentation itself is a risk – and should be treated as such. Directors should ensure the company adopts a “highest common denominator” strategy for AI governance, essentially meeting the strictest applicable standards if possible, to avoid being caught off guard. It may be wise to develop an internal AI policy that reflects global best practices (for instance, aligning with the EU’s transparency requirements and the U.S. NIST framework’s risk controls), rather than taking a narrow view country by country.

Building Effective AI Governance Frameworks in the Boardroom

How can boards concretely exercise oversight over AI and embed good governance into their organizations? Below are some practical steps and best practices that directors can implement to ensure AI initiatives are managed responsibly and in alignment with corporate values:

  1. Elevate AI to the Board Agenda Regularly: Treat AI-related strategy and risk as a standing topic, not a one-off. Boards should receive periodic briefings on the company’s AI projects, including benefits, risks, and progress on governance measures. Many boards are now scheduling deep-dive sessions on AI (similar to an annual cybersecurity review) to stay informed. Make sure AI is discussed at the full board level periodically, even if detailed oversight is delegated to a committee. This keeps all directors aware of how AI is impacting the business.
  2. Define Clear Oversight Responsibilities: Determine which board committee (or the full board) will take the lead on AI oversight. Audit and Risk Committees are commonly tasked with technology risks, though some companies opt for a dedicated Technology or Ethics Committee. The key is to explicitly assign AI to a group that will report back to the board. Update committee charters to reflect AI oversight duties. Also ensure management has an internal owner for AI governance (e.g. a Chief AI Officer or an AI risk manager) who interfaces with the board. Clarity in “who is watching AI” prevents the lack-of-ownership problem that many companies cite as a barrier.
  3. Develop an AI Governance Policy or Framework: Boards should encourage management to establish a formal AI governance framework – essentially, a set of policies, procedures, and tools to guide AI development and deployment. This framework might cover project approval workflows (ensuring legal/compliance review for high-risk uses), ethical guidelines (e.g. commitments to fairness and transparency), data management standards, and incident response plans for AI failures. The framework should also include metrics and reporting mechanisms.
  4. Foster Cross-Functional Oversight and Expertise: AI issues cut across technical, legal, and ethical domains. Boards should ensure that management has convened a cross-functional team or committee (spanning IT, data science, compliance, legal, HR, etc.) to evaluate AI initiatives. This could take the form of an internal AI ethics council that reviews new AI use cases for alignment with company values and regulations. As directors, request that findings from this council or team be shared with the board, especially any flagged concerns. Additionally, consider bringing in outside experts periodically.
  5. Insist on Risk Assessments and Controls for AI Projects: Before a major AI system is launched, boards should ask to see a risk assessment. What could go wrong and what’s being done about it? For high-stakes AI (e.g. anything making autonomous decisions affecting customers or employees), there should be testing for accuracy, bias, robustness, and privacy compliance before deployment. Ensure that management has instituted controls like human-in-the-loop checkpoints for critical decisions, fallback plans if the AI fails, and ongoing monitoring once the AI is in use. For third-party AI tools or vendors, inquire if due diligence was performed – are those vendors compliant with standards, and do contracts cover AI-specific risks and responsibilities? It’s notable that while 92% of companies express confidence that they know what third-party AI tools are used, only about two-thirds actually conduct formal AI risk assessments of those third-party systems. Boards can push to close that gap by requiring such assessments.
  6. Monitor AI Incidents and Ethical Dilemmas: Boards should ensure management establishes a process to promptly escalate AI-related incidents or dilemmas. If there’s a data breach involving an AI system, a significant mistake made by an AI (say, a financial model error causing loss), or an ethical complaint (e.g. from employees or customers about an AI decision), the board should hear about it in a timely manner. Set expectations for incident reporting – perhaps any AI incident rated above a certain risk level is reported to the Audit/Risk Committee chair within 24 hours (one way to encode such a threshold is sketched just after this list). Similarly, boards might request a periodic summary of lesser incidents and near-misses to identify patterns. By staying informed of problems, directors can ensure lessons are learned and guide any necessary course corrections.
  7. Embed AI into Corporate Strategy and Culture: Finally, boards need to view AI not just as a risk to manage but as a strategic opportunity to harness – responsibly. This means working with management to integrate AI considerations into the overall business strategy. What new revenue streams or efficiencies can AI unlock, and how do we pursue those in line with our corporate values? Directors should encourage innovation sandboxes where AI ideas can be tested safely, as well as pilot programs that include ethical guardrails from the get-go. Moreover, the board can set the tone that the company aspires to “trustworthy AI.” By endorsing values like fairness, transparency, and accountability in AI use, the board influences the corporate culture.
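As mentioned in item 6, escalation thresholds can be made explicit. The sketch below shows one hypothetical way to encode such a rule; the severity scale, recipients, and deadlines are illustrative assumptions and would normally live in the company's incident-management tooling rather than in code.

```python
# Minimal sketch: encode an AI-incident escalation rule like the one in item 6.
# Severity levels, recipients, and deadlines are illustrative assumptions;
# real policies belong in the incident-management system, not a script.

from dataclasses import dataclass

@dataclass
class AIIncident:
    description: str
    severity: int  # 1 (minor) .. 5 (critical), per an assumed internal scale

ESCALATION_RULES = [
    # (minimum severity, who is notified, deadline)
    (4, "Audit/Risk Committee chair", "within 24 hours"),
    (2, "Chief AI Officer / AI risk owner", "within 3 business days"),
    (1, "Quarterly incident summary to the board", "next reporting cycle"),
]

def escalation_path(incident: AIIncident) -> str:
    for min_severity, recipient, deadline in ESCALATION_RULES:
        if incident.severity >= min_severity:
            return f"Notify {recipient} {deadline}: {incident.description}"
    return "No escalation required"

print(escalation_path(AIIncident("Pricing model produced erroneous quotes", severity=4)))
print(escalation_path(AIIncident("Chatbot gave an off-policy answer", severity=2)))
```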

Conclusion

As AI becomes ever more entwined with business success, corporate directors sit at the intersection of innovation and accountability. In my own journey on boards, I’ve come to appreciate that effective AI governance is not about stifling innovation – it’s about enabling innovation to thrive within guardrails that protect the company and its stakeholders. Boards that embrace this role are finding they can both spur exciting AI initiatives and sleep better at night knowing the risks are in check. It starts with a willingness to ask the tough questions and to admit what we don’t know. From there, through education, expert input, and robust oversight mechanisms, directors can gain confidence in steering their organizations’ AI strategies – harnessing AI’s transformative power while upholding the principles and duties that define good governance.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.