In 2014, venture capital firm Deep Knowledge Ventures appointed an algorithm to its board, with full voting power on investment decisions. Though it had a seat at the table, the algorithm was really just a faster data analyst, churning out recommendations for human directors to weigh. A decade on, machine learning has made huge leaps. Yet our research suggests most directors still view AI as peripheral to their work.
From June to September 2024, we spoke with more than 50 board chairs, vice chairs and committee heads from global companies including ASM, Lazard, Nestlé, Novo Nordisk and Shell. While some had used AI for personal tasks, very few had integrated it into their boardroom responsibilities.
Still, a small group described promising use cases, from prepping for meetings with large language models (LLMs) to testing assumptions mid-discussion. As we wrote in a recent Harvard Business Review article, these examples may be early signals of a broader shift.
Better preparation, smarter decisions
Let’s start with the potential benefits. First, AI can help directors prepare better. Second, it can improve the information the board receives. Third, it may one day take part in boardroom discussions.
Supporting individual directors
Non-executive directors typically meet just a few times a year. Many sit on multiple boards and must make high-stakes decisions with limited insight into day-to-day operations. To bridge the information gap, chairs often organise site visits and encourage interaction with executives. Even then, board books can be dense and difficult to digest.
AI can help directors make sense of this information. Trained LLMs can extract patterns, flag emerging risks and condense material into digestible formats. For example, a Swiss board chair named Alexander (all names have been changed) runs his board materials through ChatGPT before meetings to generate questions and options for discussion.
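For chairs who want to go a step further than pasting documents into a chat window, the same preparation workflow can be scripted. The sketch below is purely illustrative, not Alexander’s actual setup: it assumes the OpenAI Python SDK and an API key, and the file name, model choice and prompt wording are our own.

```python
# Illustrative sketch: condensing a board pack and drafting discussion
# questions with an LLM. Assumes the OpenAI Python SDK (openai>=1.0) and an
# API key in the OPENAI_API_KEY environment variable; the file name, model
# choice and prompt wording are hypothetical.
from openai import OpenAI

client = OpenAI()

# A plain-text export of the pre-read materials (hypothetical file).
with open("board_pack_q3.txt", encoding="utf-8") as f:
    board_pack = f.read()

prompt = (
    "You are assisting a non-executive director preparing for a board meeting.\n"
    "From the materials below: (1) summarise the key decisions being asked of "
    "the board, (2) flag emerging risks or inconsistencies, and (3) draft ten "
    "probing questions for management.\n\n" + board_pack
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```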
Enriching the board’s collective intelligence
Directors frequently endorse scenario-planning but often skip the exercise due to time or resource constraints. AI can dramatically reduce the burden, quickly modelling multiple outcomes and their likely impacts.
One Austrian board chair, Gerhard, asked an LLM to simulate scenarios for a proposed acquisition. The exercise helped the board decide the deal exceeded its risk appetite. Since then, management has routinely included scenario analysis in its proposals.
AI can also simulate strategy outcomes, allowing boards to test ideas before committing. A Finnish board chair, Juho, described using ChatGPT to review the outcomes of a two-day strategy retreat. The tool’s recommendations mirrored those of the board, boosting confidence in the board’s direction – and reinforcing AI’s credibility.
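This kind of scenario exercise can also be scripted rather than typed prompt by prompt. The sketch below is a hypothetical illustration, not the process Gerhard or Juho followed: it assumes the OpenAI Python SDK, and the deal figures, scenario list and prompt wording are invented for the example.

```python
# Illustrative sketch: asking an LLM to model several acquisition scenarios.
# The deal description, scenarios and model choice are hypothetical.
from openai import OpenAI

client = OpenAI()

# Hypothetical deal summary with invented figures, for illustration only.
deal_summary = (
    "Proposed acquisition of a mid-sized logistics firm for EUR 400m, financed "
    "60% with debt. Key assumptions: 5% annual revenue growth and integration "
    "costs of EUR 30m over two years."
)

scenarios = {
    "base case": "assumptions hold as stated",
    "downside": "revenue growth stalls at 0% and integration costs double",
    "rate shock": "refinancing costs rise by 300 basis points",
}

for name, twist in scenarios.items():
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                f"Deal under review:\n{deal_summary}\n\n"
                f"Model the '{name}' scenario, in which {twist}. Estimate the "
                "impact on leverage and payback period, explain your reasoning "
                "step by step, and list the assumptions most likely to break."
            ),
        }],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```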
In the Netherlands, a chair named Catherine used Claude 3.7 Sonnet to re-examine four board conclusions. The AI confirmed three, prompting a deeper debate on the fourth. She credits the tool with helping the board zero in on areas that needed more attention.
Some boards now use AI to analyse their own internal dynamics. A Swiss industrial firm uses it to monitor speaking time, tone and participation. One tool advised reducing airtime for one director and increasing it for others. It also suggested avoiding loaded phrases like “no-brainer”.
Joining the conversation
The next frontier is AI as an active participant. In 2024, UAE-based International Holding Company appointed a virtual human called Aiden Insight as a “board observer”. Developed by technology company G42, Aiden uses the BoardNavigator tool to analyse discussions in real time.
Though Aiden lacks voting power, its contributions are formally recorded in meeting minutes. Aiden’s appointment and others like it hint at what could become a broader trend, although these tools have limits. For one thing, AI struggles with nuance and cannot argue for its recommendations. More concerning, it often avoids contentious debates. But the direction of travel is clear.
Navigating the risks
Focus group participants flagged three main risks of using AI in the boardroom. But we find that all are manageable with the right guardrails.
Risk of information leaks
Many directors worry about exposing sensitive data to AI systems. However, this risk isn’t unique to AI – it applies to all digital tools. Companies already use access controls and employee training to manage data exposure. These practices can easily be extended to board members.
Major providers like OpenAI now offer enterprise-grade LLMs that don’t use proprietary data for model training. Others, like SAP, are building smaller custom models trained only on a single client’s data.
Sample bias
AI reflects the data it’s trained on, and that data can be skewed. One board chair rightly questioned how AI could foster independence from management if it was trained only on management’s data.
At one firm, the board approved a health and safety plan based on internal employee surveys, but contractors weren’t included. While incident rates dropped in-house, they rose at contractor sites that hadn’t received the same attention. It was a textbook case of action driven by incomplete data.
Still, bias can be mitigated. Data audits and bias-detection tools are increasingly available. So is user awareness: Boards can prompt AI to analyse issues through different demographic lenses.
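As a rough illustration of what that looks like in practice, the sketch below re-runs the same analysis through several stakeholder lenses. It is a hypothetical example, not a tool any of our participants used: it assumes the OpenAI Python SDK, and the file name, lens list and prompt wording are our own.

```python
# Illustrative sketch: re-running one analysis through different stakeholder
# lenses to surface gaps in the underlying data. The file name, lens list and
# prompt wording are hypothetical.
from openai import OpenAI

client = OpenAI()

# A plain-text export of the proposal under review (hypothetical file).
with open("health_safety_plan.txt", encoding="utf-8") as f:
    proposal = f.read()

lenses = ["direct employees", "on-site contractors", "temporary and agency staff"]

for lens in lenses:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                f"Review the proposal below strictly from the perspective of {lens}. "
                "Which risks does it address for this group, which does it miss, "
                "and what data about this group appears to be absent?\n\n" + proposal
            ),
        }],
    )
    print(f"=== Lens: {lens} ===")
    print(response.choices[0].message.content)
```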
Anchoring in the past
Boards are tasked with shaping the future, but AI relies on historical data. That can make it blind to emerging shifts. One CEO told us: “AI knows the past. Strategy is about the future.”
Yet this critique applies to human instincts too, as they are likewise shaped by past experience. Boards can reduce AI’s backward-looking bias by using newer models that explain the reasoning behind their recommendations. These tools show cause-and-effect logic, making it easier for directors to judge whether assumptions still hold.
When AI flags a risk based on outdated variables – say, interest rates rising – it can spark useful discussion about what’s changed and why. Boards can also prompt scenario simulations to see how outcomes would vary under new conditions.
Ultimately, the risk isn’t using AI. It’s using it blindly.
Making it work
Fortunately, boards don’t need every director to become an AI expert. What they do need is a structured approach to adoption. Based on our focus groups, here’s how chairs can lead the way:
1. Create engagement
Begin with one-on-one conversations. Gauge each director’s AI literacy and explore how they might apply AI in their board role. Surface real or imagined concerns. Then provide personalised learning, ideally through hands-on coaching from someone within the company.
Training should go beyond interface mechanics. It should show how AI makes directors more effective and their work less burdensome. One focus group participant who was initially sceptical said a workshop changed everything: “Now, ChatGPT is my partner in crime.”
2. Experiment as a group
Start small. Encourage directors to use the same foundation LLM for a few meetings. Let them craft their own prompts during preparation, then debrief afterwards as a team. Once they see the value, boards can introduce an enterprise-trained version and feed it firm-specific data over time.
Progressively, AI can act as a performance coach, assigning prep work and offering tailored advice based on each director’s role and priorities. But bringing it in has to be a shared effort. Pushing AI top-down can backfire.
3. Keep the momentum
AI adoption isn’t a one-time project. Tools evolve, and so will the board’s comfort with them. Chairs should reinforce good habits through regular reviews and even public praise. When board members see the chair learning openly – struggles and all – it sets the tone.
The integration of AI into boardrooms presents real challenges, but even greater opportunity. Eventually, we believe every board will have an AI member, perhaps one with a vote. Boards that invest in AI literacy today will be better prepared to make sharper decisions, faster. That edge may well endure.