Good governance aims to reduce surprises and effectively overcome them when they arise. The recent crisis at OpenAI, which has resulted in none of the remaining founders – Sam Altman, Ilya Sutskever and Greg Brockman – sitting on the interim board today, has again exposed the fragility of governance in high-speed tech organisations.
Many commentaries on the OpenAI crisis excessively personalise and polarise matters, framing the actors as AI doomers, accelerationists, altruists or greedy techno-capitalists. Instead of pointing fingers or engaging in these emotional debates, a more constructive approach, we believe, is to acknowledge from the outset that all actors can excel in certain roles and contexts while falling short in others.
While human nature cannot be controlled to always be good, effective organisations can more successfully harness their people’s greatest potential – and realise their own – by using a key human technology: good governance.
Indeed, major corporate failures can often be traced back to governance mistakes made by an organisation’s leaders, or to uncorrected governance vulnerabilities amplified by changing environments. This, in our view, was the case with OpenAI. After all, one would have expected greater resilience from a company with a mission to build AI that “benefits all of humanity”.
Wearing multiple hats well
The founders and owners of private companies play a much more active governance role than the founders and owners of publicly listed companies.[1] They set the mission, and then see to it that effective directors, charged with the responsibility for this mission, are appointed. Directors delegate most, if not all, execution to executives and managers.
One key challenge in a start-up like OpenAI is that the principal actors assume more than one of the three key and distinct leadership roles in the venture: founder/owner, member of the board of directors, and executive or manager. The tendency in corporate governance since the global financial crisis has been to reduce the number of hats major actors wear. Greater attention is now paid to reducing the conflicts of interest these multiple positions generate, and there is greater insistence that actors properly understand and fulfil their governance roles.
Good governance is key, and OpenAI’s was weak
To be clear, a call for better governance is not a call for halting AI innovation or for added bureaucracy. Nor is it an appeal to follow Silicon Valley’s practice of “move fast and break things”.
Some have rushed to conclude that OpenAI’s crisis was simply the board’s fault. We agree that the board was not at its best. It had limited experience and diversity, and it had lost three members earlier on whom it seemed unable to replace. It also lacked enough corporate heavyweights to understand Microsoft, the key partner, and to govern the company’s relationship with the software giant. Had it been better structured, it might have sounded an alarm earlier – possibly by resigning – or offered better guidance to the founders.
But there are also signs that the board was set up to fail by whoever was in charge of nominations and of the corporate mission – likely the three remaining founders. It is surprising that such a board setup ever came about, and even more surprising that the directors ever committed to such an unusual corporate mission.
A board set up to fail guarding a strange corporate mission
Several other signs indicate that the OpenAI board was set up to fail. First, the independent directors had signed up to oversee a simple non-profit organisation, not the unusual corporate structure they ended up with: a for-profit organisation within a non-profit one, without distinct missions, boards or fiduciary duties.
Second, the three founders on the board were drifting apart, leading one to eventually oust the other two over the phone. Boards in such environments are not there to mediate conflicts between founders, even though they are often called upon to do just that. Divided founders put non-founding directors in a terrible position; founders should always present a unified voice. Good board members should not choose sides, but should instead press founders either to reunite or to withdraw – from the board, or as owners altogether – if they become roadblocks to the venture’s success. The presence of divided founders on a board of directors can be near fatal to that board’s effectiveness.
Third, the founders didn’t just hand a poorly built board an unusual corporate structure, but also a charter with some arguably strange clauses. One such clause was that “if a value-aligned, safety-conscious project comes close to building AGI (Artificial General Intelligence) before we do, we commit to stop competing with and start assisting this project”.
It would make quite an interesting business case study to see how OpenAI could “assist” a competitor while being both a non-profit and a for-profit with investors. It is also hard for a board to make calls about progress towards AGI, a concept whose very meaning and validity experts disagree on. All in all, one could describe the founders’ wish list as unreasonable for any board to take responsibility for.
In our opinion, no group of six people can possess the exceptional diversity of views, competence and experience needed to decide how fast, and how safely, a powerful technology like AI – let alone the elusive AGI – should evolve “for the benefit of all humanity”. These issues ought to be addressed, discussed and hopefully resolved at industry or governmental, if not international, levels. Businesses, for-profit or not, play a different role in society, and a corporate mission has to clearly reflect this.
Founders and owners who don’t learn and properly adjust can do serious damage
The few remaining founders from the original OpenAI team appear to have fallen into a common start-up founders’ trap: emphasising execution at extreme speed – “driving the racing car” – without devoting sufficient attention to building and adapting the governance – or “the proper chassis”, as OpenAI alludes to on its website.
Good governance for start-ups begins with an effective founders board – separate from the board of directors – as the control tower of the firm. It appears that OpenAI did not benefit from one, and even if a founders board existed, the way the crisis unfolded suggests it was arguably even less effective than the board of directors.
It is also worth noting that the OpenAI founders’ story is complex. Elon Musk was one of the founders and left, as did several others. Had too many tensions put too great a distance between the remaining founders? Did they not realise that any further fractures would tear the board apart?
Being an owner is a skill that needs considerable attention and continuous improvement – or delegation to others. Owners’ failures cause direct damage to a company, collateral damage to most stakeholders and, in this case, damage to the global effort and debate on powerful and safe AI.
Governance should be set for the future, not a legacy of the past
Tech founders and boards must keep pace with, and aim to anticipate, rapidly changing dynamics and the strategic choices these demand. A corporate mission is not set in stone, and failure to keep up can damage an organisation. Clearly, OpenAI’s governance system did not adapt sufficiently to the company’s rapid growth and the project’s complexities.
For example, the large-scale adoption of ChatGPT was likely unexpected, and providing it for free cost significant amounts. This, coupled with a belated understanding of the exceptionally high costs of building large AI models, likely led the founders and the board to realise that no foundation would ever be able to donate the funds needed to build large AI models, let alone AGI.
In our opinion, relevant stakeholders of the for-profit subsidiary (created in 2019) should have defined a new mission, values and governance rules for the subsidiary and appointed a new board for it (with possibly overlapping membership for trust and control). Instead, they chose to retain the same board for both intertwined entities, with the same mission, values and even fiduciary duties – and with no board seat for any investor, including Microsoft.
If the OpenAI founders had wished to retain direct control over the for-profit subsidiary, they could have done so in other ways; in governance, there are always multiple solutions. For example, the Robert Bosch Foundation relies on dividends from Bosch group shares to fund its projects, with a separate trust holding the voting power of those shares to prevent conflicts with the group.
In comparison, OpenAI’s “capped profitability” subsidiary structure certainly was a strange choice – possibly the worst.
Never waste a good crisis
After a series of significant events – including the earlier departures of most founders, directors who were never replaced, and employees leaving to create rival Anthropic – it took this recent crisis for all the founders to be removed (at least for now) from the board of the company they created. Altman did return as CEO, but without any formal decision-making power on his company’s board, even though his informal power looms large.
The new interim board must now determine what shape the next board will take. But it is surprisingly unclear who the current owners are – those with the power and responsibility to make decisions about mission, values, governance and future board positions. Such a situation must be traumatising for any founder, and it is precisely why founders need to ensure governance is not just an afterthought.
Due to a lack of reliable data, a detailed analysis of the OpenAI crisis is premature. Many questions linger: How was the current board appointed? What about the previous one? Will the current board transform OpenAI into a for-profit entity, leaving founders free to pursue non-profit goals if they wish to (à la Bill Gates)? Does the interim board suggest the end of the non-profit mission and the OpenAI experiment?
OpenAI operates in an ultra-high-speed space with many unknowns and an exceptionally unusual stakeholder network for a start-up. For one, it has highly paid expert employees who wield real power, especially when an offer for everyone to transition together with one click is on the table (from Microsoft, a formidable actor in the plot). It also has big corporate partners, a government working on AI regulations, a geopolitical environment in which AI is increasingly regarded as a weapon to dominate the world, and a polarised citizenry.
Although the situation at OpenAI may be unique in its specifics, several of the patterns are not – and should have been recognised. Governance failures are too common in the world of tech start-ups, and OpenAI’s crisis provides valuable lessons and reminders for them all.
Founders should bear in mind that good governance depends on reliable “hardware” (structural arrangements such as shareholder agreements, corporate structure and boards), great “software” (processes and the ability to communicate clearly and openly) and good “peopleware” (people who can work together, have diverse viewpoints, are not excessively biased and are competent in their ownership and governance roles).
These principles apply not only to organisations, but also to the regulatory debates about, say, self-governance. If self-governance is an option, good governance is a requirement.
Our current divided world, in our view, badly needs better governance at all levels. So do AI and the formidable start-ups that build it.
[1] This is the main point of Value Creation for Owners and Directors, Palgrave Macmillan (2023).