22 November 2023

OpenAI has executed its turbulent priests

Sam Altman’s dramatic firing and rehiring has shown who is really in control of ChatGPT.

By Will Dunn

Utopia, as Thomas More imagined it, was a land in which “no man has any property”, and “all men zealously pursue the good of the public”. Five centuries later, he might have been a good fit for the board of OpenAI, the company behind ChatGPT. This small group of scientists and academics, driven by a moral rather than a financial code, decided on 17 November to fire Sam Altman, perhaps the world’s most promising executive. In doing so, they revealed who is really in control of the world’s most valuable AI.

Under Altman’s leadership the company, which was founded in 2015 and employs fewer than 800 people, was approaching a valuation of $86bn. Its main product, ChatGPT, has been a global news story since its launch this time last year. It now has more than 180 million users, making it the most popular and lucrative application of what many believe will be the transformative technology of the age.

According to Altman’s co-founder, Greg Brockman, Altman was abruptly sacked in a video call to which he was summoned by text (Brockman was dismissed from the board less than half an hour later). Microsoft, which provides OpenAI’s hardware and financial backing, was not told in advance; Satya Nadella, Microsoft’s CEO, was reported to be “furious”.

It worked out well enough for Nadella, however. Before the markets opened on Monday (20 November) he had secured an agreement to hire Altman, and almost all of his former employees, into Microsoft. Eventually, he did not even need to do that: this morning (22 November) it was announced that Altman would return as CEO of OpenAI, under a new board that retains just one former member, the Quora CEO Adam D’Angelo. He will be joined by the former US Treasury Secretary Larry Summers – who has defended Big Tech’s “constant pursuit of monopoly power” and said there is no need to tax robots – and Bret Taylor, the former co-CEO of Salesforce, who has also held senior roles at Google, Facebook and Twitter. Gone are the scientists and effective altruists – the new OpenAI board will focus on the money.

Why would any board invite such a disaster in the first place? OpenAI’s statement on the sacking was brief and opaque. It claimed that Altman was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities”.

Gary Marcus is professor of neural science and psychology at New York University and the founder of two AI companies. He knows some former members of the OpenAI board personally, and he told me they “had a very specific charter, which was not to make money, but to do AI in a beneficial way. And for whatever reason, they thought that Sam [Altman] undermined that.”

What form this took is the subject of speculation. Altman recently told a conference that he had been “in the room” when the “frontier of discovery” had been pushed forward; some have taken this to mean that the technology had progressed quickly enough to be worrying.

OpenAI has wrestled with the safety of its product before, however. In 2018 Helen Toner, one of the board members who voted to sack Altman, and the company’s former policy director, Jack Clark, were among the co-authors of a report on how AI could allow for the cheap and easy automation of “surveillance, persuasion and deception”, with serious political implications. The following year the company decided to limit the release of its latest model, GPT-2, in case it proved too capable in the wrong hands, though it released the full version before the year was out.

Around the same time, Elon Musk (an early investor in and board member of the project) staged a failed attempt to add OpenAI to his group of companies. (Musk then left, a decision he claimed was due to a conflict of interest.) To mitigate the risk of such a takeover happening again, the company restructured itself into a form that would allow it to take investors’ money without selling them control.

OpenAI became a Matryoshka doll: on the outside, a for-profit company in which Microsoft became the main investor, reportedly committing $13bn. Ultimately, however, this company was controlled – through a management company and a charity – by the four people, independents holding no shares, who sat on the charity’s board. These included scientists such as the AI pioneer Ilya Sutskever, and Tasha McCauley, who also sits on the board of Effective Ventures UK, the investing arm of the effective altruism movement. This structure was the real underlying cause of the recent drama: it gave the boardroom clerics a power they would never really be able to exercise.

In most companies, the fiduciary duty of a board is to ensure that executives deliver value to shareholders. At OpenAI, the charter is priestly and utopian: its “principal beneficiary is humanity, not OpenAI investors”. Company documents state that the board’s purpose is to decide when the company has developed “highly autonomous” artificial general intelligence (AGI) that “outperforms humans at most economically valuable work”, and to ensure that this technology “reshapes society” in a manner that is safe and benevolent.

In other words, the board exists to decide when the machine is sentient – when it is alive – and to ensure that the new consciousness is moral.

As in a religion, the focus was on the greater riches of the world to come. In the terms for prospective investors in OpenAI, one footnote states that “it would be wise to view any investment” in the company “in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world.”

We are still very much in the pre-AGI world, however, where money retains a central role. This is especially true if you’re running a company that uses very large numbers of powerful computers. OpenAI is estimated to have lost more than half a billion dollars last year.

Microsoft was prepared to shoulder these costs – a loss of well over a million dollars a day is not much to a company with a market capitalisation of $2.7trn – and the gamble paid off, giving the company a considerable lead in the new technology. One AI expert told me they considered OpenAI’s (and therefore Microsoft’s) tech to be six months to a year ahead of the competition, giving the tech giant a significant edge over Google for the first time in 25 years.

Under the control of the board, however, Microsoft’s money could only push the company so fast. Altman was reportedly seeking billions in investment for a new company making semiconductors to run AI software, from backers including the public investment funds of Saudi Arabia and Dubai.

Marcus said the board’s statement about Altman’s lack of “candour” suggests it was this urge to expand, rather than the threat of a new super-intelligence, that may have led to the problem. “To many of us” in the AI industry, he said, “that reads like Sam was cooking something. They were troubled by what he was cooking, and by the fact that he wasn’t clear about it.”

But the board was wrong to imagine that Altman could be restrained. Nadella promptly offered to rehire Altman into Microsoft, and within days more than 700 of OpenAI’s employees – practically the entire workforce – threatened to join him, in an open letter that demanded the board be sacked. The impartiality and commitment to safety that had supposedly been embedded in the company at its outset were shown to be disposable.

Like Thomas More (and Thomas Becket, for that matter) the OpenAI board had forgotten that the job of the priestly caste is to anoint power, not to tell power what to do. More wore a hair shirt beneath his chancellor’s robe; his loyalty was to God before the king. When he refused to accept the king’s divorce, he failed to understand that the system was not designed for him or for God, but for Henry VIII, who beheaded him for his disobedience.

The people who sacked Altman similarly failed to realise that Nadella had not given $13bn “in the spirit of a donation”. They were only going to remain in charge for as long as they allowed power to flourish. The money was always going to win.

What’s important about this story is that it shows that AI companies will never regulate themselves. The size of the investments made and the potential returns are too great for any structure other than government regulation to control.

The UK, home to many of the world’s top AI scientists and to leading companies such as DeepMind, has a chance to lead on this urgently needed intervention, but has so far frittered that opportunity away on the science fiction of self-aware super-intelligence, while the EU’s AI Act is being whittled down by lobbying interests (OpenAI among them). “We do need regulation in the short term,” Marcus told me. “And I don’t think you can position yourself as a world leader on the governance of AI if you yourself have no governance on AI.”

This article appears in the 22 Nov 2023 issue of the New Statesman, The paranoid style