November 18, 2023

Many Details of Sam Altman’s Ouster Are Murky. But Some Things Are Clear.


All over Silicon Valley, phones lit up on Friday with the same question: What the heck happened to Sam Altman?

The sudden, mysterious ouster of Mr. Altman, the chief executive of OpenAI, by the company’s board sent shock waves through the tech world and set off a frenetic guessing game about what brought down one of the industry’s biggest stars, at a time when everything seemed to be going his way.

I’ll start by saying: I don’t know all the details about why Mr. Altman was pushed out. Neither, it seems, do OpenAI’s shell-shocked employees, investors and business partners, many of whom learned of the move at the same time as the general public. In a blog post on Friday, the company said that Mr. Altman “was not consistently candid in his communications” with the board, but gave no other details.

An all-hands meeting for OpenAI employees on Friday afternoon didn’t reveal much more. Ilya Sutskever, the company’s chief scientist and a member of its board, defended the ouster, according to a person briefed on his remarks. He dismissed employees’ suggestions that pushing Mr. Altman out amounted to a “hostile takeover” and claimed it was necessary to protect OpenAI’s mission of making artificial intelligence beneficial to humanity, the person said.

Mr. Altman appears to have been blindsided, too. He recorded an interview for the podcast I co-host, “Hard Fork,” on Wednesday, two days before his firing. During our chat, he betrayed no hint that anything was amiss, and he talked at length about the success of ChatGPT, his plans for OpenAI and his views on A.I.’s future.

Mr. Altman stayed mum about the precise circumstances of his departure on Friday. But Greg Brockman — OpenAI’s co-founder and president, who quit on Friday in solidarity with Mr. Altman — released a statement saying that both of them were “shocked and saddened by what the board did today.” Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, Mr. Brockman said.

There will be plenty of palace intrigue in the coming days, as the full story emerges. But a few things are already clear.

First, the ouster was only possible because of OpenAI’s unusual corporate governance structure. OpenAI started in 2015 as a nonprofit and in 2019 created a capped-profit subsidiary — a novel arrangement in which investors’ returns are capped at a fixed multiple of their initial investment. But the company retained the nonprofit’s mission and gave the nonprofit’s board the power to govern the activities of the capped-profit entity, including firing the chief executive. Unlike some other tech founders, who keep control of their companies via dual-class stock structures, Mr. Altman doesn’t directly own any shares in OpenAI.
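For readers who want the arithmetic of that cap made concrete, here is a minimal sketch, with hypothetical function names and illustrative numbers (the cap for OpenAI’s earliest backers was reportedly 100 times their investment; later rounds are said to carry lower caps, and the exact terms are not public):

```python
# Illustrative sketch of a "capped-profit" payout, as described above.
# All names and numbers are hypothetical, not OpenAI's actual terms.

def capped_payout(investment: float, gross_value: float,
                  cap_multiple: float = 100.0) -> float:
    """Payout to an investor whose return is capped at a multiple of the
    initial investment; any value above the cap flows to the nonprofit
    rather than to the investor."""
    cap = investment * cap_multiple
    return min(gross_value, cap)

# A $10 million stake that grows to $2 billion pays out at most
# $1 billion under a 100x cap; the excess stays with the nonprofit.
print(capped_payout(10_000_000, 2_000_000_000))  # 1000000000.0
```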

OpenAI’s board has several other quirks. It’s small (six members before Friday, and four without Mr. Altman and Mr. Brockman) and includes several A.I. experts who hold no shares in the company. Its directors do not have the responsibility of maximizing value for shareholders, as most corporate boards do, but are instead bound by a fiduciary duty to create “safe A.G.I.” — artificial general intelligence — “that is broadly beneficial.”

At least two of the board members, Tasha McCauley and Helen Toner, have ties to the Effective Altruism movement, a utilitarian-inspired group that has pushed for A.I. safety research and raised alarms that a powerful A.I. system could one day lead to human extinction. Another board member, Adam D’Angelo, is the chief executive of Quora, a question-and-answer website.

Some of Mr. Altman’s friends and allies accused these board members of staging a “coup” on Friday. But it’s still not clear which board members voted to oust Mr. Altman or what their motivations were.

We also know that Mr. Altman’s ouster has the potential to roil the entire tech industry. Mr. Altman was one of the best-connected executives in Silicon Valley, thanks to his years spent running the start-up accelerator Y Combinator. His connections allowed OpenAI to forge strong bonds with other tech companies.

Microsoft, in particular, has cast its lot with OpenAI, investing more than $10 billion in the company and providing much of the technical infrastructure on which products like ChatGPT depend. Earlier this month, Satya Nadella, Microsoft’s chief executive, appeared onstage at a developer conference with Mr. Altman, and said it had been “just fantastic partnering with you guys.”

Typically, such close ties would entitle a partner to a heads-up about a sudden C.E.O. ejection. But Microsoft’s top executives learned about Mr. Altman’s firing only a minute — yes, one minute — before the news went out to the public, according to Axios. On Friday, Mr. Nadella reassured customers that the company’s deal with OpenAI remained intact, but it’s clear that Microsoft will want answers about why one of its most important strategic partners removed its top executive so abruptly.

OpenAI’s fate also matters to the thousands of developers who build A.I. products on top of its language models, and rely on the company to maintain stable infrastructure. Those developers may not flock to a rival overnight, but if more OpenAI employees quit — at least three senior OpenAI researchers announced they were leaving on Friday, according to The Information — they may be tempted to start shopping around.

Lastly, Mr. Altman’s defenestration will almost certainly fuel the culture war in the A.I. industry between those who think A.I. should be allowed to move faster and those who think it should be slowed down to prevent potentially catastrophic harms.

This argument, sometimes framed as one between “accelerationists” and “doomers,” has flared up in recent months as regulators have begun to circle the A.I. industry and the technology has become more powerful. Some prominent accelerationists have argued that big A.I. companies are lobbying for rules that could make it harder for small start-ups to compete with them. They have accused safety advocates in the industry of inflating A.I.’s risks in order to entrench the industry’s biggest players.

Safety advocates, on the other hand, have sounded alarms that OpenAI and other companies are moving too quickly to build powerful A.I. systems and ignoring voices of caution. And some skeptics have accused these companies of stealing copyrighted works from artists, writers and others to train their models.

Mr. Altman was always careful to straddle the line between optimism and worry — making clear that he believed that A.I. would ultimately be beneficial to humanity, while also agreeing that it needed guardrails and thoughtful design to keep it safe.

Some version of this argument has played out among OpenAI’s staff for years. In 2020, a group of OpenAI employees quit over concerns that the company was becoming too commercial and sidelining safety research in favor of lucrative deals. (They went on to start the rival A.I. lab Anthropic.) And several current and former OpenAI employees have told me that some staff members believed that Mr. Altman and Mr. Brockman could be too aggressive when it came to launching new products.

None of this is necessarily related to why Mr. Altman was pushed out. But it’s certainly a hint of a battle that’s likely to come.

During our interview on Wednesday, Mr. Altman said he considered himself something of a centrist in the A.I. debate.

“I believe that this will be the most important and beneficial technology humanity has ever invented. And I also believe that if we’re not careful about it, it can be quite disastrous, and so we have to navigate it carefully.”

He added, “I think you want the C.E.O. of this company to be somewhere in the middle, which I think I am.”
