What is the real reason for Sam Altman leaving OpenAI?

OpenAI Job Changes

OpenAI’s primary responsibility is to “be accountable to humanity.”

Author | Xiaoxin
Editor | Jingyu

Even a Hollywood spy thriller would not dare script events like this.

On November 17, local time, OpenAI’s board of directors abruptly announced over a video call that co-founder and CEO Sam Altman had been removed, effective immediately; that Greg Brockman, a key figure in the development of ChatGPT, was being removed as chairman of the board; and that CTO Mira Murati had been appointed interim CEO.

In a post published on X, Sam Altman wrote, “I loved my time at OpenAI,” and said he would have more to say about what comes next.

Greg Brockman said, “Sam and I are shocked and saddened by what the board did today.” He added, “We will be fine. Greater things are coming soon.”

Now everyone is asking: what exactly happened at OpenAI? Why was Sam Altman fired, and why was Greg Brockman removed from the board?

OpenAI’s board of directors is composed of Ilya Sutskever, the Chief Scientist of OpenAI, Adam D’Angelo, CEO of Quora, Tasha McCauley, a technology entrepreneur, and Helen Toner, the Director of Strategy at the Center for Security and Emerging Technology at Georgetown University.

Just before OpenAI announced the dismissal, Ilya had messaged Sam asking him to join a video call on the morning of the 17th. What happened next is now well known.

From the information available so far, the dismissal was communicated by board member Ilya Sutskever, who leads an AI safety team, and interim CEO Mira Murati had been informed in advance.

The reason Sam Altman was ultimately ousted may not be an “internal power struggle” waged by Chief Scientist Ilya, but rather the issue the entire AI industry has been debating and global regulators worry about most: “safety.”

01

“Divergence” on “Safety”

Even before Sam Altman was fired, there had long been internal divisions and debates at OpenAI over AI safety.

This summer, OpenAI established a safety team dedicated to finding technical solutions to keep AI from going out of control, and to understanding and mitigating AI’s potential harms to society, such as abuse, economic disruption, misinformation, bias and discrimination, addiction, and over-reliance. OpenAI also committed 20% of its compute to this work.

One of the leaders of this safety team is Ilya Sutskever, co-founder and Chief Scientist of OpenAI - the very person who delivered the news to Sam Altman.

According to Greg Brockman, on the evening of November 16, Sam received a message from Ilya asking to talk at noon on Friday. Sam then joined a meeting attended by the entire board except Greg, where Ilya told him he was being dismissed and that the news would be announced shortly.

At noon on November 17, Greg received a message from Ilya asking for a quick call. Ilya sent a meeting link, and Greg was told that he was being removed from the board but would retain his role in the company.

Greg is one of OpenAI’s earliest employees and spent much of his time there writing software. He played a key role in developing ChatGPT and other core products and holds equity in the company. A few hours after the board dismissed Sam Altman, Greg, long one of Altman’s closest allies, chose to resign.

Around the same time, OpenAI published a blog post stating, “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”

Many are questioning whether what happened at OpenAI amounts to a hostile “coup.” Sam Altman, the one dismissed, has received widespread support and sympathy.

Former Google CEO Eric Schmidt called Sam Altman a hero outright, praising him for building OpenAI from nothing to a $90 billion valuation. Airbnb CEO Brian Chesky joined the chorus, calling Altman one of the most exceptional founders of his generation.

Asked about accusations of a “coup” at the all-hands meeting held after Sam Altman’s dismissal, Ilya reportedly responded, “You could say that,” though he disagreed with the framing, maintaining that “this is the board fulfilling its responsibility as a nonprofit organization to ensure that OpenAI builds AGI for the benefit of all of humanity.”

In an internal memo sent to employees, interim CEO Mira Murati mentioned “our mission and our ability to co-develop beneficial and safe AGI.” She stated that OpenAI has three pillars, including “maximizing our research agenda, our safety and alignment work – particularly our scientific forecasting and capabilities around risk – and sharing our technology with the world in a way that benefits everyone.”

Ilya Sutskever, Sam Altman, Mira Murati, and Greg Brockman | Wired

OpenAI was founded in 2015 as a nonprofit. In a 2019 reorganization, Sam Altman became CEO and helped create a for-profit subsidiary, OpenAI LP, to raise capital from outside investors such as Microsoft to pay for AI training. OpenAI itself remains a nonprofit, governed by the board of directors.

Altman played a crucial role in fundraising, including securing $10 billion from Microsoft. Despite a cap on investor returns, OpenAI’s valuation has reportedly reached $85 billion, and according to Altman’s recent communication with employees, the company is generating $1.3 billion in annualized revenue.

Meanwhile, concerns about AI safety and OpenAI’s commercialization have divided the company’s leadership before. At the end of 2020, a group of OpenAI employees left and went on to found Anthropic, an AI company focused on safety; its founders explicitly attributed their departure to dissatisfaction with OpenAI’s commercialization and strategy.

Ilya has long been concerned about the risks a powerful AGI could pose in the coming years. In an earlier OpenAI blog post, he warned that the vast power of superintelligence “could be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”

After Sam Altman’s dismissal, the OpenAI board of directors stated in a statement, “OpenAI remains dedicated to our mission: ensuring that artificial general intelligence benefits all of humanity.”

02

Where is the “Lack of Candor”?

Beyond the “safety” issue, the board’s accusation that Sam Altman was “not candid” raises the question, based on the information available, of whether he concealed costs.

On November 15, just two days before Sam Altman was dismissed, OpenAI announced that it would stop accepting new sign-ups for ChatGPT Plus, its $20-per-month premium tier - a move that hints at a cost-and-revenue problem.

Insiders have claimed that OpenAI is losing money on each ChatGPT Plus user and needs time to increase server capacity.

The timing of Sam Altman’s dismissal coincided closely with this move, prompting speculation about whether he had misrepresented financial data, user numbers, or cloud-computing costs to the board. Was spending higher than expected? Or did he keep relevant information from the board?

Another suggestive detail is that the board also removed OpenAI co-founder and President Greg Brockman from the role of chairman, perhaps because Greg was closely involved with the technical costs of running OpenAI’s systems. He is one of the most influential figures at the startup, with a say in everything from product decisions to the engineering team’s direction. If spending had gone off track, the two executives could have acted in concert.

Additionally, Sam Altman’s departure may be linked to more unusual factors: there are reports that he planned to use OpenAI’s technology for Worldcoin, which has raised concerns about ethical risks.

As OpenAI’s biggest “backer,” where does Microsoft stand? According to people familiar with the matter, Microsoft’s leadership received only 5 to 10 minutes’ notice from OpenAI before Altman’s dismissal was publicly announced. Interim CEO Mira Murati told employees at a company-wide meeting on Friday that the relationship with Microsoft remains stable.

Not long ago, on November 9, Microsoft briefly blocked employee access to ChatGPT on company devices and warned employees, citing security concerns.

Barely ten days had passed since OpenAI’s Developer Conference, where Sam Altman was still mingling with Microsoft executives; he even attended his regular meeting with Microsoft’s infrastructure executives on the 15th.

“It’s been a strange day in many ways,” Sam Altman admitted in his post on X.

He had said that the ultimate goal of GPT-5 is to create a super AI on par with human intelligence. Has AI’s development already raised safety concerns serious enough to trigger this swift internal “coup”?

After being kicked out of OpenAI, what will be Sam Altman’s next step?


Timeline of Sam Altman’s Dismissal at OpenAI

November 6, 2023

OpenAI holds the Developer Conference and launches the “Custom ChatGPT” feature.

November 9, 2023

Microsoft issues a warning to their employees to stop using ChatGPT due to security concerns.

November 15, 2023

OpenAI temporarily suspends new registrations for ChatGPT Plus. Microsoft CEO Satya Nadella announces at the Ignite conference that all of OpenAI’s innovations will be used on Microsoft’s Azure cloud service.

November 16, 2023

Sam Altman receives a message from Ilya notifying him of a discussion at noon on Friday.

November 17, 2023

OpenAI’s board of directors abruptly announces that co-founder and CEO Sam Altman is leaving, effective immediately. The board states that, after a deliberative review, it concluded that Altman was not consistently candid in his communications with the board, hindering its ability to fulfill its responsibilities, and that it no longer has confidence in his leadership of OpenAI. The same day, Altman posts a statement about his departure on X (formerly Twitter).

Later on November 17, 2023

OpenAI co-founder and former chairman of the board Greg Brockman announces that he, too, will leave the company. This followed the board’s request that Brockman step down as chairman while retaining his role in the company.

November 17, 2023

OpenAI appoints the company’s Chief Technology Officer Mira Murati as interim CEO.

Disagreements within OpenAI led to Sam Altman’s dismissal.

Sam, as a manager and investor, and Ilya, as an AI scientist, held divergent development philosophies and could not “align.”

I just read an article in The Atlantic that traces the issue back to before the release of ChatGPT, arguing that OpenAI had already split into (at least) two camps: the tech optimists and the “AI doomers.” Here is my translation of the article:

Inside the Chaos at OpenAI

Sam Altman’s weekend of shocking drama began a year ago, with the launch of ChatGPT. By Karen Hao & Charlie Warzel

To understand the astonishing events of the past 48 hours - the sudden removal of OpenAI CEO Sam Altman, one of the leaders of the generative AI revolution, followed by reports that the company is considering bringing him back - you have to recognize that OpenAI is not an ordinary tech company. At least, not like the paradigm-shifting companies of the internet era, such as Meta, Google, and Microsoft.

OpenAI was founded with the intention of resisting the mainstream values of the tech industry - the relentless pursuit of scale and the fast-and-loose release of consumer products. It was established in 2015 as a nonprofit organization dedicated to creating artificial general intelligence (AGI) that benefits “all of humanity.” (According to the company, AGI would be sufficiently advanced to outperform humans in “most economically valuable work” - a powerful technology that needs responsible management.) In this vision, OpenAI resembled more of a research institute or think tank. The company’s charter explicitly states that OpenAI’s “primary fiduciary duty is to humanity,” not investors or even employees.

This model did not last long. In 2019, OpenAI launched a “capped-profit” subsidiary that could raise funds, attract top talent, and build commercial products; the nonprofit board of directors, however, retained complete control. These corporate details are at the heart of OpenAI’s rapid ascent and Altman’s stunning downfall. Altman’s dismissal by the board last Friday marked the climax of a power struggle between two ideological extremes within the company - one rooted in Silicon Valley techno-optimism and fueled by rapid commercialization, the other deeply worried about AI’s risks to humanity and calling for extreme caution. For years the two factions managed to coexist, despite some bumps along the way.

According to current and former employees, this delicate balance all but collapsed with the launch of ChatGPT a year ago today, which catapulted OpenAI to global prominence. From the outside, ChatGPT looked like one of the most successful product launches ever. It grew faster than any consumer app in history, and it seemed single-handedly to redefine how millions of people understood the threats and promises of automation. But internally, it pushed OpenAI in the opposite direction, widening the ideological fault lines that already existed. ChatGPT accelerated the race to build profit-generating products while putting unprecedented strain on the company’s infrastructure and on the employees focused on assessing and mitigating the technology’s risks. It heightened tensions among OpenAI’s factions - which Altman himself referred to as “tribes” in a 2019 email to employees.

Conversations between The Atlantic and ten current and former OpenAI employees reveal how the company has been transformed, and how the disagreements among its leadership became unsustainable. (We agreed not to disclose any employees’ names - they told us they feared repercussions for speaking candidly to the media about OpenAI’s internal affairs.) Their accounts make clear that the pressure from the company’s for-profit arm to commercialize increasingly conflicted with its stated mission, until ChatGPT and the product launches that followed brought things to a breaking point. “The pathway to profits and revenues became very clear after ChatGPT,” one insider told us. “One could no longer pretend this was an idealistic research institute. There are clients to be served here and now.”

We still do not know exactly why Altman was dismissed, or whether he will get his job back. Altman visited OpenAI’s headquarters in San Francisco this afternoon for discussions about a possible deal, but he did not respond to our request for comment. The board announced on Friday that, after a “deliberative review process,” it had found that he was “not consistently candid in his communications with the board,” leading to a loss of confidence in his ability to serve as OpenAI’s CEO. Following this announcement, the Chief Operating Officer sent employees an internal memo, confirmed by an OpenAI spokesperson, stating that the dismissal stemmed from a “breakdown in communications” between Altman and the board, not from “any impropriety or anything to do with our financial, business, security, or safety/privacy practices.” No specific details were provided. All that is known is that OpenAI has been in turmoil for the past year, defined in large part by apparent disagreements over the company’s direction.

Before the launch of ChatGPT in the fall of 2022, OpenAI was gearing up for the release of its most powerful language model yet, GPT-4. Teams were busy refining the technology, which enabled smooth writing and coding and describing the content of images. They were focused on preparing the necessary infrastructure to support this product and refining policies on what user behaviors OpenAI would tolerate and not tolerate.

Behind all of this, rumors began circulating within OpenAI that its competitor Anthropic was developing a chatbot of its own. The rivalry was personal: Anthropic had been founded in 2021 by former OpenAI employees who reportedly had concerns about the company’s pace of product releases. In November of last year, according to three people who were at the company at the time, OpenAI’s leadership told employees they had to release a chatbot within weeks. To pull this off, they instructed employees to ship an existing model, GPT-3.5, with a chat-based interface. The leadership was careful to frame the effort not as a product launch but as a “low-key research preview.” Altman and other executives argued that putting GPT-3.5 into people’s hands would let OpenAI gather more data on how people use and interact with AI, which would in turn inform the development of GPT-4. The approach also fit the company’s broader deployment strategy of releasing technology to the world gradually so that people could get used to it. Some executives, including Altman, began repeating the same refrain: OpenAI needed to get the “data flywheel” spinning.

Some employees expressed unease about rushing the release of this new conversational model. The company was already stretched thin with preparations for GPT-4, and introducing a chatbot that could shift the risk landscape posed a challenge. Just a few months earlier, OpenAI had launched a new traffic monitoring tool to track basic user behavior. It was still refining the tool’s capabilities to understand how people were using the company’s products, which would further guide them in mitigating potential dangers and abuses of the technology. Other employees felt that turning GPT-3.5 into a chatbot would pose minimal challenges since the model itself had already undergone thorough testing and refinement.

The company pressed on, launching ChatGPT on November 30. It was considered such a minor event that no major company-wide announcement marked the chatbot’s release. Many employees not directly involved, including some in safety functions, did not even realize it had happened. According to one employee, those who did know began placing bets on how many people might use the tool in its first week; the highest guess was 100,000 users. OpenAI’s president announced on Twitter that the tool had reached 1 million users within its first five days. The phrase “low-key research preview” became an instant meme inside OpenAI; employees turned it into laptop stickers.

The rapid success of ChatGPT brought immense pressure to the company. Computational resources for the research team were reallocated to handle the surge in traffic. As the traffic continued to increase, OpenAI’s servers repeatedly crashed; the traffic monitoring tool also repeatedly failed. Even when the tool was online, employees struggled to gain detailed insights into user behavior due to its limited functionality.

Internally, the safety teams pushed for a slowdown. These teams worked to improve ChatGPT so that it would refuse certain kinds of abusive requests and respond more appropriately to others. But they struggled to build features such as automatically banning users who repeatedly abused ChatGPT. Meanwhile, the product side wanted to capitalize on the momentum and double down on commercialization. The company hired hundreds of employees to aggressively expand its product offerings. In February, OpenAI released a paid version of ChatGPT; in March, it quickly followed with an API tool that let businesses integrate ChatGPT into their products. Two weeks later, it finally released GPT-4.

According to three employees who were at the company at the time, the rapid release of new products made things worse. The traffic-monitoring tool’s functionality lagged far behind, providing limited visibility into the traffic generated by products that integrated ChatGPT and GPT-4 through the new API tool, which made abuse harder to understand and prevent. At the same time, fraud on the API platform surged as users created accounts at scale, exploiting the $20 in free credits available through each new account. Stopping this fraud became a top priority, both to stem the revenue loss and to keep users from evading abuse enforcement by spinning up new accounts: overstretched members of the trust-and-safety team were reassigned to the problem, at the expense of work on other kinds of abuse. Under mounting pressure, some employees began struggling with their mental health. Communication grew strained; colleagues often learned that someone had been fired only after noticing the person’s absence on Slack.

The release of GPT-4 also frustrated the alignment team, which focused on addressing upstream AI safety challenges, such as developing techniques to make models follow user instructions and prevent them from generating harmful speech or “hallucinations” - confidently presenting false information as facts. Many team members, including those increasingly concerned about the risks of more advanced AI models, were uneasy about the rapid launch and extensive integration of GPT-4 into other products. They felt that the AI safety work they were doing was not enough.

At the top of the company, tensions grew even more pronounced. While Altman and OpenAI President Greg Brockman pushed for further commercialization, Chief Scientist Ilya Sutskever grew increasingly worried about whether OpenAI was still honoring its founding mission as a nonprofit - to create beneficial AGI. Over the past few years, the rapid progress of OpenAI’s large language models had made Sutskever more confident that AGI would arrive soon, and more focused on preventing the dangers it could pose. That concern is shared by AI pioneer Geoffrey Hinton, Sutskever’s doctoral adviser at the University of Toronto, with whom he has stayed in close contact. (Sutskever did not respond to requests for comment.)

Anticipating the arrival of this powerful technology, Sutskever began acting like a spiritual leader, according to three individuals who worked closely with him. He often passionately repeated the slogan “Feel the AGI,” alluding to the critical moment the company was in to achieve its ultimate goal. At OpenAI’s holiday party in 2022, held at the California Academy of Sciences, Sutskever led employees in chanting, “Feel the AGI! Feel the AGI!” The slogan was popular enough in itself that OpenAI employees created a special “Feel the AGI” reaction emoji in Slack.

As Sutskever grew more confident in OpenAI’s technology, he also aligned himself more closely with the internal teams focused on risk. According to insiders, at a leadership offsite earlier this year, Sutskever commissioned a wooden effigy from a local artist representing a “misaligned” AI - one that does not act in accordance with human values - and set it on fire, symbolizing OpenAI’s commitment to its founding principles. In July, OpenAI announced the formation of a so-called Superalignment team, co-led by Sutskever, saying it would expand the team’s research into more upstream AI safety techniques in preparation for the potential arrival of AGI within the decade. The team would be dedicated to this purpose and would receive 20% of the company’s existing computing resources.

Meanwhile, other parts of the company continued to roll out new products. Shortly after the establishment of the Superalignment Team, OpenAI released the powerful image generator DALL-E 3. Then earlier this month, the company held its first “Developer Conference” where Altman unveiled GPTs, a customizable version of ChatGPT that can be built without coding. These launches brought significant challenges. According to company updates, OpenAI experienced a series of disruptions, including a major outage of ChatGPT and its APIs. Three days after the Developer Conference, CNBC reported that Microsoft temporarily restricted employee use of ChatGPT due to security concerns.

Despite the various challenges, Altman kept pushing forward. In the days before his dismissal, he was actively touting OpenAI’s continued progress. He told the Financial Times that the company had already begun work on GPT-5, and at the Asia-Pacific Economic Cooperation (APEC) summit a few days later he hinted that more stunning advances were on the way. “In just the past few weeks, I’ve had the privilege of being close to the action and seeing how we’re peeling back ignorance bit by bit and pushing the frontiers of discovery,” he said. “Being able to be a part of that has been the highest honor of my career.” Reports indicate that Altman was also seeking to raise billions of dollars from SoftBank and Middle Eastern investors to start a chip company that could compete with NVIDIA and other semiconductor makers while cutting OpenAI’s costs. In the span of a year, Altman had helped transform OpenAI from a hybrid research organization into a full-fledged Silicon Valley tech company in high-growth mode.

Against this backdrop, it is easy to see why tensions boiled over. OpenAI’s charter places principles above profits, shareholders, and any individual. The company was founded in part by the very contingent that Sutskever now represents - those who fear AI’s potential, with beliefs at times seemingly rooted in science fiction - and that contingent also sits on OpenAI’s current board. But Altman, too, framed OpenAI’s commercial products and fundraising as means to the ultimate end. He told employees that the company’s models were still early in their development and that OpenAI should commercialize and generate enough revenue to fund its alignment and safety work without constraint; ChatGPT was reportedly on track to generate more than $1 billion a year in revenue.

From one angle, Altman’s dismissal can be seen as a stunning experiment in OpenAI’s unusual structure - an experiment that may now dismantle the familiar company and profoundly shape the future of artificial intelligence. If Altman returns to the company under pressure from investors and amid an outcry from current employees, the move would amount to an enormous consolidation of his power. It would suggest that, for all its grand charter and principles, OpenAI may in the end be just another traditional tech company.

From another angle, however, whether Altman stays or goes does nothing to address a dangerous flaw in the development of artificial intelligence. For the past 24 hours, the tech industry has held its breath, waiting to learn the fate of Altman and OpenAI. Although Altman and others have voiced support for regulation and welcomed global feedback, this tumultuous weekend shows just how few voices actually have a say in the progress of what may be the most important technology of our time. The future of AI is being decided by an ideological struggle among wealthy techno-optimists, ardent doomers, and billion-dollar companies. The fate of OpenAI may be uncertain, but its hubris - the openness it proclaims - has shown its limits. The future, it seems, will be decided behind closed doors.


The latest news is that Ilya is still standing firm against the investors and has not given in to Microsoft.

Now, there will be even more excitement to come…
