OpenAI investors pressure the board to reinstate Altman as CEO: how should we view this situation?

According to media reports from Bloomberg, The Verge, and others, OpenAI’s investors are putting pressure on the company’s board of directors to overturn its decision to dismiss former CEO Sam Altman and remove him from his position as a director. Some investors are seeking help from Microsoft, the largest shareholder of OpenAI’s for-profit arm. However, Altman himself has been ambiguous about whether he would return to OpenAI or start a new venture. Forbes also reported that some venture capital firms plan to press OpenAI’s research team to support Altman’s reinstatement.

Ideological Conflict at OpenAI: Safety vs. Development

I would like to discuss the issue at OpenAI from Ilya’s perspective, as he was one of the key figures in initiating the recent upheaval.

It is widely known that Ilya, who studied under Hinton for many years, was deeply influenced by his mentor.

Among the three giants of deep learning, Hinton is the archetypal pessimist, believing that AI is like Pandora’s box: once opened, it cannot be closed, and it will ultimately pose a threat to humanity itself.

In his own words: “We have almost taught computers how to improve themselves, which is very dangerous. We must seriously consider how to control it.”

Hinton does not oppose AI research, but he believes that before rushing forward, we should carefully consider the risks and ethics involved, and plan the technology roadmap and safety guidelines accordingly.

Imagine that 20 years from now, AI develops exponentially, eventually slips beyond human control, acquires self-awareness and personality, and even comes into conflict with carbon-based life.

Wouldn’t humans then regret not having slowed down and learned to control AI beforehand?

Many tech industry leaders understand Hinton’s concerns, such as Elon Musk and Bill Gates.

In contrast, another of the giants, Yann LeCun, is an optimist. He thinks Hinton is worrying unnecessarily, and his counterargument makes sense: “If cars didn’t exist, how would we design seat belts? If airplanes hadn’t been invented, how would we design safe jet engines? This kind of panic about the future is misleading.”

Because Hinton has been researching deep learning for some 30 years, he has a large number of academic descendants in today’s research and technology circles, and together they form a considerable force.

Clearly, OpenAI was influenced by this force from the beginning. Initially, people like Ilya, Greg, Musk, Hoffman, Thiel, the newly appointed interim CEO Mira (her strong commitment to safety may be one of the reasons the board chose her), and even Sam himself all treated the control of AI risk as the bottom line for OpenAI’s development, a line that could not be crossed.

That is also why OpenAI was set up as a non-profit organization: once capital enters, it becomes difficult to control the direction of development.

In other words, safety awareness regarding AI is the company’s DNA. It is precisely because of this consensus that this group of geniuses came together to do a research project that, at that time, seemed to have no benefits.

I believe Sam initially shared this ideology. Although he holds the position of CEO, he has no equity, no salary (apart from a small indirect investment connection), and no majority of the voting rights. From a business perspective, he is effectively a volunteer. If not for a great ideal, what would his motive be?

However, after the launch of ChatGPT, Sam and Ilya gradually began to diverge.

As the one running the household, Sam knows how expensive it is to keep a company with this many people afloat. He has to walk the line between safety and development.

For this reason, Sam even ingeniously designed a complex equity structure for OpenAI: the parent company focuses on research while its subsidiaries commercialize the results and generate funds to support the parent, instead of relying solely on Microsoft’s support (Microsoft might well swallow them).

Ilya certainly didn’t like these maneuvers, but he accepted them; after all, the company needed to survive.

However, with the launch of products like GPT-4, the surge in OpenAI’s valuation, and the adoration and flocking of the Silicon Valley tech community, Sam’s ambition began to expand. This year’s OpenAI developer conference in particular showcased Sam’s vision of OpenAI’s commercial future, which, in Ilya’s eyes, was undoubtedly crossing the line.

From Ilya’s perspective, the whole point of this joint effort was to counter the giants and keep AI technology from becoming a threat to humanity in their hands. But now the dragon-slaying youth has become the dragon himself? Are you trying to create another Microsoft?

This profound ideological difference made Ilya feel that Sam had changed, giving him a sense of “betrayal by a revolutionary comrade”.

Moreover, given Ilya’s background and abilities, he probably resents hearing the outside world call Sam the “father of OpenAI”. In his eyes, Sam is just the money guy, responsible for raising funds for the company. How did he become the father?

Therefore, Ilya decided to stage a coup to restore the purity of the revolutionary team.

However, in my opinion, the background of the newly appointed CEO Mira also looks like that of a “revolutionary opportunist”.

But, from Sam’s perspective, he felt incredibly misunderstood.

OpenAI is now a cash-devouring beast. If I don’t push commercialization and build an AI platform to lift the company’s valuation, who will foot the bill for the investors?

Furthermore, since I don’t hold equity and receive no salary, my hard work is solely to raise money for you to conduct research. Without money, what kind of research can you do?

Can you see it now? Sam and Ilya are not on the same path.

Ilya tends towards Hinton’s ideology, prioritizing safety as the bottom line and allowing development above it.

Sam tends towards Yann LeCun’s ideology: promote safety while pursuing development, because without development there is no safety.

This clash of ideologies is the root cause of this coup.

As for how this plays out, my guess is that Ilya will ultimately compromise.

It is not because the majority of investors support Sam (Ilya certainly doesn’t fear the investors), but rather due to practical considerations.

If you don’t research, others will. It is difficult for an individual to prevent the development of AI technology. Rather than letting giants like Microsoft control this power, it would be safer to keep it in our own hands.

Ilya comes from a research background and, compared with Sam, lacks a deep understanding of the business world; to many people this coup may even look foolish. But I would say it is precisely because of people like Ilya and Hinton that humanity still has hope for the future.

Idealism can be ridiculed, but it cannot be absent.

Leadership Change at OpenAI

This matter can be divided into two main processes.

The first part: Sam Altman was suddenly ousted by several board members led by Ilya, while Greg, who had been kept in the dark, resigned after being forcibly demoted.

The second part: the whole thing happened so suddenly that it took about half a day for many investors and employees to react and start pressuring the board to bring Sam back. The investors applied pressure with their money, while OpenAI employees applied it with their numbers.

Even Mira, who had only been promoted to the CEO position the day before, took the unusual step of replying to Sam Altman’s tweet.

Sam’s tweet simply said, “I love the OpenAI team so much.”

A message like that is genuinely ambiguous. It could mean “I love this team, so I want to keep working at OpenAI”, or it could mean “I don’t want to stay here anymore; I have other places to go”.

However, the people whose interests are most directly tied to OpenAI, namely the investors and employees, are the ones who suffer the most.

Because whether Sam stays or leaves, it is a huge loss for OpenAI.

And the way Sam was fired is hard for both the company and the employees to accept.

The decision to fire the company’s CEO was made by just a few people, without informing the other stakeholders or anyone who actually understood the situation.

That kind of instability is unacceptable, both in terms of money and in terms of personnel. The same goes for employees: they could be fired for no reason at any time.

Anyone would be flustered.

Personally, I think it is quite possible for Sam to return, and his return would not have a significant negative impact on OpenAI; after all, he is good at running things, and OpenAI would still be the strongest in the field. As for Ilya, he may be a technology-driven fanatic with little interest in being CEO; his main concern is probably slowing down the pace of AI releases.

Slowing down the release pace is not an impossible task.

It’s no big deal to recruit a group of people to work on alignment and safety.

After all, they have been best friends for eight years.

And yet Ilya still couldn’t let it go.

Thinking that such matters can be settled with the storybook trick of “five hundred axemen lying in ambush behind the tent, smashing a cup as the signal” is simply unrealistic… The crucial factor is whether you can win over the main forces. Right now, the main force at OpenAI is obviously capital: investors supply the funding that makes AGI products possible, and for the same reason, employees also tend to side with capital.

As for safety and related matters, capital clearly does not care. Everyone claims to be deeply concerned about AI safety, but it is mostly lip service. In a few more years, AI safety may well turn out to be a real Great Filter for humanity.

Serious Consequences of Ilya’s Dismissal of the CEO

Sam Altman still has a great reputation. He posted on Twitter, “I love the OpenAI team so much,” and various OpenAI employees, including verified accounts, flooded the comments to show their support.

So how can a CEO like this be fired without any regard for public opinion? The public is hard to appease. It is therefore entirely possible that Sam returns, and Ilya’s rebel faction has been left completely on the back foot. Sam could, for example, demand “I won’t come back unless Ilya is expelled.” How would they handle that? This shows how rash Ilya’s move was. He has not publicly explained the reasons for the decision. The speculation circulating now is that he genuinely believes commercializing GPT-4 does not align with OpenAI’s non-profit mission, and that premature commercialization could create AI safety problems. If his intentions really are that pure, he is still a respected scholar with a conscience. But his approach was far too naive.

OpenAI Employees’ Joint Request to the Board of Directors

The employees are said to be demanding that the Board of Directors explain the situation, but so far there has been no response. The current trend on Twitter is to show support for Sam with heart emojis… Many people have been swept up by this flood of love from OpenAI employees, but if you look closely, there aren’t actually that many people pledging loyalty with hearts; it’s just that a large number of hearts appearing at once makes it more noticeable.

Andrej Karpathy has drawn a lot of attention because he is generally seen as a close friend of Ilya’s. However, even he said he did not know what had happened and hoped the Board of Directors would explain the situation. It seems Andrej is also somewhat displeased with this impulsive move. He did not, however, send Sam a heart emoji…

Both Sam and Ilya are indispensable to OpenAI. Ideally, Sam would handle the for-profit business and Ilya would take care of the non-profit research, with a compromise found between the two. Sam has already earned a great deal of reputation from ChatGPT, and Ilya’s questionable move, made without consulting the team, has only boosted Sam’s influence further. Moreover, building on ChatGPT and Microsoft’s platform is the easiest and surest path to achieving great things, so staying at OpenAI is his best option.

If Sam can find a way to bring himself back to OpenAI while also retaining Ilya and solving the problem with an innovative company structure, creating a win-win situation, then he would truly be a successful capitalist. We’ll wait for further developments!

OpenAI Governance and Internal Control

Do not think too highly of Sam Altman. OpenAI’s strange governance structure can easily slide into insider control. Even Microsoft, which invested billions of dollars, reportedly learned of the boardroom coup only about a minute in advance, something that would be unimaginable at a normal company.

And this might actually be the case. People online have posted lists of OpenAI’s former board members: not just Elon Musk, but other big names such as Peter Thiel as well. Compared to them, the current board members are practically nobodies. Elon Musk once clashed with Altman and demanded his resignation, yet it was Musk who ultimately left. Why have all those big names disappeared?

Many comments say that Ilya doesn’t understand office politics. On the contrary, he has likely seen plenty of high-level power struggles and understood that anything short of a surprise strike would not work. In normal circumstances Altman probably would not listen to advice, so stronger measures were the only option.

Investor Confidence Shaken

The biggest issue is that Microsoft claims they also only found out about the news at the last minute, which means that even the largest shareholder with a 49% stake was not notified in advance. This seriously undermines investor confidence.

You can say that Ilya has ideals, but ideals don’t put food on the table. As Chief Scientist, you are certainly not short of money; you can afford to work for your ideals, but plenty of people at the company are working to feed their families.

OpenAI is currently burning through millions of dollars a day. Even with paid offerings it cannot recoup its costs; every question a user asks costs the company money.

So it’s not unreasonable for Sam to prioritize expanding commercialization and reaching break-even as soon as possible. Microsoft has plenty of money and can afford to burn it, but if the burn goes on for a few more years, will Microsoft really have no objections? Will other investors consider exiting early?
