OpenAI Co-Founder and Chief Scientist Ilya Sutskever Speaks Out with Deep Regret: How Should We View This Matter?

AI Express News, November 20th - OpenAI co-founder and chief scientist Ilya Sutskever has spoken out, expressing deep regret over his participation in the board's actions. He said he never intended to harm OpenAI, that he cherishes everything they have built together, and that he will do everything he can to reunite the company.

Differences in Team Philosophy, Going Separate Ways is a Good Thing

When there are stark differences in philosophy within a team, it is actually a good thing to part ways earlier.

Earning money through product development and pursuing the ideal of safe artificial general intelligence are fundamentally irreconcilable, whether for a for-profit organization accountable to investors or a nonprofit accountable to its own ideals.

Think back to the Taiping Heavenly Kingdom. On the march to Tianjing, everyone could still endure hardship together. Once they reached Tianjing, however, their values clashed and the problem could no longer be concealed.

There were fundamentalists who wanted to establish a paradise on earth, and realists who wanted to establish an ordinary dynasty.

Parting ways early certainly hurts the organization as a whole, but it is still far better than the bloodshed of the Tianjing Incident.

Altman Joins Microsoft, OpenAI Worries About Employees Following Him, Microsoft's AI Strategy Remains Stable

Ilya Sutskever wants to stabilize the staff, fearing that many people will follow Altman to Microsoft.

Currently, many core employees are expressing support for Altman, and a considerable number may follow him in the future.

In the end, OpenAI splits in two, with Altman leading a group of people to Microsoft to work on more commercialized AI.

The rest continue developing their slower-paced AI at OpenAI.

Either way, Microsoft will not suffer losses no matter what happens, because OpenAI's work is essentially a Microsoft asset.

As long as Altman doesn't leave, Microsoft won't have much to lose.

Altman's bold ideas and ability to attract investment can raise Microsoft's valuation in the AI field.

Previously, Altman was negotiating a sale of existing OpenAI employees' shares at an $86 billion valuation.

OpenAI's shareholders certainly like Altman very much.

Altman's departure would have a huge impact on OpenAI's valuation.

This is also why many shareholders support Altman and why Microsoft cannot let him leave.

He is the true money-making machine of AI commercialization, attracting financing and partnerships everywhere.

We can basically see the ending now: Altman joins Microsoft, and some colleagues join along with him.

OpenAI's management shows goodwill toward Altman.

Altman has also responded.

Previously, dozens of OpenAI employees announced their resignations.

OpenAI is certainly afraid that more people will follow Altman, so it must stabilize the situation by releasing a statement.

In fact, for technology companies like OpenAI, talent is the top priority and the core asset.

If everyone ends up at Microsoft, OpenAI will also face some trouble in the future.

Of OpenAI's 700+ employees, most joined last year, so the team may never have been stable in the first place.

Everyone knows that following Altman brings benefits, and this is currently OpenAI's main concern.

Microsoft's AI strategy is already very strong.

They have been developing AI models since 2009 and invested in OpenAI in 2019.

This year, Microsoft announced 100+ new products and features centered around AI, including cloud computing infrastructure, model-as-a-service (MaaS), data platforms, and the Copilot AI assistant, showcasing an end-to-end AI vision.

Microsoft wants to seize the dividends of the next AI boom and create a thriving ecosystem that brings together developers, independent software vendors (ISVs), system integrators, enterprises, and consumers. Its ambitions are very large.

For now, the matter has basically stabilized. Microsoft remains the absolute leader in the AI industry, and its future AI strategy has not been affected by this incident.

The Pointlessness of Obsessing over the Speed of AI Development

Ilya forgot the most important point: his obsession with the speed of AI development is actually meaningless.

OpenAI is not “all AI.” If you slow down, others will not slow down. If you don’t do it, others will still do it. If you strictly control the risks, others may not control the risks. If you don’t commercialize, others will still commercialize.

In the tide of technological progress, individuals, businesses, and institutions are only parts of the whole and cannot control the overall situation.

The Confusion Caused by Ilya's Regret

When I first started learning English, I noticed that "regret" can mean both remorse and mere sorrow that something is unfortunate, which can create ambiguity. In most cases, context makes clear which is meant. This time, however, I can't tell whether Ilya's regret is remorse or merely that he finds the outcome unfortunate.

The outcome of a coup can be imagined

Let's wait and see what becomes of Ilya in the aftermath. If he really orchestrated the coup, the fate of those who have historically failed at coups is easy to imagine.

Scientists Prioritize Human Frontiers

It was probably someone exploiting the goodwill of scientists to stir up trouble.

Now they have figured it out.


These highly intelligent individuals are not incapable of understanding office politics; it's just that their priority is pioneering frontiers for humanity, and everyone has only 24 hours in a day. They probably figured out what to do as soon as they thought it through.

Voting with Their Feet Shapes OpenAI's Development, the Financial Environment Constrains R&D, Stronger Constraints Are Needed to Maintain Control

Almost everyone has voted with their feet; OpenAI can never go back; once again, the bad has driven out the good.

Altman's overwhelming return is arguably the wrong approach, because it fully exposes the communication problems at the top of OpenAI. At the same time, it marginalizes the technological conservatives and serves as a deterrent: in the future, no one will dare to prioritize safety over commercialization, and the quality of GPT's development will decline all the faster.

It can be said that this failed palace coup has fully exposed OpenAI's most fundamental problem: it simply cannot develop without money. Yet even with such aggressive financing, the money is still not enough. This is the root cause of the breakdown in communication within OpenAI.

As for the current financial environment in the United States, even if OpenAI fully embraces Microsoft, the money may still be insufficient given the drag of the broader U.S. economy. Investors are eager for larger returns to offset losses elsewhere. Consequently, tolerance for AI research and development will drop significantly, and the research work will only become more underfunded.

If Altman cannot handle this incident smoothly, it cannot be ruled out that GPT will show fatal flaws within the next 12 months. Similar problems will multiply, and there may even be cases of user deaths.

GPT is indeed powerful, but powerful tools often correlate positively with danger. Without effective constraints, the danger will increase significantly. This is a natural law that cannot be avoided.

What's even more devastating is that over the past year, the United States has pinned its hopes for economic growth on this. If it turns out to be a mistake, then the more the AI concept was hyped earlier, the more disastrous its subsequent collapse will be. The predicament facing the U.S. AI industry may be even worse than expected.

To use nuclear energy as an analogy, GPT's current development is heading in the direction of an atomic bomb. But for commercialization to truly succeed, it needs to move toward a nuclear power plant. Current expectations are pointed in the opposite direction, and the outlook is darker than it is bright.

OpenAI's failure should serve as a warning to the entire industry. Everyone should understand that GPT is progressing too fast, bringing it closer to collapse. Research and development should focus more precisely on improving constraints: only by keeping AI technology under human control can the AI industry be sustainable.

Feigning Innocence, Shifting Responsibility

Aren't you feigning innocence a bit too much? You were the one who led the charge to drive them out, and now you express regret?


I just checked the comments under the tweet, and most of them are mocking.

Top comment:

“Do the people who are building AGI not foresee the consequences of their own actions three days later?”

Look at the foreigner's tweet below; it matches my thoughts exactly. They also think this person is feigning innocence and shifting responsibility.

"The framework of this statement:

  • Participation (their decisions; I was merely 'participating')
  • The board's actions (their actions, not mine)
  • Never intended (my involvement was only intention, not action)

This tweet acknowledges the negative impact of this incident while refusing to take responsibility for it."

The Pure Technologists May Be Replaced by Microsoft Copilot

The statement sounds quite official, and there's nothing wrong with it; what remains to be seen is their actions. But I'm not very optimistic about the product capabilities and customer service of pure technologists. In the future, Microsoft Copilot may replace ChatGPT.

Anyway, it's really a pity that things have turned out this way, but it's probably what he wanted. I only hope he can focus on technology and create the AGI he desires. Best wishes!
