OpenAI’s Sam Altman calls firing saga 'ridiculous' and has a softer tone on AI dangers

OpenAI CEO Sam Altman participates in the "Technology in a turbulent world" panel discussion during the annual meeting of the World Economic Forum in Davos, Switzerland, 2024. Copyright AP Photo/Markus Schreiber
By Pascale Davies

OpenAI’s CEO spoke about last year’s firing saga and addressed concerns about the future of AI in Davos.


OpenAI CEO Sam Altman said his surprise firing from and rehiring by the start-up in November last year was "ridiculous" and that "at some point you have to laugh".

Altman addressed the tumultuous period while speaking on a panel at the World Economic Forum in Davos, Switzerland, on Thursday.

OpenAI catapulted onto the scene with its generative AI chatbot ChatGPT last year and grew exponentially, with backers such as Microsoft investing $10 billion (€9 billion) in the company.

But the company was thrown into turmoil in November after the board said it had lost confidence in Altman and removed him as CEO.

The board said that "he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities".

Subsequently, virtually the entire staff of OpenAI threatened to resign unless he was reinstated.

"When the first board asked me to come back my immediate response was no, because I was pissed," he told the audience in Davos.

"But I did also know, and I’d seen from watching the executive team, the company would be fine without me," he added.

Should we be worried about AGI?

Speaking about the lessons learnt from the saga, Altman warned: "As the world gets closer to AGI (artificial general intelligence, which could learn any task humans can perform), the stress will go up".

“One thing that I observed for a while is everybody’s character gets plus 10 crazy points”.

He said companies should spend more time thinking about “how all strange things can go wrong”.

Altman previously warned about the "grievous harm" of AGI in February 2023, writing that "a misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too".

However, he has softened his tone at Davos this year and in conversation with Microsoft CEO Satya Nadella on Tuesday said that AGI would be a "surprisingly continuous thing," where "every year we put out a new model [and] it’s a lot better than the year before".

In a conversation organised by Bloomberg at the conference, he also said AGI could be developed in the "reasonably close-ish future," but "will change the world much less than we all think and it will change jobs much less than we all think".

Not worried about the New York Times lawsuit

Altman also said on Thursday he was "surprised" the New York Times filed a lawsuit against OpenAI and Microsoft in December, accusing the companies of copyright infringement for using the newspaper's articles to train their AI models.

"We were as surprised as anybody else to read that they were suing us in the New York Times. That was sort of a strange thing," Altman told the Davos crowd, adding that it does not need the publisher's data to train its AI models.

In December, OpenAI announced that news content from Axel Springer publications, which include Politico and Business Insider, would be used to train the company's AI systems.
