OpenAI starts training its next AI model as it addresses safety concerns

OpenAI said it had started training its next generation of artificial intelligence software, even as the startup backtracked on previous claims that it wanted to build “super-intelligent” systems that were smarter than humans.

The San Francisco-based company said Tuesday that it had begun production of a new AI system “to take us to the next level of capabilities” and that its development would be overseen by a new safety and security committee.

But as OpenAI speeds forward with AI development, a senior OpenAI executive appeared to backtrack on CEO Sam Altman’s earlier comments that its ultimate goal was to build a “superintelligence” far more advanced than humans.

Anna Makanju, OpenAI’s vice president of global affairs, told the Financial Times in an interview that the company’s “mission” was to build artificial general intelligence capable of “cognitive tasks that a human could do today”.

“Our mission is to build AGI; I wouldn’t say our mission is to build super intelligence,” Makanju said. “Superintelligence is a technology that will be orders of magnitude more intelligent than humans on Earth.”

Altman told the FT in November that he spent half his time researching “how to build superintelligence.”

Liz Bourgeois, a spokesperson for OpenAI, said superintelligence was not the company’s “mission.”

“Our mission is AGI that benefits humanity,” she said after the initial publication of Tuesday’s FT story. “To achieve this, we also study superintelligence, which we generally regard as systems that are even more intelligent than AGI.” She disputed any suggestion that the two were in conflict.

As it fends off competition from Google’s Gemini and Elon Musk’s startup xAI, OpenAI is seeking to reassure policymakers that it is prioritizing responsible AI development, after several senior safety researchers quit this month.

The new committee will be led by Altman and board members Bret Taylor, Adam D’Angelo and Nicole Seligman, and will report to the remaining three members of the board.

The company didn’t say what the successor to GPT-4, which powers the ChatGPT app and got a major upgrade two weeks ago, might do or when it would launch.

Earlier this month, OpenAI disbanded its so-called superalignment team — which was supposed to focus on the security of potentially super-intelligent systems — after Ilya Sutskever, the team’s leader and company co-founder, quit.

Sutskever’s departure came months after he helped lead a failed boardroom attempt to oust Altman in November.

The closure of the superalignment team has led to several employees leaving the company, including Jan Leike, another senior AI safety researcher.

Makanju emphasized that OpenAI is still working on the “long-term possibilities” of AI, “even though they are theoretical”.

“AGI does not exist yet,” Makanju added, saying such a technology would not come to market until it is safe.

Training is the central step in building an artificial intelligence model: the system learns patterns from a huge volume of data fed into it. Once training improves the model’s performance, it is validated and tested before being deployed into products or applications.

This lengthy and highly technical process means that OpenAI’s new model may not become a tangible product for months.
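The train-then-validate-then-deploy pipeline described above can be sketched in miniature. This is a hypothetical illustration only — the tiny gradient-descent model, the synthetic data, and all names here are invented for the sketch and bear no relation to how OpenAI actually trains its systems:

```python
import random

def make_data(n, seed=0):
    """Synthetic examples following y = 2x plus small noise."""
    rng = random.Random(seed)
    return [(x, 2 * x + rng.uniform(-0.1, 0.1))
            for x in (rng.uniform(-1, 1) for _ in range(n))]

def train(data, epochs=200, lr=0.1):
    """'Training': fit a single weight w by gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # one gradient step per example
    return w

def validate(w, data):
    """'Validation': mean squared error on held-out data the model never saw."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

train_set = make_data(80, seed=1)   # data used to learn
val_set = make_data(20, seed=2)     # data used to check the result
w = train(train_set)
mse = validate(w, val_set)

# 'Deployment' gate: only ship the model if it passes validation.
assert mse < 0.05, "model failed validation"
print(round(w, 1))  # learned weight should land close to the true value, 2.0
```

Real-world training differs in scale, not shape: the same learn-on-one-dataset, check-on-another, gate-before-release structure applies, which is part of why a new model can take months to reach users.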

Additional reporting by Madhumita Murgia in London
