OpenAI Made AI Breakthrough Before Ousting Of Sam Altman

The curious tale of the four-day exile of CEO Sam Altman by the OpenAI board of directors has taken a fresh twist.

Reuters, citing two sources, reported that before the brief exiling of Sam Altman, several researchers had sent the board a letter warning of a powerful artificial intelligence (AI) discovery that they said could threaten humanity.

It has been an eventful period for OpenAI since Sam Altman was ousted by the board last week, a move that prompted criticism from Microsoft CEO Satya Nadella and other investors. Microsoft offered jobs to Altman and former president Greg Brockman, who had resigned in solidarity with Altman.

Image credit: Sam Altman/X

Letter to board

Then the majority of OpenAI staffers threatened to resign en masse, before Altman resumed his duties late on Tuesday under a new board of directors.

A fresh detail has now been added to this tale, after Reuters reported that the letter and the new AI algorithm were a key development before the board’s ouster of Altman.

The two sources cited the letter as one factor in a longer list of grievances held by the non-profit board that led to Altman’s firing, including concerns over commercialising advances before understanding the consequences.

Reuters said it was unable to review a copy of the letter, and the staff who wrote the letter did not respond to requests for comment.

OpenAI declined to comment when contacted by Reuters, but in an internal message to staffers it acknowledged a project called Q* and a letter to the board before the weekend’s events, one of the people reportedly said.

An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Q-Star project

Some staff at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one of the people told Reuters.

According to Reuters, OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person told Reuters on condition of anonymity.

Though the model only performs maths at the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

AI danger

In their letter to the board, the researchers flagged the AI’s prowess and potential danger, the sources told Reuters, without specifying the exact safety concerns noted in the letter.

Researchers also flagged work by an “AI scientist” team, the existence of which multiple sources confirmed to Reuters. The group, formed by combining earlier “Code Gen” and “Math Gen” teams, was exploring how to optimise existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Soon after the letter was delivered, the board fired Altman.

This development comes after multiple reports stated that Sam Altman had been seeking to raise billions from some of the world’s biggest investors for an AI chip venture in the weeks before his shock ouster.

Altman has also reportedly been looking to raise funds for an AI-focused hardware device he has been developing with former Apple designer Jony Ive.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long-standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
