Before Sam Altman was dismissed from OpenAI, the company's board of directors had received a letter from several staff researchers detailing a breakthrough in artificial intelligence (AI) that they believed could pose a threat to humanity, according to two sources familiar with the matter cited by Reuters.
The letter was cited as one of the contributing factors in the list of grievances that ultimately led to Altman's removal. The board was also concerned about the potential risks associated with commercializing the advancements without a thorough understanding of the consequences. Reuters did not have access to the exact text of the letter, and the researchers who wrote it did not respond to requests for comment.
OpenAI itself declined to comment, but after Reuters contacted the company, it reportedly acknowledged internally the existence of a project named Q* and confirmed that the board had received a letter shortly before Altman's dismissal. In an internal message to employees, OpenAI's Mira Murati reportedly warned about certain media reports without directly addressing their accuracy.
Some individuals within OpenAI reportedly believe that Q* (pronounced Q-Star) represents a potential breakthrough in the quest for artificial general intelligence (AGI), which refers to autonomous systems that surpass humans in most economically valuable tasks, according to one of the sources cited by Reuters. The new model, backed by significant computing resources, demonstrated the ability to solve certain mathematical problems. Although these problems were only at the school level, the model's success has fueled optimism among researchers about Q*'s future capabilities.
What makes a model that excels at mathematical problem-solving notable is its potential to push the boundaries of generative AI. Current generative AI performs well at tasks like text generation and translation, where many outputs can be acceptable; the ability to solve mathematical problems, which have a single correct answer, suggests stronger reasoning capabilities and a closer resemblance to human intelligence. This, in turn, could open new possibilities for applications in scientific research.
In their letter to the board, the researchers reportedly acknowledged the power of AI while also highlighting its potential dangers. However, the specific safety concerns raised in the letter were not disclosed by the Reuters sources.