In an interview with The Verge, OpenAI chief Sam Altman commented on rumors that the company is developing a highly powerful AI that could pose a potential threat to humanity.
News of this OpenAI breakthrough appeared in the media shortly after Altman was fired as CEO. According to Reuters, one reason for the dismissal was that Altman had not informed the board of directors about the development.
In the conversation with The Verge, Altman did not reveal details about the powerful AI, but he called the information about it "an unfortunate leak."
“We expect progress in this technology to continue to be rapid and also that we expect to continue to work very hard to figure out how to make it safe and beneficial,” he said.
Altman added that research is necessary for progress, and this process inevitably comes with various difficulties.
“You can always hit a wall, but we expect that progress will continue to be significant. And we want to engage with the world about that and figure out how to make this as good as we possibly can,” he said.
The OpenAI project called Q* (pronounced "Q-Star") could, according to some experts, be a breakthrough in the search for so-called artificial general intelligence (AGI). Q* reportedly managed to solve certain mathematical problems, and although it did so only at the level of schoolchildren, the very fact that it completed these tasks successfully speaks volumes.
What exactly is special about a model that can solve mathematical problems?
According to many researchers, mathematics is the cutting edge of generative AI. Today's AI can write and translate text, but its answers to the same question can vary greatly. Mastering mathematical problems, which have only one correct answer, would imply that AI has acquired greater reasoning ability and moved closer to human intelligence. It could then be applied, for example, to new scientific research.
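The contrast drawn above can be made concrete: free-form text admits many acceptable answers, whereas a math problem can be graded by strict comparison against its single correct answer. The sketch below illustrates this idea; the `grade` function and the sample problems are hypothetical and not part of any OpenAI system.

```python
# Minimal sketch of why math problems suit objective evaluation:
# unlike open-ended text, each problem has exactly one correct answer,
# so grading reduces to a strict comparison. Names here are illustrative.

def grade(problem: str, model_answer: str, correct_answer: str) -> bool:
    """Return True only if the model's answer exactly matches the
    unique correct answer (ignoring surrounding whitespace)."""
    return model_answer.strip() == correct_answer.strip()

# A grade-school arithmetic problem with a single correct answer:
print(grade("What is 7 * 8?", "56", "56"))  # True
print(grade("What is 7 * 8?", "54", "56"))  # False
```

With open-ended prose, no such exact check exists, which is why researchers treat math performance as a cleaner signal of reasoning ability.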
Unlike a calculator, which can perform only a limited set of operations, AGI can generalize, learn, and understand. However, it is not yet clear what risks this may conceal.