Is ChatGPT getting dumber? GPT-4 gives wrong answers after recent update

July 20, 2023  21:43

In June 2023, the GPT-4 language model underlying the premium ChatGPT chatbot performed noticeably worse than it had in March of that year. A recent study by researchers from Stanford University showed that on many questions the model began to give wrong answers more often than right ones.

Interestingly, the GPT-3.5 language model, by contrast, improved on most tasks over the same period.

The researchers asked the chatbot various questions and evaluated the correctness of its answers. For example, the AI had to determine whether the number 17,077 is prime. To better understand the model's "reasoning" process and improve the result, the chatbot was asked to write out its calculations step by step. As it turned out, in this mode the AI often answers correctly.
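For reference, the primality question posed to the model is trivial to verify programmatically. The sketch below is only an illustration of the kind of step-by-step divisibility check the researchers wanted the model to walk through; it is not code from the study itself.

```python
def is_prime(n: int) -> bool:
    """Trial division: test odd divisors up to the square root of n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# The number from the study: 17,077 has no divisor up to ~130, so it is prime.
print(is_prime(17077))  # True
```

Trial division up to the square root is enough here, since any composite number must have a factor no larger than its square root.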

Even so, the AI answered many questions incorrectly. Where the March version of GPT-4 gave the correct answer in 97.6% of cases, by June that figure had dropped to 2.4%. Over the same period, GPT-3.5's accuracy rose from 7.4% to 86.8%; that is, unlike the more advanced model, it became considerably better.

Interestingly, GPT-4's code generation deteriorated as well. The researchers built a dataset of 50 simple LeetCode problems and counted how many of GPT-4's answers ran without any changes. The March version succeeded on 52% of the tasks, but that figure dropped to 10% for the June model.

The cause of these problems is still unclear. There is also no word on whether OpenAI, the company developing the model, will do anything to fix them.
