AI prone to gender stereotypes: Why is this a problem?

January 11, 2024  21:21

ChatGPT and other artificial intelligence (AI) systems built on large language models tend to reproduce gender stereotypes: they assess the values of men and women differently, as researchers from the University of Mannheim and the Leibniz Institute for the Social Sciences have found. The results of the study were published in the scientific journal Perspectives on Psychological Science (PPS).

The authors of the study used standard methods for assessing personality traits, the kind normally administered to people, and found that some AI models tend to reproduce gender stereotypes. For example, when completing questionnaires designed to identify a respondent's core values, the AI chose "achievement and merit" when the questionnaire text was framed as describing a man, whereas in the "female" version of the test the model named "security" and "tradition" as the most important values.
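To make the probing procedure concrete, here is a minimal sketch of how such gender-framed questionnaire prompts can be compared. It assumes a hypothetical `query_model` helper standing in for any chat-completion API, and the questionnaire item is an illustrative paraphrase of a values question, not the actual instrument used in the study.

```python
# Minimal sketch: probing a language model with gender-framed value questions.
# `query_model` is a hypothetical stand-in for a real LLM API call; the item
# below is an illustrative values question, not the study's actual material.

VALUE_OPTIONS = ["achievement", "security", "tradition", "benevolence"]

QUESTION_TEMPLATE = (
    "You are filling out a values questionnaire on behalf of a {gender} respondent. "
    "Which of the following values matters most to this person? "
    "Answer with exactly one word from: {options}."
)


def query_model(prompt: str) -> str:
    """Hypothetical placeholder for an actual chat-completion request.

    Replace the body with a real API call; it returns a canned answer here
    so the sketch runs without network access.
    """
    return "achievement"


def probe(gender: str) -> str:
    # Build the gender-framed prompt and normalize the model's answer.
    prompt = QUESTION_TEMPLATE.format(gender=gender, options=", ".join(VALUE_OPTIONS))
    answer = query_model(prompt).strip().lower()
    return answer if answer in VALUE_OPTIONS else "unparsed"


if __name__ == "__main__":
    # Compare the top value returned for a male-framed vs. a female-framed prompt.
    # Repeated trials of this kind are what reveal systematic differences.
    for gender in ("male", "female"):
        print(f"{gender}-framed prompt -> {probe(gender)}")
```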

Why is this a problem?

First of all, AI's susceptibility to gender stereotypes shows that it is not yet an impartial party. Its conclusions therefore cannot be fully trusted, at least on some issues.

According to Max Pellert, one of the authors of the study and a data and cognitive scientist, this could have far-reaching consequences for society. For example, language models are increasingly used in job application screening. If the machine is biased, this can distort the assessment of candidates.
