Much has been written about whether the internet algorithms we constantly interact with suffer from gender bias, and a simple search is enough to see for yourself. But according to the researchers behind a new study designed to draw a conclusion on the matter, “the debate lacks scientific analysis as of now.” The new article, by an interdisciplinary team, proposes a fresh approach to the question and suggests solutions to prevent these discrepancies in the data and the discrimination they cause.
Algorithms are making more and more decisions, from granting a loan to screening job applications. As the range of uses of artificial intelligence (AI) grows, along with its capabilities and importance, it becomes increasingly important to assess any biases associated with these operations. “Although this is not a new concept, there are many cases in which the issue has not been studied, thereby ignoring the possible consequences,” said the researchers, whose study, published open access in the journal Algorithms, focuses mainly on gender bias in the various areas of AI.
Such biases can have an enormous impact on society: “Biases affect everything that is discriminated against, excluded or associated with a stereotype. For example, a gender or a race may be excluded from a decision-making process, or behavior may simply be assumed on the basis of one’s gender or skin color,” said the study’s lead researcher, Juliana Castañeda Jiménez, an industrial PhD student at the Universitat Oberta de Catalunya (UOC) supervised by Ángel A. Juan of the Universitat Politècnica de València and Javier Panadero of the Universitat Politècnica de Catalunya.
According to Castañeda, “it is possible for algorithmic processes to discriminate on the basis of gender, even when they are programmed to be ‘blind’ to that variable”. The research team – which also includes researchers Milagros Sáinz and Sergi Yanes, both from the Gender and ICT (GenTIC) research group of the Internet Interdisciplinary Institute (IN3), Laura Calvet from the Salesian University School of Sarrià, Assumpta Jover from the Universitat de València, and Ángel A. Juan – illustrates this with a few examples: the case of a well-known recruitment tool that favored male applicants over female ones, or the case of certain credit service providers that offered women worse terms than men. “If old, imbalanced data is used, you’re likely to see negative conditioning in relation to black, gay and even female demographics, depending on when and where the data came from,” Castañeda explained.
Science is for boys and the arts are for girls
To understand how these patterns affect the algorithms we interact with, the researchers reviewed earlier studies that identified gender biases in data processes in four kinds of AI: those with applications in natural language processing and generation, decision management, speech recognition, and facial recognition.
In general, they found that all the algorithms identified and classified white men better. They also found that the algorithms reproduced false beliefs about the physical traits that should define a person based on their sex, ethnic or cultural background, or sexual orientation, and that they made stereotypical associations linking men with the sciences and women with the arts.
Many image and speech recognition methods are also based on these clichés: cameras recognize white faces better, and audio analysis has trouble with higher-pitched voices, which mainly affects women.
The cases most likely to suffer from these problems are those whose algorithms were built from the analysis of real-world data tied to a specific social context. “Some of the main causes are the under-representation of women in the design and development of AI products and services, and the use of datasets with gender bias,” noted the researcher, who argued that the problem stems from the cultural environment in which they are developed.
“When an algorithm is trained on biased data, it can detect hidden patterns in society and reproduce them in operation. So if men and women are unequally represented in society, the design and development of AI products and services will exhibit gender bias.”
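The mechanism Castañeda describes can be made concrete with a minimal, entirely synthetic sketch (all names, probabilities and data here are invented for illustration and do not come from the study): a hiring model that never sees gender, only a proxy feature that happens to correlate with it, still reproduces the bias baked into the historical outcomes it was trained on.

```python
import random

random.seed(0)

# Synthetic "historical" hiring data (hypothetical): past hiring decisions
# were biased toward men, and a proxy feature correlates with gender.
def make_record():
    gender = random.choice(["M", "F"])
    proxy = random.random() < (0.8 if gender == "M" else 0.2)   # correlates with gender
    hired = random.random() < (0.7 if gender == "M" else 0.3)   # biased past outcomes
    return gender, proxy, hired

data = [make_record() for _ in range(10000)]

# "Gender-blind" model: predicts hiring from the proxy feature alone.
def train(records):
    outcomes = {}
    for _, proxy, hired in records:
        outcomes.setdefault(proxy, []).append(hired)
    # Predict "hire" for a proxy value if most past records with it were hired.
    return {p: sum(h) / len(h) > 0.5 for p, h in outcomes.items()}

model = train(data)

# Acceptance rate by gender, even though gender never enters the model.
def acceptance_rate(gender):
    group = [proxy for g, proxy, _ in data if g == gender]
    return sum(model[p] for p in group) / len(group)

print(acceptance_rate("M"), acceptance_rate("F"))  # men accepted far more often
```

Because the proxy is carried by roughly 80% of the men but only 20% of the women in this toy population, the model ends up accepting men at about four times the rate of women, despite being formally “blind” to gender.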
How can we put an end to this?
The many sources of gender bias, and the specifics of each type of algorithm and dataset, mean that eliminating this bias is a very difficult – though not impossible – challenge. “Designers and everyone else involved in designing algorithms must be informed of the possibility of biases associated with an algorithm’s logic. They also need to understand the measures available for minimizing potential biases as far as possible, and implement them so that these biases do not occur, because if they are aware of the types of discrimination occurring in society, they will be able to identify when the solutions they develop reproduce them,” Castañeda suggested.
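One of the simplest measures of the kind designers can apply is an audit metric. The following sketch (the audit data and group labels are hypothetical, chosen only for illustration; the study does not prescribe this specific metric) computes the demographic parity gap, i.e. the largest difference in favorable-decision rates between groups:

```python
def demographic_parity_gap(decisions, groups):
    """Largest difference in favorable-decision rates between any two groups."""
    by_group = {}
    for decision, group in zip(decisions, groups):
        by_group.setdefault(group, []).append(decision)
    rates = {g: sum(d) / len(d) for g, d in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Toy audit data (hypothetical): 1 = favorable decision, 0 = unfavorable.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["M", "M", "M", "M", "F", "F", "F", "F"]

print(demographic_parity_gap(decisions, groups))  # 0.5
```

Here one group receives favorable decisions 75% of the time and the other 25%, so the gap is 0.5; a gap near zero would indicate that the system treats the groups similarly, at least by this one measure.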
This work is innovative because it was carried out by specialists from different fields, including a sociologist, an anthropologist and experts in gender and statistics. “Team members provided a perspective that went beyond the autonomous mathematics associated with algorithms, thereby helping us to view them as complex sociotechnical systems,” said the study’s lead researcher.
“Compared with other work, I think this is one of the few studies to present the problem of bias in algorithms from a neutral point of view, highlighting both the social and the technical aspects to determine why an algorithm might make a biased decision,” she concluded.
This UOC research supports Sustainable Development Goals (SDGs) 5, Gender Equality, and 10, Reduced Inequalities.
The UOC’s research and innovation (R&I) helps overcome the pressing challenges faced by global societies in the 21st century by studying the interactions between technology and the human and social sciences, with a specific focus on the network society, e-learning and e-health.
UOC’s research is carried out by over 500 researchers and 51 research groups distributed across the university’s seven faculties, e-learning research program and two research centers: the Internet Interdisciplinary Institute (IN3) and the eHealth Center (eHC).
The university also encourages online learning innovation in its eLearning Innovation Center (eLinC), as well as UOC community entrepreneurship and knowledge transfer through the Hubbik platform.
The United Nations 2030 Agenda for Sustainable Development and open knowledge serve as strategic pillars for the UOC’s teaching, research and innovation. More information: research.uoc.edu.