AI Reveals Hidden Discrimination in Search Engines
A research team from the University of Lugano and the University of Geneva has developed a new method for detecting hidden discrimination in search engine ranking algorithms. Using large language models (LLMs), in particular GPT-4o, the researchers built an unfairness detector capable of spotting gender biases that traditional methods miss. The approach is especially relevant in areas where algorithms shape socially important decisions, such as hiring, medical information, or educational recommendations.
A key element of the methodology is the new CWEx (Class-wise Weighted Exposure) metric, which accounts not only for the number of documents associated with each gender group but also for their position in the search results. Unlike earlier approaches that simply counted keywords, CWEx relies on the semantics of the context, using the ability of language models to grasp the sentiment and overall meaning of a text. This makes it possible to uncover hidden manifestations of bias that are never stated explicitly.
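To illustrate the idea of position-weighted exposure, here is a minimal sketch in Python. The log-based rank discount, the class labels, and the function name class_wise_weighted_exposure are assumptions for illustration only; the paper's exact weighting scheme may differ.

```python
import math
from collections import defaultdict

def class_wise_weighted_exposure(ranking, classes=("female", "male", "neutral")):
    """Position-weighted exposure per class for one ranked result list.

    `ranking` is a list of class labels ordered from the top result down.
    Each rank contributes exposure with a log-based discount (an assumed
    weighting), and per-class totals are normalised to sum to 1.
    """
    exposure = defaultdict(float)
    for rank, label in enumerate(ranking, start=1):
        exposure[label] += 1.0 / math.log2(rank + 1)  # higher positions weigh more
    total = sum(exposure.values()) or 1.0
    return {c: exposure[c] / total for c in classes}

# Two rankings with identical document counts but different ordering:
# plain counting would call them equally balanced, exposure does not.
print(class_wise_weighted_exposure(["male", "male", "female", "female"]))
print(class_wise_weighted_exposure(["female", "female", "male", "male"]))
```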
To assess the accuracy of the language models, the researchers compared several systems, including LLaMA, Qwen, Mixtral, and GPT-4o. GPT-4o performed best when used in Chain-of-Thought mode, correctly classifying over 90% of the texts. The analysis also showed that the models detect bias against women slightly more readily than bias against men.
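A hedged sketch of what such a Chain-of-Thought classification call could look like with the OpenAI Python client is shown below. The prompt wording, label set, and parsing of the final line are assumptions for illustration, not the authors' actual protocol.

```python
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

PROMPT = (
    "Read the document below and reason step by step about whether it "
    "portrays one gender more favourably than the other. Finish with a "
    "single label on the last line: 'biased_female', 'biased_male', or 'neutral'.\n\n"
    "Document:\n{document}"
)

def classify_bias(document: str) -> str:
    """Ask GPT-4o for a step-by-step (Chain-of-Thought) bias judgement."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT.format(document=document)}],
        temperature=0,
    )
    # The prompt reserves the last line of the reply for the label.
    return response.choices[0].message.content.strip().splitlines()[-1]
```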
Testing was conducted on two specialized datasets: Grep-BiasIR, which contains 117 gender-sensitive search queries and about 700 documents, and MSMGenderBias, a carefully annotated corpus of texts divided into neutral texts and texts biased in favor of women or men. To verify classification accuracy, 180 human annotators were involved; their judgements turned out to be closest to the results produced by GPT-4o.
The CWEx methodology makes it possible to assess the fairness of search results more accurately by taking into account the visibility of materials rather than just their number. This is especially important for recommendation systems, recruitment platforms, and educational services, where hidden bias can shape public opinion and influence individual decisions. The researchers note that the proposed tool can be adapted to detect discrimination based on other characteristics, such as age or ethnicity, provided it is configured accordingly.
The study highlights the importance of transparency and accountability in the use of AI algorithms. Despite the "mathematical" nature of these systems, they can reflect and reinforce social and cultural biases. Using language models as an audit tool provides a new level of understanding and control, allowing unfairness to be detected and corrected before it becomes embedded in automated decisions.