AI generates new forms of inequality between women and men

The European Union (EU) is founded on the values of respect for human dignity, non-discrimination and equality between women and men. While Community law tends to prohibit discrimination against individuals, or groups of individuals, based on a particular characteristic such as sex, the practices of certain artificial intelligence (AI) systems have brought to light new forms of inequality between women and men that are harder to grasp.

Because the issue of AI bias reframes the terms in which the principle of equality must be formulated (1), it is urgent for the EU to equip itself with new tools to combat these new forms of discrimination (2).

The biases of AI

Vulnerable groups include, in particular, women and ethnic minorities, sex/gender and race/ethnicity being characteristics protected by EU law. At the intersection of these two sensitive variables, Black women are particularly exposed to AI errors and biases, as shown by the poor performance of facial recognition systems.

Indeed, machine learning algorithms, which are increasingly used in decision-making, are trained on data that are often incomplete or biased. Problems introduced during data collection or model development can lead to harm.

For example, in medicine, building a model to detect a risk involves training that model on patient records. It may turn out that the rate of false negatives (patients who are sick but not diagnosed as such, and therefore not detected by the AI) is higher for women than for men. If the model was unable to learn the risk effectively for women because of a lack of examples, it becomes necessary to seek additional data on women and retrain the algorithm.
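To make this concrete, here is a minimal sketch of such a per-group check, assuming the true diagnoses, the model's predictions and a binary sex attribute are available as NumPy arrays; the data shown are hypothetical and only illustrate how a higher false negative rate for women would surface:

```python
import numpy as np

def false_negative_rate(y_true, y_pred):
    """Share of truly positive (sick) patients the model fails to flag."""
    positives = y_true == 1
    if positives.sum() == 0:
        return float("nan")
    return float(np.mean(y_pred[positives] == 0))

# Hypothetical records: 1 = sick / flagged at risk, 0 = healthy / not flagged.
y_true = np.array([1, 1, 1, 0, 0, 1, 1, 1, 0, 0])
y_pred = np.array([0, 0, 1, 0, 0, 1, 1, 0, 0, 0])
sex = np.array(["F"] * 5 + ["M"] * 5)

for group in ("F", "M"):
    mask = sex == group
    print(f"False negative rate ({group}): "
          f"{false_negative_rate(y_true[mask], y_pred[mask]):.2f}")
```

If the rate for women is markedly higher, as in this toy example, collecting additional data on women and retraining the model should bring the two rates closer together; recomputing the same figures after retraining provides a simple check.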

Biases can also originate from human evaluation, as in recruitment: an algorithm that predicts the suitability of candidates by evaluating CVs is based on scores previously assigned by humans, who may behave in a discriminatory way.

Insufficient regulation

In Europe, the prohibition of discrimination, whether direct (sex, origin, etc.) or indirect (a supposedly neutral practice which, in practice, leads to treating people with a protected characteristic differently), finds its source in national and supranational law (Council of Europe and EU).

As most of the biases generated by AI do not involve discriminatory intent but are involuntary or accidental, it is easier to invoke the concept of indirect discrimination. However, despite its flexibility, the latter notion has limits when applied to algorithmic decisions.


Indeed, this prohibition rests on a general standard that is difficult to apply in practice, because it must be shown that a rule which appears neutral disproportionately affects a protected group.

A presumption of discrimination can be established through statistical evidence. With AI, however, the main difficulty is that discrimination is hard to identify, since the reasons for unfavorable decisions (for example, a loan refused to a woman) are not readily apparent. In the absence of transparency, statistical proof is therefore not easy to establish.

Debiasing the algorithms

In April 2019, the group of experts set up by the European Commission affirmed, in its "Ethics Guidelines for Trustworthy AI", that the principle of equality should go "beyond non-discrimination", which tolerates the establishment of distinctions between different situations on the basis of objective justifications.

In its proposal for an AI regulation (the AI Act), put forward two years later, the Commission set out a series of requirements for trustworthy AI, among which non-discrimination and equality between women and men. It also recalled that "AI systems can perpetuate historical patterns of discrimination" against women.

In an AI context, equality therefore implies that the operation of the system does not produce results based on unfair biases, which requires due regard for vulnerable groups, such as women.

To this end, there are methods for debiasing the algorithms. It is possible, for example, to introduce a form of positive discrimination via statistical parity: the disparate impact is evaluated by estimating the ratio of two probabilities (the probability of a favorable decision for a woman over the probability of a favorable decision for a man), and the model can then be corrected so that this ratio moves closer to one.
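As a rough illustration of that ratio, here is a minimal sketch assuming the decisions and a binary sex attribute are available as NumPy arrays; the loan data and the 0.8 threshold (the US "four-fifths" rule of thumb) are used only as an example, not as an EU legal standard:

```python
import numpy as np

def disparate_impact_ratio(decision, sex, favorable=1):
    """P(favorable decision | woman) / P(favorable decision | man)."""
    p_women = np.mean(decision[sex == "F"] == favorable)
    p_men = np.mean(decision[sex == "M"] == favorable)
    return float(p_women / p_men)

# Hypothetical loan decisions: 1 = loan granted, 0 = loan refused.
decision = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1])
sex = np.array(["F"] * 5 + ["M"] * 5)

ratio = disparate_impact_ratio(decision, sex)
print(f"Disparate impact ratio: {ratio:.2f}")
# A ratio well below 1 (for instance under 0.8) suggests that women
# receive favorable decisions much less often than men.
```

Debiasing methods based on statistical parity then adjust the model, for example by reweighting training examples or shifting decision thresholds, until this ratio approaches one.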

In the United States, software anticipates legal risk by integrating these fair learning methods. Far from being a purely technical problem, the question of debiasing calls for axiological reflection and democratic debate. Defining fairness metrics to combat algorithmic discrimination raises the issue of positive discrimination, which is known to be more controversial in Europe than in the United States (as a breach of the principle of formal equality).


This contribution is based on the presentation given by Anaëlle Martin during the "2nd World Week of Scientific Francophonie" organized by the Francophonie University Agency (AUF) at the end of October 2022 in Cairo (Egypt).
