According to a notice (PDF) published on June 22 by the Defender of Rights, non-discrimination must be at the heart of the regulation of artificial intelligence. The recommendations were co-drafted with Equinet, the European Network of Equality Bodies.
The Defender of Rights publishes its recommendations
The Defender of Rights has published a series of recommendations addressed directly to the European institutions. With this opinion, the independent administrative authority responsible for ensuring that public bodies respect citizens’ rights stresses the importance of placing the principle of non-discrimination at the heart of the future European regulation on artificial intelligence: the Artificial Intelligence Act. The Defender of Rights reminds us that “algorithms are developed by humans and therefore from data reflecting human practices”.
The authority believes that one of the most common biases is “the lack of representativeness of the data used”. The Defender of Rights is also sounding the alarm about recruitment processes driven by artificial intelligence. According to the authority, some algorithms have inherited “the mathematical translation of historical discriminatory practices”. In concrete terms, recruitment technologies can tend to systematically reject applications from women. Similar drifts have also been observed in other sectors.
Avoiding abuses by establishing the principle of non-discrimination
This is, for example, the case “in the context of the fight against fraud in social benefits”. The Defender of Rights recalls that algorithms used for this purpose have already led to abuses: some AI systems concentrated controls on certain people because of their place of residence and their family situation. This, according to the authority, is why the principle of non-discrimination is essential to building fair regulation of artificial intelligence within the European Union. To that end, the Defender of Rights offers several recommendations.
In its opinion, the authority recommends in particular the introduction of “accessible and effective complaints and redress mechanisms” for data subjects when a violation of the principles of equality and non-discrimination results from the use of an algorithm. The Defender of Rights also wishes “to require national supervisory authorities to consult equality bodies”. To ensure that non-discrimination remains at the center of the project, the authority believes that academia and non-governmental organizations should be involved in decisions.