AI law: French EU presidency wants to change supervisory board and market oversight

Yet another compromise text on the Artificial Intelligence (AI) law was circulated among EU Council diplomats by the French EU Presidency ahead of a working group meeting on Tuesday (10 May).

The new text, seen by EURACTIV, makes significant changes to the provisions on the European Artificial Intelligence Board (EAIB), market surveillance, guidelines and codes of conduct.

Member states have been generally pleased with the direction the French EU presidency has given to the dossier, an EU diplomat told EURACTIV.

European Artificial Intelligence Board

The structure of the board has been modified to include one representative per Member State, rather than the national supervisory authorities. These representatives will be appointed for a three-year term, renewable once.

Eight independent experts have been added to the board, two for each of four categories: SMEs and start-ups, large companies, academia, and civil society. These experts will be selected by the national representatives “under a fair and transparent selection procedure”, the compromise text indicates.

The European Data Protection Supervisor has been downgraded from full member to observer. The role of the Commission has also been significantly reduced, from chairing the board to participating without voting rights.

The French proposal provides that the rules of procedure can be adopted by the national representatives only with a two-thirds majority. These rules of procedure are to define the process for selecting the independent experts, as well as the selection, mandate and tasks of the board’s chair, who must be a national representative.

Guidelines

A new article has been introduced asking the European Commission to provide, on its own initiative or at the request of the Board, guidelines on how to apply the AI Regulation, in particular regarding compliance with the requirements for high-risk systems, prohibited practices, and how to implement substantial modifications to existing systems.

Guidance would also cover identifying criteria and use cases for high-risk AI systems, how to implement transparency obligations in practice, and how the AI Regulation will interact with other EU legislation.

“When issuing these guidelines, the Commission pays particular attention to the needs of SMEs, including start-ups, and to the sectors most likely to be affected by this Regulation,” the text adds.

Market surveillance

The modification of this part of the text is “intended to clarify the powers of market surveillance authorities and the modalities for exercising these powers, as well as the extent to which they must have access to relevant data and information, in particular the source code”.

Market surveillance authorities are to be granted full access to the source code of a high-risk AI system upon a “reasoned request”, namely when the code is necessary to assess the system’s conformity and the data and documentation already provided have been deemed insufficient.

The compromise text states that market surveillance authorities are responsible for supervising high-risk systems used by financial institutions. National authorities must immediately inform the European Central Bank of any information relevant to its supervisory tasks.

The procedure for notifying the other Member States and the Commission of measures taken against non-compliant AI systems has been significantly modified. These cases now cover systems that breach the ban on prohibited practices, high-risk systems that fail to meet their requirements, and failures to comply with the transparency obligations provided for deepfakes and emotion recognition.

In general, the Commission and EU countries will have three months to object to these measures. In the event of a suspected breach of the ban on prohibited practices, the period has been reduced to 30 days.

If an objection is raised, the Commission enters into consultation with the competent national authorities. The EU executive will decide within nine months whether the national measure is justified; for prohibited practices, the deadline is 60 days.

The Commission can overturn the national authority’s decision. If, however, it deems the measures justified, all other authorities will have to replicate them, including by withdrawing the AI system from the market.

Codes of Conduct

The article on codes of conduct has been amended to clarify that they are voluntary tools for AI systems that do not fall into the high-risk category. Codes of conduct have also been extended to cover the obligations of users of AI systems.
