AI-related standards will be jointly developed by European standardization bodies

Technical standards that will enable the implementation of the European law on artificial intelligence (AI) will be developed jointly by the three European standards bodies, according to a draft standardization request obtained by EURACTIV.

The European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC) and the European Telecommunications Standards Institute (ETSI) will all be responsible for developing the technical standards for the Artificial Intelligence Act.

The three standards bodies will be required to provide a work program detailing the timeline and technical bodies responsible for each standard requested in the flagship AI regulation. They will also have to submit a progress report to the European Commission every six months.

Technical standards will play a key role in the enforcement of the AI Act, as companies that apply them will by default be presumed compliant with EU rules. Standards play such an important role in reducing compliance costs that they have been described as the “real regulation” in an influential article on the EU’s AI regulation.

The annex to the draft request details the standards that will need to be developed, covering risk management systems, governance and quality of datasets, record keeping, transparency and user information, human oversight, accuracy specifications, quality management including post-market monitoring, and cybersecurity.

In addition, organizations will need to define validation procedures and methodologies to assess whether AI systems are fit for purpose and meet European standards. Under the regulation, this conformity assessment could be carried out by the AI provider itself or by a third party.

European bodies will have to take into account the interdependencies between the different requirements and make them explicit when developing technical standards.

In addition, they will have to pay particular attention to the compatibility of standards with the needs of SMEs, which, together with civil society, will have to be involved in the consensus-building exercise.

For Kris Shrishak, technology researcher at the Irish Council for Civil Liberties (ICCL), the involvement of civil society is a positive development, as standards bodies may be ill-equipped to address issues such as mitigating bias.

“This question goes beyond the technical aspects. It depends on the people who use the AI systems, the context of use and the people who are affected,” said Shrishak, while lamenting that it is unclear what would happen if civil society were not sufficiently involved.

Technical standards are increasingly politicized, with China and the United States investing massive resources in an attempt to influence the debate in international forums according to their strategic interests and those of their companies.

To counter the steady decline of European companies’ role in defining technical standards, the European Commission recently launched a standardization strategy which, in line with the EU’s digital sovereignty agenda, aims to reduce foreign influence over European standards and to promote EU interests more assertively in international standardization bodies.

Therefore, standards will need to be aligned with policy objectives to “respect the values of the Union and strengthen the digital sovereignty of the Union, by encouraging investment and innovation in AI as well as the competitiveness and growth of the Union market, while strengthening global standardization cooperation in the field of AI, in line with the values and interests of the Union”.

The reference to taking into account the interests of SMEs can also be read in this light, as the Commission seeks to scale up European technology companies, which remain relatively small compared to their international competitors.

At the latest summit of the EU-US Trade and Technology Council (TTC) on May 16, both sides committed to developing a common roadmap on evaluation and measurement tools for trustworthy AI and risk management. This roadmap is expected at the next TTC meeting in December.

Under this preliminary version, the standardization request is valid until August 31, 2025, and the three standardization bodies are expected to submit their joint final report by October 31, 2024.

However, the request will only be finalized once the AI Act has been agreed through inter-institutional negotiations, which are not expected to start before 2023.
