AI law: Germany’s position close to that of the European Parliament

Interinstitutional negotiations on the regulation on artificial intelligence (AI Act) are expected later in the year. While the Council of the European Union has adopted its position, Germany has expressed reservations on certain points and is closer to the position of the European Parliament than to that of other Member States.

Meeting in the “Telecommunications” Council on December 6, European ministers confirmed their support for the general approach to the regulation on artificial intelligence (AI), legislation intended to regulate AI according to its potential for harm.

While welcoming the compromise, German Federal Digital Minister Volker Wissing noted that “improvements are still possible”. He also wants Germany’s comments to be taken into account during negotiations with the European Parliament and the Commission – a phase known as the “trilogue”.

The elements that Berlin continues to put forward during these trilogues could prove decisive in the negotiations, insofar as the EU’s most populous country could allow MEPs to tip the balance within the Council.


Biometric recognition

Germany is in favor of a total ban on biometric recognition technology, as already stated in the coalition agreement signed in 2021 by the country’s three ruling parties. This is also a fundamental element for Parliament’s co-rapporteurs.

However, according to written comments submitted in October and obtained by EURACTIV, Berlin only supports banning real-time biometric identification in public spaces; it wishes to authorize identification after the fact.

At the same time, the Germans reserved the right to provide more in-depth comments on the matter at a later stage, as the discussion evolves.

Furthermore, Germany wanted the definition of biometric data to align with the definition in the EU General Data Protection Regulation (GDPR) in order to avoid discrepancies in terminology, and to classify biometric categorization systems as high risk.

Predictive policing and emotion recognition

Another controversial topic is the use of AI systems in criminal proceedings. In the same set of comments, Berlin pushed to ban any AI application that substitutes for human judges in assessing an individual’s risk of committing or repeating a criminal offense.

These AI applications were simply included in the high-risk categories in the Council’s final agreement. However, there seems to be strong support in the European Parliament for an outright ban on these practices.

The Germans also wanted to add to the list of prohibited practices AI systems used by public authorities as polygraphs – also called lie detectors – or any other emotion recognition tool. They also called for all other emotion recognition systems to be classified as high risk.

Law enforcement

The EU Council text introduced several important exceptions for law enforcement. Germany’s approach, meanwhile, was broadly to put in place stricter safeguards for AI used by law enforcement agencies.

Berlin also advocated exempting such applications from the “two-man rule”, according to which human oversight must be carried out by at least two people. The reason given is that in many cases only one agent is needed to make the decision.

This inconsistency likely stems from the fact that the list of comments is drawn from different ministries headed by different coalition members. It is not always clear which ministry view has prevailed on a certain issue, making the German position difficult for EU policy makers to interpret.

Throughout the negotiations, the German government requested that the AI provisions relating to security and migration be combined in a separate proposal. This approach, which would require a separate general legislative proposal, has so far received little attention.

AI in the workplace

The German government has also pushed to ban any AI system intended to systematically monitor employee performance and behavior without a specific reason, arguing that such surveillance puts psychological pressure on employees and prevents them from behaving freely.

“These AI systems can accurately track employee performance and behavior, generate probabilities of an employee quitting or of their productivity, indicate which employees may be fostering a negative climate, and ultimately compile complete employee profiles,” the comments read.

In the general approach, Germany managed to insert a reference confirming that Member States remain free to set more specific rules for AI in the workplace at national level. Similar wording was introduced on the protection of minors.

High risk classification

The Czech Presidency of the Council succeeded in introducing an additional layer into the high-risk classification. Under this approach, AI would be considered to pose a significant risk not only based on its area of application, but also when it helps influence the decision-making process.

The Germans objected to this approach, pointing out that AI providers would not be able to anticipate the different uses of their systems. They also highlighted the absence of any obligation for providers of systems not classified as high risk to explain how they arrived at that classification.

Berlin made other proposals on which AI applications should be considered high-risk. These include emissions-intensive industries, wastewater disposal, security components for critical digital infrastructure, and public warning systems for extreme weather events.

Additional proposals for the list of high-risk applications include AI systems used to allocate social housing, collect debts, and set personalized pricing. Berlin argued that these applications are likely to disadvantage the most vulnerable groups.

[Edited by Anne-Sophie Gayet]
