Machine learning models fall victim to cyberattacks | Engineering Techniques

Used by electronic payment solutions and banks, machine learning helps fight financial fraud. But attackers could try to deceive these learning models as part of attacks and disinformation campaigns.

A subfield of AI, machine learning (ML) allows computers to rely on statistical techniques to perform a specific task. Taking advantage of the computing power of the cloud, many industries rely on the results of these algorithms to make decisions considered more relevant.

But in its recent “Threat Landscape” report, the European Union Agency for Cybersecurity (ENISA) raises concerns about attempted attacks targeting ML models in order to reduce their accuracy. The major drawback of AI is that its effectiveness is directly tied to the quality of its data. No matter how complex the model, poor-quality data will produce poor results, and history shows that it does not take much to degrade that quality.

Even though these algorithmic manipulations have so far mainly been experiments carried out in laboratories or by a few states, they could be used for various purposes, including disinformation, phishing scams, swaying public opinion, and discrediting individuals or brands.

Disguising the traffic

One of the main avenues of research concerns data poisoning. These attacks target the training phase to alter, or even completely falsify, the results of the predictive model. This is the technique that was used between 2017 and 2018 to make Google’s anti-spam solutions less effective.

This distortion of a model could be used in a cybersecurity context to disguise traffic on a computer network. Imagine the target is a solution that uses machine learning to detect suspicious activity. An attacker may attempt to slowly introduce data that decreases the accuracy of the model in question so that it no longer flags certain behaviors as anomalous.
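To make the idea concrete, here is a minimal sketch of the simplest form of poisoning, label flipping, using scikit-learn on a synthetic dataset standing in for real network traffic. Everything here (the dataset, the logistic-regression detector, the 30% flip rate) is an illustrative assumption, not the technique used in any specific attack described above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "network traffic" dataset: label 1 = anomalous, 0 = benign.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(X_tr, y_tr):
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_test, model.predict(X_test))

print("clean accuracy:", train_and_score(X_train, y_train))

# Poisoning: the attacker relabels a fraction of anomalous training
# samples as benign, so the detector learns to ignore that behavior.
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
anomalous = np.flatnonzero(y_train == 1)
flipped = rng.choice(anomalous, size=int(0.3 * len(anomalous)), replace=False)
y_poisoned[flipped] = 0

print("poisoned accuracy:", train_and_score(X_train, y_poisoned))
```

Even this crude attack measurably lowers accuracy on an undefended model; a real attacker would drip-feed the poisoned samples over time to stay below monitoring thresholds.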

This technique, known as data poisoning and manipulation, represents one of the main threats in the data field. Hence the need to control and secure not only the integrity of the data, but also its origin and non-repudiation.
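One way to put such controls into practice is to fingerprint every training dataset before it enters the pipeline. The sketch below is an assumption about how this might look, using only the Python standard library; the file path and secret key are placeholders, and an HMAC only proves origin to holders of the shared key, so a real deployment would use asymmetric signatures to get genuine non-repudiation.

```python
import hashlib
import hmac

def dataset_digest(path: str) -> bytes:
    # Hash the training data file so any later tampering is detectable.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

def sign_digest(digest: bytes, key: bytes) -> bytes:
    # Ties the digest to a secret held by the data producer (origin check);
    # swap in an asymmetric signature scheme for true non-repudiation.
    return hmac.new(key, digest, hashlib.sha256).digest()

def verify(path: str, key: bytes, expected_tag: bytes) -> bool:
    # Recompute and compare in constant time before training on the data.
    return hmac.compare_digest(sign_digest(dataset_digest(path), key), expected_tag)
```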

Such measures are still far from widespread. Published in 2019, a report by the US National Security Commission on Artificial Intelligence indicated that only a very small percentage of current AI research is devoted to defending these systems against attack.

Oversold AI

However, some systems already used in production could be vulnerable to attacks that are not necessarily sophisticated or expensive. As proof: by placing a few small stickers on the road surface, researchers have shown that they can “train” a self-driving car to swerve into the oncoming lane.

Other studies have shown that making imperceptible changes to an image can trick a medical analysis system into classifying a benign mole as malignant.
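The best-known recipe for crafting such imperceptible changes is the Fast Gradient Sign Method (FGSM). The cited studies do not necessarily use this exact method, but a short PyTorch sketch conveys the idea; `model`, `image`, and `label` are placeholders for a trained classifier, a batched input tensor with pixel values in [0, 1], and its true class.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Fast Gradient Sign Method: nudge every pixel by +/-epsilon in the
    direction that most increases the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

With a small `epsilon`, the perturbation is invisible to the eye, yet on an undefended classifier it is often enough to flip the predicted class.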

“Detection based on AI and ML is still very rare, and what many people and companies call artificial intelligence is often just simple rules. Commercially, the term is therefore oversold. A few companies do intrusion monitoring based on these techniques, but only a handful of state attackers are capable of carrying out such actions. It is not within the reach of the average criminal,” tempers Renaud Lifchitz, Chief Scientific Officer at Holiseum, a French company specializing in the cybersecurity of critical and industrial infrastructure.
