Measuring the risk of evasion and poisoning attacks on a traffic sign recognition system
Barletta V. S.; Catalano C.; De Vincentiis M.; Piccinno A.
2024-01-01
Abstract
One of the most important applications of machine learning today is autonomous vehicles, which employ driver-assistance technologies to facilitate driving or to remove the need for a human operator. Machine learning models such as traffic sign recognition systems help the vehicle adapt its driving style to the current scenario. In this context, such systems are vulnerable to different kinds of adversarial attacks, such as evasion and poisoning, which can compromise their security and integrity as well as the safety of passengers. Poisoning attacks corrupt a model's training process and compromise its future use, while evasion attacks perturb inputs at inference time to induce misclassifications. In addition, since these systems acquire images from multiple sensors, it is important to investigate attacks that could compromise the detections made by machine learning or deep learning models, so that civilian and military autonomous vehicles are not induced to generate false information or misclassifications. Therefore, this paper presents an experimental evaluation of a black-box evasion attack, Zeroth-Order Optimization (ZOO), and a poisoning attack, BadNets, against a traffic sign recognition Convolutional Neural Network (CNN), using optimization techniques to maximize the attacks' performance. The goal is to measure the risks these attacks pose in the autonomous automotive field and to assess their effectiveness against the chosen model.
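The following Python sketch is purely illustrative and is not the authors' implementation: it shows, under simple assumptions, the two mechanisms the abstract names. The first is a ZOO-style black-box step, which estimates gradients by finite differences on the model's output probabilities (no access to weights); the second is a BadNets-style poisoning step, which stamps a small trigger patch on a fraction of training images and relabels them with the attacker's target class. The function predict_proba is a hypothetical stand-in for the traffic sign CNN's softmax output, and the 4x4 white trigger, learning rate, and poisoning rate are illustrative choices.

    import numpy as np

    def zoo_gradient_estimate(predict_proba, x, class_idx, h=1e-4, n_coords=128, rng=None):
        """Estimate the gradient of the loss -log p(class_idx | x) w.r.t. a random
        subset of pixels via symmetric finite differences (black-box, query-only)."""
        rng = rng or np.random.default_rng(0)
        grad = np.zeros_like(x, dtype=np.float64)
        flat = grad.reshape(-1)
        coords = rng.choice(x.size, size=min(n_coords, x.size), replace=False)
        for i in coords:
            e = np.zeros(x.size)
            e[i] = h
            e = e.reshape(x.shape)
            loss_plus = -np.log(predict_proba(x + e)[class_idx] + 1e-12)
            loss_minus = -np.log(predict_proba(x - e)[class_idx] + 1e-12)
            flat[i] = (loss_plus - loss_minus) / (2 * h)
        return grad

    def zoo_evasion_step(predict_proba, x, true_class, lr=0.01):
        """One untargeted evasion step: move the input so the loss of the true
        class increases, pushing the classifier toward a misclassification."""
        g = zoo_gradient_estimate(predict_proba, x, true_class)
        return np.clip(x + lr * np.sign(g), 0.0, 1.0)

    def badnets_poison(images, labels, target_label, poison_rate=0.1, rng=None):
        """BadNets-style poisoning: stamp a 4x4 white trigger in the bottom-right
        corner of a fraction of training images and relabel them as target_label."""
        rng = rng or np.random.default_rng(0)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
        for i in idx:
            images[i, -4:, -4:, :] = 1.0
            labels[i] = target_label
        return images, labels, idx

    # Toy usage with a dummy 3-class classifier standing in for the CNN softmax.
    def predict_proba(x):
        logits = np.array([x.mean(), 1.0 - x.mean(), x.std()])
        return np.exp(logits) / np.exp(logits).sum()

    x = np.random.default_rng(1).random((32, 32, 3))
    x_adv = zoo_evasion_step(predict_proba, x, true_class=0)

In a real evaluation the evasion step would be iterated under a perturbation budget, and the poisoned training set would be used to retrain the CNN so that the trigger activates the attacker's target class at test time.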


