
Adversarial Manipulation of CAN Bus IDS

Vita Santa Barletta;Danilo Caivano;Christian Catalano;Samuele del Vescovo;Antonio Piccinno
In press

Abstract

Recently, the automotive sector has experienced innovation driven by the increasing connectivity of vehicles, not only with each other but also with integrated smart city systems. This upgrade extends the vehicle's attack surface to unexpected boundaries, enabling complex attacks and potentially providing access to critical infrastructure networks, including those of government and military organizations within the "Country System". With a Machine Learning (ML)-based Intrusion Detection System (IDS) as the target of these attacks, it is essential to investigate the feasibility, risk, and impact of Adversarial Machine Learning (AML) attacks on IDS performance in the scenario most likely for attackers, i.e., the black-box one. Therefore, this work verifies the applicability of black-box, decision-based AML attacks to the Controller Area Network (CAN) bus frame detection task. The victim is a supervised ML-based IDS (assumed to reside in the vehicle itself) built on various state-of-the-art ML models suitable for technology transfer. The evasion attacks are Boundary and HopSkipJump, both commonly exploited in image classification tasks. The results show that the two evasion attacks have approximately the same impact, a weighted accuracy loss of about 70%. However, the time they require differs profoundly, making the Boundary attack more appropriate in this context. In addition, the time needed to generate adversarial examples can be controlled through a hyperparameter of some ML ensemble models. Adversarial Training (AT) proves to be a Security by Design and by Default countermeasure that favors the Random Forest model.
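To illustrate the decision-based (black-box) setting the abstract describes, the following is a minimal, hypothetical sketch: it is not the paper's implementation. It trains a Random Forest "victim" IDS on synthetic stand-in features for CAN frames and runs a simplified boundary-attack-style binary search that queries only the victim's hard labels, blending a detected attack frame toward a benign frame until it evades detection. The data, feature layout, and search procedure are all illustrative assumptions.

```python
# Hypothetical sketch of a decision-based (black-box) evasion attack on an
# ML-based CAN IDS. NOT the authors' implementation: synthetic data and a
# simplified boundary-style search that uses only the victim's predicted labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for CAN frame features (e.g. ID, DLC, payload bytes).
X_benign = rng.normal(0.0, 1.0, size=(200, 8))
X_attack = rng.normal(3.0, 1.0, size=(200, 8))
X = np.vstack([X_benign, X_attack])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = attack

victim = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def boundary_style_attack(x_attack, x_benign, n_steps=30):
    """Binary search on the segment between a detected attack frame and a
    benign frame, querying only the victim's hard labels (decision-based)."""
    lo, hi = 0.0, 1.0  # blend factor toward the benign sample
    for _ in range(n_steps):
        mid = (lo + hi) / 2.0
        x_adv = (1 - mid) * x_attack + mid * x_benign
        if victim.predict(x_adv.reshape(1, -1))[0] == 0:
            hi = mid  # already evades; move back toward the attack frame
        else:
            lo = mid
    # The point at blend factor `hi` is always classified benign.
    return (1 - hi) * x_attack + hi * x_benign

x_adv = boundary_style_attack(X_attack[0], X_benign[0])
print(victim.predict(x_adv.reshape(1, -1))[0])  # evades detection: 0
```

The real Boundary and HopSkipJump attacks additionally walk along the decision boundary to minimize the perturbation norm; this sketch only captures their defining trait, namely that the attacker observes nothing but the model's output label.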
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11586/569300

Warning! The displayed data have not been validated by the university.
