OLIVANDER: a counterfactual-based method to generate adversarial Windows PE malware

De Rose, Luca; Andresini, Giuseppina; Appice, Annalisa; Malerba, Donato
2025-01-01

Abstract

Artificial Intelligence (AI) is transforming cybersecurity practices thanks to the high accuracy achieved by several AI-based malware detection systems. However, several recent studies have shown that AI decision models can be vulnerable to adversarial attacks. In malware detection scenarios, adversarial attacks are realistic manipulations of existing malware that preserve the executable and its malicious behaviour while evading the malware detection measures. In this study, we consider Windows Portable Executable (PE) malware, which is currently among the most prominent malware types, and we show that counterfactual explanations can be used to drive the generation of realistic adversarial Windows PE malware that evades AI-based detection. In particular, the proposed method OLIVANDER works in a black-box manner, the most restrictive attack setting, as the evasion method interacts with the target decision system knowing only its inputs and outputs. The evaluation study explores the effectiveness of the proposed evasion method in terms of evasion ability, computational efficiency, and attack transferability compared with two state-of-the-art evasion methods. In addition, the evaluation also assesses performance against commercial anti-malware systems.
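To illustrate the general idea only, the sketch below shows a generic black-box evasion loop guided by a counterfactual explanation over PE feature vectors. It is not the authors' OLIVANDER implementation: the detector, the counterfactual search, and all names (query_detector, counterfactual_for, evade) are hypothetical placeholders standing in for components the abstract mentions at a high level.

```python
"""Minimal sketch, assuming a black-box detector queried only via its
input/output interface and a counterfactual oracle over PE feature vectors.
All functions are illustrative placeholders, not the paper's method."""
import numpy as np


def query_detector(features: np.ndarray) -> float:
    # Placeholder black-box detector: returns a maliciousness score in [0, 1].
    # In a real black-box setting only this input/output interface is known.
    rng = np.random.default_rng(abs(hash(features.tobytes())) % (2**32))
    return float(np.clip(features.mean() + 0.1 * rng.standard_normal(), 0, 1))


def counterfactual_for(features: np.ndarray) -> np.ndarray:
    # Placeholder counterfactual: a nearby feature vector the detector would
    # label benign. A real method searches for a minimal prediction-flipping change.
    return np.clip(features - 0.3, 0.0, 1.0)


def evade(features: np.ndarray, threshold: float = 0.5, max_queries: int = 20) -> np.ndarray:
    """Move the malware's feature vector toward its counterfactual, keeping
    only functionality-preserving changes, until the detector is evaded."""
    adv = features.copy()
    for _ in range(max_queries):
        if query_detector(adv) < threshold:
            break  # detector now labels the (still malicious) sample benign
        target = counterfactual_for(adv)
        # Only perturbations corresponding to realistic PE edits (e.g. appended
        # bytes, added sections) are acceptable; modeled here as a clipped step
        # toward the counterfactual.
        adv = np.clip(adv + 0.5 * (target - adv), 0.0, 1.0)
    return adv


if __name__ == "__main__":
    x = np.full(16, 0.8)  # toy "malicious" PE feature vector
    x_adv = evade(x)
    print("score before:", query_detector(x))
    print("score after: ", query_detector(x_adv))
```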


Use this identifier to cite or link to this document: https://hdl.handle.net/11586/553521