[2112.01724] Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A Causal Language Model Approach

Deep Learning (DL)-based malware detectors are increasingly adopted for early
detection of malicious behavior in cybersecurity. However, their sensitivity to
adversarial malware variants has raised serious security concerns. Generating such adversarial variants on the defender's side is crucial to improving the resistance of DL-based malware detectors against them. This necessity has given
rise to an emerging stream of machine learning research, Adversarial Malware
example Generation (AMG), which aims to generate evasive adversarial malware
variants that preserve the malicious functionality of a given malware sample. Within AMG research, black-box methods have gained more attention than white-box
methods. However, most black-box AMG methods require numerous interactions with
the malware detectors to generate adversarial malware examples. Given that most
malware detectors enforce a query limit, this could result in generating
non-realistic adversarial examples that are likely to be detected in practice
due to a lack of stealth. In this study, we show that a novel DL-based causal language model enables single-shot evasion (i.e., with only one query to the malware detector) by treating the content of the malware executable as a byte
sequence and training a Generative Pre-Trained Transformer (GPT). Our proposed
method, MalGPT, significantly outperformed the leading benchmark methods on a
real-world malware dataset obtained from VirusTotal, achieving an evasion rate of over 24.51%. MalGPT enables cybersecurity researchers to develop advanced
defense capabilities by emulating large-scale realistic AMG.
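
The abstract does not spell out implementation details, but the following sketch illustrates the core idea under stated assumptions: a byte-level causal language model (here built with the Hugging Face GPT-2 classes, a 256-token byte vocabulary, and an append-only perturbation, all of which are assumptions for illustration rather than the authors' MalGPT implementation) generates a byte sequence to append to a malware executable so that only a single query to the detector is needed.

```python
# Illustrative sketch only (assumed details, not the authors' code):
# a byte-level causal language model with a 256-token vocabulary that
# reads the tail of a malware executable and generates a sequence of
# bytes to append to it. Appending bytes does not alter how the
# original program executes, so malicious functionality is preserved,
# and the modified file is submitted to the detector exactly once.
import torch
from transformers import GPT2Config, GPT2LMHeadModel

VOCAB_SIZE = 256      # one token per possible byte value (assumption)
CONTEXT_LEN = 1024    # assumed model context window
APPEND_LEN = 128      # assumed length of the appended adversarial bytes

config = GPT2Config(
    vocab_size=VOCAB_SIZE,
    n_positions=CONTEXT_LEN,
    n_embd=256,
    n_layer=6,
    n_head=8,
)
# In practice this model would first be trained on byte sequences
# (e.g., from benign executables); here it is randomly initialized.
model = GPT2LMHeadModel(config)

def generate_adversarial_variant(malware_bytes: bytes) -> bytes:
    """Generate appended bytes in one pass, yielding a variant that
    needs only a single query to the target malware detector."""
    # Condition on the last bytes of the file so that context plus
    # generated bytes fit inside the model's context window.
    tail = list(malware_bytes[-(CONTEXT_LEN - APPEND_LEN):])
    context = torch.tensor([tail], dtype=torch.long)
    with torch.no_grad():
        output = model.generate(
            context,
            max_new_tokens=APPEND_LEN,
            do_sample=True,
            top_k=50,
            pad_token_id=0,
        )
    appended = bytes(output[0, context.shape[1]:].tolist())
    return malware_bytes + appended  # functionality-preserving append
```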
