Crafting Adversarial Example to Bypass Flow-&ML-based Botnet Detector via RL
Published: 6 October 2021
24th International Symposium on Research in Attacks, Intrusions and Defenses; https://doi.org/10.1145/3471621.3471841
Abstract: Machine learning (ML)-based botnet detection methods have become mainstream in corporate practice. However, researchers have found that ML models are vulnerable to adversarial attacks, which mislead a model by adding subtle perturbations to a sample. Due to the complexity of traffic samples and the special constraint of preserving malicious functionality, little substantial adversarial-ML research has been conducted in the botnet detection field, where evasion attacks based on carefully crafted adversarial examples may render ML-based detectors useless and cause significant property damage. In this paper, we propose a reinforcement learning (RL) method for bypassing ML-based botnet detectors. Specifically, we train an RL agent as a functionality-preserving botnet flow modifier through a series of interactions with the detector in a black-box scenario. This enables the attacker to evade detection without modifying the botnet source code or affecting the botnet's utility. Experiments on 14 botnet families show that our method achieves strong evasion performance and time efficiency.
Keywords: Bypass Botnet Detector / Adversarial Machine Learning / Reinforcement Learning
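The black-box loop described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the detector rule, the flow features (`mean_pkt_size`, `duration`), and the two functionality-preserving actions (padding packets, inserting delays) are all assumptions chosen for clarity, and the agent is a simple epsilon-greedy bandit standing in for the paper's RL agent.

```python
import random

random.seed(0)  # for reproducibility of the sketch

# Stub black-box detector (illustrative rule, NOT the paper's model):
# flags a flow as botnet if packets are small and the flow is short.
def detector(flow):
    return flow["mean_pkt_size"] < 200 and flow["duration"] < 5.0

# Hypothetical functionality-preserving actions: they only ever ADD
# bytes or time, so the malicious payload itself is left untouched.
ACTIONS = [
    ("pad_packets", lambda f: {**f, "mean_pkt_size": f["mean_pkt_size"] + 50}),
    ("add_delay",   lambda f: {**f, "duration": f["duration"] + 1.0}),
]

def evade(flow, max_queries=50, epsilon=0.3):
    """Epsilon-greedy agent: query the black-box detector after each
    modification and reward actions that lead to a benign verdict."""
    q = {name: 0.0 for name, _ in ACTIONS}       # running action values
    counts = {name: 0 for name, _ in ACTIONS}
    for _ in range(max_queries):
        if not detector(flow):                   # detector now says benign
            return flow, True
        if random.random() < epsilon:            # explore
            name, act = random.choice(ACTIONS)
        else:                                    # exploit best action so far
            name = max(q, key=q.get)
            act = dict(ACTIONS)[name]
        flow = act(flow)
        reward = 0.0 if detector(flow) else 1.0  # reward = evasion success
        counts[name] += 1
        q[name] += (reward - q[name]) / counts[name]
    return flow, not detector(flow)

malicious = {"mean_pkt_size": 120, "duration": 1.2}
evaded_flow, success = evade(dict(malicious))
```

In this toy setup the agent needs only a handful of detector queries, since two padding actions already push the flow past the stub detector's threshold; the paper's setting replaces the stub with a real trained detector and the bandit with a full RL agent.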