Crafting Adversarial Example to Bypass Flow-&ML-based Botnet Detector via RL

Abstract
Machine learning (ML)-based botnet detection methods have become mainstream in corporate practice. However, researchers have found that ML models are vulnerable to adversarial attacks, which mislead a model by adding subtle perturbations to a sample. Owing to the complexity of traffic samples and the special constraint that malicious functionality must be preserved, little substantive adversarial-ML research has been conducted in the botnet detection field, where evasion attacks built on carefully crafted adversarial examples can render ML-based detectors ineffective and cause significant property damage. In this paper, we propose a reinforcement learning (RL) method for bypassing ML-based botnet detectors. Specifically, we train an RL agent as a functionality-preserving botnet flow modifier through a series of interactions with the detector in a black-box scenario. This enables an attacker to evade detection without modifying the botnet source code or affecting the botnet's utility. Experiments on 14 botnet families show that our method achieves considerable evasion performance at practical time cost.
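The abstract only sketches the interaction loop, so a minimal, self-contained illustration may help. The following Python sketch is a hypothetical reconstruction under strong assumptions: a toy rule stands in for the black-box ML detector, the flow features and the three functionality-preserving actions (timing stretch, packet padding, dummy-packet injection) are illustrative, and tabular Q-learning replaces whatever RL algorithm the paper actually uses. It shows only the shape of the attack: the agent observes nothing but the detector's verdict and learns which additive modifications evade it without touching the malicious payload.

```python
import random

# Hypothetical flow-feature dict and a stand-in black-box detector; the paper's
# real feature set and ML model are not specified here, so these are assumptions.
def blackbox_detector(flow):
    # Toy stand-in for an ML model: flag short, bursty flows as botnet traffic.
    return flow["pkt_count"] / max(flow["duration"], 1e-6) > 50

# Functionality-preserving actions: they only add padding, dummy packets, or
# timing delay, so the underlying malicious payload is never altered.
ACTIONS = {
    0: lambda f: {**f, "duration": f["duration"] * 1.2},       # stretch inter-arrival timing
    1: lambda f: {**f, "total_bytes": f["total_bytes"] + 64},  # pad an existing packet
    2: lambda f: {**f, "pkt_count": f["pkt_count"] + 1,
                  "total_bytes": f["total_bytes"] + 60},       # inject a dummy packet
}

def episode(flow, q, eps=0.2, alpha=0.5, gamma=0.9, max_steps=10):
    """One evasion episode: apply actions until the detector is fooled."""
    s = 0  # a single abstract state keeps the sketch minimal
    for _ in range(max_steps):
        a = (random.randrange(len(ACTIONS)) if random.random() < eps
             else max(q[s], key=q[s].get))
        flow = ACTIONS[a](flow)
        evaded = not blackbox_detector(flow)   # only the verdict is observable
        reward = 1.0 if evaded else -0.1
        q[s][a] += alpha * (reward + gamma * max(q[s].values()) - q[s][a])
        if evaded:
            return flow, True
    return flow, False

q_table = {0: {a: 0.0 for a in ACTIONS}}
sample = {"duration": 0.5, "total_bytes": 4000, "pkt_count": 60}
for _ in range(50):                             # train by repeated queries
    episode(dict(sample), q_table)
print(episode(dict(sample), q_table, eps=0.0))  # greedy rollout after training
```

Restricting the action set to additive perturbations is what makes the modifier functionality-preserving in this sketch: no payload bytes are removed or rewritten, only cover traffic and timing are changed.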
Funding Information
  • National Natural Science Foundation of China (No. 61902396)
  • Strategic Priority Research Program of Chinese Academy of Sciences (No. XDC02040100)
  • Key Laboratory of Network Assessment Technology at Chinese Academy of Sciences
  • Youth Innovation Promotion Association CAS (No. 2019163)
  • Beijing Key Laboratory of Network Security and Protection Technology
