A Game Theoretical Model for Adversarial Learning
- 1 December 2009
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
Abstract
It is now widely accepted that in many situations where classifiers are deployed, adversaries deliberately manipulate data in order to reduce the classifier's accuracy. The most prominent example is email spam, where spammers routinely modify emails to get past classifier-based spam filters. In this paper we model the interaction between the adversary and the data miner as a two-person sequential noncooperative Stackelberg game and analyze the outcomes when there is a natural leader and a follower. We then proceed to model the interaction (both discrete and continuous) as an optimization problem and note that even solving a linear Stackelberg game is NP-hard. Finally, we use a real spam email data set and evaluate the performance of a local search algorithm under different strategy spaces.
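The sequential leader-follower structure described in the abstract can be illustrated by backward induction on a toy discrete game. The payoff matrices and the choice of the adversary as leader are illustrative assumptions for this sketch, not values or results from the paper:

```python
# Toy two-person sequential Stackelberg game solved by backward induction.
# Rows index the leader's (adversary's) strategies; columns index the
# follower's (data miner's) strategies. Payoffs are made up for illustration.
leader_payoff = [[3, 1], [4, 2]]
follower_payoff = [[2, 4], [1, 3]]

def best_response(l):
    """Follower's best response to the leader's committed strategy l."""
    return max(range(len(follower_payoff[l])),
               key=lambda f: follower_payoff[l][f])

def stackelberg_equilibrium():
    """Leader moves first, anticipating the follower's best response."""
    l = max(range(len(leader_payoff)),
            key=lambda l: leader_payoff[l][best_response(l)])
    return l, best_response(l)

l, f = stackelberg_equilibrium()
print(l, f)
```

With discrete strategy spaces this size, exhaustive backward induction is trivial; the paper's NP-hardness observation and its local search heuristic become relevant once the strategy spaces are large or continuous.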