Evasion Attacks against Machine Learning at Test Time
- 1 January 2013
- Conference paper (book chapter)
- Published by Springer Science and Business Media LLC in Lecture Notes in Computer Science
Abstract
No abstract available
This publication has 12 references indexed in Scilit:
- Security Evaluation of Pattern Classifiers under Attack. IEEE Transactions on Knowledge and Data Engineering, 2013
- Adversarial machine learning. Published by Association for Computing Machinery (ACM), 2011
- Design of robust classifiers for adversarial environments. Published by Institute of Electrical and Electronics Engineers (IEEE), 2011
- Stackelberg games for adversarial prediction problems. Published by Association for Computing Machinery (ACM), 2011
- Multiple classifier systems for robust classifier design in adversarial environments. International Journal of Machine Learning and Cybernetics, 2010
- Learning to classify with missing and corrupted features. Machine Learning, 2009
- Can machine learning be secure? Published by Association for Computing Machinery (ACM), 2006
- Nightmare at test time. Published by Association for Computing Machinery (ACM), 2006
- Adversarial learning. Published by Association for Computing Machinery (ACM), 2005
- Adversarial classification. Published by Association for Computing Machinery (ACM), 2004