Adversarial Examples Are Not Easily Detected
- 3 November 2017
- conference paper
- Published by Association for Computing Machinery (ACM)
Abstract
No abstract available.
Funding Information
- Qualcomm
- AFOSR (MURI award FA9550-12-1-0040)
- Intel (ISTC for Secure Computing)
- Hewlett Foundation (Center for Long-Term Cybersecurity)