Residual Attention Network for Image Classification
- 1 July 2017
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- No. 10636919, pp. 6450-6458
- https://doi.org/10.1109/cvpr.2017.683
Abstract
In this work, we propose Residual Attention Network, a convolutional neural network with an attention mechanism that can be incorporated into state-of-the-art feed-forward network architectures in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules, which generate attention-aware features. The attention-aware features from different modules change adaptively as layers go deeper. Inside each Attention Module, a bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks, which can easily be scaled up to hundreds of layers. Extensive analyses are conducted on the CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets: CIFAR-10 (3.90% error), CIFAR-100 (20.45% error), and ImageNet (4.8% single-model, single-crop, top-5 error). Notably, our method achieves a 0.6% top-1 accuracy improvement with 46% of the trunk depth and 69% of the forward FLOPs compared to ResNet-200. The experiments also demonstrate that our network is robust against noisy labels.
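The attention residual learning mentioned in the abstract combines a trunk branch T(x) with a soft mask branch M(x) in [0, 1] as H(x) = (1 + M(x)) · T(x), so the mask modulates features without suppressing them entirely. A minimal numpy sketch of this formulation follows; the `trunk_fn` and `mask_fn` arguments are hypothetical stand-ins for the paper's convolutional trunk and bottom-up top-down mask branches.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_module(feature, trunk_fn, mask_fn):
    """Attention residual learning: H(x) = (1 + M(x)) * T(x).

    trunk_fn and mask_fn are illustrative placeholders; in the actual
    network they are convolutional branches (trunk and soft mask)."""
    trunk = trunk_fn(feature)           # T(x): trunk-branch features
    mask = sigmoid(mask_fn(feature))    # M(x): soft attention mask in (0, 1)
    return (1.0 + mask) * trunk         # residual attention output

# Toy usage on a 2x2 feature map: zero mask logits give M(x) = 0.5,
# so the module scales the trunk output by 1.5.
x = np.array([[1.0, 2.0], [3.0, 4.0]])
out = attention_module(x, trunk_fn=lambda f: f, mask_fn=lambda f: f * 0.0)
print(out)  # 1.5 * x
```

Because the mask enters as (1 + M(x)) rather than M(x) alone, an all-zero mask reduces the module to an identity-scaled trunk instead of killing the signal, which is what lets these modules stack to hundreds of layers.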