Describing Multimedia Content Using Attention-Based Encoder-Decoder Networks
- Published 4 September 2015
- Research article (journal)
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Multimedia
- Vol. 17 (11), 1875-1886
- https://doi.org/10.1109/tmm.2015.2477044
Abstract
Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding in the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. In this paper we focus on the case where the input also has a rich structure and the input and output structures are somehow related. We describe systems that learn to attend to different places in the input, for each element of the output, for a variety of tasks: machine translation, image caption generation, video clip description, and speech recognition. All these systems are based on a shared set of building blocks: gated recurrent neural networks and convolutional neural networks, along with trained attention mechanisms. We report on experimental results with these systems, showing impressively good performance and the advantage of the attention mechanism.
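The abstract's central idea, attending to different places in the input for each element of the output, can be illustrated with a minimal NumPy sketch of soft (additive) attention. All names, weight matrices, and dimensions below are hypothetical placeholders, not the paper's actual architecture: the encoder states would come from a gated RNN or CNN, and the weights would be learned jointly with the rest of the network.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def soft_attention(encoder_states, decoder_state, W_enc, W_dec, v):
    # Additive attention: score each encoder state against the
    # current decoder state, normalize the scores into weights,
    # and return the weighted sum of encoder states (the context).
    scores = np.tanh(encoder_states @ W_enc + decoder_state @ W_dec) @ v
    weights = softmax(scores)             # distribution over input positions
    context = weights @ encoder_states    # expected encoder state
    return context, weights

# Toy setup with hypothetical sizes.
rng = np.random.default_rng(0)
T, d_enc, d_dec, d_att = 5, 8, 6, 4
enc = rng.standard_normal((T, d_enc))   # one state per input position
dec = rng.standard_normal(d_dec)        # current decoder hidden state
W_enc = rng.standard_normal((d_enc, d_att))
W_dec = rng.standard_normal((d_dec, d_att))
v = rng.standard_normal(d_att)

context, weights = soft_attention(enc, dec, W_enc, W_dec, v)
print(weights.shape, context.shape)  # (5,) (8,)
```

At each output step the decoder recomputes `weights`, so different output elements can focus on different input positions; this is the "trained attention mechanism" the abstract refers to, shared across the translation, captioning, video description, and speech tasks.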
This publication has 21 references indexed in Scilit:
- Simultaneously Uncovering the Patterns of Brain Regions Involved in Different Story Reading Subprocesses, PLOS ONE, 2014
- A Neural Autoregressive Approach to Attention-based Recognition, International Journal of Computer Vision, 2014
- Meteor Universal: Language Specific Translation Evaluation for Any Target Language, Association for Computational Linguistics (ACL), 2014
- Edinburgh's Phrase-based Machine Translation Systems for WMT-14, Association for Computational Linguistics (ACL), 2014
- Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation, Association for Computational Linguistics (ACL), 2014
- On the Properties of Neural Machine Translation: Encoder–Decoder Approaches, Association for Computational Linguistics (ACL), 2014
- Learning Where to Attend with Deep Architectures for Image Tracking, Neural Computation, 2012
- Gradient-based learning applied to document recognition, Proceedings of the IEEE, 1998
- Long Short-Term Memory, Neural Computation, 1997
- Learning representations by back-propagating errors, Nature, 1986