Video Captioning With Attention-Based LSTM and Semantic Consistency
- 19 July 2017
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Multimedia
- Vol. 19 (9), 2045-2055
- https://doi.org/10.1109/tmm.2017.2729019
Abstract
Recent progress in using long short-term memory (LSTM) networks for image captioning has motivated the exploration of their application to video captioning. By treating a video as a sequence of features, an LSTM model is trained on video-sentence pairs and learns to associate a video with a sentence. However, most existing methods compress an entire video shot or frame into a static representation, without an attention mechanism that allows for selecting salient features. Furthermore, existing approaches usually model the translation error but ignore the correlations between sentence semantics and visual content. To tackle these issues, we propose a novel end-to-end framework named aLSTMs, an attention-based LSTM model with semantic consistency, to translate videos into natural sentences. The framework integrates an attention mechanism with LSTM to capture salient structures of the video, and explores the correlation between multimodal representations (i.e., words and visual content) to generate sentences with rich semantic content. Specifically, we first propose an attention mechanism that uses the dynamic weighted sum of local two-dimensional convolutional neural network representations. Then, an LSTM decoder takes these visual features at time t and the word-embedding feature at time t-1 to generate important words. Finally, we use multimodal embedding to map the visual and sentence features into a joint space to guarantee the semantic consistency of the sentence description and the video's visual content. Experiments on benchmark datasets demonstrate that our method using a single feature achieves competitive or even better results than state-of-the-art baselines for video captioning in both BLEU and METEOR.
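The attention step described in the abstract, a dynamic weighted sum of local 2-D CNN frame features re-weighted by the decoder state at each time step, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the matrices `W_v`, `W_h` and vector `w` are hypothetical stand-ins for learned attention parameters.

```python
import numpy as np

def soft_attention(frame_feats, hidden, W_v, W_h, w):
    """Dynamic weighted sum of per-frame features (soft attention).

    frame_feats: (n_frames, d) local 2-D CNN features, one row per frame
    hidden:      (d,) decoder hidden state from the previous time step
    W_v, W_h, w: hypothetical learned attention parameters
    """
    # Unnormalized relevance score per frame: e_i = w^T tanh(W_v^T v_i + W_h^T h)
    e = np.tanh(frame_feats @ W_v + hidden @ W_h) @ w
    # Softmax over frames yields the attention weights alpha.
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()
    # Context vector: attention-weighted sum of the frame features.
    context = alpha @ frame_feats
    return context, alpha

# Toy usage with random features
rng = np.random.default_rng(0)
n_frames, d, k = 3, 4, 5
V = rng.normal(size=(n_frames, d))          # per-frame CNN features
h = rng.normal(size=(d,))                   # previous decoder state
W_v, W_h, w = rng.normal(size=(d, k)), rng.normal(size=(d, k)), rng.normal(size=(k,))
context, alpha = soft_attention(V, h, W_v, W_h, w)
```

At each decoding step, a context vector like this would be combined with the word embedding from time t-1 and fed to the LSTM decoder to predict the next word.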
Funding Information
- National Natural Science Foundation of China (61502080, 61632007)
- Fundamental Research Funds for the Central Universities (ZYGX2016J085, ZYGX2014Z007)
This publication has 37 references indexed in Scilit:
- The Long-Short Story of Movie Description. Published by Springer Science and Business Media LLC, 2015
- Going deeper with convolutions. Published by Institute of Electrical and Electronics Engineers (IEEE), 2015
- Show and tell: A neural image caption generator. Published by Institute of Electrical and Electronics Engineers (IEEE), 2015
- CIDEr: Consensus-based image description evaluation. Published by Institute of Electrical and Electronics Engineers (IEEE), 2015
- Efficient Motion and Disparity Estimation Optimization for Low Complexity Multiview Video Coding. IEEE Transactions on Broadcasting, 2015
- Effective Approaches to Attention-based Neural Machine Translation. Published by Association for Computational Linguistics (ACL), 2015
- Translating Videos to Natural Language Using Deep Recurrent Neural Networks. Published by Association for Computational Linguistics (ACL), 2015
- Inductive Hashing on Manifolds. Published by Institute of Electrical and Electronics Engineers (IEEE), 2013
- BLEU. Published by Association for Computational Linguistics (ACL), 2001
- Long Short-Term Memory. Neural Computation, 1997