Spatial attention model‐modulated bi‐directional long short‐term memory for unsupervised video summarisation

Abstract
Compared with surveillance video, user-created videos contain more frequent shot changes, which lead to diversified backgrounds and a wide variety of content. High redundancy among keyframes is a critical issue for existing summarisation methods when dealing with user-created videos. To address this issue, we design a salient-area-size-based spatial attention model (SAM), based on the observation that humans tend to focus on sizable and moving objects in videos. The SAM is then used as guidance to refine the frame-wise soft selection probabilities produced by a bi-directional long short-term memory model. A reinforcement learning framework, trained with the deep deterministic policy gradient algorithm, is adopted for unsupervised training. Extensive experiments on the SumMe and TVSum datasets demonstrate that our method outperforms the state-of-the-art in terms of F-score.
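The abstract describes the SAM scores modulating frame-wise selection probabilities from a bi-directional LSTM. Below is a minimal sketch of that idea, assuming the SAM yields a per-frame salient-area-size score in [0, 1] and the fusion is an element-wise modulation; the exact fusion rule, feature dimensions, and module names are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SAMModulatedBiLSTM(nn.Module):
    """Sketch: a bi-directional LSTM whose frame-wise soft selection
    probabilities are refined by a spatial-attention (salient-area-size)
    score. The element-wise modulation is an assumption; the abstract
    does not specify how the SAM guidance is applied."""

    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden,
                              bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hidden, 1)  # frame-wise importance logit

    def forward(self, frame_feats, sam_scores):
        # frame_feats: (B, T, feat_dim) per-frame CNN features
        # sam_scores:  (B, T) salient-area-size scores in [0, 1] from the SAM
        h, _ = self.bilstm(frame_feats)
        p = torch.sigmoid(self.score(h)).squeeze(-1)  # soft selection probability
        return p * sam_scores                         # SAM-guided refinement (assumed)
```

In a reinforcement learning setup such as the one mentioned in the abstract, these refined probabilities would parameterise the frame-selection policy that the deep deterministic policy gradient algorithm optimises against an unsupervised reward.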
Funding Information
  • National Natural Science Foundation of China (62002130, 61702472, 2019AAA049)
