Multiframe-to-Multiframe Network for Video Denoising

Abstract
Most existing studies perform video denoising by using multiple adjacent noisy frames to recover a single clean frame; although these approaches achieve relatively good quality for each individual frame, they may produce visible flickering when the denoised frames are viewed in sequence. In this paper, instead of restoring each clean frame separately, we propose a multiframe-to-multiframe (MM) denoising scheme that simultaneously recovers multiple clean frames from consecutive noisy frames. The MM scheme uses a training strategy that optimizes the denoised video along both the spatial and temporal dimensions, yielding better temporal consistency in the denoised video. Furthermore, we present an MM network (MMNet), which adopts a spatiotemporal convolutional architecture that accounts for both interframe similarity and single-frame characteristics. Benefiting from the parallel mechanism underlying the MM scheme, MMNet achieves highly competitive denoising efficiency. Extensive analyses and experiments demonstrate that MMNet outperforms state-of-the-art video denoising methods, improving temporal consistency by at least 13.3% and running more than twice as fast as competing methods.
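To make the multiframe-to-multiframe idea concrete, the sketch below illustrates a training objective that supervises several denoised frames at once, combining a per-frame (spatial) reconstruction term with a temporal-consistency term on frame-to-frame differences. This is only a minimal illustration consistent with the abstract; the function name, tensor layout, and weighting factor are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def mm_denoising_loss(pred_frames, clean_frames, temporal_weight=0.1):
    """Hypothetical sketch of a multiframe-to-multiframe loss.

    pred_frames, clean_frames: tensors of shape (B, T, C, H, W),
    where T consecutive frames are denoised simultaneously.
    temporal_weight: assumed balancing factor (not from the paper).
    """
    # Spatial term: reconstruction error over all T output frames.
    spatial = F.mse_loss(pred_frames, clean_frames)

    # Temporal term: penalize deviations of the denoised frame-to-frame
    # differences from the ground-truth differences, discouraging flicker.
    pred_diff = pred_frames[:, 1:] - pred_frames[:, :-1]
    clean_diff = clean_frames[:, 1:] - clean_frames[:, :-1]
    temporal = F.mse_loss(pred_diff, clean_diff)

    return spatial + temporal_weight * temporal

# Example: supervise T=5 consecutive denoised frames in one pass.
pred = torch.randn(2, 5, 3, 64, 64)
clean = torch.randn(2, 5, 3, 64, 64)
loss = mm_denoising_loss(pred, clean)
```

Because all T frames are produced and supervised jointly, the network can amortize computation across the sequence, which is consistent with the efficiency benefit the abstract attributes to the parallel MM mechanism.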
Funding Information
  • National Natural Science Foundation of China (61727809)
  • Special Fund for Key Program of Science and Technology of Anhui Province (201903a05020022, 201903c08020002)
