Infrared and Visible Image Fusion Based on Deep Decomposition Network and Saliency Analysis

Abstract
Traditional image fusion methods focus on selecting an effective decomposition approach to extract representative features from the source images, and on designing appropriate fusion rules to merge the extracted features. However, existing image decomposition tools are mostly based on fixed kernels or global energy-optimized functions, which limits their performance across a wide range of image content. This paper proposes a novel infrared and visible image fusion method based on a deep decomposition network and saliency analysis (DDNSA). First, a modified residual dense network (MRDN) is trained on a publicly available dataset to learn the decomposition process. Second, the trained decomposition network separates the structure and texture features of the source images. Then, according to the characteristics of these features, we fuse the structural features using a combination of local and global saliency maps constructed with a stacked sparse autoencoder and a visual saliency mechanism. In addition, we propose a bi-directional edge-strength fusion strategy for merging the texture features. Finally, the fused image is reconstructed by combining the fused structure and texture features. Experimental results confirm that the proposed method outperforms state-of-the-art methods in both visual perception and objective evaluation.
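
To make the pipeline concrete, the following is a minimal NumPy/OpenCV sketch of the four stages described above. It is illustrative only: the `decompose` callable stands in for the trained MRDN, the Gaussian-difference `saliency` function is a crude placeholder for the paper's SSAE-based local/global saliency maps, and the Sobel-magnitude `edge_strength` is a simplified stand-in for the bi-directional edge-strength rule; all function names here are hypothetical.

    import numpy as np
    import cv2

    def saliency(img):
        """Crude global saliency: per-pixel deviation of a blurred image
        from its mean (placeholder for the SSAE + visual saliency maps)."""
        blur = cv2.GaussianBlur(img, (5, 5), 0)
        return np.abs(blur - blur.mean())

    def edge_strength(img):
        """Sobel gradient magnitude (placeholder for the paper's
        bi-directional edge-strength measure)."""
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
        return np.sqrt(gx * gx + gy * gy)

    def fuse(ir, vis, decompose):
        """Fuse a co-registered infrared/visible pair (float32 grayscale).

        `decompose` is assumed to map an image to a (structure, texture)
        pair such that structure + texture reconstructs the image.
        """
        s_ir, t_ir = decompose(ir)
        s_vis, t_vis = decompose(vis)

        # Structure fusion: saliency-weighted combination of the two maps.
        sal_ir, sal_vis = saliency(s_ir), saliency(s_vis)
        w = sal_ir / (sal_ir + sal_vis + 1e-8)
        s_fused = w * s_ir + (1.0 - w) * s_vis

        # Texture fusion: keep the texture with larger edge strength per pixel.
        t_fused = np.where(edge_strength(t_ir) >= edge_strength(t_vis),
                           t_ir, t_vis)

        # Reconstruction: sum of the fused structure and texture components.
        return s_fused + t_fused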
Funding Information
  • National Natural Science Foundation of China (U1604262)
  • Zhengzhou Collaborative Innovation Major Special Project (20XTZX11020)