A New Robust Adaptive Fusion Method for Double-Modality Medical Image PET/CT

Abstract
A new robust adaptive fusion method for double-modality medical image PET/CT is proposed within the Piella framework. The algorithm consists of three steps. First, the registered PET and CT images are decomposed using the nonsubsampled contourlet transform (NSCT). Second, to highlight lesions in the low-frequency image, the low-frequency components are fused with a pulse-coupled neural network (PCNN), which is highly sensitive to feature areas of low intensity. For the high-frequency subbands, a Gaussian random matrix is used to take compressed measurements; the histogram distance between each pair of corresponding subblocks of the high-frequency coefficients serves as the match measure, and regional energy serves as the activity measure. A fusion factor d is then computed from the match and activity measures, the high-frequency measurements are fused according to this factor, and the fused high-frequency subbands are reconstructed from the fused measurements with the orthogonal matching pursuit (OMP) algorithm. Third, the final image is obtained by applying the inverse NSCT to the fused low-frequency image and the reconstructed high-frequency image. To validate the proposed algorithm, four comparative experiments were performed: a comparison with other image fusion algorithms, comparisons of different activity measures and of different match measures, and PET/CT fusion of lung cancer images (20 groups). The experimental results show that the proposed algorithm better retains and displays lesion information and is superior to the other fusion algorithms under both subjective and objective evaluation.
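The per-block fusion-factor step described in the abstract can be sketched as follows. This is a minimal illustration, assuming a Burt–Kolczynski-style selection/averaging rule of the kind used in the Piella framework; the bin count, the match threshold, and the exact weighting formula are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def match_measure(block_a, block_b, bins=32):
    """Match measure: histogram similarity between two corresponding subblocks.
    Returns a value in [0, 1]; 1 means identical intensity distributions.
    (The bin count and the overlap form are illustrative assumptions.)"""
    lo = min(block_a.min(), block_b.min())
    hi = max(block_a.max(), block_b.max())
    if hi == lo:  # both blocks constant and equal -> identical distributions
        return 1.0
    ha, _ = np.histogram(block_a, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(block_b, bins=bins, range=(lo, hi))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    # Histogram intersection: 1 minus half the L1 distance of the histograms.
    return 1.0 - 0.5 * np.abs(ha - hb).sum()

def regional_energy(block):
    """Activity measure: regional energy (sum of squared coefficients)."""
    return float(np.sum(block.astype(np.float64) ** 2))

def fusion_factor(block_a, block_b, threshold=0.75):
    """Fusion factor d in [0, 1] for combining two measurement blocks as
    fused = d * block_a + (1 - d) * block_b.
    Below the match threshold, the source with higher regional energy is
    selected outright; above it, a weighted average biased toward the more
    active source is used (threshold value is an assumption)."""
    m = match_measure(block_a, block_b)
    ea, eb = regional_energy(block_a), regional_energy(block_b)
    if m < threshold:
        return 1.0 if ea >= eb else 0.0
    w = 0.5 + 0.5 * (1.0 - m) / (1.0 - threshold)
    return w if ea >= eb else 1.0 - w
```

Here d = 1 keeps source A's measurement block entirely and d = 0.5 averages the two; in the proposed pipeline, the same factor would be applied to the compressed measurements before OMP reconstruction.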
Funding Information
  • North Minzu University (2020KYQD08, 2020BEB04022, 62062003)