Deep similarity learning for multimodal medical images
- 1 January 2018
- research article
- Published by Taylor & Francis Ltd in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization
- Vol. 6 (3), 248-252
- https://doi.org/10.1080/21681163.2015.1135299
Abstract
An effective similarity measure for multimodal images is crucial for medical image fusion in many clinical applications. The underlying correlation across modalities is usually too complex to be modelled by intensity-based statistical metrics, so approaches that learn a similarity metric have been proposed in recent years. In this work, we propose a novel deep similarity learning method that trains a binary classifier to learn the correspondence of two image patches; the classification output is transformed into a continuous probability value, which is then used as the similarity score. Moreover, we propose to utilise a multimodal stacked denoising autoencoder to effectively pre-train the deep neural network. We train and test the proposed metric using corresponding and non-corresponding computed tomography (CT) and magnetic resonance (MR) head image patches sampled from the same subject. Comparison is made with two commonly used metrics: normalised mutual information and local cross-correlation. The contributions of the multimodal stacked denoising autoencoder and the deep structure of the neural network are also evaluated. Both the quantitative and qualitative results from the similarity-ranking experiments show the advantage of the proposed metric as a highly accurate and robust similarity measure.
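The core idea of the abstract (train a binary classifier on corresponding vs. non-corresponding patch pairs, then reuse its output probability as a continuous similarity score) can be sketched with a tiny NumPy network. This is a toy illustration, not the authors' architecture: patch size, network width, the synthetic CT/MR stand-ins, and the element-wise product features (used here so the toy problem trains in a few hundred full-batch steps) are all assumptions, and the multimodal stacked denoising autoencoder pre-training described in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for flattened CT/MR patch pairs (17x17 is a guessed patch
# size); real pairs would be sampled from aligned CT and MR head volumes.
P, n = 17 * 17, 200
ct = rng.normal(size=(n, P))
mr_pos = 0.8 * ct + 0.3 * rng.normal(size=(n, P))  # corresponding pairs
mr_neg = rng.normal(size=(n, P))                   # non-corresponding pairs

def pair_features(a, b):
    # Element-wise products keep this toy demo easily learnable; the paper
    # instead feeds patch pairs to a deeper, SDAE-pretrained network.
    return a * b

X = np.vstack([pair_features(ct, mr_pos), pair_features(ct, mr_neg)])
y = np.concatenate([np.ones(n), np.zeros(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer binary classifier; its sigmoid output, the probability
# that the pair corresponds, is used directly as the similarity score.
H = 16
W1 = rng.normal(scale=0.1, size=(P, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(800):                     # full-batch gradient descent, log-loss
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    g = ((p - y) / len(y))[:, None]      # dL/dlogits for sigmoid + log-loss
    gh = (g @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
    W2 -= lr * (h.T @ g);  b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

def similarity(a, b):
    """Continuous similarity score in [0, 1] for a batch of patch pairs."""
    h = np.tanh(pair_features(a, b) @ W1 + b1)
    return sigmoid(h @ W2 + b2).ravel()

s_pos = similarity(ct, mr_pos).mean()
s_neg = similarity(ct, mr_neg).mean()
print(s_pos, s_neg)  # corresponding pairs should score higher on average
```

In a registration or fusion pipeline, such a learned score would replace normalised mutual information or local cross-correlation as the patchwise similarity term.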