Journal of Medical Imaging

Journal Information
ISSN: 2329-4302
Total articles ≈ 1,116

Latest articles in this journal

Roshan Reddy Upendra, Richard Simon, Cristian A. Linte
Published: 24 May 2023
Journal of Medical Imaging, Volume 10; https://doi.org/10.1117/1.jmi.10.5.051808

Abstract:
Purpose: High-resolution late gadolinium enhanced (LGE) cardiac magnetic resonance imaging (MRI) volumes are difficult to acquire due to the limitations of the maximal breath-hold time achievable by the patient. This results in anisotropic 3D volumes of the heart with high in-plane resolution but low through-plane resolution. We therefore propose a 3D convolutional neural network (CNN) approach to improve the through-plane resolution of cardiac LGE-MRI volumes.

Approach: We present a 3D CNN-based framework with two branches: a super-resolution branch that learns the mapping between low-resolution and high-resolution LGE-MRI volumes, and a gradient branch that learns the mapping between the gradient maps of low-resolution and high-resolution LGE-MRI volumes. The gradient branch provides structural guidance to the CNN-based super-resolution framework. To assess the performance of the proposed framework, we train two CNN models with and without gradient guidance: the dense deep back-projection network (DBPN) and the enhanced deep super-resolution network. We train and evaluate our method on the 2018 atrial segmentation challenge dataset, and we additionally evaluate the trained models on the left atrial and scar quantification and segmentation challenge 2022 dataset to assess their generalization ability. Finally, we investigate the effect of the proposed super-resolution framework on 3D segmentation of the left atrium (LA) from these cardiac LGE-MRI volumes.

Results: Experimental results demonstrate that the proposed CNN method with gradient guidance consistently outperforms bicubic interpolation and the CNN models without gradient guidance. Furthermore, segmentation results, evaluated using the Dice score, obtained from the super-resolved images generated by our method are superior to those obtained from images generated by bicubic interpolation (p < 0.01) and by the CNN models without gradient guidance (p < 0.05).

Conclusion: The presented CNN-based super-resolution method with gradient guidance improves the through-plane resolution of LGE-MRI volumes, and the structural guidance provided by the gradient branch can aid 3D segmentation of cardiac chambers, such as the LA, from 3D LGE-MRI images.
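The gradient-guidance idea lends itself to a compact sketch. Below is a minimal PyTorch illustration of a combined training loss: a reconstruction term on the super-resolved volume plus a term matching the predicted gradient map to the gradient map of the high-resolution target. The finite-difference gradient operator, the L1 penalties, and the weight lambda_grad are illustrative assumptions, not the paper's published configuration, and the DBPN/EDSR backbones are not reproduced here.

    import torch
    import torch.nn.functional as F

    def gradient_map(vol):
        # Finite-difference gradient magnitude of a 5D volume (B, C, D, H, W).
        dz = vol[:, :, 1:, :, :] - vol[:, :, :-1, :, :]
        dy = vol[:, :, :, 1:, :] - vol[:, :, :, :-1, :]
        dx = vol[:, :, :, :, 1:] - vol[:, :, :, :, :-1]
        # Zero-pad each difference back to the input shape so they can be combined.
        dz = F.pad(dz, (0, 0, 0, 0, 0, 1))
        dy = F.pad(dy, (0, 0, 0, 1, 0, 0))
        dx = F.pad(dx, (0, 1, 0, 0, 0, 0))
        return torch.sqrt(dx ** 2 + dy ** 2 + dz ** 2 + 1e-8)

    def gradient_guided_loss(sr_pred, grad_pred, hr_target, lambda_grad=0.1):
        # Super-resolution branch: reconstruction error on the HR volume.
        sr_loss = F.l1_loss(sr_pred, hr_target)
        # Gradient branch: match the predicted gradient map to the target's,
        # providing the structural guidance described in the abstract.
        grad_loss = F.l1_loss(grad_pred, gradient_map(hr_target))
        return sr_loss + lambda_grad * grad_loss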
Denis Henrique Pinheiro Salvadeo, Davi D. de Paula
Published: 1 May 2023
Journal of Medical Imaging, Volume 10; https://doi.org/10.1117/1.jmi.10.3.034001

Abstract:
Purpose: Deep neural network (DNN) based image denoising requires a large training dataset of digital breast tomosynthesis (DBT) projections acquired at different radiation doses, which is impracticable to collect. We therefore propose an extensive investigation of using software-generated synthetic data to train DNNs to denoise real DBT data.

Approach: The approach consists of generating, by software, a synthetic dataset representative of the DBT sample space, containing noisy and original images. Synthetic data were generated in two different ways: (a) virtual DBT projections generated by OpenVCT and (b) noisy images synthesized from photographs according to noise models used in DBT (e.g., Poisson–Gaussian noise). DNN-based denoising techniques were then trained on the synthetic dataset and tested on physical DBT data. Results were evaluated in quantitative (PSNR and SSIM measures) and qualitative (visual analysis) terms. Furthermore, a dimensionality reduction technique (t-SNE) was used to visualize the sample spaces of the synthetic and real datasets.

Results: The experiments showed that DNN models trained with synthetic data could denoise real DBT data, achieving results competitive with traditional methods in quantitative terms while showing a better balance between noise filtering and detail preservation in the visual analysis. t-SNE enabled us to visualize whether synthetic and real noise lie in the same sample space.

Conclusion: We propose a solution to the lack of suitable training data for DNN models that denoise DBT projections, showing that the synthesized noise need only lie in the same sample space as the target images.
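For the second data-synthesis route, a Poisson–Gaussian noise model can be applied to clean photographs to create noisy/clean training pairs. The NumPy sketch below is one plausible realization; the gain and read-noise parameters are hypothetical placeholders for calibrated DBT values.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def add_poisson_gaussian_noise(image, photon_gain=1000.0, read_sigma=2.0):
        # image: clean image normalized to [0, 1].
        # photon_gain: expected photon count at unit intensity; lower gain
        # simulates a lower dose and therefore stronger quantum noise.
        counts = rng.poisson(image * photon_gain)      # quantum (Poisson) noise
        noisy = counts / photon_gain                   # back to intensity units
        # Additive, signal-independent electronic (Gaussian) read noise.
        noisy = noisy + rng.normal(0.0, read_sigma / photon_gain, size=image.shape)
        return noisy.astype(np.float32)

    # One clean image yields training pairs at several simulated dose levels.
    clean = rng.random((64, 64)).astype(np.float32)
    pairs = [(add_poisson_gaussian_noise(clean, g), clean) for g in (250, 500, 1000)]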
Azadeh Fakhrzadeh, Pouya Karimian, Mahsa Meyari, Cris L. Luengo Hendriks, Lena Holm, Christian Sonne, Rune Dietz, Ellinor Spörndly-Nees
Published: 1 May 2023
Journal of Medical Imaging, Volume 10; https://doi.org/10.1117/1.jmi.10.3.039801

Abstract:
The publisher’s note corrects the article citation information.
Florian Kordon, Felix Denzinger, Jan S. El Barbari, Maxim Privalov, Sven Y. Vetter, Andreas Maier
Published: 1 May 2023
Journal of Medical Imaging, Volume 10; https://doi.org/10.1117/1.jmi.10.3.034503

Abstract:
Purpose: Mobile C-arm systems are the standard imaging devices in spine surgery. In addition to 2D imaging, they allow for 3D scans while preserving unrestricted patient access. For viewing, the acquired volumes are adjusted so that their anatomical standard planes align with the axes of the viewing modality. This difficult and time-consuming step is currently performed manually by the leading surgeon. In this work, the process is automated to improve the usability of C-arm systems, taking into account both the spinal region, which consists of multiple vertebrae, and the standard planes of every vertebra of interest to the surgeon.

Approach: An object detection algorithm based on the "you only look once" version 3 (YOLOv3) architecture, adapted to 3D inputs, is compared with a segmentation-based approach employing a 3D U-Net. Both algorithms are trained on 440 and tested on 218 spinal volumes.

Results: Although the detection-based algorithm is slightly inferior in detection (91% versus 97% accuracy), localization (1.26 mm versus 0.74 mm error), and alignment accuracy (5.00 deg versus 4.73 deg error), it outperforms the segmentation-based approach in speed (5 s versus 38 s).

Conclusions: Both algorithms show similarly good results. However, the speed of the detection-based algorithm, with a run time of 5 s, makes it more suitable for intra-operative use.
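Either network ultimately yields, per vertebra, an estimate of a standard-plane orientation; aligning the volume then reduces to rotating a predicted plane normal onto a viewing axis. The NumPy sketch below shows that geometric step (Rodrigues' formula) together with an unsigned angular error like the one reported above; it is a generic illustration, not the authors' published post-processing.

    import numpy as np

    def rotation_aligning(normal, axis=(0.0, 0.0, 1.0)):
        # Rotation matrix mapping a predicted plane normal onto a viewing axis.
        n = np.asarray(normal, float); n /= np.linalg.norm(n)
        a = np.asarray(axis, float); a /= np.linalg.norm(a)
        v, c = np.cross(n, a), float(np.dot(n, a))
        if np.isclose(c, -1.0):
            # Antiparallel case: rotate 180 deg about any axis perpendicular to n.
            p = np.array([1.0, 0, 0]) if abs(n[0]) < 0.9 else np.array([0, 1.0, 0])
            u = np.cross(n, p); u /= np.linalg.norm(u)
            return 2.0 * np.outer(u, u) - np.eye(3)
        vx = np.array([[0, -v[2], v[1]],
                       [v[2], 0, -v[0]],
                       [-v[1], v[0], 0]])            # cross-product matrix
        return np.eye(3) + vx + vx @ vx / (1.0 + c)  # Rodrigues' formula

    def angular_error_deg(n_pred, n_true):
        # Unsigned angle between two plane normals, in degrees.
        n1 = n_pred / np.linalg.norm(n_pred)
        n2 = n_true / np.linalg.norm(n_true)
        return np.degrees(np.arccos(np.clip(abs(np.dot(n1, n2)), 0.0, 1.0)))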
Hervé Delingette, Anne-Laure Rousseau, Eric de Kerviler, Nicholas Ayache
Published: 1 May 2023
Journal of Medical Imaging, Volume 10; https://doi.org/10.1117/1.jmi.10.3.034502

Abstract:
Purpose: The purpose of this study is to examine the use of unlabeled data for abdominal organ classification in multi-label (non-mutually exclusive classes) ultrasound images, as an alternative to the conventional transfer learning approach.

Approach: We present a new method for classifying abdominal organs in ultrasound images. Unlike previous approaches that relied only on labeled data, we consider the use of both labeled and unlabeled data. We first examine the application of deep clustering for pretraining a classification model. We then compare two training methods: fine-tuning with labeled data through supervised learning, and fine-tuning with both labeled and unlabeled data using semi-supervised learning. All experiments were conducted on a large dataset of unlabeled images (n_u = 84,967) and a small set of labeled images (n_s = 2,742), using progressively 10%, 20%, 50%, and 100% of the labeled images.

Results: We show that for supervised fine-tuning, deep clustering is an effective pretraining method, matching the performance of ImageNet pretraining with five times less labeled data. For semi-supervised learning, deep clustering pretraining also yields higher performance when the amount of labeled data is limited. The best performance is obtained with deep clustering pretraining combined with semi-supervised learning and 2,742 labeled example images, with a weighted-average F1-score of 84.1%.

Conclusions: This method can be used as a tool to preprocess large unprocessed databases, reducing the need for prior annotation of abdominal ultrasound studies for training image classification algorithms, which in turn could improve the clinical use of ultrasound images.
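Deep clustering pretraining alternates between clustering the encoder's features on unlabeled images and training against the resulting cluster assignments as pseudo-labels. The PyTorch/scikit-learn sketch below shows one such round in its simplest single-batch form; the encoder/head modules, the cluster count, and the optimizer settings are illustrative assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn
    from sklearn.cluster import KMeans

    def deep_cluster_round(encoder, head, images, n_clusters=100):
        # encoder: backbone mapping images (N, C, H, W) to feature vectors (N, D).
        # head:    linear classifier over the current round's cluster labels.
        encoder.eval()
        with torch.no_grad():
            feats = encoder(images).numpy()
        # Cluster the features; the assignments act as pseudo-labels this round.
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
        pseudo = torch.as_tensor(labels, dtype=torch.long)

        # One supervised step against the pseudo-labels.
        encoder.train()
        params = list(encoder.parameters()) + list(head.parameters())
        opt = torch.optim.SGD(params, lr=0.01)
        loss = nn.functional.cross_entropy(head(encoder(images)), pseudo)
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()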
Sabien van Elst, Christiaan M. de Bloeme, Samantha Noteboom, Marcus C. de Jong, Annette C. Moll, Sophia Göricke, Pim De Graaf
Published: 1 May 2023
Journal of Medical Imaging, Volume 10; https://doi.org/10.1117/1.jmi.10.3.034501

Abstract:
Purpose: Pathological conditions associated with the optic nerve (ON) can cause structural changes in the nerve. Quantifying these changes could provide further understanding of disease mechanisms. We aim to develop a framework that automatically segments the ON separately from its surrounding cerebrospinal fluid (CSF) on magnetic resonance imaging (MRI) and quantifies the diameter and cross-sectional area along the entire length of the nerve.

Approach: Multicenter data were obtained from retinoblastoma referral centers, providing a heterogeneous dataset of 40 high-resolution 3D T2-weighted MRI scans with manual ground truth delineations of both ONs. A 3D U-Net was used for ON segmentation, and performance was assessed in a tenfold cross-validation (n = 32) and on a separate test set (n = 8) by measuring spatial, volumetric, and distance agreement with the manual ground truths. The segmentations were used to quantify the diameter and cross-sectional area along the length of the ON, using centerline extraction of tubular 3D surface models. Absolute agreement between automated and manual measurements was assessed by the intraclass correlation coefficient (ICC).

Results: The segmentation network achieved high performance, with a mean Dice similarity coefficient of 0.84, a median Hausdorff distance of 0.64 mm, and an ICC of 0.95 on the test set. The quantification method obtained acceptable correspondence to the manual reference measurements, with mean ICC values of 0.76 for the diameter and 0.71 for the cross-sectional area. Compared with other methods, our method precisely identifies the ON from the surrounding CSF and accurately estimates its diameter along the nerve’s centerline.

Conclusions: Our automated framework provides an objective method for ON assessment in vivo.
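Once a tubular structure such as the ON is segmented, its diameter profile can be approximated directly from the voxel mask: the Euclidean distance transform gives the local radius at every interior voxel, and sampling it along a skeletonized centerline yields a per-position diameter. This voxel-based sketch (NumPy/SciPy/scikit-image) is a simplified stand-in for the surface-model centerline extraction the paper describes.

    import numpy as np
    from scipy import ndimage
    from skimage.morphology import skeletonize

    def centerline_diameters(mask, spacing=(0.5, 0.5, 0.5)):
        # mask:    boolean 3D array (e.g., an optic-nerve segmentation).
        # spacing: voxel size in mm along (z, y, x).
        # Distance from each foreground voxel to the nearest background voxel.
        radius_mm = ndimage.distance_transform_edt(mask, sampling=spacing)
        # One-voxel-thick centerline (Lee's method is used for 3D input).
        centerline = skeletonize(mask)
        # Diameter at each centerline voxel is roughly twice the local radius.
        return 2.0 * radius_mm[centerline]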
Paula M. C. Donahue, Michael D. Pridmore, Maria E. Garza, Niral J. Patel, Chelsea A. Custer, Yu Luo, Aaron W. Aday, Joshua A. Beckman, Manus J. Donahue, et al.
Published: 1 May 2023
Journal of Medical Imaging, Volume 10; https://doi.org/10.1117/1.jmi.10.3.036001

Abstract:
Purpose: Lipedema is a painful subcutaneous adipose tissue (SAT) disease involving disproportionate SAT accumulation in the lower extremities that is frequently misdiagnosed as obesity. We developed a semiautomatic segmentation pipeline to quantify the unique lower-extremity SAT quantity in lipedema from multislice chemical-shift-encoded (CSE) magnetic resonance imaging (MRI).

Approach: Patients with lipedema (n = 15) and controls (n = 13) matched for age and body mass index (BMI) underwent CSE-MRI acquired from the thighs to the ankles. Images were segmented to partition SAT and skeletal muscle with a semiautomated algorithm incorporating classical image processing techniques (thresholding, active contours, Boolean operations, and morphological operations). The Dice similarity coefficient (DSC) was computed for automated versus ground truth SAT and muscle segmentations in the calf and thigh. SAT and muscle volumes and the SAT-to-muscle volume ratio were calculated across slices for deciles containing 10% of the total slices per participant. Effect sizes were calculated, and the Mann–Whitney U test was applied to compare metrics in each decile between groups (significance: two-sided P < 0.05).

Results: The mean DSC for SAT segmentations was 0.96 in the calf and 0.98 in the thigh; for muscle it was 0.97 in both the calf and the thigh. In all deciles, mean SAT volume was significantly elevated in participants with versus without lipedema (P < 0.01), whereas muscle volume did not differ. The mean SAT-to-muscle volume ratio was significantly elevated (P < 0.001) in all deciles, with the greatest effect size for distinguishing lipedema in the seventh decile, approximately midthigh (r = 0.76).

Conclusions: Semiautomated segmentation of lower-extremity SAT and muscle from CSE-MRI could enable fast multislice analysis of SAT deposition throughout the legs, relevant to distinguishing patients with lipedema from females with similar BMI but without SAT disease.
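Two of the classical building blocks named above, thresholding and morphological operations, are enough for a toy version of the slice-wise partitioning, sketched below with scikit-image. The threshold choices and structuring-element sizes are illustrative, not the paper's calibrated values, and the active-contour refinement is omitted.

    import numpy as np
    from skimage import filters, morphology

    def segment_sat_muscle(slice_img):
        # slice_img: 2D float image, e.g., one axial CSE-MRI fat-fraction slice.
        # Otsu's threshold separates bright fat from darker muscle/background.
        fat = slice_img > filters.threshold_otsu(slice_img)
        # A crude body mask: low threshold plus hole filling.
        body = morphology.remove_small_holes(
            slice_img > 0.1 * slice_img.max(), area_threshold=500)
        # Morphological opening removes thin misclassified bridges.
        sat = morphology.binary_opening(fat & body, morphology.disk(2))
        muscle = morphology.binary_opening(body & ~fat, morphology.disk(2))
        return sat, muscle

    def dice(a, b):
        # Dice similarity coefficient between two boolean masks.
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())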
Azadeh Fakhrzadeh, Pouya Karimian, Mahsa Meyari, Cris L. Luengo Hendriks, Lena Holm, Christian Sonne, Rune Dietz, Ellinor Spörndly-Nees
Published: 1 May 2023
Journal of Medical Imaging, Volume 10; https://doi.org/10.1117/1.jmi.10.3.037501

Abstract:
Purpose: There is growing concern that male reproduction is affected by environmental chemicals. One way to determine the adverse effects of environmental pollutants is to use wild animals as monitors and evaluate testicular toxicity using histopathology. We propose an automated method for processing histology images of testicular tissue.

Approach: Testicular tissue consists of seminiferous tubules, and segmenting the epithelial layer of the seminiferous tubule is a prerequisite for developing automated methods to detect abnormalities in the tissue. We propose an encoder–decoder fully convolutional neural network model to segment the epithelial layer of the seminiferous tubules in histological images. ResNet-34 is used in the feature encoder module, and a squeeze-and-excitation attention block is integrated into the encoding module to improve the segmentation and localization of the epithelium.

Results: We applied the proposed method to the two-class problem in which the epithelial layer of the tubule is the target class. The F-score and intersection over union of the proposed method are 0.85 and 0.92. Although the proposed method is trained on a limited training set, it performs well on an independent dataset and outperforms other state-of-the-art methods.

Conclusion: The pretrained ResNet-34 in the encoder and the attention block suggested in the decoder result in better segmentation and generalization. The proposed method can be applied to testicular tissue images from any mammalian species and can be used as the first part of a fully automated testicular tissue processing pipeline. The dataset and code are publicly available on GitHub.
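The squeeze-and-excitation block referenced above has a standard form (Hu et al.): global average pooling produces a per-channel descriptor, and a small gating MLP rescales the channels. A minimal PyTorch version follows; how it is wired into the ResNet-34 encoder–decoder is specific to the paper and not reproduced here.

    import torch
    import torch.nn as nn

    class SqueezeExcitation(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(x.mean(dim=(2, 3)))  # squeeze: (B, C) channel descriptor
            return x * w.view(b, c, 1, 1)    # excite: reweight feature channels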
Grace J. Gang, Wenying Wang, Peter Noël, Jeremias Sulam
Published: 1 May 2023
Journal of Medical Imaging, Volume 10; https://doi.org/10.1117/1.jmi.10.3.033501

Abstract:
Optimization of CT image quality typically involves balancing variance and bias. In traditional filtered back-projection, this trade-off is controlled by the filter cutoff frequency. In model-based iterative reconstruction, the regularization strength parameter often serves the same function. Deep neural networks (DNNs) typically do not provide this tunable control over output image properties. Models are often trained to minimize the expected mean squared error, which penalizes both variance and bias in the image outputs but does not offer any control over the trade-off between the two. We propose a method for controlling the output image properties of neural networks with a new loss function called weighted covariance and bias (WCB). Our proposed method uses multiple noise realizations of the input images during training to allow for separate weighting matrices for the variance and bias penalty terms. Moreover, we show that tuning these weights enables targeted penalization of specific image features through spatial-frequency-domain penalties. To evaluate our method, we present a simulation study using digital anthropomorphic phantoms, physical simulation of CT measurements, and image formation with various algorithms. We show that the WCB loss function offers a greater degree of control over the trade-off between variance and bias, whereas mean squared error provides only one specific image quality configuration. We also show that WCB can be used to control specific image properties, including the variance, bias, spatial resolution, and noise correlation of neural network outputs. Finally, we present a method to optimize the proposed weights for a spiculated lung nodule shape discrimination task. Our results demonstrate that this new loss function can control the image properties of DNN outputs and can optimize image quality for task-specific performance.
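The central idea, separate penalties on bias and variance estimated from multiple noise realizations of the same object, can be summarized in a few lines of PyTorch. The scalar weights below stand in for the paper's weighting matrices, and the frequency-domain variant only hints at the spatial-frequency penalties described above; both are simplified sketches, not the published WCB formulation.

    import torch

    def wcb_loss(outputs, target, w_bias=1.0, w_var=1.0):
        # outputs: network outputs for K noise realizations of one object,
        #          shape (K, B, C, H, W); target: noiseless image (B, C, H, W).
        mean_out = outputs.mean(dim=0)
        bias_term = ((mean_out - target) ** 2).mean()   # squared-bias penalty
        var_term = ((outputs - mean_out) ** 2).mean()   # variance penalty
        return w_bias * bias_term + w_var * var_term

    def frequency_weighted_bias(outputs, target, weight):
        # Bias penalty in the spatial-frequency domain; `weight` is a
        # per-frequency mask emphasizing image features of interest.
        err = torch.fft.fft2(outputs.mean(dim=0) - target)
        return (weight * err.abs() ** 2).mean()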
Weijie Chen, Judy W. Gichoya, Jayashree Kalpathy-Cramer, Sanmi Koyejo, Kyle J. Myers, Rui C. Sá, Berkman Sahiner, et al.
Published: 26 April 2023
Journal of Medical Imaging, Volume 10; https://doi.org/10.1117/1.jmi.10.6.061104

Abstract:
Purpose: There is increasing interest in developing medical imaging-based machine learning methods, also known as medical imaging artificial intelligence (AI), for the detection, diagnosis, prognosis, and risk assessment of disease, with the goal of clinical implementation. These tools are intended to improve upon traditional human decision-making in medical imaging. However, biases introduced in the steps toward clinical deployment may impede their intended function, potentially exacerbating inequities: medical imaging AI can propagate or amplify biases introduced in the many steps from model inception to deployment, resulting in systematic differences in the treatment of different groups. Recognizing and addressing these sources of bias is essential for algorithmic fairness and trustworthiness and contributes to a just and equitable deployment of AI in medical imaging.

Approach: Our multi-institutional team included medical physicists, medical imaging artificial intelligence/machine learning (AI/ML) researchers, experts in AI/ML bias, statisticians, physicians, and scientists from regulatory bodies. We identified sources of bias in AI/ML and mitigation strategies for these biases, and we developed recommendations for best practices in medical imaging AI/ML development.

Results: Five main steps along the roadmap of medical imaging AI/ML were identified: (1) data collection, (2) data preparation and annotation, (3) model development, (4) model evaluation, and (5) model deployment. Within these steps, or bias categories, we identified 29 sources of potential bias, many of which can impact multiple steps, along with mitigation strategies.

Conclusions: Our findings provide a valuable resource to researchers, clinicians, and the public at large.