Results in Journal Medical Physics: 40,776

(searched for: journal_id:(252419))
Xiaoxuan Zhang, Wojciech Zbijewski, Yixuan Huang, Ali Uneri, Craig K. Jones, Sheng‐Fu L. Lo, Timothy F. Witham, Mark Luciano, William Stanley Anderson, Patrick A. Helm, et al.
Published: 14 September 2021
Abstract:
Purpose : To characterize the 3D imaging performance and radiation dose for a prototype slot-beam configuration on an intraoperative O-arm™ Surgical Imaging System (Medtronic Inc., Littleton, MA) and identify potential improvements in soft-tissue image quality for surgical interventions. Methods : A slot collimator was integrated with the O-arm™ system for slot-beam axial CT. The collimator can be automatically actuated to provide 1.2° slot-beam longitudinal collimation. Cone-beam and slot-beam configurations were investigated with and without an antiscatter grid (12:1 grid ratio, 60 lines/cm). Dose, scatter, image noise, and soft-tissue contrast resolution were evaluated in quantitative phantoms for head and body configurations over a range of exposure levels (beam energy and mAs), with reconstruction performed via filtered backprojection. Qualitative imaging performance across various anatomical sites and imaging tasks was assessed with anthropomorphic head, abdomen, and pelvis phantoms. Results : The dose for a slot-beam scan varied from 0.02–0.06 mGy/mAs for head protocols to 0.01–0.03 mGy/mAs for body protocols, yielding dose reduction by ∼1/5 to 1/3 compared to cone-beam, owing to beam collimation and reduced x-ray scatter. The slot-beam provided a ∼6–7× reduction in scatter-to-primary ratio (SPR) compared to the cone-beam, yielding SPR ∼20%–80% for head and body without the grid and ∼7%–30% with the grid. Compared to cone-beam scans at equivalent dose, slot-beam images exhibited a ∼2.5× increase in soft-tissue contrast-to-noise ratio (CNR) for both grid and gridless configurations. For slot-beam scans, a further ∼10–30% improvement in CNR was achieved when the grid was removed. Slot-beam imaging could benefit certain interventional scenarios in which improved visualization of soft tissues is required within a fairly narrow longitudinal region of interest (7 mm) – e.g., checking the completeness of tumor resection, preservation of adjacent anatomy, or detection of complications (e.g., hemorrhage). While preserving existing capabilities for fluoroscopy and cone-beam CT, slot-beam scanning could enhance the utility of intraoperative imaging and provide a useful mode for safety and validation checks in image-guided surgery. Conclusions : The 3D imaging performance and dose of a prototype slot-beam CT configuration on the O-arm™ system were investigated. Substantial improvements in soft-tissue image quality and reduction in radiation dose are evident with the slot-beam configuration due to reduced x-ray scatter. This article is protected by copyright. All rights reserved
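To illustrate why the reported scatter reduction translates into better soft-tissue visibility, the minimal Python sketch below applies the standard contrast-degradation relation C_scatter = C_primary/(1 + SPR); the SPR values are hypothetical numbers chosen within the ranges quoted above, not measured data, and noise effects on CNR are ignored.

import numpy as np

def contrast_retained(spr):
    """Fraction of primary contrast remaining in the presence of scatter:
    C_scatter = C_primary / (1 + SPR)."""
    return 1.0 / (1.0 + np.asarray(spr, dtype=float))

# Illustrative SPR values within the reported ranges (not measured data).
spr_cone = 1.5   # hypothetical cone-beam SPR
spr_slot = 0.25  # hypothetical slot-beam SPR (~6-7x lower)
gain = contrast_retained(spr_slot) / contrast_retained(spr_cone)
print(f"Relative low-contrast signal retained with the slot beam: {gain:.2f}x")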
Shaohua Zhi, Marc Kachelrieß,
Published: 13 September 2021
Abstract:
Purpose Four-dimensional cone-beam computed tomography (4D CBCT) is developed to reconstruct a sequence of phase-resolved images, which could assist in verifying the patient's position and offering information for cancer treatment planning. However, 4D CBCT images suffer from severe streaking artifacts and noise due to the extreme sparse-view CT reconstruction problem for each phase, which causes inaccuracies in treatment estimation. The purpose of this paper was to develop a new 4D CBCT reconstruction method to generate a series of 4D CBCT images with high spatiotemporal resolution. Methods Considering the advantage of deep learning (DL) in representing structural features and the correlation between neighboring pixels effectively, we construct a novel DL-based method for 4D CBCT reconstruction. In this study, both a motion-aware dictionary and a spatially structural 2D dictionary are trained for 4D CBCT by excavating the spatiotemporal correlation among the ten phase-resolved images and the spatial information in each image, respectively. Specifically, two reconstruction models are produced in this study. The first is the motion-aware dictionary learning-based 4D CBCT algorithm, called motion-aware DL-based 4D CBCT (MaDL). The second is MaDL equipped with a prior knowledge constraint, called pMaDL. Qualitative and quantitative evaluations are performed using a 4D extended cardiac torso (XCAT) phantom, simulated patient data, and two patient data sets. Several state-of-the-art 4D CBCT algorithms, such as the McKinnon–Bates (MKB) algorithm, prior image constrained compressed sensing (PICCS), and the high-quality initial image-guided 4D CBCT reconstruction method (HQI-4DCBCT), are applied for comparison to validate the performance of the proposed MaDL and prior-constrained MaDL (pMaDL) reconstruction frameworks. Results Experimental results validate that the proposed MaDL can output reconstructions with few streaking artifacts, but some structural information, such as tumors and blood vessels, may still be missed. Meanwhile, the results of the proposed pMaDL demonstrate an improved spatiotemporal resolution of the reconstructed 4D CBCT images. In these improved 4D CBCT reconstructions, streaking artifacts are largely suppressed and detailed structures are also restored. For the XCAT phantom, quantitative evaluations indicate that an average decrease of 58.70%, 45.25%, and 40.10% in root-mean-square error (RMSE) and an average improvement of 2.10, 1.37, and 1.37 times in structural similarity index (SSIM) are achieved by the proposed pMaDL method when compared with MKB, PICCS, and MaDL(2D), respectively. Moreover, the proposed pMaDL achieves a performance comparable to the HQI-4DCBCT algorithm in terms of the RMSE and SSIM metrics, while having a better ability to suppress streaking artifacts than HQI-4DCBCT. Conclusions The proposed algorithm could reconstruct a set of 4D CBCT images with both high spatiotemporal resolution and preservation of detailed features. Moreover, the proposed pMaDL can effectively suppress the streaking artifacts in the resultant reconstructions, while achieving an overall improved spatiotemporal resolution by incorporating the motion-aware dictionary with a prior constraint into the proposed 4D CBCT iterative framework.
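The quantitative comparison above relies on RMSE and SSIM; the following minimal sketch shows how such metrics are commonly computed with NumPy and scikit-image. The arrays are random placeholders standing in for a reference phase image and its reconstruction.

import numpy as np
from skimage.metrics import structural_similarity

def rmse(reference, test):
    """Root-mean-square error between two images or volumes."""
    return float(np.sqrt(np.mean((reference.astype(float) - test.astype(float)) ** 2)))

# Placeholder phase-resolved images (e.g., one XCAT phase); replace with real data.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
rec = ref + 0.05 * rng.standard_normal((64, 64))
print("RMSE:", rmse(ref, rec))
print("SSIM:", structural_similarity(ref, rec, data_range=ref.max() - ref.min()))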
Xikai Yang, , Saiprasad Ravishankar
Published: 13 September 2021
Abstract:
Purpose Signal models based on sparse representations have received considerable attention in recent years. On the other hand, deep models consisting of a cascade of functional layers, commonly known as deep neural networks, have been highly successful for the task of object classification and have been recently introduced to image reconstruction. In this work, we develop a new image reconstruction approach based on a novel multilayer model learned in an unsupervised manner by combining both sparse representations and deep models. The proposed framework extends the classical sparsifying transform model for images to a Multilayer residual sparsifying transform (MARS) model, wherein the transform domain data are jointly sparsified over layers. We investigate the application of MARS models learned from limited regular-dose images for low-dose CT reconstruction using penalized weighted least squares (PWLS) optimization. Methods We propose new formulations for multilayer transform learning and image reconstruction. We derive an efficient block coordinate descent algorithm to learn the transforms across layers, in an unsupervised manner from limited regular-dose images. The learned model is then incorporated into the low-dose image reconstruction phase. Results Low-dose CT experimental results with both the XCAT phantom and Mayo Clinic data show that the MARS model outperforms conventional methods such as filtered back-projection and PWLS methods based on the edge-preserving (EP) regularizer in terms of two numerical metrics (RMSE and SSIM) and noise suppression. Compared with the single-layer learned transform (ST) model, the MARS model performs better in maintaining some subtle details. Conclusions This work presents a novel data-driven regularization framework for CT image reconstruction that exploits learned multilayer or cascaded residual sparsifying transforms. The image model is learned in an unsupervised manner from limited images. Our experimental results demonstrate the promising performance of the proposed multilayer scheme over single-layer learned sparsifying transforms. Learned MARS models also offer better image quality than typical nonadaptive PWLS methods.
Kotaro Iijima, , Shie Nishioka, Tatsuya Sakasai, Satoshi Nakamura, Takahito Chiba, Keita Kaga, Mihiro Takemori, Hiroki Nakayama, Yuki Miura, et al.
Published: 12 September 2021
Abstract:
Purpose We report on our proposed phantom based on the new end-to-end (E2E) methodology and its results. In addition, we verify whether the proposed phantom can replace conventional phantoms. Methods The hexagonal-shaped newly designed phantom has pockets on each side for a film dosimeter of size 80 × 90 mm2, which is easily removable, considering the 60Co penumbra. The new phantom comprises water, shell, and auxiliary shell phantoms. The shell and auxiliary shell materials are Solid Water® HE. A mock tumor (aluminum oxide) was attached by a single prop in the water phantom and placed at the center of the new phantom. The results of a conventional E2E test were compared with those of the novel E2E test using the newly designed phantom. The irradiated film dosimeter in the novel E2E test was scanned in a flatbed scanner and analyzed using an in-house software developed with MATLAB®. The irradiated field center, laser center, and mock tumor center were calculated. In the novel image-matching E2E (IM-E2E) test, image matching is performed by aligning the laser center with ruled lines. In the novel irradiation-field E2E (IF-E2E) test, the displacement of the irradiation-field center was defined as its distance from the laser center. In the composite E2E test, the overall displacement, which included the accuracy of the irradiated field and image matching, was defined as the distance between the irradiated field center and mock tumor center. In addition, using the newly designed phantom, the overall irradiation accuracy of the machine was evaluated by calculating the three-dimensional (3D) center of the irradiated field, phantom, and laser. The composite E2E test could be performed using the newly designed phantom only. Results In the IM-E2E test, the results of the conventional and novel IM-E2E tests were significantly different in each direction (left–right direction: p-value << 0.05, anterior–posterior direction: p-value = 0.002, and superior–inferior direction: p-value = 0.002). The displacement directions were the same in both the conventional and novel IM-E2E tests. In the analysis of the IF-E2E test, no significant difference was evident between the results in each direction. Moreover, the displacement directions were the same in the conventional and novel IF-E2E tests, except for the left–right lateral direction of head three. In addition, the 3D analysis results of the novel IF-E2E test were less than 1 mm in all directions. In the analysis of the composite E2E test, the maximum displacement was 1.4 mm in all directions. In addition, almost all results of 3D analysis for the composite E2E test were less than 1 mm in all directions. Conclusion The newly designed E2E phantom simplifies the E2E test for MRIdian, and is a possible alternative to the conventional E2E test. Furthermore, we can perform the previously unfeasible composite E2E tests that include the entire treatment process. This article is protected by copyright. All rights reserved
, Paweł F. Kukołowicz
Published: 9 September 2021
Abstract:
Purpose : The calculation model for the integral quality monitor (IQM) system does not take into account the characteristics of the HD120 multileaf collimator (MLC), which some Varian accelerators are equipped with. Some treatment plans prepared with this collimator are characterized by a high level of modulation. The aim of the work was to prepare a model for that collimator and to determine the influence of modulation on the results of the verification carried out with the use of the IQM system. Methods : The short- and long-term stability of the IQM detector response was verified by measuring the signal for a 6 MV FFF beam with a static field of 10 × 10 cm2 size. The obtained results were compared with the measurements performed with the PTW Farmer chamber. Next, the signals for 35 static 4 × 4 cm2 square fields, covering the whole 38 × 20 cm2 field, were measured with the IQM. Based on the results of these measurements, the original calculation model was changed in order to achieve the smallest differences between calculations and measurements. While tuning the model, the characteristics of the HD120 MLC were included. Measurements were performed for 30 clinical plans (86 arcs) prepared with 6 MV FFF beams. Among those 30 plans, there were 5 multi-target plans with a single isocenter. For each plan the modulation complexity score (MCS) was calculated. The measurement results were compared with the calculations performed with the original and the authors' calculation models. Results : Very good short- and long-term stability of the IQM detector response was obtained. Measurements performed for the 35 static fields revealed that for the manufacturer's and the authors' models the deviation exceeded 3% for 12 and 5 of the 35 static fields, respectively. The differences for the manufacturer's and authors' algorithms were in the range of ±2% for 15 and 26 of the fields, respectively. For the original and the authors' models, the differences between measured and calculated signals (starting with segment number 40) were within the range of ±3.5% for 87.6% and 96.7% of all arcs, respectively. For both models, a dependence of the agreement between measurements and calculations on the modulation complexity score was observed. For most of the highly modulated arcs the measured signal was at least 3% lower than the calculated one. The largest differences between measurements and calculations were obtained for single-isocenter multi-target plans. Conclusions : Predicting the signal with an algorithm that takes into account the real geometry of the collimating system of the Edge accelerator (equipped with the HD120 MLC) made it possible to obtain greater consistency between measurements and calculations. We characterized the dependence between the modulation complexity score of each arc and the agreement between measurements and calculations. Much worse results were obtained for single-isocenter multi-target plans. This article is protected by copyright. All rights reserved
Yungeng Zhang, Haifang Qin, Peixin Li, , Yuke Guo, Tianmin Xu, HongBin Zha
Published: 8 September 2021
Abstract:
Purpose: This study aimed to design and evaluate a novel method for the registration of 2D lateral cephalograms and 3D craniofacial cone-beam computed tomography (CBCT) images, providing patient-specific 3D structures from a 2D lateral cephalogram without additional radiation exposure. Methods: We developed a cross-modal deformable registration model based on a deep convolutional neural network. Our approach took advantage of a low-dimensional deformation field encoding and an iterative feedback scheme to infer coarse-to-fine volumetric deformations. In particular, we constructed a statistical subspace of deformation fields and parameterized the nonlinear mapping function from an image pair, consisting of the target 2D lateral cephalogram and the reference volumetric CBCT, to a latent encoding of the deformation field. Instead of one-shot registration by the learned mapping function, a feedback scheme was introduced to progressively update the reference volumetric image and to infer coarse-to-fine deformation fields, accounting for the shape variations of anatomical structures. A total of 220 clinically obtained CBCTs were used to train and validate the proposed model, among which 120 CBCTs were used to generate a training dataset with 24k paired synthetic lateral cephalograms and CBCTs. The proposed approach was evaluated on the deformable 2D-3D registration of clinically obtained lateral cephalograms and CBCTs from growing and adult orthodontic patients. Results: Strong structural consistency was observed between the deformed CBCT and the target lateral cephalogram in all criteria. The proposed method achieved state-of-the-art performance, with a mean contour deviation of 0.41±0.12 mm on the anterior cranial base, 0.48±0.17 mm on the mandible, and 0.35±0.08 mm on the maxilla. The mean surface mesh deviation ranged from 0.78 mm to 0.97 mm on various craniofacial structures, and the landmark registration errors ranged from 0.83 mm to 1.24 mm on the growing datasets regarding 14 landmarks. The proposed iterative feedback scheme handled the structural details and improved the registration. The resultant deformed volumetric image was consistent with the target lateral cephalogram in both the 2D projective planes and 3D volumetric space regarding the multi-category craniofacial structures. Conclusions: The results suggest that the deep learning-based 2D-3D registration model enables the deformable alignment of 2D lateral cephalograms and CBCTs and estimates patient-specific 3D craniofacial structures.
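Applying an inferred deformation field to a reference CBCT volume is the core resampling step in this kind of 2D-3D registration; the sketch below shows one common way to warp a volume with a dense voxel-displacement field using SciPy. It is a generic illustration, not the authors' network or feedback scheme.

import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(volume, dvf):
    """Warp a 3D volume by a dense deformation field.

    volume: (Z, Y, X) array; dvf: (3, Z, Y, X) voxel displacements."""
    grid = np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, dvf)]
    return map_coordinates(volume, coords, order=1, mode="nearest")

# Toy example: shift a random volume by one voxel along z.
vol = np.random.default_rng(1).random((8, 8, 8))
dvf = np.zeros((3, 8, 8, 8))
dvf[0] = 1.0
warped = warp_volume(vol, dvf)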
, Sukhmanjit Kaur, Emma L Thomson, Melissa Mitchell, Premkumar Elangovan, Lucy M Warren, David R. Dance, Kenneth C. Young
Published: 8 September 2021
Abstract:
Purpose The purpose of this study was to measure the threshold diameter of calcifications and masses for 2D imaging, digital breast tomosynthesis (DBT) and synthetic 2D images, for a range of breast glandularities. This study shows the limits of detection for each of the technologies and the strengths and weaknesses of each in terms of visualising the radiological features of small cancers. Methods Mathematical voxel breast phantoms with glandularities by volume of 9%, 18% and 30% with a thickness of 53 mm were created. Simulated ill-defined masses and calcification clusters with a range of diameters were inserted into some of these breast models. The imaging characteristics of a Siemens Inspiration x-ray system were measured for a 29 kV, tungsten/rhodium anode/filter combination. Ray tracing through the breast models was undertaken to create simulated 2D and DBT projection images. These were then modified to adjust the image sharpness, and to add scatter and noise. The mean glandular doses for the images were 1.43, 1.47, 1.47 mGy for 2D and 1.92, 1.97, 1.98 mGy for DBT for the three glandularities. The resultant images were processed to create 2D, DBT planes and synthetic 2D images. Patches of the images with or without a simulated lesion were extracted, and used in a 4-alternative forced choice study to measure the threshold diameters for each imaging mode, lesion type and glandularity. The study was undertaken by six physicists. Results The threshold diameters of the lesions were 6.2 mm, 4.9 mm and 6.7 mm (masses) and 225 μm, 370 μm, and 399 μm, (calcifications) for 2D, DBT and synthetic 2D respectively for a breast glandularity of 18%. The threshold diameter of ill-defined masses is significantly smaller for DBT than for both 2D (p≤0.006) and synthetic 2D (p≤0.012) for all glandularities. Glandularity has a significant effect on the threshold diameter of masses, even for DBT where there is reduced background structure in the images. The calcification threshold diameters for 2D images were significantly smaller than for DBT and synthetic 2D for all glandularities. There were few significant differences for the threshold diameter of calcifications between glandularities, indicating that the background structure has little effect on the detection of calcifications. We measured larger but non-significant differences in the threshold diameters for synthetic 2D imaging than for 2D imaging for masses in the 9% (p = 0.059) and 18% (p = 0.19) glandularities. The threshold diameters for synthetic 2D imaging were larger than for 2D imaging for calcifications (p<0.001) for all glandularities. Conclusions We have shown that glandularity has only a small effect on the detection of calcifications, but the threshold diameter of masses was significantly larger for higher glandularity for all of the modalities tested. We measured non-significantly larger threshold diameters for synthetic 2D imaging than for 2D imaging for masses at the 9% (p = 0.059) and 18% (p = 0.19) glandularities and significantly larger diameters for calcifications (p<0.001) for all glandularities. The lesions simulated were very subtle and further work is required to examine the clinical effect of not seeing the smallest calcifications in clusters. This article is protected by copyright. All rights reserved
Bo Li, Xinge You, Jing Wang, , Shi Yin, Ruinan Qi, Qianqian Ren, Ziming Hong
Published: 7 September 2021
Abstract:
Purpose: In neonatal brain Magnetic Resonance Imaging (MRI) segmentation, the model we trained on the training set (source domain) often performs poorly in clinical practice (target domain). Since the labels of target-domain images are unavailable, this cross-domain segmentation needs unsupervised domain adaptation (UDA) to make the model adapt to the target domain. However, the shape and intensity distribution of neonatal brain MR images across the domains are largely different from adults'. Current UDA methods aim to make synthesized images similar to the target domain as a whole. But it is impossible to synthesize images with intra-class similarity, because of the regional misalignment caused by the cross-domain difference. This will result in generating intra-classly incorrect intensity information from target-domain images. To address this issue, we propose an IAS-NET (joint Intra-classly Adaptive GAN and Segmentation) framework to bridge the gap between the two domains for intra-class alignment. Methods: Our proposed IAS-NET is an elegant learning framework that transfers the appearance of images across the domains from both image and feature perspectives. It consists of the proposed intra-classly adaptive GAN (IA-NET) and a segmentation network (S-NET). The proposed IA-NET is a GAN-based adaptive network, which contains one generator (including two encoders and one shared decoder) and four discriminators for cross-domain transfer. The two encoders are implemented to extract original image, mean, and variance features from the source and target domains. The proposed Local Adaptive Instance Normalization (LAdaIN) algorithm is used to perform intra-class feature alignment to the target domain at the feature-map level. S-NET is a U-net-structured network, which is used to provide a semantic constraint through a segmentation loss for the training of IA-NET. Meanwhile, it offers pseudo-label images for calculating intra-class features of the target domain. Source code (in TensorFlow) is available at: https://github.com/lb-whu/RAS-NET/. Results: Extensive experiments are carried out on two different data sets (NeoBrainS12 and dHCP). There exist great differences in the shape, size, and intensity distribution of MR images in the two databases. Compared to the baseline, we improve the average Dice score of all tissues on NeoBrainS12 by 6% through adaptive training with unlabeled dHCP images. Besides, we also conduct experiments on dHCP and improve the average Dice score by 4%. The quantitative analysis of the mean and variance of the synthesized images shows that the images synthesized by the proposed method are closer to the target domain, both in the full brain and within each class, than those of the compared methods. Conclusions: In this paper, the proposed IAS-NET improves the performance of the segmentation network effectively through its intra-class feature alignment in the target domain. Compared to the current UDA methods, the images synthesized by IAS-NET are more intra-classly similar to the target domain for neonatal brain MR images. Therefore, it achieves state-of-the-art results among the compared UDA models for the segmentation task. This article is protected by copyright. All rights reserved
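The LAdaIN step described above builds on adaptive instance normalization; for orientation, the sketch below implements only the standard (global, per-channel) AdaIN operation in NumPy, not the paper's local, intra-class variant.

import numpy as np

def adain(content, style, eps=1e-5):
    """Standard adaptive instance normalization on (C, H, W) feature maps:
    re-scale content features to the per-channel mean/std of the style features."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    return (content - c_mean) / c_std * s_std + s_mean

rng = np.random.default_rng(0)
aligned = adain(rng.standard_normal((4, 16, 16)), rng.standard_normal((4, 16, 16)))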
, Joshua P. Kim, Jiwei Zhao, Zachary S. Morris, Newton J. Hurst Jr., Carri K. Glide Hurst
Published: 6 September 2021
Abstract:
Purpose The acquisition of multi-parametric quantitative magnetic resonance imaging (qMRI) is becoming increasingly important for functional characterization of cancer prior to and throughout the course of radiation therapy. The feasibility of a qMRI method known as magnetic resonance fingerprinting (MRF) for rapid T1 and T2 mapping was assessed on a low-field MR-linac system. Methods A three-dimensional MRF sequence was implemented on a 0.35T MR-guided radiotherapy system. MRF-derived measurements of T1 and T2 were compared to those obtained with gold standard single spin echo methods, and the impacts of the radiofrequency field homogeneity and scan times ranging between 6 and 48 minutes were analyzed by acquiring between one and eight spokes per time point in a standard quantitative system phantom. The short-term repeatability of MRF was assessed over three measurements taken over a 10-hour period. To evaluate transferability, MRF measurements were acquired on two additional MR-guided radiotherapy systems. Preliminary human volunteer studies were performed. Results The phantom benchmarking studies showed MRF is capable of mapping T1 and T2 values within 8% and 10% of gold standard measures, respectively, at 0.35T. The coefficient of variation of T1 and T2 estimates over three repeated scans was < 5% over a broad range of relaxation times. The ratios of T1 and T2 times derived using a single-spoke MRF acquisition across the 3 scanners were near unity, and the mean percent errors in T1 and T2 estimates using the same phantom were <3%. The mean percent differences in T1 and T2 as a result of truncating the scan time to 6 minutes over the large range of relaxation times in the system phantom were 0.65% and 4.05%, respectively. Conclusions The technical feasibility and accuracy of MRF on a low-field MR-guided radiation therapy device have been demonstrated. MRF can be used to measure accurate T1 and T2 maps in three dimensions from a brief six-minute scan, offering strong potential for efficient and reproducible qMRI for future clinical trials in functional plan adaptation and tumor/normal tissue response assessment. This article is protected by copyright. All rights reserved
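MRF parameter mapping generally matches each voxel's measured signal evolution against a precomputed dictionary of simulated evolutions; the sketch below shows that matching step with a toy mono-exponential dictionary (real MRF atoms come from Bloch simulation of the actual sequence, which is not reproduced here).

import numpy as np

def mrf_match(signals, dictionary, t1t2):
    """Match measured signal evolutions to dictionary atoms by normalized inner product.

    signals: (N_voxels, N_timepoints); dictionary: (N_atoms, N_timepoints);
    t1t2: (N_atoms, 2) relaxation times associated with each atom."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    best = np.argmax(np.abs(s @ d.T), axis=1)   # best-matching atom per voxel
    return t1t2[best]

# Toy dictionary of mono-exponential evolutions (placeholder for Bloch-simulated atoms).
t = np.linspace(0, 2, 50)
t1t2 = np.array([[0.5, 0.05], [1.0, 0.08], [1.5, 0.1]])
atoms = np.exp(-t[None, :] / t1t2[:, 1:2])
print(mrf_match(atoms + 0.01, atoms, t1t2))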
Lun Chen, Lu Zhao,
Published: 6 September 2021
Abstract:
Purpose Deep learning has achieved impressive performance across a variety of tasks, including medical image processing. However, recent research has shown that deep neural networks (DNNs) are susceptible to small adversarial perturbations in the image, which raises safety concerns about the deployment of these systems in clinical settings. Methods In order to improve the defense of medical imaging systems against adversarial examples, we propose a new model-based defense framework for medical image DNNs equipped with a pruning module and an attention mechanism module. The framework is motivated by an analysis of why existing medical image DNNs are vulnerable to adversarial examples, namely the complex biological texture of medical images and the over-parameterization of medical image DNN models. Results Experiments on three benchmark medical image datasets verified the effectiveness of our method in improving the robustness of medical image DNN models. On the chest X-ray dataset, our defense method achieves a defense rate of up to 77.18% against the PGD (projected gradient descent) attack and 69.49% against the DeepFool attack. Ablation experiments on the pruning module and the attention mechanism module verified that both components effectively improve the robustness of the medical image DNN model. Conclusions Compared with the existing model-based defense methods proposed for natural images, our defense method is more suitable for medical images. Our method can serve as a general strategy for designing more explainable and secure medical deep learning systems, and can be widely used in various medical image tasks to improve the robustness of medical models. This article is protected by copyright. All rights reserved
Genwei Ma, Yinghui Zhang, Xing Zhao, Tong Wang,
Published: 5 September 2021
Abstract:
Purpose Limited-angle computed tomography is a challenging but important task in certain medical and industrial applications for nondestructive testing. The limited-angle reconstruction problem is highly ill-posed and conventional reconstruction algorithms introduce heavy artifacts. Various models and methods have been proposed to improve the quality of reconstructions by introducing different priors regarding the projection data or ideal images. However, the assumed priors might not be practically applicable to all limited-angle reconstruction problems. Convolutional neural networks (CNNs) exhibit great promise in the modelling of data coupling and have recently become an important technique in medical imaging applications. Although existing CNN methods have demonstrated promising results, their robustness is still a concern. In this paper, in light of the theory of visible and invisible boundaries, we propose an alternating edge-preserving diffusion and smoothing neural network (AEDSNN) for limited-angle reconstruction that builds the visible boundaries as priors into its structure. The proposed method generalizes the alternating edge-preserving diffusion and smoothing (AEDS) method for limited-angle reconstruction developed in the literature by replacing its regularization terms with CNNs, whereby the piecewise constant assumption made by AEDS is effectively relaxed. Methods The AEDSNN is derived by unrolling the AEDS algorithm. AEDSNN consists of several blocks, and each block corresponds to one iteration of the AEDS algorithm. In each iteration of the AEDS algorithm, three sub-problems are sequentially solved. Thus, each block of AEDSNN possesses three main layers: a data-matching layer, an x-direction regularization layer for visible-edge diffusion, and a y-direction regularization layer for artifact suppression. The data-matching layer is implemented by the conventional ordered-subset simultaneous algebraic reconstruction technique (OS-SART) reconstruction algorithm, while the two regularization layers are modelled by CNNs for more intelligent and better encoding of priors regarding the reconstructed images. To further strengthen the visible edge prior, the attention mechanism and the pooling layers are incorporated into AEDSNN to facilitate the procedure of edge-preserving diffusion from visible edges. Results We have evaluated the performance of AEDSNN by comparing it with popular algorithms for limited-angle reconstruction. Experiments on the medical dataset show that the proposed AEDSNN effectively breaks through the piecewise constant assumption usually made by conventional reconstruction algorithms, and works much better for piecewise smooth images with non-sharp edges. Experiments on the Printed Circuit Board (PCB) dataset show that AEDSNN can better encode and utilize the visible edge prior, and its reconstructions are consistently better compared to the competing algorithms. Conclusions A deep-learning approach for limited-angle reconstruction is proposed in this paper, which significantly outperforms existing methods. The superiority of AEDSNN consists of three aspects. First, by virtue of CNNs, AEDSNN is free of parameter tuning, which is a great convenience compared to conventional reconstruction methods. Second, AEDSNN is fast: conventional reconstruction methods usually need hundreds or even thousands of iterations, while AEDSNN needs only 3-5 iterations (i.e., blocks). Third, the regularizer learned by AEDSNN enjoys a broader application capacity, working well with piecewise smooth images and going beyond the piecewise constant assumption frequently made for computed tomography images.
Sirinya Ruangchan, Hugo Palmans, Barbara Knäusl, Dietmar Georg,
Published: 5 September 2021
Abstract:
Purpose This work presents the validation of an analytical pencil beam (PB) dose calculation algorithm in a commercial treatment planning system (TPS) for carbon ions by measurements of dose distributions in heterogeneous phantom geometries. Additionally, a comparison study of carbon ions versus protons is performed considering current best solutions in commercial TPSs. Methods All treatment plans were optimized and calculated using the RayStation TPS (RaySearch, Sweden). The dose distributions calculated with the TPS were compared with measurements performed using an array of 24 PinPoint ionization chambers (T31015, PTW, Germany). Tissue-like inhomogeneities (bone, lung, and soft tissue) were embedded in water, while a target volume of 4 × 4 × 4 cm3 was defined at two different depths behind the heterogeneities. In total, ten different test cases, with and without range shifter as well as with different air gaps, were investigated. Dose distributions inside as well as behind the target volume were evaluated. Results Inside the target volume, the mean dose difference between calculations and measurements, averaged over all test cases, was 1.6% for carbon ions. This compares well to the final agreement of 1.5% obtained in water at the commissioning stage of the TPS for carbon ions and is also within the clinically acceptable interval of 3%. The mean dose difference and maximal dose difference obtained outside the target area were 1.8% and 13.4%, respectively. The agreement of dose distributions for carbon ions in the target volumes was comparable to or better than that between Monte Carlo (MC) dose calculations and measurements for protons. Percentage dose differences of more than 10% were present outside the target area behind bone-lung structures, where the carbon ion calculations systematically over predicted the dose; the MC dose calculations for protons were superior to the carbon ion calculations outside the target volumes. Conclusion The PB dose calculations for carbon ions in RayStation were found to be in good agreement with dosimetric measurements in heterogeneous geometries for points of interest located within the target. Large local discrepancies behind the target may contribute to incorrect dose predictions for organs at risk. This article is protected by copyright. All rights reserved
Ajay Nemani,
Published: 5 September 2021
Abstract:
Purpose Ultrahigh field (UHF) resting state functional magnetic resonance imaging (rsfMRI) has become increasingly available for clinical and basic research, bringing improvements in resolution and contrast over standard high field imaging. Despite these improvements, UHF connectivity studies present several challenges, including increased sensitivity to physiological confounds and a vastly increased data burden. We present a direct quantitative assessment of test-retest reliability of functional connectivity in several standard functional networks between subjects scanned at 3T and 7T. Methods Five healthy subjects were scanned over 4 sessions each in a scan-rescan design at both 3T and 7T field strengths. Resting state fMRI data were segmented into four major intrinsic connectivity networks, and seed-based peak correlations within and between these networks examined. The reliability of these correlations was assessed using intra-class correlation coefficients (ICC). Results Across all data, over 4000 peak correlations were extracted for assessment. The reliability over all intrinsic networks was greater at 7T than 3T (median ICC 0.40 vs 0.33, p ≤ 0.0014), with each network individually showing improvement. Inter-network reliability was stronger than intra-network reliability, but intra-network reliability showed the greatest improvement between field strengths. Conclusion We demonstrate significantly increased reliability of resting state connectivity at ultrahigh field strengths over conventional field strengths using a novel hybrid seed-based analysis. This result adds to the growing body of work supporting the migration of functional imaging studies to ultrahigh fields. This article is protected by copyright. All rights reserved
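Reliability here is summarized with intraclass correlation coefficients; as one concrete possibility, the sketch below computes the two-way random-effects, absolute-agreement, single-measurement form ICC(2,1) from a targets-by-sessions matrix. The specific ICC form used in the study is not stated in the abstract, so this choice is an assumption.

import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    Y: (n_targets, k_sessions) matrix, e.g., peak correlations across repeated sessions."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ms_r = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)      # between-target mean square
    ms_c = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)      # between-session mean square
    ss_e = np.sum((Y - grand) ** 2) - ms_r * (n - 1) - ms_c * (k - 1)
    ms_e = ss_e / ((n - 1) * (k - 1))                               # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Toy example: 6 connectivity values measured in 4 sessions (placeholder data).
rng = np.random.default_rng(0)
truth = rng.random(6)
ratings = truth[:, None] + 0.05 * rng.standard_normal((6, 4))
print(round(icc_2_1(ratings), 2))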
, Kai Ding, Daniel Song, Amol Narang, John Wong, Yu Rong, Daniel Bliss
Published: 4 September 2021
Abstract:
Purpose: We explore the potential use of radar technology for fiducial marker tracking for monitoring of respiratory tumor motion during radiotherapy. Historically, microwave radar technology has been widely deployed in various military and civil aviation applications to provide detection, position, and tracking of single or multiple objects from far away and even through barriers. Recently, owing to the many advantages of microwave technology, it has been successfully demonstrated to detect breast tumors and to monitor vital signs, such as breathing signals or heart rates, in real time. We demonstrate a proof-of-concept for radar-based fiducial marker tracking through a synthetic human tissue phantom. Methods: We performed a series of experiments with a vector network analyzer (VNA) and a wideband directional horn antenna. We considered the frequency range from 2.0 to 6.0 GHz with a maximum power of 3 dBm. A horn antenna, transmitting and receiving radar pulses, was connected to the vector network analyzer to probe a gold fiducial marker through a customized synthetic human tissue phantom consisting of 1-mm skin, 5-mm fat, and 25-mm muscle layers. A 1.2 × 10 mm gold fiducial marker was used as a motion surrogate; it was placed behind the phantom and statically positioned in 12.7-mm increments to simulate different marker displacements. The returned signals from the marker were acquired and analyzed to evaluate the localization accuracy as a function of the marker position. Results: The fiducial marker was successfully localized at various measurement positions through a simplified phantom study. The average localization accuracy across measurements was 3.5 ± 1.3 mm, with a minimum error of 1.9 mm at the closest measurement location and a maximum error of 4.9 mm at the farthest measurement location. Conclusion: We demonstrated that the 2-6 GHz radar can penetrate through the attenuating tissues and localize a fiducial marker. This successful feasibility study establishes a foundation for further investigation of radar technology as a non-ionizing tumor localization device for radiotherapy. This article is protected by copyright. All rights reserved
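A VNA sweep over 2-6 GHz is effectively a stepped-frequency radar, and a common way to localize a reflector from such data is to inverse-Fourier-transform the complex frequency response into a range profile, with range resolution c/(2B). The sketch below illustrates this generic processing on a synthetic point scatterer; it is not the authors' analysis chain, and the marker distance is hypothetical.

import numpy as np

c = 3e8                                   # speed of light (m/s); free-space assumption
freqs = np.linspace(2e9, 6e9, 201)        # VNA sweep, 2-6 GHz
bandwidth = freqs[-1] - freqs[0]
true_range = 0.05                         # hypothetical marker distance (m)

# Synthetic reflection coefficient: a single point scatterer gives a linear phase ramp.
s11 = 0.1 * np.exp(-1j * 2 * np.pi * freqs * 2 * true_range / c)

# Range profile via inverse FFT of the windowed frequency response (zero-padded).
profile = np.abs(np.fft.ifft(s11 * np.hanning(len(freqs)), n=4096))
range_axis = np.fft.fftfreq(4096, d=freqs[1] - freqs[0]) * c / 2
est = range_axis[np.argmax(profile[:2048])]
print(f"range resolution ~ {c / (2 * bandwidth) * 100:.1f} cm, estimated range {est * 100:.1f} cm")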
, Tianyu Xiong, Jian Lu, Shengwei Li, Xiangzhi Bai, Fugen Zhou, Qiuwen Wu
Published: 4 September 2021
Abstract:
Purpose: The safety and clinical efficacy of 125I seed-loaded stent for the treatment of portal vein tumor thrombosis (PVTT) have been shown. Accurate and fast dose calculation of the 125I seeds with the presence of the stent is necessary for the plan optimization and evaluation. However, the dosimetric characteristics of the seed-loaded stents remain unclear and there is no fast dose calculation technique available. This paper aims to explore a fast and accurate analytical dose calculation method based on Monte Carlo (MC) dose calculation, which takes into account the effect of stent and tissue inhomogeneity. Methods: A detailed model of the seed-loaded stent was developed using 3D modeling software and subsequently used in MC simulations to calculate the dose distribution around the stent. The dose perturbation caused by the presence of the stent was analyzed and dose perturbation kernels (DPKs) were derived and stored for future use. Then, the dose calculation method from AAPM TG-43 was adapted by integrating the DPK and appropriate inhomogeneity correction factors (ICF) to calculate dose distributions analytically. To validate the proposed method, several comparisons were performed with other methods in water phantom and voxelized CT phantoms for three patients. Results: The stent has a considerable dosimetric effect reducing the dose up to 47.2% for single seed stent and 11.9%-16.1% for 16-seed stent. In a water phantom, dose distributions from MC simulations and TG-43-DP-ICF showed a good agreement with the relative error less than 3.3%. In voxelized CT phantoms, taking MC results as the reference, the relative errors of TG-43 method can be up to 33% while those of TG-43-DP-ICF method were less than 5%. For a dose matrix with 256 × 256 × 46 grid (corresponding to a phantom of 17.2 × 17.2 × 11.5 cm3) for 16 seeds-loaded stent, it only takes 17 seconds for TG-43-DP-ICF to compute, compared to 25 hours for the full MC calculation. Conclusions: The combination of DPK and inhomogeneity corrections is an effective approach to handle both the presence of stent and tissue heterogeneity. Exhibiting good agreement with MC calculation and computational efficiency, the proposed TG-43-DP-ICF method is adequate for dose evaluation and optimization in seed-loaded stent implantation treatment planning. This article is protected by copyright. All rights reserved
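The TG-43-DP-ICF approach described above scales a TG-43-style dose rate by a stent dose-perturbation factor and an inhomogeneity correction factor. The sketch below shows the 1D (point-source) TG-43 dose-rate formula with that scaling; the radial dose and anisotropy tables, the dose-rate constant, and the DPK/ICF values are placeholders for illustration, not consensus or published data.

import numpy as np

def tg43_point_dose_rate(r_cm, sk, dose_rate_constant, g_r, phi_an):
    """1D TG-43 point-source dose rate:
    D(r) = S_K * Lambda * (r0/r)^2 * g(r) * phi_an(r), with r0 = 1 cm."""
    return sk * dose_rate_constant * (1.0 / r_cm) ** 2 * g_r(r_cm) * phi_an(r_cm)

# Placeholder lookup tables (NOT consensus data) as simple interpolators.
radii = np.array([0.5, 1.0, 2.0, 3.0, 5.0])          # cm
g_tab = np.array([1.2, 1.0, 0.7, 0.5, 0.25])
phi_tab = np.array([0.95, 0.94, 0.93, 0.92, 0.90])
g_r = lambda r: np.interp(r, radii, g_tab)
phi_an = lambda r: np.interp(r, radii, phi_tab)

r = 2.0        # cm from the seed
dpk = 0.9      # hypothetical stent dose-perturbation factor at this point
icf = 1.05     # hypothetical inhomogeneity correction factor from CT densities
dose_rate = tg43_point_dose_rate(r, sk=1.0, dose_rate_constant=0.686, g_r=g_r, phi_an=phi_an) * dpk * icf
print(f"corrected dose rate: {dose_rate:.4f} cGy/h per unit S_K (illustrative values only)")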
Qinghao Chen, Shuang Zhou, Yuewen Tan, Hao Yao, Zhongwei Zhang, Tom Mazur, Tiezhi Zhang
Published: 4 September 2021
Abstract:
Purpose A tetrahedron beam (TB) x-ray system with a linear x-ray source array and a linear detector array positioned orthogonal to each other may overcome the x-ray scattering problem of traditional cone-beam x-ray systems. We developed a TB imaging benchtop system using a linear array x-ray source to demonstrate the principle and benefits of TB imaging. Methods A multi-pixel thermionic emission x-ray (MPTEX) source with 48 focal spots at 4-mm spacing was developed in-house. The x-ray beams are collimated into a stack of fan beams that converge on a 6-mm-wide multi-row photon-counting detector (PCD). The data collected with a sequential scan of the sources at a fixed view angle were synthesized into a 2D radiographic image by a shift-and-add algorithm. The data collected with a full rotation of the system were reconstructed into 3D tetrahedron beam CT (TBCT) images using an FDK-based CT algorithm modified for the TB geometry. Results With an 18.8 cm long source array and a 35 cm long detector array, the TB benchtop system provides a 25 cm cross-sectional and 8 cm axial field of view (FOV). The scatter-to-primary ratio (SPR) was approximately 17% for TB, as compared with 120% for cone-beam geometry. The TB system enables both two-dimensional radiography and three-dimensional volumetric CT reconstruction. The TBCT images were free of “cupping” artifacts and had image quality similar to that of diagnostic helical CT. Conclusions A TB benchtop imaging system was successfully developed with the MPTEX source and PCD. Phantom and animal cadaver imaging demonstrated that the TB system can produce satisfactory radiographic x-ray images and 3D CT images with image quality comparable to that of diagnostic helical CT. This article is protected by copyright. All rights reserved
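The 2D radiograph here is synthesized by shifting each source's fan-beam projection and summing; the sketch below shows a minimal integer-pixel shift-and-add step in NumPy. The per-source shifts depend on the actual system geometry and are hypothetical.

import numpy as np

def shift_and_add(projections, shifts_px):
    """Synthesize a 2D image from per-source projections by shifting each one
    along the source-array direction and averaging.

    projections: (N_sources, rows, cols); shifts_px: (N_sources,) integer pixel shifts."""
    out = np.zeros_like(projections[0], dtype=float)
    for proj, s in zip(projections, shifts_px):
        out += np.roll(proj, int(s), axis=0)
    return out / len(projections)

# Toy data: 48 sources; shifts proportional to focal-spot offset (hypothetical scaling).
rng = np.random.default_rng(0)
projs = rng.random((48, 64, 256))
shifts = np.round(np.linspace(-24, 24, 48)).astype(int)
image = shift_and_add(projs, shifts)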
Yang Lei, Tonghe Wang, Yabo Fu, Justin Roper, Ashesh B. Jani, Tian Liu, Pretesh Patel,
Published: 4 September 2021
Abstract:
Purpose High-dose-rate (HDR) prostate brachytherapy involves treatment catheter placement, which is currently empirical and physician dependent. The lack of proper catheter placement guidance during the procedure has left the physicians to rely on a heuristic thinking-while-doing technique, which may cause large catheter placement variation and increased plan quality uncertainty. Therefore, the achievable dose distribution could not be quantified prior to the catheter placement. To overcome this challenge, we proposed a learning-based method to provide HDR catheter placement guidance for prostate cancer patients undergoing HDR brachytherapy. Methods The proposed framework consists of deformable registration via registration network (Reg-Net), multi-atlas ranking and catheter regression. To model the global spatial relationship among multiple organs, binary masks of the prostate and organs-at-risk are transformed into distance maps which describe the distance of each local voxel to the organ surfaces. For a new patient, the generated distance map is used as fixed image. Reg-Net is utilized to deformably register the distance maps from multi-atlas set to match this patient's distance map and then bring catheter maps from multi-atlas to this patient via spatial transformation. Several criteria, namely prostate volume similarity, multi-organ semantic image similarity and catheter positions criteria (far from the urethra and within the partial prostate), are used for multi-atlas ranking. The top-ranked atlas’ deformed catheter positions are selected as the predicted catheter position for this patient. Finally, catheter regression is used to refine the final catheter positions. A retrospective study on 90 patients with a five-fold cross validation scheme was used to evaluate the proposed method's feasibility. In order to investigate the impact of plan quality from the predicted catheter pattern, we optimized the source dwell position and time for both the clinical catheter pattern and predicted catheter pattern with the same optimization settings. Comparisons of clinically relevant dose volume histogram (DVH) metrics were completed. Results For all patients, on average, both the clinical plan dose and predicted plan dose meet the common dose constraints when prostate dose coverage is kept at V100 = 95%. The plans from predicted catheter pattern have slightly higher hotspot in terms of V150 by 5.0% and V200 by 2.9% on average. For bladder V75, rectum V75 and urethra V125, the average difference is close to zero, and the range of most patients is within ±1 cc. Conclusion We developed a new catheter placement prediction method for HDR prostate brachytherapy based on a deep-learning-based multi-atlas registration algorithm. It has great clinical potential since it can provide catheter location estimation prior to catheter placement, which could reduce the dependence on physicians’ experience in catheter implantation and improve the quality of prostate HDR treatment plans. This approach merits further clinical evaluation and validation as a method of quality control for HDR prostate brachytherapy. This article is protected by copyright. All rights reserved
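The distance-map inputs described above can be generated from binary organ masks with a Euclidean distance transform; the sketch below shows one common (signed) variant using SciPy. Whether the authors use a signed or unsigned map is not stated, so the sign convention is an assumption.

import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask, spacing=(1.0, 1.0, 1.0)):
    """Signed Euclidean distance (mm) to the surface of a binary organ mask:
    negative inside the organ, positive outside."""
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask, sampling=spacing)
    inside = distance_transform_edt(mask, sampling=spacing)
    return outside - inside

# Toy prostate-like mask: a sphere in a 32^3 volume with 1.5-mm isotropic voxels.
z, y, x = np.ogrid[:32, :32, :32]
sphere = (z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 <= 8 ** 2
dmap = signed_distance_map(sphere, spacing=(1.5, 1.5, 1.5))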
Weibin Zhang, ShuSen Zhao, Huiying Pan, Yunsong Zhao,
Published: 1 September 2021
Abstract:
Purpose Dual-energy computed tomography (DECT), also called dual spectral CT (DSCT) in some articles, scans objects using two different x-ray spectra to acquire more information. Compared to traditional CT, DECT exhibits superior material distinguishability. Therefore, DECT can be widely used in the medical and industrial domains. However, owing to the nonlinearity and ill-conditioning of DECT, studies on DECT reconstruction are underway to obtain high-quality images and achieve fast convergence. Therefore, in this study, we propose an iterative reconstruction method based on monochromatic images to rapidly obtain high-quality images in DECT reconstruction. Methods An iterative reconstruction method based on monochromatic images is proposed for DECT. The proposed method converts the DECT reconstruction problem from basis material image decomposition to monochromatic image decomposition, significantly improving the convergence speed of DECT reconstruction by changing the coefficient matrix of the original equations to increase the angle between the high- and low-energy projection curves or to reduce the condition number of the coefficient matrix. The monochromatic images are then decomposed into basis material images. Furthermore, we conducted numerical experiments to evaluate the performance of the proposed method. Results The decomposition results of the simulated data and real data experiments confirmed the effectiveness of the proposed method. Compared to the E-ART method, the proposed method exhibited a significant increase in convergence speed, achieved by increasing the angle between the polychromatic projection curves or decreasing the condition number of the coefficient matrix when appropriate monochromatic images are chosen. Therefore, the proposed method is also advantageous in acquiring high-quality images with fast convergence. Conclusions We developed an iterative reconstruction method based on monochromatic images for material decomposition in DECT. The numerical experiments using the proposed method validated its capability of decomposing the basis material images. Furthermore, the proposed method achieved a faster convergence speed than the E-ART method. This article is protected by copyright. All rights reserved
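The convergence argument rests on the angle between the high- and low-energy projection curves, or equivalently the condition number of the linearized two-by-two coefficient matrix; the sketch below compares these quantities for two hypothetical parameterizations. The matrix entries are illustrative, not values from the paper.

import numpy as np

def angle_deg(a, b):
    """Angle in degrees between two row vectors (the two energy 'projection curves')."""
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical linearized coefficients for a (water, bone) basis-material parameterization.
A_basis = np.array([[0.20, 0.45],
                    [0.19, 0.30]])
# Hypothetical recombined parameterization (e.g., two monochromatic images).
A_mono = np.array([[1.00, 0.15],
                   [0.10, 1.00]])

for name, A in [("basis material", A_basis), ("monochromatic", A_mono)]:
    print(f"{name}: angle {angle_deg(A[0], A[1]):.1f} deg, condition number {np.linalg.cond(A):.1f}")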
, , Xiaoping Yang
Published: 1 September 2021
Abstract:
Purpose: Fully automatic lumen segmentation in intravascular optical coherence tomography (OCT) images can assist physicians in quickly estimating the health status of vessels. However, OCT images are usually degraded by residual blood, catheter walls, guide wire artifacts, etc., which significantly reduce the quality of segmentation. To achieve accurate lumen segmentation in low-quality images, we propose a novel segmentation algorithm named SPACIAL: Shape Prior generation and geodesic Active Contour Interactive iterAting aLgorithm, which is guided by an adaptively generated shape prior. Methods: In this framework, the active contour evolves under the guidance of the shape prior, while the shape prior is automatically and adaptively generated based on the active contour. The active contour and the shape prior iterate interactively, which generates an adaptive shape prior and consequently leads to accurate segmentation results. In addition, a fast algorithm is introduced to accelerate the segmentation in 3D images. Results: The validity of the model is verified on 3240 images from 12 OCT pullbacks. The experimental results show satisfactory segmentation accuracy and time efficiency: the average Dice coefficient of SPACIAL is 93.6(2.4)%, and it is 5.7 times faster than the classical level set method. Conclusion: The proposed SPACIAL method can quickly and efficiently perform accurate lumen segmentation on low-quality OCT images, which is of great importance for cardiovascular disease diagnosis. The SPACIAL method shows great potential in clinical applications. This article is protected by copyright. All rights reserved
Shenyan Zong, Guofeng Shen, Chang‐Sheng Mei
Published: 1 September 2021
Abstract:
Purpose : The proton resonance frequency (PRF)-based thermometry uses heating-induced phase variations to reconstruct MR temperature maps. However, the measurements of the phase differences may be corrupted by the presence of fat due to its phase being insensitive to heat. The work aims to reconstruct the PRF-based temperature maps for tissues containing fat. Methods : This work proposes a PRF-based method that eliminates the fat's phase contribution by estimating the temperature-insensitive fat vector. A vector in a complex domain represents a given voxel's magnetization from an acquired, complex MR image. In this method, a circle was fit to a time series of vectors acquired from a heated region during a heating experiment. The circle center served as the fat vector, which was then subtracted from the acquired vectors, leaving only the temperature-sensitive vectors for thermal mapping. This work was verified with the gel phantoms of 10%, 15%, and 20% fat content and the ex vivo phantom of porcine abdomen tissue during water-bath heating. It was also tested with an ex vivo porcine tissue during focused ultrasound (FUS) heating. Results : A good agreement was found between the temperature measurements obtained from the proposed method and the optical fiber temperature probe in the verification experiments. In the gel phantoms, the linear regression provided a slope of 0.992 and an R2 of 0.994. The Bland-Altman analysis gave a bias of 0.49 ℃ and a 95% confidence interval of ±1.60 ℃. In the ex vivo tissue, the results of the linear regression and Bland-Altman methods provided a slope of 0.979, an intercept of 0.353, an R2 of 0.947, and a 95% confidence interval of ±3.26 ℃ with a bias of -0.14 ℃. In FUS tests, a temperature discrepancy of up to 28% was observed between the proposed and conventional PRF methods in ex vivo tissues containing fat. Conclusions : The proposed PRF-based method can improve the accuracy of the temperature measurements in tissues with fat, such as breast, abdomen, prostate, and bone marrow. This article is protected by copyright. All rights reserved
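Conventional PRF thermometry converts a phase change into a temperature change via ΔT = Δφ/(2π·α·γ·B0·TE). The sketch below applies that relation after subtracting an estimated temperature-insensitive (fat) vector from the complex signal, with the fat vector taken as the center of a circle fitted to the heated-voxel trajectory as described above; the simple algebraic circle fit and all parameter values are assumptions for illustration.

import numpy as np

GAMMA = 42.576e6      # proton gyromagnetic ratio (Hz/T)
ALPHA = -0.01e-6      # PRF shift of -0.01 ppm/degC, expressed as a fraction per degC

def fit_circle_center(z):
    """Algebraic (Kasa-style) circle fit to complex samples; returns the center as a complex number."""
    x, y = z.real, z.imag
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    return sol[0] + 1j * sol[1]

def delta_T(z_t, z_ref, b0=3.0, te=0.01, fat_vector=0.0):
    """Temperature change from the phase of fat-corrected complex signals."""
    dphi = np.angle((z_t - fat_vector) * np.conj(z_ref - fat_vector))
    return dphi / (2 * np.pi * ALPHA * GAMMA * b0 * te)

# Toy heating curve for one voxel: a water vector rotating about a static fat vector.
true_dT = np.linspace(0, 20, 30)
fat = 0.4 + 0.2j
water = np.exp(1j * 2 * np.pi * ALPHA * GAMMA * 3.0 * 0.01 * true_dT)
z = fat + water
center = fit_circle_center(z)          # estimated fat vector
print(delta_T(z[-1], z[0], fat_vector=center).round(1))   # ~20 degC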
Shadab Momin, Yang Lei, Zhen Tian, Tonghe Wang, Justin Roper, Aparna H. Kesarwala, Kristin Higgins, Jeffrey D. Bradley, Tian Liu,
Published: 1 September 2021
Abstract:
Purpose : Manual delineation on all breathing phases of lung cancer 4D CT image datasets can be challenging, exhaustive, and prone to subjective errors because of both the large number of images in the datasets and variations in the spatial location of tumors secondary to respiratory motion. The purpose of this work is to present a new deep learning (DL)-based framework for fast and accurate segmentation of lung tumors on 4D CT image sets. Methods The proposed DL framework leverages a motion region convolutional neural network (R-CNN). Through integration of global and local motion estimation network architectures, the network can learn both major and minor changes caused by tumor motion. Our network design first extracts tumor motion information by feeding 4D CT images with consecutive phases into an integrated backbone network architecture, locating volumes-of-interest (VOIs) via a region proposal network and removing irrelevant information via a regional convolutional neural network. Extracted motion information is then advanced into the subsequent global and local motion head network architecture to predict corresponding deformation vector fields (DVFs) and further adjust tumor VOIs. Binary masks of tumors are then segmented within adjusted VOIs via a mask head. A self-attention strategy is incorporated in the mask head network to remove any noisy features that might impact segmentation performance. We performed two sets of experiments. In the first experiment, we performed a five-fold cross validation on 20 4D CT datasets, each consisting of 10 breathing phases (i.e., 200 3D image volumes in total). The network performance was also evaluated on an additional 200 unseen 3D image volumes from 20 hold-out 4D CT datasets. In the second experiment, we trained another model with the 40 patients' 4D CT datasets from experiment 1 and evaluated it on an additional 9 unseen patients' 4D CT datasets. The Dice similarity coefficient (DSC), center of mass distance (CMD), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and volume difference (VD) between the manual and segmented tumor contours were computed to evaluate tumor detection and segmentation accuracy. The performance of our method was quantitatively evaluated against four different methods (VoxelMorph, U-Net, the network without global and local networks, and the network without the attention gate strategy) across all evaluation metrics through a paired t-test. Results : The proposed fully automated DL method yielded good overall agreement with the ground truth for contoured tumor volume and segmentation accuracy. Our model yielded significantly better values of evaluation metrics (P < 0.05) than all four competing methods in both experiments. On the hold-out datasets of experiments 1 and 2, our method yielded DSCs of 0.86 and 0.90, compared to 0.82 and 0.87, 0.75 and 0.83, 0.81 and 0.89, and 0.81 and 0.89 yielded by VoxelMorph, U-Net, the network without global and local networks, and the network without the attention gate strategy, respectively. Tumor VD between the ground truth and our method was the smallest, with a value of 0.50, compared to 0.99, 1.01, 0.92, and 0.93 between the ground truth and VoxelMorph, U-Net, the network without global and local networks, and the network without the attention gate strategy, respectively. Conclusion : Our proposed DL framework for tumor segmentation on lung cancer 4D CT datasets demonstrates significant promise for fully automated delineation. The promising results of this work provide impetus for its integration into the 4D CT treatment planning workflow to improve the accuracy and efficiency of lung radiotherapy. This article is protected by copyright. All rights reserved
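Two of the reported evaluation metrics, DSC and volume difference, are straightforward to compute from binary masks; a minimal sketch follows, with random placeholder masks and an assumed voxel size.

import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volume_difference_cc(a, b, voxel_mm3):
    """Absolute volume difference (cc) between two binary masks."""
    return abs(int(a.sum()) - int(b.sum())) * voxel_mm3 / 1000.0

# Toy masks standing in for manual and predicted tumor contours on one breathing phase.
rng = np.random.default_rng(0)
manual = rng.random((32, 32, 32)) > 0.7
auto = manual.copy()
auto[0] = False
print(round(dice(manual, auto), 3), volume_difference_cc(manual, auto, voxel_mm3=1.0 * 1.0 * 2.5))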
, Devon Richtsmeier, Christopher Dydula, James A Day, Kris Iniewski, Magdalena Bazalova‐Carter
Published: 30 August 2021
Abstract:
Purpose: Spectroscopic x-ray detectors (SXDs) are under development for x-ray imaging applications. Recent efforts to extend the detective quantum efficiency (DQE) to SXDs either impose a barrier to experimentation or do not provide a task-independent measure of detector performance. The purpose of this article is to define a task-independent DQE for SXDs that can be measured using a modest extension of established DQE-metrology methods. Methods: We defined a task-independent spectroscopic DQE and performed a simulation study to determine the relationship between the zero-frequency DQE and the ideal-observer signal-to-noise ratio (SNR) of low-frequency soft-tissue, bone, iodine and gadolinium signals. In our simulations, we used calibrated models of the spatio-energetic response of cadmium telluride (CdTe) and cadmium-zinc-telluride (CdZnTe) SXDs. We also measured the zero-frequency DQE of a CdTe detector with two energy bins and of a CdZnTe detector with up to six energy bins for an RQA9 spectrum and compared the results with model predictions. Results: The spectroscopic DQE accounts for spectral distortions, energy-bin-dependent spatial resolution, inter-bin spatial noise correlations, and intra-bin spatial noise correlations; it is mathematically equivalent to the squared signal-to-noise ratio per unit fluence of the generalized least squares estimate of the height of an x-ray impulse in a uniform noisy background. The zero-frequency DQE has a strong linear relationship with the ideal-observer SNR of low-frequency soft-tissue, bone, iodine and gadolinium signals, and can be expressed in terms of the product of the quantum efficiency and a Swank noise factor that accounts for DQE degradation due to (for example) charge sharing and electronic noise. The spectroscopic Swank noise factor of the CdTe detector was measured to be 0.81±0.04 and 0.83±0.04 with and without anti-coincidence logic for charge-sharing suppression, respectively. The spectroscopic Swank noise factor of the CdZnTe detector operated with four energy bins was measured to be 0.82±0.02, which is within 5% of the theoretical value. Conclusions: The spectroscopic DQE defined here (1) is task-independent, (2) can be measured using a modest extension of existing DQE-metrology methods, and (3) is predictive of the ideal-observer SNR of soft-tissue, bone, iodine and gadolinium signals. For CT applications, the combination of charge sharing and electronic noise in CdZnTe spectroscopic detectors will degrade the zero-frequency DQE by 10% to 20% depending on the electronic noise level and pixel size.
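One hedged way to write the zero-frequency quantity described above (symbols introduced here for illustration and not taken from the paper): let $\bar{\mathbf{s}}$ denote the vector of mean energy-bin responses per incident photon and $\boldsymbol{\Sigma}$ the corresponding per-photon bin covariance, including inter-bin correlations. The squared SNR per unit fluence of the generalized least squares estimate of the impulse height is then

$$\mathrm{DQE}(0) \;=\; \bar{\mathbf{s}}^{\mathsf T}\,\boldsymbol{\Sigma}^{-1}\,\bar{\mathbf{s}} \;=\; \alpha\, A_{\mathrm{S}},$$

where the factorization into a quantum efficiency $\alpha$ and a spectroscopic Swank factor $A_{\mathrm{S}}$ corresponds to the decomposition quoted in the abstract.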
, Wendy Kennan, Larry A. DeWerd
Published: 30 August 2021
Abstract:
Purpose Previous publications have described how the standard temperature and pressure correction will overcorrect measurements with a low-energy photon low-dose rate brachytherapy source at low ambient air pressures. To account for this effect, an additional correction factor is applied after the standard temperature and pressure correction. This additional correction is dependent on the source being measured and the chamber it is measured in. Well chamber corrections for two sources and findings regarding aspects that may affect the altitude response of the sources are presented. Methods A purpose-built pressure vessel was constructed previously that could achieve pressures ranging from 74.661 kPa to 106.66 kPa (560 mmHg to 800 mmHg). Three Cesium Blu sources (131Cs) from Isoray Inc. and three CivaDots (103Pd) from CivaTech Oncology Inc. were tested over this pressure range in increments of 2.7 kPa (20 mmHg) in three HDR 1000 Plus chambers, and the Cesium Blu sources were also tested in two IVB 1000 chambers. Both chamber models are air-communicating well-type ionization chambers produced by Standard Imaging Inc. Multiple runs of each source/chamber combination were completed, corrected with the standard temperature and pressure correction, normalized to the result at 101.325 kPa, and averaged with runs of the same combination. The chamber response was also simulated using MCNP6 to validate the experimental results. Results Measurements of both sources in all chambers followed the expected power dependence on ambient pressure seen in previous studies. The Cesium Blu source, however, demonstrated a significant difference in response in the HDR 1000 Plus chamber versus the IVB 1000 chamber. For an altitude correction factor of the form P_A = k_1 * P^(k_2), new coefficients are proposed for both sources for pressure units of kPa and mmHg. The Monte Carlo calculated chamber response agreed with the experimental results within 2% for all sources and chambers at all pressures. Conclusions Altitude correction coefficients for two new low-energy photon low-dose rate brachytherapy sources are provided. The directional dependence of the CivaDot has no bearing on its dependence on pressure; however, the difference in construction materials from other 103Pd sources leads to unique correction coefficients. The higher energy of the Cesium Blu source with respect to 103Pd and 125I sources yields a difference in correction factors depending on which model chamber is used for air-kerma strength calculations. Clinics must be careful to select the correct pair of coefficients for the chamber model they use. This article is protected by copyright. All rights reserved
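For readers implementing such a correction, a minimal Python sketch of applying a power-law altitude correction after the standard temperature-pressure correction is shown below; the coefficient values in the example are placeholders, not the ones proposed in the paper, and reference conditions of 22 degrees C and 101.325 kPa are assumed:

def corrected_reading(raw_reading, temp_c, pressure_kpa, k1, k2,
                      ref_temp_c=22.0, ref_pressure_kpa=101.325):
    """Apply the standard P,T correction, then a source/chamber-specific
    altitude correction of the form P_A = k1 * P**k2 (coefficients from
    the appropriate published table)."""
    p_tp = ((273.2 + temp_c) / (273.2 + ref_temp_c)) * (ref_pressure_kpa / pressure_kpa)
    p_a = k1 * pressure_kpa ** k2
    return raw_reading * p_tp * p_a

# Example with placeholder (NOT published) coefficients:
# corrected_reading(1.0e-9, temp_c=21.5, pressure_kpa=84.0, k1=0.8, k2=0.05)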
Chengzhu Zhang, Yinsheng Li,
Published: 29 August 2021
Abstract:
Background: Sparse-view CT image reconstruction problems encountered in dynamic CT acquisitions are technically challenging. Recently, many deep learning strategies have been proposed to reconstruct CT images from sparse-view angle acquisitions with promising results. However, two fundamental problems with these deep learning reconstruction methods remain to be addressed: 1) limited reconstruction accuracy for individual patients and 2) limited generalizability across patient statistical cohorts. Purpose: The purpose of this work is to address the previously mentioned challenges in current deep learning methods. Methods: A method that combines a deep learning strategy with prior image constrained compressed sensing (PICCS) was developed to address these two problems. In this method, the sparse-view CT data were first reconstructed by the conventional filtered backprojection (FBP) method and then processed by the trained deep neural network to eliminate streaking artifacts. The outputs of the deep learning architecture were then used as the needed prior image in PICCS to reconstruct the image. If the noise level from the PICCS reconstruction is not satisfactory, another light-duty deep neural network can then be used to reduce the noise level. Both extensive numerical simulation data and human subject data have been used to quantitatively and qualitatively assess the performance of the proposed DL-PICCS method in terms of reconstruction accuracy and generalizability. Results: Extensive evaluation studies have demonstrated that: 1) the quantitative reconstruction accuracy of DL-PICCS for individual patients is improved when compared with deep learning methods and compressed sensing (CS)-based methods; 2) the false positive lesion-like structures and false negative missing anatomical structures seen in the deep learning approaches can be effectively eliminated in the DL-PICCS reconstructed images; and 3) DL-PICCS enables a deep learning scheme to relax its working conditions to enhance its generalizability. Conclusions: DL-PICCS offers a promising opportunity to achieve personalized reconstruction with improved reconstruction accuracy and enhanced generalizability. This article is protected by copyright. All rights reserved
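For context, the PICCS step referenced above is commonly posed in the literature as a prior-image-regularized sparsity problem, with the deep-learning output serving as the prior image $x_p$ (generic notation, not lifted from this paper):

$$\hat{x} \;=\; \arg\min_{x}\; \alpha\,\lVert \Psi_1 (x - x_p) \rVert_1 \;+\; (1-\alpha)\,\lVert \Psi_2\, x \rVert_1 \quad \text{subject to}\quad A x = y,$$

where $A$ is the sparse-view system matrix, $y$ the measured projections, $\Psi_{1,2}$ sparsifying transforms, and $\alpha$ balances the prior-image and conventional sparsity terms.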
, Carsten Eickhoff, Krishna Juluru
Published: 29 August 2021
Abstract:
Purpose: Automatic localization of pneumonia on chest X-rays (CXRs) is highly desirable both as an interpretive aid to the radiologist and for timely diagnosis of the disease. However, pneumonia's amorphous appearance on CXRs and the complexity of normal anatomy in the chest present key challenges that hinder accurate localization. Existing studies in this area are either not optimized to preserve spatial information of the abnormality or depend on expensive expert-annotated bounding boxes. We present a novel generative adversarial network (GAN) based machine learning approach for this problem that is weakly supervised (does not require any location annotations), was trained to retain spatial information, and can produce pixel-wise abnormality maps highlighting regions of abnormality (as opposed to bounding boxes around the abnormality). Methods: Our method is based on the Wasserstein GAN framework and is, to the best of our knowledge, the first application of GANs to this problem. Specifically, from an abnormal CXR as input, we generated the corresponding pseudo normal CXR image as output. The pseudo normal CXR is the “hypothetical” normal appearance of the same CXR if it did not have any abnormalities. We surmise that the difference between the pseudo normal and the abnormal CXR highlights the pixels suspected to have pneumonia and hence is our output abnormality map. We trained our algorithm on an “unpaired” dataset of abnormal and normal CXRs and did not require any location annotations such as bounding boxes/segmentations of abnormal regions. Furthermore, we incorporated additional prior knowledge/constraints into the model and showed that they help improve localization performance. We validated the model on a data set consisting of 14,184 CXRs from the RSNA pneumonia detection challenge. Results: We evaluated our methods by comparing the generated abnormality maps with radiologist-annotated bounding boxes using ROC analysis, image similarity metrics such as normalized cross correlation/mutual information, and abnormality detection rate. We also present visual examples of the abnormality maps, covering various scenarios of abnormality occurrence. Results demonstrate the ability to highlight regions of abnormality, with the best method achieving an AUC of 0.77 and a detection rate of 85%. The GAN tended to perform better as prior knowledge/constraints were incorporated into the model. Conclusions: We presented a novel GAN-based approach for localizing pneumonia on chest X-rays that 1) does not require expensive hand-annotated location ground truth; and 2) was trained to produce abnormality maps at the pixel level as opposed to bounding boxes. We demonstrated the efficacy of our methods via quantitative and qualitative results.
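A minimal sketch of the pixel-wise abnormality map described above, assuming a trained generator `generator` (a placeholder name) that maps an abnormal CXR tensor to its pseudo-normal counterpart; this is illustrative only and not the authors' code:

import torch

@torch.no_grad()
def abnormality_map(generator, abnormal_cxr):
    """abnormal_cxr: tensor of shape (1, 1, H, W) with intensities in [0, 1]."""
    pseudo_normal = generator(abnormal_cxr)
    # Pixels the generator had to change the most mark the suspected abnormality.
    diff = (abnormal_cxr - pseudo_normal).abs().squeeze()
    return diff / (diff.max() + 1e-8)   # normalized heat map in [0, 1]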
Yidong Wan, Pengfei Yang, Lei Xu, Jing Yang, Chen Luo, Jing Wang, Feng Chen, Yan Wu, Yun Lu, , et al.
Published: 28 August 2021
Abstract:
Purpose To study and investigate the synergistic benefit of incorporating both conventional handcrafted and learning-based features in disease identification across a wide range of clinical setups. Methods and Materials In this retrospective study, we collected 170/150/209/137 patients with four different disease types associated with the identification objectives of: lymph node metastasis status of gastric cancer (GC), 5-year survival status of patients with high-grade osteosarcoma (HOS), early recurrence status of intrahepatic cholangiocarcinoma (ICC), and pathological grades of pancreatic neuroendocrine tumors (pNETs). CT and MR images were used to derive image features for GC/HOS/pNETs and ICC, respectively. In each study, 67 universal handcrafted features and study-specific features based on the sparse autoencoder (SAE) method were extracted and fed into the subsequent feature selection and learning model to predict the corresponding disease identification. Models using handcrafted features alone, SAE features alone, and hybrid features were optimized and their performance was compared. Prominent features were analyzed both qualitatively and quantitatively to generate study-specific and cross-study insight. In addition to direct performance gain assessment, correlation analysis was performed to assess the complementarity between handcrafted features and SAE features. Results On the independent hold-out test, the handcrafted, SAE, and hybrid feature-based predictions yielded AUCs of 0.761 vs 0.769 vs 0.829 for GC, 0.629 vs 0.740 vs 0.709 for HOS, 0.717 vs 0.718 vs 0.758 for ICC, and 0.739 vs 0.715 vs 0.771 for the pNETs studies, respectively. In three out of the four studies, prediction using the hybrid features yielded the best performance, demonstrating the general benefit of using hybrid features. Prediction with SAE features alone had the best performance in the HOS study, which may be explained by the complexity of HOS prognosis and the possibility of a slight overfit due to higher correlation between handcrafted and SAE features. Conclusion This study demonstrated the general benefit of combining handcrafted and learning-based features in radiomics modelling. It also clearly illustrates the task-specific and data-specific dependency of the performance gain and suggests that while the common methodology of feature combination may be applied across various studies and tasks, study-specific feature selection and model optimization is still necessary to achieve high accuracy and robustness. This article is protected by copyright. All rights reserved
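The hybrid-feature modelling described above amounts to concatenating the two feature families before feature selection and classification; a minimal Python sketch under assumed array shapes and an assumed classifier (not the authors' pipeline) is:

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_hybrid_model(handcrafted, sae, y, k=20):
    """handcrafted: (n_patients, 67); sae: (n_patients, n_sae); y: binary labels."""
    hybrid = np.concatenate([handcrafted, sae], axis=1)
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=k),
                          LogisticRegression(max_iter=1000))
    return model.fit(hybrid, y)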
, Chuangui Cao, Tongtong Li, Yongchun Cao, Zhengxing Man, Haijun Wang
Published: 28 August 2021
Abstract:
Purpose : A self-defined convolutional neural network is developed to automatically classify whole-body scintigraphic images into the classes of concern (i.e., normal, metastasis, arthritis, and thyroid carcinoma), thereby automatically detecting diseases with whole-body bone scintigraphy. Methods : A set of parameter transformation operations is first used to augment the original dataset of whole-body bone scintigraphic images. A hybrid attention mechanism including spatial and channel attention modules is then introduced to develop a deep classification network, Dscint, which consists of eight weight layers, one hybrid attention module, two normalization modules, two fully connected layers, and one softmax layer. Results : Experimental evaluations conducted on a set of whole-body scintigraphic images show that the proposed deep classification network, Dscint, performs well for automated detection of diseases by classifying the images of concern, achieving an accuracy, precision, recall, specificity, and F-1 score of 0.9801, 0.9795, 0.9791, 0.9933, and 0.9792, respectively, on the test data in the augmented dataset. A comparative analysis of Dscint and several classical deep classification networks (i.e., AlexNet, ResNet, VGGNet, DenseNet, and Inception-v4) reveals that our self-defined network, Dscint, performs best at classifying whole-body scintigraphic images on the same dataset. Conclusions : The self-defined deep classification network, Dscint, can be utilized to automatically determine whether a whole-body scintigraphic image is normal or contains diseases of concern. Specifically, Dscint performs better on images with lesions in relatively fixed locations, such as thyroid carcinoma, than on those with lesions occurring in variable locations in bone tissue. This article is protected by copyright. All rights reserved
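The channel-attention half of a hybrid attention module can be illustrated with a generic squeeze-and-excitation-style block; this is a common construction offered here as a sketch, not necessarily the exact Dscint module:

import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweights feature channels using globally pooled statistics."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights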
, Jana G. Delfino, Kalina V. Jordanova, Megan E. Poorman, Prathyush Chirra, Akshay S. Chaudhari, Bettina Baessler, Jessica Winfield, Satish E. Viswanath, Nandita M. Desouza
Published: 28 August 2021
Abstract:
Image quantitation methods, including quantitative MRI, multiparametric MRI, and radiomics, offer great promise for clinical use. However, many of these methods have limited clinical adoption, in part due to issues of generalizability, i.e., the ability to translate methods and models across institutions. Researchers can assess generalizability through measurement of repeatability and reproducibility, thus quantifying different aspects of measurement variance. In this article, we review the challenges to ensuring repeatability and reproducibility of image quantitation methods and present strategies to minimize their variance to enable wider clinical implementation. We present possible solutions for achieving clinically acceptable performance of image quantitation methods and briefly discuss the impact of minimizing variance and achieving generalizability on clinical implementation and adoption. This article is protected by copyright. All rights reserved
Min Lu, , Guancong Liu,
Published: 27 August 2021
Abstract:
Purpose Ultra-Wide Band (UWB) microwave breast cancer detection is a promising new technology for routine physical examination and home monitoring. Existing microwave imaging algorithms for breast tumor detection are complex and their performance is still not ideal, owing to the heterogeneity of breast tissue and the skin and fibroglandular tissue reflections in the backscatter signals. This study aims to develop a machine learning method to accurately locate breast tumors. Methods A microwave-based breast tumor localization method is proposed using time-frequency feature extraction and neural network technology. Firstly, the received microwave array signals are converted into representative and compact features by a 4-level Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). Then, a Genetic Algorithm-Neural Network (GA-NN) is developed to adaptively tune the hyperparameters of the neural network. The neural network embedded in the GA-NN algorithm has a 4-layer architecture, and 10-fold cross-validation is performed. Using the trained neural network, the tumor localization performance is evaluated on four datasets created by the FDTD simulation method from 2-D MRI-derived breast models with varying tissue density, shape, and size. Each dataset consists of 1000 backscatter signals with different tumor positions, in which the ratio of training set to test set is 9:1. In order to verify the generalizability and scalability of the proposed method, the tumor localization performance is also tested on a 3-D breast model. Results For these 2-D breast models with unknown tumor locations, the evaluation results show that the proposed method has small location errors of 0.6076 mm, 3.0813 mm, 2.0798 mm, and 3.2988 mm, and high accuracies of 99%, 80%, 94%, and 85%, respectively. Furthermore, the location error and the prediction accuracy for the 3-D breast model are 3.3896 mm and 81%. Conclusions These evaluation results demonstrate that the proposed machine learning method is effective and accurate for microwave breast tumor localization. The traditional microwave-based breast cancer detection approach is to reconstruct the entire breast image to highlight the tumor. Compared with the traditional approach, our proposed method can directly obtain the breast tumor location by applying a neural network to the received microwave array signals, circumventing any complicated image reconstruction processing. This article is protected by copyright. All rights reserved
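The feature-extraction front end described above (4-level DWT followed by PCA) can be sketched as follows; the wavelet family and component count are illustrative assumptions, not values taken from the paper:

import numpy as np
import pywt
from sklearn.decomposition import PCA

def dwt_features(signals, wavelet="db4", level=4):
    """signals: array of shape (n_samples, n_time_points); returns stacked DWT coefficients."""
    feats = [np.concatenate(pywt.wavedec(s, wavelet, level=level)) for s in signals]
    return np.vstack(feats)

def compress(features, n_components=30):
    """Reduce the DWT feature vectors to a compact representation with PCA."""
    pca = PCA(n_components=n_components)
    return pca.fit_transform(features), pca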
Takaaki Sugino, Holger R. Roth, Masahiro Oda, Taichi Kin, Nobuhito Saito, Yoshikazu Nakajima,
Published: 27 August 2021
Abstract:
Purpose For the planning and navigation of neurosurgery, we have developed a fully convolutional network (FCN)-based method for brain structure segmentation on magnetic resonance (MR) images. The capability of an FCN depends on the quality of the training data (i.e., raw data and annotation data) and network architectures. The improvement of annotation quality is a significant concern because it requires much labor for labeling organ regions. To address this problem, we focus on skip connection architectures and reveal which skip connections are effective for training FCNs using sparsely annotated brain images. Methods We tested 2D FCN architectures with four different types of skip connections. The first was a U-Net architecture with horizontal skip connections that transfer feature maps at the same scale from the encoder to the decoder. The second was a U-Net++ architecture with dense convolution layers and dense horizontal skip connections. The third was a full-resolution residual network (FRRN) architecture with vertical skip connections that pass feature maps between each downsampled scale path and the full-resolution scale path. The last one was a hybrid architecture with a combination of horizontal and vertical skip connections. We validated the effect of skip connections on medical image segmentation from sparse annotation based on these four FCN architectures, which were trained under the same conditions. Results For multi-class segmentation of the cerebrum, cerebellum, brainstem, and blood vessels from sparsely annotated MR images, we performed a comparative evaluation of segmentation performance among the above four FCN approaches: U-Net, U-Net++, FRRN, and hybrid architectures. The experimental results show that the horizontal skip connections in the U-Net architectures were effective for the segmentation of larger-sized objects, while the vertical skip connections in the FRRN architecture improved the segmentation of smaller-sized objects. The hybrid architecture with both horizontal and vertical skip connections achieved the best results of the four FCN architectures. We then performed an ablation study to explore which skip connections in the FRRN architecture contributed to the improved segmentation of blood vessels. In the ablation study, we compared the segmentation performance between architectures with a horizontal path (HP), a horizontal path and vertical up paths (HP+VUPs), a horizontal path and vertical down paths (HP+VDPs), and a horizontal path and vertical up and down paths (FRRN). We found that the vertical up paths were effective in improving the segmentation of smaller-sized objects. Conclusions This paper investigated which skip connection architectures were effective for multi-class brain segmentation from sparse annotation. Consequently, using vertical skip connections with horizontal skip connections allowed FCNs to improve segmentation performance. This article is protected by copyright. All rights reserved
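To make the distinction above concrete: a horizontal (U-Net-style) skip connection concatenates encoder features with same-scale decoder features, whereas a vertical (FRRN-style) skip passes information between a pooled stream and a full-resolution stream. A toy sketch (not the authors' architectures, and assuming matching channel counts for the vertical case) is:

import torch
import torch.nn.functional as F

def horizontal_skip(decoder_feat, encoder_feat):
    # Same spatial scale: concatenate along the channel axis (U-Net style).
    return torch.cat([decoder_feat, encoder_feat], dim=1)

def vertical_skip(full_res_feat, pooled_feat):
    # Upsample the pooled stream and add it to the full-resolution stream
    # (assumes both streams have the same number of channels).
    up = F.interpolate(pooled_feat, size=full_res_feat.shape[2:],
                       mode="bilinear", align_corners=False)
    return full_res_feat + up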
Yuhang Sun, Qiaoyun Zhu, Meiyan Huang, Dinggang Shen, Yujia Zhou, Qianjin Feng
Published: 27 August 2021
Abstract:
Purpose : Dynamic contrast-enhanced MRI (DCE-MRI) registration is a challenging task because of the marked intensity changes caused by contrast agent injection. Unrealistic deformations usually occur when traditional intensity-based algorithms are used. To alleviate the effect of the contrast agent on registration, we proposed a DCE-MRI registration strategy and investigated its registration performance on clinical DCE-MRI time series of the liver. Method We reconstructed the time-intensity curves of the contrast agent through sparse representation with a predefined dictionary whose columns were the time-intensity curves with high correlations with respect to a preselected contrast agent curve. After reshaping the 1D-reconstructed contrast agent time-intensity curves into a 4D contrast agent time series, we aligned the original time series to the reconstructed contrast agent time series through a traditional free-form deformation (FFD) registration scheme combined with a residual complexity (RC) similarity measure and an iterative registration strategy. This study included the DCE-MRI time series of 20 patients with liver cancer. Results Qualitatively, the time-cut images and subtraction images of the different registration methods did not obviously differ. Quantitatively, the mean (standard deviation) of the temporal intensity smoothness over all patients achieved 54.910 (18.819), 54.609 (18.859), 53.391 (19.031) in FFD RC, RDDR, Zhou et al.'s method and the proposed method, respectively. The mean (standard deviation) of the changes in lesion volume was 0.985 (0.041), 0.983 (0.041), 0.981 (0.046) and 0.989 (0.036) in FFD RC, RDDR, Zhou et al.'s method and the proposed method. Conclusion Our proposed method would be an effective registration strategy for DCE-MRI time series, and its performance was comparable with that of three advanced registration methods. This article is protected by copyright. All rights reserved
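The sparse-representation step described above can be illustrated with a generic sparse-coding call against a predefined dictionary; the solver and sparsity level here are illustrative choices, not necessarily those used by the authors:

import numpy as np
from sklearn.linear_model import orthogonal_mp

def reconstruct_curve(curve, dictionary, n_nonzero=5):
    """curve: (n_phases,) time-intensity curve; dictionary: (n_phases, n_atoms)
    whose columns are reference contrast-agent curves. Returns the sparse
    reconstruction of the curve in the dictionary's span."""
    coefs = orthogonal_mp(dictionary, curve, n_nonzero_coefs=n_nonzero)
    return dictionary @ coefs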
, Wei Zhang, Ibrahim Oraiqat, Dale W Litzenberg, Kwok Leung Lam, Kyle Cuneo, Jean M Moran‐Ebm, Paul L. Carson, Xueding Wang, Shaun D Clarke, et al.
Published: 25 August 2021
Abstract:
Purpose Electron-based ultra-high dose rate radiation therapy (UHDR-RT), also known as FLASH-RT, has shown the ability to improve the therapeutic index in comparison to conventional radiotherapy (CONV-RT) through increased sparing of normal tissue. However, the extremely high dose rates in UHDR-RT have raised the need for accurate real-time dosimetry tools. This work aims to demonstrate the potential of the emerging technology of ionizing radiation acoustic imaging (iRAI) through simulation studies and to investigate its characteristics as a promising relative in vivo dosimetry tool for UHDR-RT. Methods The detection of induced acoustic waves following a single UHDR pulse of a modified 6 MeV 21EX Varian Clinac in a uniform, brain-tissue-equivalent porcine gelatin phantom was simulated for an ideal ultrasound transducer. The full 3D dose distributions in the phantom for a 1 × 1 cm2 field were simulated using EGSnrc (BEAMnrc/DOSXYZnrc) Monte Carlo (MC) codes. The relative dosimetry simulations were verified with experimental dose measurements using Gafchromic films. The spatial dose distribution was converted into an initial pressure source distribution using the medium-dependent dose-pressure relation. The MATLAB-based toolbox k-Wave was then used to model the propagation of acoustic waves through the phantom and perform time-reversal (TR) based image reconstruction. The effect of the various linear accelerator (linac) operating parameters, including linac pulse duration and pulse repetition rate (frequency), was investigated as well. Results The Monte Carlo dose simulation results agreed with the film measurements, with approximately 5% relative error in the central profile region (up to the 80% dose level) and a local relative error of <6% for the percentage depth dose. The iRAI-based FWHM of the radiation beam was within approximately 3 mm of the MC-simulated beam FWHM at the beam entrance. The real-time pressure signal changes agreed with the dose changes, demonstrating the capability of iRAI to predict the beam position. The response of iRAI to temporal changes in the linac operating parameters was tested through 3D simulations on a dose-per-pulse basis, as expected theoretically from the pressure-dose proportionality. The pressure signal amplitude obtained through 2D simulations was proportional to the dose per pulse. The instantaneous pressure signal amplitude decreases as the linac pulse duration increases, as predicted from the pressure wave generation equations, such that the shorter the linac pulse, the higher the signal and the better the temporal (and spatial) resolution of iRAI. The effect of longer linac pulse durations on the spatial resolution of the 3D reconstructed iRAI images was corrected for through linac pulse deconvolution. This correction improved the passing rate of the 1%/1 mm gamma test criteria, between the pressure-reconstructed and dosimetric beam characteristics, to as high as 98%. Conclusions A full simulation workflow was developed for testing the effectiveness of iRAI as a promising relative dosimetry tool for UHDR-RT. iRAI has shown the advantage of 3D dose mapping through dose-signal linearity and hence has the potential to be a useful dosimeter for depth-dose measurement and beam localization, and potentially for in vivo dosimetry in UHDR-RT. This article is protected by copyright. All rights reserved
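The dose-to-initial-pressure conversion mentioned above follows the standard radiation-acoustics relation, written here in its generic form (the medium-specific values are not quoted from the paper):

$$p_0(\mathbf{r}) \;=\; \Gamma(\mathbf{r})\,\rho(\mathbf{r})\,D(\mathbf{r}),$$

where $D$ is the dose deposited by a single linac pulse, $\rho$ the mass density, and $\Gamma$ the dimensionless Gr\"uneisen parameter of the medium; this $p_0$ map is the source term passed to the acoustic propagation model.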
Chengpeng Wu, Yuxiang Xing, Li Zhang, Zhiqiang Chen, Xiaohua Zhu, Xi Zhang, Hewei Gao
Published: 25 August 2021
Abstract:
Purpose: X-ray phase-contrast imaging (XPCI) can provide multiple contrasts with great potential for clinical and industrial applications, including conventional attenuation, phase contrast and dark field. Grating-based imaging (GBI) and edge-illumination (EI) are two promising types of XPCI, as conventional x-ray sources can be directly utilized. For GBI and EI systems, phase-stepping acquisition with multiple exposures at a constant fluence is usually adopted in the literature. This work, however, attempts to challenge this constant-fluence concept during the phase-stepping process and proposes a fluence adaptation mechanism for dose reduction. Method: Given the importance of patient radiation dose for clinical applications, numerous studies have tried to reduce patient dose in XPCI by altering imaging system designs, data acquisition and information retrieval. Recently, analytic multi-order moment analysis has been proposed to improve the computing efficiency. In these algorithms, multiple contrasts can be calculated by summing the phase-stepping curves (PSCs) weighted by kernel functions, which suggests that the raw data at different steps contribute differently to the noise in the retrieved contrasts. Therefore, it is possible to improve the noise performance by directly adjusting the fluence distribution during the phase-stepping process. Based on analytic retrieval formulas and a Gaussian noise model for the detected signals, we derived an optimal adaptive fluence distribution, which is proportional to the absolute weighting kernel functions and the square root of the original sample PSCs acquired under constant fluence. Considering that the original sample PSC might be unavailable, we proposed two practical forms for GBI and EI systems, which are also able to reduce the contrast noise when compared with the constant fluence distribution. Since the kernel functions are target-contrast dependent, our proposed fluence adaptation mechanism provides a way of realizing a contrast-based dose optimization while keeping the same noise level. Results: To validate our analyses, simulations and experiments were conducted for GBI and EI systems. Simulated results demonstrate that the dose reduction ratio between our proposed fluence distributions and the typical constant one can be about 20% for the phase contrast, which is consistent with our theoretical predictions. Although the experimental noise reduction ratios are a little smaller than the theoretical ones, the low-dose experiments show better noise performance with our proposed method. Our simulated results also provide the effective ranges of the PSC parameters, such as the visibility in GBI and the standard deviation and mean value in EI, providing guidance for the use of our proposed approach in practice. Conclusions: In this paper, we propose a fluence adaptation mechanism for contrast-based dose optimization in XPCI, which can be applied to GBI and EI systems. Our proposed method explores a new direction for dose reduction, and may also be further extended to other types of XPCI systems and information retrieval algorithms. This article is protected by copyright. All rights reserved
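In symbols, the optimal adaptive allocation stated above (notation introduced here for illustration) distributes the exposure at phase step $k$ as

$$N_k \;\propto\; \lvert w_k \rvert \, \sqrt{S_k},$$

where $w_k$ is the retrieval kernel weight applied to step $k$ for the target contrast and $S_k$ the corresponding sample phase-stepping-curve value acquired under constant fluence, with the proportionality constant fixed by the total fluence (dose) budget.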
Zhicheng Zhang, Xiaokun Liang, Wei Zhao, Lei Xing
Published: 25 August 2021
Abstract:
Computed tomography (CT) plays a vital role in medical diagnosis, assessment, and therapy planning. In clinical practice, concerns about increased X-ray radiation exposure are attracting more and more attention. To lower the X-ray radiation, low-dose CT (LDCT) has been widely adopted in certain scenarios, although it degrades CT image quality. In this paper, we propose a deep learning-based method that can train denoising neural networks without any clean data. In this work, for 3D thin-slice LDCT scanning, we first derive an unsupervised loss function that is equivalent to a supervised loss function with paired noisy and clean samples when the noise in different slices from a single scan is uncorrelated and zero-mean. Then, we train the denoising neural network to simultaneously map one noisy LDCT image to its two adjacent LDCT images within a single 3D thin-slice LDCT scan. In essence, under some latent assumptions, we propose an unsupervised loss function to train the denoising neural network in an unsupervised manner, which exploits the similarity between adjacent CT slices in 3D thin-slice LDCT. Further experiments on the Mayo LDCT dataset and a realistic pig head were carried out. In the experiments using the Mayo LDCT dataset, our unsupervised method obtained performance comparable to that of the supervised baseline. With the realistic pig head, our method achieved the best performance at different noise levels compared with all the other methods, demonstrating the superiority and robustness of the proposed Noise2Context. In this work, we present a generalizable LDCT image denoising method that requires no clean data. As a result, our method not only dispenses with complex artificial image priors but also with the need for large amounts of paired high-quality training data.
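A minimal sketch of the adjacent-slice training objective described above, assuming a denoising network `net` and a batch of three consecutive noisy slices from the same scan; this illustrates the idea and is not the authors' implementation:

import torch.nn.functional as F

def noise2context_loss(net, slice_prev, slice_center, slice_next):
    """All inputs: tensors of shape (B, 1, H, W) from the same thin-slice LDCT scan."""
    denoised = net(slice_center)
    # With zero-mean noise that is uncorrelated across slices, regressing the
    # center slice onto its neighbors approximates supervised training on clean targets.
    return F.mse_loss(denoised, slice_prev) + F.mse_loss(denoised, slice_next)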
, Gabriel Ramos‐Llordén, Raúl San José Estépar
Published: 25 August 2021
Abstract:
Purpose: To provide a methodology that removes the spatial variability of in-plane resolution by using different CT reconstructions. The methodology does not require any training, sinogram, or specific reconstruction method. Methods: The methodology is formulated as a reconstruction problem. The desired sharp image is modeled as an unobservable variable to be estimated from an arbitrary number of observations with spatially variant resolution. The methodology comprises three steps: 1) density harmonization, which removes the density variability across reconstructions; 2) PSF estimation, which estimates a spatially variant PSF with arbitrary shape; and 3) deconvolution, which is formulated as a regularized least squares problem. The assessment was performed with CT scans of phantoms acquired with three different Siemens scanners (Definition AS, Definition AS+, Drive). Four low-dose (LD) acquisitions reconstructed with backprojection and iterative methods were used for the resolution harmonization. A sharp, high-dose (HD) reconstruction was used as a validation reference. The different factors affecting the in-plane resolution (radial, angular, and longitudinal) were studied with regression analysis of the edge decay (between 10 and 90 percent of the edge spread function (ESF) amplitude). Results: The results showed that the in-plane resolution improves remarkably and the spatial variability is substantially reduced without compromising the noise characteristics. The modulation transfer function (MTF) also confirmed a pronounced increase in resolution. The resolution improvement was also tested by measuring the wall thickness of tubes simulating airways. In all scanners, the resolution harmonization obtained better performance than the HD, sharp reconstruction used as a reference (by up to 50 percentage points). The methodology was also evaluated in clinical scans, achieving a noise reduction and a clear improvement in thin-layered structures. The estimated ESF and MTF confirmed the resolution improvement. Conclusion: We propose a versatile methodology to reduce the spatial variability of in-plane resolution in CT scans by leveraging the different reconstructions available in clinical studies. The methodology does not require any sinogram, training, or specific reconstruction, and it is not limited to a fixed number of input images. Therefore, it can be easily adopted in multicenter studies and clinical practice. The results obtained with our resolution harmonization methodology evidence its suitability for reducing the spatially variant in-plane resolution in clinical CT scans without compromising the reconstruction's noise characteristics. We believe that the resolution increase achieved by our methodology may contribute to more accurate and reliable measurements of small structures such as vasculature, airways, and wall thickness.
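Step 3 above can be written as a generic regularized least-squares problem (notation introduced here for illustration, not taken from the paper):

$$\hat{x} \;=\; \arg\min_{x}\; \sum_{i=1}^{N} \bigl\lVert y_i - H_i\,x \bigr\rVert_2^2 \;+\; \lambda\, R(x),$$

where $y_i$ are the density-harmonized reconstructions, $H_i$ the corresponding spatially variant PSF operators estimated in step 2, $R$ a smoothness regularizer, and $\lambda$ its weight.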
Wenzheng Feng, Mark J. Rivard, Elizabeth M. Carey, Robert A. Hearn, Sujatha Pai, Ravinder Nath, YongBok Kim, Cynthia L. Thomason, Dale E. Boyce,
Published: 25 August 2021
Abstract:
Mesh brachytherapy is a special type of permanent brachytherapy implant: it uses low-energy radioactive seeds in an absorbable mesh that is sutured onto the tumor bed immediately after a surgical resection. This treatment offers low additional risk to the patient as the implant procedure is carried out as part of the tumor resection surgery. In mesh brachytherapy, the tumor bed is identified through direct visual evaluation during surgery or, following surgery, through radiographic imaging of radio-opaque markers within the sources located on the tumor bed. Thus, mesh brachytherapy is customizable for individual patients. Mesh brachytherapy is an intraoperative procedure involving mesh implantation and potentially real-time treatment planning while the patient is under general anesthesia. The procedure is multidisciplinary and requires the complex coordination of multiple medical specialties. The pre-implant dosimetry calculation can be performed days beforehand or expediently in the operating room with the use of lookup tables. In this report, the American Association of Physicists in Medicine (AAPM) guidelines are presented on the physics aspects of mesh brachytherapy. The report describes the selection of radioactive sources, design and preparation of the mesh, pre-implant treatment planning using a Task Group (TG) 43-based lookup table, and post-implant dosimetric evaluation using the TG-43 formalism or advanced algorithms. It introduces quality metrics for the mesh implant and presents an example of a risk analysis based on the AAPM TG-100 report. Recommendations include that the pre-implant treatment plan be based upon the TG-43 dose calculation formalism with the point source approximation, and that the post-implant dosimetric evaluation be performed using either the TG-43 approach or, preferably, the newer model-based algorithms (viz., the TG-186 report) if available, to account for the effects of material heterogeneities. To comply with the written directive and regulations governing the medical use of radionuclides, this report recommends that the prescription and written directive be based upon the implanted source strength, not target-volume dose coverage. The dose delivered by mesh implants can vary and depends upon multiple factors, such as post-surgery recovery and distortions in the implant shape over time. For the consistency necessary for outcome analysis, prescriptions based on the lookup table (with selection of the intended dose, depth, and treatment area) are recommended, but the use of more advanced techniques that can account for real situations, such as material heterogeneities, implant geometric perturbations, and changes in source orientations, is encouraged in the dosimetric evaluation. The clinical workflow, logistics, and precautions are also presented. This article is protected by copyright. All rights reserved
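For reference, the TG-43 point-source approximation recommended above for pre-implant planning takes the familiar form (symbols as defined in the TG-43 report; quoted here as background, not from this report):

$$\dot{D}(r) \;=\; S_K\,\Lambda\,\left(\frac{r_0}{r}\right)^{2} g_P(r)\,\bar{\phi}_{an}(r),$$

with air-kerma strength $S_K$, dose-rate constant $\Lambda$, reference distance $r_0 = 1$ cm, radial dose function $g_P(r)$, and the 1D anisotropy function $\bar{\phi}_{an}(r)$.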
, Sarvesh Periyasamy, Colin Longhurst, Matthew J. McLachlan, Joseph F. Whitehead, Michael A. Speidel, Paul F. Laeseke
Published: 25 August 2021
Abstract:
Purpose : During hepatic arterial interventions, catheter or guidewire position is determined by referencing or overlaying a previously acquired static vessel roadmap. Respiratory motion leads to significant discrepancies between the true position and configuration of the hepatic arteries and the roadmap, which makes navigation and accurate catheter placement more challenging and time-consuming. The purpose of this work was to develop a dynamic respiratory motion compensated device guidance system and to evaluate its accuracy and real-time performance in an in vivo porcine liver model. Methods : The proposed device navigation system estimates a respiratory motion model for the hepatic vasculature from pre-navigational x-ray image sequences acquired under free breathing conditions with and without contrast enhancement. During device navigation, the respiratory state is tracked based on live fluoroscopic images and then used to estimate vessel deformation based on the previously determined motion model. Additionally, guidewires and catheters are segmented from the fluoroscopic images using a deep learning approach. The vessel and device information are combined and shown in a real-time display. Two different display modes are evaluated within this work: 1) a compensated roadmap display, where the vessel roadmap is shown moving with the respiratory motion; and 2) an inverse compensated device display, where the device representation is compensated for respiratory motion and overlaid on a static roadmap. A porcine study including 7 animals was performed to evaluate the accuracy and real-time performance of the system. In each pig, a guidewire and microcatheter with a radiopaque marker were navigated to distal branches of the hepatic arteries under fluoroscopic guidance. Motion compensated displays were generated showing real-time overlays of the vessel roadmap and intravascular devices. The accuracy of the motion model was estimated by comparing the estimated vessel motion to the motion of the x-ray visible marker. Results : The median (minimum, maximum) error across animals was 1.08 mm (0.92 mm, 1.87 mm). Across different respiratory states and vessel branch levels, the odds of the guidewire tip being shown in the correct vessel branch were significantly higher (odds ratio = 3.12, p<0.0001) for motion compensated displays compared to a non-compensated display (median probabilities of 86% and 69%, respectively). The average processing time per frame was 17 ms. Conclusions : The proposed respiratory motion compensated device guidance system increased the accuracy of the displayed device position relative to the hepatic vasculature. Additionally, the provided display modes combine both vessel and device information and do not require mental integration of different displays by the physician. The processing times were well within the range of conventional clinical frame rates. This article is protected by copyright. All rights reserved
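As a generic illustration of the compensated-roadmap display (not the authors' motion model or code), a static roadmap can be backward-warped by the deformation estimated for the current respiratory state:

import numpy as np
from scipy.ndimage import map_coordinates

def warp_roadmap(roadmap, displacement):
    """roadmap: 2D array; displacement: (2, H, W) field (dy, dx) in pixels estimated
    for the current respiratory state. Backward warping with bilinear interpolation."""
    h, w = roadmap.shape
    grid_y, grid_x = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([grid_y + displacement[0], grid_x + displacement[1]])
    return map_coordinates(roadmap, coords, order=1, mode="nearest")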
, Kevin M. Prise, Karl T. Butterworth, Pierre Montay‐Gruel, Vincent Favaudon
Published: 23 August 2021
Abstract:
Radiation exposures at ultra-high dose rates (UHDR) at several orders of magnitude greater than in current clinical radiotherapy have been shown to manifest differential radiobiological responses compared to conventional dose rates (CONV). This has led to studies investigating the application of UHDR for therapeutic advantage (FLASH-RT) which have gained significant interest since the initial discovery in 2014 that demonstrated reduced lung toxicity with equivalent levels of tumour control compared with conventional dose-rate radiotherapy. Many subsequent studies have demonstrated the potential protective role of FLASH-RT in normal tissues, yet the underlying molecular and cellular mechanisms of the FLASH effect remain to be fully elucidated. Here, we summarise the current evidence of the FLASH effect and review FLASH-RT studies performed in preclinical models of normal tissue response. To critically examine the underlying biological mechanisms of responses to UHDR radiation exposures, we evaluate in vitro studies performed with normal and tumour cells. Differential responses to UHDR vs CONV irradiation recurrently involve reduced inflammatory processes and differential expression of pro- and anti-inflammatory genes. In addition, frequently reduced levels of DNA damage or misrepair products are seen after UHDR irradiation. So far, it is not clear what signal elicits these differential responses, but there are indications for involvement of reactive species. Different susceptibility to FLASH effects observed between normal and tumour cells may result from altered metabolic and detoxification pathways and/or repair pathways used by tumour cells. We summarize the current theories that may explain the FLASH effect and highlight important research questions which are key to a better mechanistic understanding and, thus, the future implementation of FLASH-RT in the clinic. This article is protected by copyright. All rights reserved
Lucas W. Remedios, Sneha Lingam, Samuel W. Remedios, Riqiang Gao, Stephen W. Clark, Larry T. Davis, Bennett A. Landman
Published: 22 August 2021
Abstract:
Artificial intelligence diagnosis and triage of large vessel occlusion may quicken clinical response for a subset of time-sensitive acute ischemic stroke patients, improving outcomes. Differences in architectural elements within data-driven convolutional neural network (CNN) models impact performance. Foreknowledge of effective model architectural elements for domain-specific problems can narrow the search for candidate models and inform strategic model design and adaptation to optimize performance on available data. Here, we study CNN architectures with a range of learnable parameters and which span inclusion of architectural elements, such as parallel processing branches and residual connections with varying methods of recombining residual information. We compare five CNNs: ResNet-50, DenseNet-121, EfficientNet-B0, PhiNet, and an Inception module-based network, on a computed tomography angiography large vessel occlusion detection task. The models were trained and preliminarily evaluated with 10-fold cross-validation on preprocessed scans (n=240). An ablation study was performed on PhiNet due to its superior cross-validated test performance across accuracy, precision, recall, specificity, and F1 score. The final evaluation of all models was performed on a withheld external validation set (n=60), and these predictions were subsequently calibrated with sigmoid curves. Uncalibrated results on the withheld external validation set show that DenseNet-121 had the best average performance on accuracy, precision, recall, specificity, and F1 score. After calibration, DenseNet-121 maintained superior performance on all metrics except recall. The number of learnable parameters in our five models and the best-ablated PhiNet was directly related to cross-validated test performance: the smaller the model, the better. However, this pattern did not hold when looking at generalization on the withheld external validation set. DenseNet-121 generalized the best; we posit this was due to its heavy use of residual connections utilizing concatenation, which causes feature maps from earlier layers to be used deeper in the network, while aiding in gradient flow and regularization.
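The sigmoid calibration of withheld-set predictions mentioned above can be sketched as fitting a one-dimensional logistic regression on the uncalibrated scores (a generic Platt-style recipe, not necessarily the authors' exact procedure):

import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate_sigmoid(uncalibrated_scores, labels):
    """Fit p(y=1 | s) = sigmoid(a*s + b) on held-out scores; return a calibration map."""
    lr = LogisticRegression()
    lr.fit(np.asarray(uncalibrated_scores).reshape(-1, 1), labels)
    return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]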
, Cyril Riddell, Yves Trousset, Emilie Chouzenoux, Jean‐Christophe Pesquet
Published: 22 August 2021
Abstract:
Purpose: Discretizing tomographic forward and backward operations is a crucial step in the design of model-based reconstruction algorithms. Standard projectors rely on linear interpolation, whose adjoint introduces discretization errors during backprojection. More advanced techniques are obtained through geometric footprint models that may present a high computational cost and an inner logic not suitable for implementation on massively parallel computing architectures. In this work, we take a fresh look at the discretization of resampling transforms and focus on the issue of magnification-induced local sampling variations by introducing a new “magnification-driven” interpolation approach for tomography. Methods: Starting from the existing literature on spline interpolation for magnification purposes, we provide a mathematical formulation for discretizing a one-dimensional homography. We then extend our approach to two-dimensional representations in order to account for the geometry of cone-beam computed tomography with a flat-panel detector. Our new method relies on the decomposition of signals onto a space generated by non-uniform B-splines so as to capture the spatially varying magnification that locally affects sampling. We propose various degrees of approximation for a fast implementation of the proposed approach. Our framework allows us to define a novel family of pairs of projectors and backprojectors parameterized by the order of the employed B-splines. The state-of-the-art distance-driven interpolation turns out to fit into this family, and we therefore provide new insight into, and a new computational scheme for, this method. The question of data resampling at the detector level is handled and integrated with reconstruction in a single framework. Results: Experiments on both synthetic data and real data from a quality assurance phantom were performed to validate our approach. We show experimentally that our approximate implementations are associated with a reduced complexity while achieving near-optimal performance. In contrast with linear interpolation, B-splines guarantee full usage of all data samples, and thus of the X-ray dose, leading to more uniform noise properties. In addition, higher-order B-splines allow analytical and iterative reconstruction to reach higher resolution. These benefits appear more significant when downsampling frames acquired by X-ray flat-panel detectors with small pixels. Conclusions: “Magnification-driven” B-spline interpolation is shown to provide high-accuracy projection operators with good-quality adjoints for iterative reconstruction. It equally applies to backprojection for analytical reconstruction and to detector data downsampling.
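One simplified way to render the core idea above, decomposing the signal onto a spline space whose knot spacing follows the local magnification (generic notation, degree-$n$ splines; a sketch, not the paper's exact construction), is

$$f(x) \;=\; \sum_{k} c_k\, \beta^{n}\!\left(\frac{x - x_k}{h_k}\right),$$

where the knots $x_k$ and local widths $h_k$ track the magnification-induced sampling at each detector or voxel position, and the coefficients $c_k$ are obtained from the samples by B-spline prefiltering.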
Timothy Wong, Nicola Schieda, Paul Sathiadoss, Mohammad Haroon, Jorge Abreu‐Gomez,
Published: 21 August 2021
Abstract:
Purpose Accurate detection of transition zone (TZ) prostate cancer (PCa) on magnetic resonance imaging (MRI) remains challenging using clinical subjective assessment due to overlap between PCa and benign prostatic hyperplasia (BPH). The objective of this paper is to describe a deep-learning-based framework for fully automated detection of PCa in the TZ using T2-weighted (T2W) and apparent diffusion coefficient (ADC) map MR images. Method This was a single-center IRB-approved cross-sectional study of men undergoing 3T MRI on two systems. The dataset consisted of 196 patients (103 with and 93 without clinically significant [Grade Group 2 or higher] TZ PCa) to train and test our proposed methodology, with an additional 168 patients with peripheral zone PCa used only for training. We proposed an ensemble of classifiers in which multiple U-Net-based models are designed for prediction of TZ PCa location on ADC map MR images, with initial automated segmentation of the prostate to guide detection. We compared the accuracy of ADC alone, T2W alone, and combined ADC+T2W MRI as input images, and investigated improvements of ensembles over their constituent models, with diversity among the individual models introduced through hyperparameter configuration, loss function, and model architecture. Results Our developed algorithm reported a sensitivity and precision of 0.829 and 0.617 in 56 test cases, comprising 31 instances of TZ PCa and 25 patients without clinically significant TZ tumors. Patient-wise classification accuracy had an area under the receiver operating characteristic curve (AUROC) of 0.974. Single U-Net models using ADC alone (sensitivity 0.829, precision 0.534) outperformed assessment using T2W (sensitivity 0.086, precision 0.081) and assessment using combined ADC+T2W (sensitivity 0.687, precision 0.489). While the ensemble of U-Nets with varying hyperparameters demonstrated the highest performance, all ensembles improved PCa detection compared to individual models, with sensitivities and precisions close to the collective best of the constituent models. Conclusion We describe a deep-learning-based method for fully automated TZ PCa detection using ADC map MR images that outperformed assessment by T2W and ADC+T2W.
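The ensembling described above can be illustrated by averaging per-model probability maps before thresholding; this is a common scheme assumed here for illustration rather than quoted from the paper:

import numpy as np

def ensemble_prediction(probability_maps, threshold=0.5):
    """probability_maps: list of same-shape arrays, one per U-Net in the ensemble."""
    mean_prob = np.mean(np.stack(probability_maps, axis=0), axis=0)
    return (mean_prob >= threshold).astype(np.uint8), mean_prob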
, Djamel Dabli, Julien Frandon, Aymeric Hamard, Asmaa Belaouni, Philippe Akessoul, Yannick Fuamba, Julien Le Roy, Boris Guiu, Jean‐Paul Beregi
Published: 21 August 2021
Abstract:
Purpose To compare the impact on CT image quality and dose reduction of two versions of a Deep Learning Image Reconstruction algorithm. Material and methods Acquisitions on the CT ACR 464 phantom were performed at five dose levels (CTDIvol: 10/7.5/5/2.5/1 mGy) using chest or abdomen-pelvis protocol parameters. Raw data were reconstructed using filtered-back projection (FBP), the enhanced level of AIDR 3D (AIDR 3De), and the three levels of AiCE (Mild, Standard and Strong) for the two versions (AiCE V8 vs AiCE V10). The noise power spectrum (NPS) and the task-based transfer function (TTF) for bone (high-contrast insert) and acrylic (low-contrast insert) were computed. To quantify the changes in noise magnitude and texture, the square root of the area under the NPS curve and the average spatial frequency (fav) of the NPS curve were measured. The detectability index (d') was computed to model the detection of either a large mass in the liver or lung, or a small calcification or high-contrast tissue boundary. Results The noise magnitude was lower with both AiCE versions than with AIDR 3De. The noise magnitude was lower with AiCE V10 than with AiCE V8 (-4±6% for Mild, -14±3% for Standard, and -48±1% for Strong levels). fav and TTF50% values for both inserts shifted towards higher frequencies with AiCE than with AIDR 3De. Compared to AiCE V8, fav shifted towards higher frequencies with AiCE V10 (45±4%, 36±4%, and 5±4% for the Mild, Standard, and Strong levels, respectively). The TTF50% values shifted towards higher frequencies with AiCE V10 as compared with AiCE V8 for both inserts, except for the Strong level for the acrylic insert. Whatever the dose and AiCE levels, d' values were on average 10±3% higher with AiCE V10 than with AiCE V8 for the small object/calcification and 11±5% higher for the large object/lesion. Conclusion As compared to AIDR 3De, lower noise magnitude and higher spatial resolution and detectability index were found with both versions of AiCE. As compared to AiCE V8, AiCE V10 reduced noise and improved spatial resolution and detectability without changing the noise texture in a simple geometric phantom, except for the Strong level. AiCE V10 seemed to have a greater potential for dose reduction than AiCE V8. This article is protected by copyright. All rights reserved
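The detectability index referenced above is often computed with a non-prewhitening model observer of the following standard task-based form (quoted here as background, not reproduced from this paper):

$$d'^{\,2} \;=\; \frac{\Bigl[\iint \lvert W(u,v)\rvert^{2}\,\mathrm{TTF}^{2}(u,v)\,du\,dv\Bigr]^{2}}{\iint \lvert W(u,v)\rvert^{2}\,\mathrm{TTF}^{2}(u,v)\,\mathrm{NPS}(u,v)\,du\,dv},$$

where $W$ is the task function describing the object to be detected (large mass, small calcification, or boundary).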
, , Sung Jin Kim, , Won Park,
Published: 21 August 2021
Abstract:
Purpose Megavoltage computed tomography (MVCT) offers an opportunity for adaptive helical tomotherapy. However, high noise and reduced contrast in the MVCT images, due to a decrease in the imaging dose to patients, limit its usability. Therefore, we propose an algorithm to improve the image quality of MVCT. Methods The proposed algorithm generates kilovoltage CT (kVCT)-like images from MVCT images using a cycle-consistency generative adversarial network (cycleGAN)-based image synthesis model. Data augmentation using an affine transformation was applied to the training data to overcome the lack of data diversity in the network training. The mean absolute error (MAE), root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) were used to quantify the correction accuracy of the images generated by the proposed algorithm. The proposed method was validated by comparing the generated images with those obtained from conventional and deep learning-based image processing methods using non-augmented datasets. Results The average MAE, RMSE, PSNR, and SSIM values were 18.91 HU, 69.35 HU, 32.73 dB, and 95.48, respectively, using the proposed method, whereas cycleGAN with non-augmented data showed inferior results (19.88 HU, 70.55 HU, 32.62 dB, and 95.19, respectively). The voxel values of the images obtained by the proposed method also showed distributions similar to those of the kVCT images. The dose-volume histogram of the proposed method was also similar to that of electron density corrected MVCT. Conclusions The proposed algorithm generates synthetic kVCT images from MVCT images using cycleGAN with small patient datasets. The image quality achieved by the proposed method was improved to the level of a kVCT image while maintaining the anatomical structure of the MVCT image. The evaluation of the dosimetric effectiveness of the proposed method indicates its applicability to accurate treatment planning in adaptive radiation therapy.
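The quantitative comparison metrics listed above can be computed with standard tooling; a minimal sketch assuming two co-registered HU images as numpy arrays (not the authors' evaluation script):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_metrics(reference, synthetic, data_range=None):
    reference = np.asarray(reference, dtype=float)
    synthetic = np.asarray(synthetic, dtype=float)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mae = np.mean(np.abs(reference - synthetic))
    rmse = np.sqrt(np.mean((reference - synthetic) ** 2))
    psnr = peak_signal_noise_ratio(reference, synthetic, data_range=data_range)
    ssim = structural_similarity(reference, synthetic, data_range=data_range)
    return mae, rmse, psnr, ssim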
Niloofar Yousefi Moteghaed, , Payam Azadeh
Published: 21 August 2021
Abstract:
Purpose The substitution of computerized tomography (CT) with magnetic resonance imaging (MRI) has been investigated for external radiotherapy treatment planning. The present study aims to use pseudo-CT (P-CT) images created from MRI images to calculate the dose distribution, thereby facilitating the treatment planning process. Methods In this work, following image segmentation with a fuzzy clustering algorithm, an adaptive neuro-fuzzy algorithm was utilized to design the Hounsfield unit (HU) conversion model based on the feature vectors of the MRI images. The model was generated from the set of features extracted from gray-level co-occurrence matrices and gray-level run-length matrices for 14 arbitrarily selected patients with brain disease. The performance of the algorithm was investigated on blind datasets through dose-volume histogram and isodose curve evaluations, using the RayPlan treatment planning system (TPS), along with gamma analysis and statistical indices. Results With the proposed approach, a mean absolute error in the range of 45.4 HU was found for the test data. Also, the relative dose difference between the planning target volume region of the CT and the P-CT was 0.5%, and the best gamma pass rate for 3%/3 mm was 97.2%. Conclusion The proposed method provides a satisfactory average error for the generation of P-CT data in different parts of the brain region from a collection of MRI series. Also, the evaluation of dosimetric parameters shows good agreement between the reference CT and the corresponding P-CT images.
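The gray-level co-occurrence features feeding the HU conversion model can be illustrated with scikit-image; the function names follow recent releases (spelled graycomatrix/graycoprops), and the distances, angles, and property set below are assumptions for illustration, not the paper's exact feature list:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch, levels=64):
    """patch: 2D MRI patch rescaled to integer gray levels in [0, levels-1]."""
    patch = np.clip(patch, 0, levels - 1).astype(np.uint8)
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}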
Mahdieh Dayyani, Elie Hoseinian‐Azghadi, Hashem Miri‐Hakimabad, , Sara Abdollahi, Najmeh Mohammadi
Published: 20 August 2021
Abstract:
Purpose This study aimed to compare the biologically effective doses (BED) to the clinical target volume (CTV) and organs at risk (OARs) for cervical cancer patients treated with a high-dose-rate (HDR) 192Ir or 60Co brachytherapy (BT) boost and to determine whether the radiobiological differences between the two isotopes are clinically relevant. Methods Considering all radiosensitivity parameters and their reported variations, the BEDs to the CTV and OARs during the HDR 60Co/192Ir BT boost were evaluated at the voxel level. Anatomical differences between individuals were also taken into account by retrospectively considering 25 cervical cancer patients. Intrafraction repair, proliferation, hypoxia-induced radiosensitivity heterogeneity, relative biological effectiveness (RBE), and dose-rate variation due to source aging were also taken into account. Comparisons in the CTV were performed based on the equivalent uniform BED (EUBED). Results Considering nominal parameters with no RBE correction, the CTV EUBEDs were nearly identical, with a median ratio of ∼1.00 (p<0.00001), whereas RBE correction resulted in a 3.9-5.5% (p = 0.005, median = 4.8%) decrease for 60Co with respect to 192Ir. For OARs, the median D2cc values (in EQD2 with α/β = 3 Gy) for 60Co were lower than those of 192Ir by up to 9.2% and 11.3% (p<0.00001) for nominal parameters and fast-repair conditions, respectively. In addition, for nominal values (and reported ranges) of the radiosensitivity parameters, CTV EUBED differences of up to 6% (5-10%) were found for the HDR-BT component. Conclusions The RBE values are the most important cause of discrepancies between the two sources. By comparing BED/EUBEDs to the CTV and OARs between 60Co and 192Ir sources, this numerical study suggests that a dose escalation of ∼4% is feasible and safe while adequately sparing the surrounding normal tissues. This 4% dose escalation should be benchmarked against clinical evidence (such as the results of clinical trials) before it can be used in clinical practice. This article is protected by copyright. All rights reserved
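For orientation only, a minimal sketch of the standard linear-quadratic BED and one common EUBED formulation is given below; the paper's full model additionally accounts for repair, proliferation, hypoxia, RBE, and source decay, which are omitted here, and the alpha value is an assumed radiosensitivity parameter, not a value from the study.

    import numpy as np

    def bed_per_voxel(dose_per_fraction_gy, n_fractions, alpha_beta_gy):
        # Linear-quadratic BED without repair or proliferation corrections.
        d = np.asarray(dose_per_fraction_gy, dtype=float)
        return n_fractions * d * (1.0 + d / alpha_beta_gy)

    def eubed(bed_voxels, alpha_per_gy=0.3):
        # Equivalent uniform BED over a structure (e.g., the CTV).
        bed = np.asarray(bed_voxels, dtype=float)
        return -np.log(np.mean(np.exp(-alpha_per_gy * bed))) / alpha_per_gy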
, Vito Gagliardi, Joseph Stancanello, Oliver Blanck, Giovanni Pirrone, Issam El Naqa, Alberto Revelant, Giovanna Sartor
Published: 20 August 2021
Abstract:
Purpose To improve the performance of machine learning (ML) models in predicting the response of non-small cell lung cancer (NSCLC) to stereotactic body radiation therapy (SBRT) by integrating image features from pre-treatment CT with features from the biologically effective dose (BED) distribution. Materials and Methods Image features, consisting of handcrafted radiomic features or machine-learned features extracted with a convolutional neural network (CNN), were calculated from pre-treatment CT data and from dose distributions converted into BED for 80 NSCLC lesions in 76 patients treated with robotic-guided SBRT. ML models using different combinations of features were trained to predict complete or partial response according to the Response Evaluation Criteria in Solid Tumors (RECIST), including radiomics CT (RadCT), radiomics CT and BED (RadCT,BED), deep learning CT (DLCT), and deep learning CT and BED (DLCT,BED) models. Training included feature selection by neighborhood component analysis followed by ensemble machine learning (EML) using robust boosting. A model was considered acceptable when the sum of its average sensitivity and specificity on test data in repeated cross-validation (CV) was at least 1.5. Results Complete or partial response occurred in 58 of 80 lesions. The best models for predicting tumor response were those using BED variables, achieving significantly better AUC and accuracy than those using only CT features. These included a RadCT,BED model using three radiomic features from BED, which scored an accuracy of 0.799 (95% confidence interval (CI): 0.75-0.85) and an AUC of 0.773 (0.688-0.846), and a DLCT,BED model, also using three variables, with an accuracy of 0.798 (0.649-0.829) and an AUC of 0.812 (0.755-0.867). Conclusion According to our results, the inclusion of BED features improves the response prediction of ML models for lung cancer patients undergoing SBRT, regardless of whether radiomic or deep learning features are used. This article is protected by copyright. All rights reserved
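The acceptance rule described above (mean sensitivity plus mean specificity of at least 1.5 over repeated cross-validation) can be sketched as follows; the splitter and classifier here are illustrative stand-ins, not the paper's NCA plus robust-boosting pipeline.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import RepeatedStratifiedKFold

    def model_is_acceptable(X, y, n_splits=5, n_repeats=10, threshold=1.5):
        # X: (n_samples, n_features) numpy array; y: binary response labels.
        sens, spec = [], []
        cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                     random_state=0)
        for train, test in cv.split(X, y):
            clf = GradientBoostingClassifier().fit(X[train], y[train])
            tn, fp, fn, tp = confusion_matrix(y[test], clf.predict(X[test])).ravel()
            sens.append(tp / (tp + fn))
            spec.append(tn / (tn + fp))
        return np.mean(sens) + np.mean(spec) >= threshold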
, Lucas C. Mendez, Douglas A. Hoover, Jeffrey Bax, David D'Souza, Aaron Fenster
Published: 20 August 2021
Abstract:
Purpose In this study, we propose combining three-dimensional (3D) transrectal ultrasound (TRUS) and 3D transabdominal ultrasound (TAUS) images of gynecologic brachytherapy applicators to leverage the advantages of each imaging perspective, providing a broader field-of-view and allowing previously obscured features to be recovered. The aim of this study was to evaluate the feasibility of fusing these 3D ultrasound (US) perspectives based on the applicator geometry in a phantom prior to clinical implementation. Methods In proof-of-concept experiments, 3D US images of application-specific multimodality pelvic phantoms were acquired with tandem-and-ring and tandem-and-ovoids applicators using previously validated imaging systems. Two TRUS images were acquired at different insertion depths and manually fused based on the position of the ring/ovoids to broaden the TRUS field-of-view. The phantom design allowed “abdominal thickness” to be modified to represent different body habitus and TAUS images were acquired at three thicknesses for each applicator. The merged TRUS images were then combined with TAUS images by rigidly aligning applicator components and manually refining the registration using the positions of source channels and known tandem length, as well as the ring diameter for the tandem-and-ring applicator. Combined 3D US images were manually, rigidly registered to images from a second modality (magnetic resonance (MR) imaging for the tandem-and-ring applicator and x-ray computed tomography (CT) for the tandem-and-ovoids applicator (based on applicator compatibility)) to assess alignment. Four spherical fiducials were used to calculate target registration errors (TREs), providing a metric for validating registrations, where TREs were computed using root-mean-square distances to describe the alignment of manually identified corresponding fiducials. An analysis of variance (ANOVA) was used to identify statistically significant differences (p < 0.05) between the TREs for the three abdominal thicknesses for each applicator type. As an additional indicator of geometry accuracy, the bladder was segmented in the 3D US and corresponding MR/CT images and volumetric differences and Dice similarity coefficients (DSCs) were calculated. Results For both applicator types, the combination of 3D TRUS with 3D TAUS images allowed image information obscured by the shadowing artifacts under single imaging perspectives to be recovered. For the tandem-and-ring applicator, the mean ± one standard deviation (SD) TREs from the images with increasing thicknesses were 1.37 ± 1.35 mm, 1.84 ± 1.22 mm, and 1.60 ± 1.00 mm. Similarly, for the tandem-and-ovoids applicator, the mean ± SD TREs from the images with increasing thicknesses were 1.37 ± 0.35 mm, 1.95 ± 0.90 mm, and 1.61 ± 0.76 mm. No statistically significant difference was detected in the TREs for the three thicknesses for either applicator type. The mean volume differences for the bladder segmentations were 3.14% and 2.33% and mean DSCs were 87.8% and 87.7% for the tandem-and-ring and tandem-and-ovoids applicators, respectively. Conclusions In this proof-of-concept study, we demonstrated the feasibility of fusing 3D TRUS and 3D TAUS images based on the geometry of tandem-and-ring and tandem-and-ovoids applicators. This represents a step toward an accessible and low-cost 3D imaging method for gynecologic brachytherapy, with the potential to extend this approach to other intracavitary configurations and hybrid applicators. 
This article is protected by copyright. All rights reserved
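The alignment metrics used in this study, TRE as a root-mean-square distance over corresponding fiducials and the Dice similarity coefficient between segmentations, can be written compactly; the sketch below is a generic implementation, not the authors' code.

    import numpy as np

    def target_registration_error(fiducials_a_mm, fiducials_b_mm):
        # Each input is an (N, 3) array of corresponding fiducial coordinates in mm.
        d = np.linalg.norm(np.asarray(fiducials_a_mm, float)
                           - np.asarray(fiducials_b_mm, float), axis=1)
        return np.sqrt(np.mean(d ** 2))

    def dice_similarity(mask_a, mask_b):
        # Binary segmentations of the same shape (e.g., bladder in US and MR/CT).
        a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())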
Hao Gong, Liqiang Ren, Scott S. Hsieh, Cynthia H. McCollough,
Published: 20 August 2021
Abstract:
Objective In X-ray computed tomography (CT), many important clinical applications may benefit from a fast acquisition speed. The helical scan is the most widely used acquisition mode in clinical CT, where a fast helical pitch can improve the acquisition speed. However, on a typical single-source helical CT (SSCT) system, the helical pitch p typically cannot exceed 1.5; otherwise, reconstruction artifacts will result from data insufficiency. The purpose of this work is to develop a deep convolutional neural network (CNN) to correct for artifacts caused by an ultra-fast pitch, enabling faster acquisition than is currently achievable. Methods A customized CNN (denoted the ultra-fast-pitch network (UFP-net)) was developed to restore the underlying anatomical structure from artifact-corrupted post-reconstruction data acquired from SSCT with ultra-fast pitch (i.e., p ≥ 2). UFP-net employed residual learning to capture the features of image artifacts. UFP-net further deployed in-house-customized functional blocks with spatial-domain local operators and frequency-domain non-local operators to explore multi-scale feature representation. Images of contrast-enhanced patient exams (n = 83) with routine pitch settings (i.e., p < 1) were retrospectively collected and used as training and testing datasets. This patient cohort involved CT exams over different anatomical scan ranges (chest, abdomen, and pelvis) and CT systems (Siemens Definition, Definition Flash, and Definition AS+; Siemens Healthcare, Inc.), and the corresponding base CT scanning protocols used consistent settings of the major scan parameters (e.g., collimation and pitch). Forward projection of the original images was calculated to synthesize helical CT scans with one regular pitch setting (p = 1) and two ultra-fast-pitch settings (p = 2 and 3). All patient images were reconstructed using the standard filtered-backprojection (FBP) algorithm. A customized multi-stage training scheme was developed to incrementally optimize the parameters of UFP-net, using ultra-fast-pitch images as network inputs and regular-pitch images as labels. Visual inspection was conducted to evaluate image quality. The structural similarity index (SSIM) and relative root-mean-square error (rRMSE) were used as quantitative quality metrics. Results UFP-net dramatically improved image quality over standard FBP at both ultra-fast-pitch settings. At p = 2, UFP-net yielded higher mean SSIM (> 0.98) with lower mean rRMSE (< 2.9%) compared to FBP (mean SSIM < 0.93; mean rRMSE > 9.1%). At p = 3, UFP-net yielded a mean SSIM of 0.86 to 0.94 and a mean rRMSE of 5.0% to 8.2%, whereas FBP yielded a mean SSIM of 0.36 to 0.61 and a mean rRMSE of 36.0% to 58.6%. Conclusion The proposed UFP-net has the potential to enable ultra-fast data acquisition in clinical CT without sacrificing image quality. The method demonstrated reasonable generalizability over different body parts when the corresponding CT exams involved consistent base scan parameters.
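The residual-learning idea described for UFP-net, predicting the artifact content and subtracting it from the corrupted input, is sketched below as a generic PyTorch block; the layer count and channel width are assumptions, and the authors' multi-scale spatial- and frequency-domain blocks are not reproduced.

    import torch
    import torch.nn as nn

    class ResidualArtifactBlock(nn.Module):
        def __init__(self, channels=64, n_layers=5):
            super().__init__()
            layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(n_layers - 2):
                layers += [nn.Conv2d(channels, channels, 3, padding=1),
                           nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(channels, 1, 3, padding=1)]
            self.artifact_estimator = nn.Sequential(*layers)

        def forward(self, corrupted_image):
            # Output = input minus the estimated ultra-fast-pitch artifact.
            return corrupted_image - self.artifact_estimator(corrupted_image)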
Chengcheng Liu, Mengyun Qiao, Fei Jiang, , Zhendong Jin, Yuanyuan Wang
Published: 19 August 2021
Abstract:
Purpose Accurate risk stratification of gastrointestinal stromal tumors (GISTs) on multicenter endoscopic ultrasound (EUS) images plays a pivotal role in aiding the surgical decision-making process. This study focuses on automatically classifying higher-risk and lower-risk GISTs in a multicenter setting with limited data. Methods In this study, we retrospectively enrolled 914 patients with GISTs (1824 EUS images in total) from 18 hospitals in China. We propose a triple normalization-based deep learning framework with ultrasound-specific pretraining and meta attention, namely the TN-USMA model. The triple normalization module consists of intensity normalization, size normalization, and spatial resolution normalization. First, the image intensity is standardized, and same-size regions of interest (ROIs) and same-resolution tumor masks are generated in parallel. Then, a transfer learning strategy is used to mitigate the data scarcity problem: the same-size ROIs are fed into a deep architecture with ultrasound-specific pretrained weights, obtained from self-supervised learning on a large volume of unlabeled ultrasound images. Meanwhile, tumor size features are calculated from the same-resolution masks. The size features, together with two demographic features, are then integrated into the model before the final classification layer using a meta attention mechanism to further enhance the feature representations. The diagnostic performance of the proposed method was compared with one radiomics-based method and two state-of-the-art deep learning methods. Four evaluation metrics, namely accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity, were used to evaluate model performance. Results The proposed TN-USMA model achieves an overall accuracy of 0.834 (95% confidence interval [CI]: 0.772, 0.885), an AUC of 0.881 (95% CI: 0.825, 0.924), a sensitivity of 0.844 (95% CI: 0.672, 0.947), and a specificity of 0.832 (95% CI: 0.762, 0.888). The AUC significantly outperforms those of the other two deep learning approaches (p < 0.05, DeLong test). Moreover, the performance is stable under different multicenter dataset partitions. Conclusions The proposed TN-USMA model can successfully differentiate higher-risk GISTs from lower-risk ones. It is accurate, robust, generalizable, and efficient for potential clinical applications.
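The triple normalization described above can be illustrated with a hedged sketch; the target ROI size and target spacing below are placeholder values, not the paper's, and the ultrasound-specific pretraining and meta attention components are not shown.

    import numpy as np
    from scipy.ndimage import zoom

    def normalize_intensity(roi):
        # Intensity normalization: zero mean, unit variance.
        roi = np.asarray(roi, dtype=np.float32)
        return (roi - roi.mean()) / (roi.std() + 1e-8)

    def resize_to_fixed_shape(roi, target_shape=(224, 224)):
        # Size normalization: same-size ROI for the network input.
        factors = [t / s for t, s in zip(target_shape, roi.shape)]
        return zoom(roi, factors, order=1)

    def resample_to_fixed_spacing(mask, spacing_mm, target_spacing_mm=0.1):
        # Spatial resolution normalization: same-resolution mask for size features.
        factors = [s / target_spacing_mm for s in spacing_mm]
        return zoom(np.asarray(mask, dtype=float), factors, order=0) > 0.5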
, Dylan O'Connell, Michael Lauria, Bradley Stiehl, Louise Naumann, Percy Lee, John Hegde, Igor Barjaktarevic, Jonathan Goldin, Anand Santhanam
Published: 19 August 2021
Abstract:
Purpose To examine the use of multiple fast-helical free-breathing CT (FHFBCT) scans for ventilation measurement. Methods Ten patients were scanned 25 times in alternating directions using a FHFBCT protocol. Simultaneously, an abdominal pneumatic bellows was used as a real-time breathing surrogate. Regions of interest (ROIs) were selected from the upper right lungs of each patient for analysis. The ROIs were first registered using a published registration technique (pTV). A subsequent follow-up registration employed an objective function with two terms: a ventilation-adjusted Hounsfield unit difference and a conservation-of-mass term, labeled ΔΓ, that denoted the difference between the deformation Jacobian and the tissue density ratio. Ventilation was calculated voxel by voxel as the slope of a first-order fit of the Jacobian as a function of the breathing amplitude. Results The ventilations of the 10 patients showed different patterns and magnitudes. The average ventilations calculated from the DVFs of the pTV and secondary registrations were nearly identical, but the standard deviation of the voxel-to-voxel differences was approximately 0.1. The mean of the 90th percentile values of ΔΓ was reduced from 0.153 to 0.079 between the pTV and secondary registrations, implying first that the secondary registration improved the conservation-of-mass criterion by almost 50%, and second that, on average, the discrepancy between the Jacobian and density ratios, as quantified by ΔΓ, was less than 0.1. This improvement occurred even though the average of the 90th percentile changes in the DVF magnitudes was only 0.58 mm. Conclusions This work introduces the use of multiple free-breathing CT scans for free-breathing ventilation measurement. The approach has benefits over the traditional use of 4DCT or breath-hold scans. The benefit over 4DCT is that FHFBCT does not have sorting artifacts. The benefits over breath-hold scans include the relatively small motion induced by quiet respiration compared with deep-inspiration breath hold and the potential to characterize dynamic breathing processes that disappear during breath hold. This article is protected by copyright. All rights reserved
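The voxel-wise ventilation estimate described above, the slope of a first-order fit of the deformation Jacobian against the breathing-surrogate amplitude, can be sketched as a vectorized least-squares fit; registration and Jacobian computation are assumed to have been performed beforehand, and the function below is a generic illustration rather than the authors' implementation.

    import numpy as np

    def ventilation_from_jacobians(jacobians, amplitudes):
        # jacobians: (n_scans, nx, ny, nz) Jacobian determinant maps in a common frame.
        # amplitudes: (n_scans,) bellows amplitudes at the time of each scan.
        n = jacobians.shape[0]
        J = jacobians.reshape(n, -1)
        A = np.column_stack([np.asarray(amplitudes, float), np.ones(n)])  # [amplitude, 1]
        coeffs, *_ = np.linalg.lstsq(A, J, rcond=None)   # slope and intercept per voxel
        return coeffs[0].reshape(jacobians.shape[1:])    # slope map = ventilation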