Automated glioma grading on conventional MRI images using deep convolutional neural networks

Abstract
Purpose
Gliomas are the most common primary brain tumors and are classified into World Health Organization (WHO) grades I-IV based on their histological appearance, which is assessed invasively. Glioma grading plays an important role in treatment planning and prognosis prediction. In this study, we propose two novel methods for automatically and non-invasively distinguishing low-grade (grades II and III) glioma (LGG) from high-grade (grade IV) glioma (HGG) on conventional MRI images using deep convolutional neural networks (CNNs).

Methods
All MRI images were first preprocessed by rigid image registration and intensity inhomogeneity correction. Both proposed methods consist of two steps: (a) three-dimensional (3D) brain tumor segmentation based on a modification of the popular U-Net model; (b) tumor classification on the segmented brain tumor. In the first method, the slice with the largest tumor area is identified and the state-of-the-art Mask R-CNN model is employed for tumor grading. To improve the performance of the grading model, two-dimensional (2D) data augmentation is applied to increase both the amount and the diversity of the training images. In the second method, denoted 3DConvNet, a 3D volumetric CNN is applied directly to the bounding image region of the segmented tumor for classification, which fully leverages the 3D spatial contextual information of the volumetric image data. (Illustrative code sketches of these steps appear after the Conclusions.)

Results
The proposed schemes were evaluated on The Cancer Imaging Archive (TCIA) low-grade glioma (LGG) data and the Multimodal Brain Tumor Image Segmentation (BraTS) Benchmark 2018 training dataset with fivefold cross-validation; all data were divided into training, validation, and test sets. Based on biopsy-proven ground truth, sensitivity, specificity, and accuracy were measured on the test sets. The results are 0.935 (sensitivity), 0.972 (specificity), and 0.963 (accuracy) for the 2D Mask R-CNN based method, and 0.947 (sensitivity), 0.968 (specificity), and 0.971 (accuracy) for the 3DConvNet method. In terms of efficiency, the 3D brain tumor segmentation model takes around ten and a half hours to train for 300 epochs on the BraTS 2018 dataset and only around 50 s to test a typical image of size 160 x 216 x 176. The 2D Mask R-CNN based grading takes around 4 h to train for roughly 60,000 iterations and around 1 s to test a 2D slice of size 128 x 128. The 3DConvNet based grading takes around 2 h to train for 10,000 iterations and 0.25 s to test a 3D cropped image of size 64 x 64 x 64, using a DELL PRECISION Tower T7910 with two NVIDIA Titan Xp GPUs.

Conclusions
Two effective glioma grading methods on conventional MRI images using deep convolutional neural networks have been developed. Our methods are fully automated, without the manual specification of regions of interest or the selection of slices for model training that are common in traditional machine learning based brain tumor grading methods. This methodology may play a crucial role in selecting effective treatment options and predicting survival without the need for surgical biopsy.
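As a concrete illustration of the preprocessing step (rigid registration followed by intensity inhomogeneity correction), the sketch below uses SimpleITK with a mutual-information-driven rigid registration and N4 bias-field correction. The choice of SimpleITK, N4, and all parameter values is an assumption for illustration; the abstract does not name the specific tools or settings used.

```python
import SimpleITK as sitk

def preprocess(moving_path, fixed_path):
    """Rigid registration to a reference volume followed by N4 bias-field
    correction. Tool choice and parameters are illustrative assumptions,
    not the paper's exact preprocessing pipeline."""
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)

    # Rigid (6-DOF) registration driven by Mattes mutual information.
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(50)                    # 50 histogram bins
    reg.SetOptimizerAsRegularStepGradientDescent(1.0, 1e-4, 200)  # step, min step, iterations
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)
    transform = reg.Execute(fixed, moving)
    registered = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)

    # Intensity inhomogeneity (bias field) correction with N4,
    # restricted to a rough foreground mask.
    mask = sitk.OtsuThreshold(registered, 0, 1, 200)
    return sitk.N4BiasFieldCorrection(registered, mask)
```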
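The 3D segmentation network is described only as a modification of U-Net; the exact architecture is not given in the abstract. The following PyTorch sketch shows a minimal 3D U-Net-style encoder-decoder of the general kind referred to; the depth, channel widths, and normalization are assumptions, not the authors' modification.

```python
import torch
import torch.nn as nn

class ConvBlock3D(nn.Module):
    """Two 3x3x3 convolutions, each followed by instance norm and ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1),
            nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=1),
            nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)

class UNet3D(nn.Module):
    """Minimal 3D U-Net-style encoder-decoder (illustrative only)."""
    def __init__(self, in_ch=4, n_classes=2, base=16):
        super().__init__()
        self.enc1 = ConvBlock3D(in_ch, base)
        self.enc2 = ConvBlock3D(base, base * 2)
        self.enc3 = ConvBlock3D(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = ConvBlock3D(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = ConvBlock3D(base * 2, base)
        self.out = nn.Conv3d(base, n_classes, 1)

    def forward(self, x):                      # x: (batch, modalities, D, H, W)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)                    # per-voxel tumor/background logits
```

Here in_ch=4 assumes the four BraTS MRI modalities (T1, T1c, T2, FLAIR) stacked as input channels.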
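The 2D data augmentation used to enlarge and diversify the training slices for the Mask R-CNN grading model is not detailed in the abstract. A minimal sketch, assuming random flips and 90-degree rotations applied jointly to a slice and its tumor mask, is shown below; the actual augmentation set may differ.

```python
import numpy as np

def augment_slice(img, mask, rng=None):
    """Randomly flip and rotate a 2D slice and its tumor mask together.
    The specific transforms are illustrative assumptions."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:                     # horizontal flip
        img, mask = np.fliplr(img), np.fliplr(mask)
    if rng.random() < 0.5:                     # vertical flip
        img, mask = np.flipud(img), np.flipud(mask)
    k = int(rng.integers(0, 4))                # rotate by 0, 90, 180, or 270 degrees
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    # Copy so downstream frameworks do not see negative strides.
    return img.copy(), mask.copy()
```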
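The 3DConvNet classifier operates on a 64 x 64 x 64 region cropped around the segmented tumor. The sketch below is a small, generic 3D volumetric CNN for binary LGG/HGG classification; its layer configuration is assumed for illustration and is not the paper's exact network.

```python
import torch.nn as nn

class GradingConvNet3D(nn.Module):
    """Generic 3D CNN mapping a 64x64x64 tumor crop to LGG/HGG logits
    (an illustrative stand-in for the paper's 3DConvNet)."""
    def __init__(self, in_ch=4, n_classes=2):
        super().__init__()
        def block(ci, co):
            # Conv -> BatchNorm -> ReLU -> downsample by 2.
            return nn.Sequential(
                nn.Conv3d(ci, co, 3, padding=1), nn.BatchNorm3d(co),
                nn.ReLU(inplace=True), nn.MaxPool3d(2))
        self.features = nn.Sequential(
            block(in_ch, 32), block(32, 64), block(64, 128), block(128, 256))
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Dropout(0.5), nn.Linear(256, n_classes))

    def forward(self, x):                      # x: (batch, in_ch, 64, 64, 64)
        return self.classifier(self.features(x))
```

Trained with a cross-entropy loss, such a network can be optimized end-to-end directly on the cropped tumor volumes.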
Funding Information
  • National Cancer Institute