Implementation of model explainability for basic brain tumor detection using convolutional neural networks on MRI slices

Abstract
Purpose: While neural networks are gaining popularity in medical research, attempts to make a model's decisions explainable are often made only towards the end of the development process, once a high predictive accuracy has been achieved.
Methods: To assess the advantages of implementing explainability features early in the development process, we trained a neural network to differentiate between MRI slices containing a vestibular schwannoma, a glioblastoma, or no tumor.
Results: Making the network's decisions more explainable helped to identify potential bias and to choose appropriate training data.
Conclusion: Model explainability should be considered in the early stages of training a neural network for medical purposes; it may save time in the long run and will ultimately help physicians integrate the network's predictions into a clinical decision.
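The abstract does not specify which explainability technique was applied. As a non-authoritative illustration of the kind of saliency method that can make a CNN's slice-level predictions more interpretable, the sketch below implements Grad-CAM in PyTorch for a three-class classifier matching the task described above. The TinyCNN architecture, input dimensions, and class ordering are illustrative assumptions, not the authors' actual model.

```python
# Hedged sketch: Grad-CAM saliency for a 3-class MRI slice classifier.
# The architecture below is a stand-in; only the three classes come from the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

CLASSES = ["vestibular_schwannoma", "glioblastoma", "no_tumor"]

class TinyCNN(nn.Module):
    """Placeholder CNN; substitutes for whatever architecture was actually trained."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.features(x)                       # (B, 32, H/4, W/4)
        pooled = F.adaptive_avg_pool2d(fmap, 1).flatten(1)
        return self.classifier(pooled), fmap

def grad_cam(model, x, target_class):
    """Compute a Grad-CAM heatmap for a single input slice."""
    model.eval()
    x = x.requires_grad_(True)
    logits, fmap = model(x)
    fmap.retain_grad()                                # keep gradients on the feature map
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
    cam = F.relu((weights * fmap).sum(dim=1))           # gradient-weighted channel sum
    cam = cam / (cam.max() + 1e-8)                      # normalize to [0, 1]
    return cam.squeeze(0).detach()

if __name__ == "__main__":
    model = TinyCNN()
    slice_ = torch.randn(1, 1, 64, 64)                # stand-in for a preprocessed MRI slice
    heatmap = grad_cam(model, slice_, target_class=CLASSES.index("glioblastoma"))
    print(heatmap.shape)                              # (16, 16); upsample to overlay on the slice
```

In practice, the low-resolution heatmap is upsampled to the slice's resolution and overlaid on the MRI image, so a reviewer can check whether the network attends to the tumor region rather than to confounding image features, which is one way to surface the kind of bias the Results mention.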