Enabling interpretable machine learning for biological data with reliability scores

Abstract
Machine learning tools have proven useful across biological disciplines, allowing researchers to draw conclusions from large datasets and opening up new opportunities for interpreting complex and heterogeneous biological data. Alongside the rapid growth of machine learning, there have also been growing pains: some models that appear to perform well have later been revealed to rely on features of the data that are artifactual or biased, feeding the general criticism that machine learning models are designed to optimize model performance over the creation of new biological insights. A natural question arises: how do we develop machine learning models that are inherently interpretable or explainable? In this manuscript, we describe the SWIF(r) reliability score (SRS), a method building on the SWIF(r) generative framework that reflects the trustworthiness of the classification of a specific instance. The concept of the reliability score has the potential to generalize to other machine learning methods. We demonstrate the utility of the SRS when faced with common challenges in machine learning, including: 1) an unknown class present in testing data that was not present in training data, 2) systemic mismatch between training and testing data, and 3) instances of testing data that have missing values for some attributes. We explore these applications of the SRS using a range of biological datasets, from agricultural data on seed morphology, to 22 quantitative traits in the UK Biobank, to population genetic simulations and 1000 Genomes Project data. With each of these examples, we demonstrate how the SRS allows researchers to interrogate their data and training approach thoroughly, and to pair their domain-specific knowledge with powerful machine learning frameworks. We also compare the SRS to related tools for outlier and novelty detection and find that it has comparable performance, with the advantage of being able to operate when some data are missing. The SRS, and the broader discussion of interpretable scientific machine learning, will aid researchers in the biological machine learning space as they seek to harness the power of machine learning without sacrificing rigor and biological insight.

Author summary
Machine learning methods are powerful tools for tasks such as classification and clustering, but they also pose unique problems that can limit new insights. Complex machine learning models may reach conclusions that are difficult or impossible for researchers to understand after the fact, sometimes producing biased or meaningless results. It is therefore essential that researchers have tools that allow them to understand how machine learning models reach their conclusions, so that they can design those models effectively. This paper builds on the machine learning method SWIF(r), originally designed to detect regions of the genome targeted by natural selection. Our new method, the SWIF(r) reliability score (SRS), helps researchers evaluate how trustworthy the prediction of a SWIF(r) model is when it classifies a specific instance of data. We also show how SWIF(r) and the SRS can be applied to biological problems outside the original scope of SWIF(r), and that the SRS is helpful in situations where the data used to train the machine learning model fail to represent the testing data in some way. The SRS can be used across many different disciplines, and has unique properties for scientific machine learning research.
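To make the idea of an instance-level reliability score concrete, the sketch below is a minimal, hypothetical illustration in Python; it is not the published SRS formulation and not part of the SWIF(r) software, and the ReliabilityScorer class and its details are invented for exposition. It scores a test instance by its average log-density under per-class, per-attribute kernel density estimates learned from training data, and skips attributes with missing values so that a score can still be computed from partial data.

    # Illustrative sketch only: a reliability-style score for a generative
    # classifier. This is NOT the published SRS formula; it simply conveys
    # the idea of asking how well an instance is represented by training data.
    import numpy as np
    from scipy.stats import gaussian_kde

    class ReliabilityScorer:  # hypothetical helper, not part of SWIF(r)
        def __init__(self, X_train, y_train):
            # Fit a 1-D kernel density estimate per (class, attribute) pair.
            self.classes = np.unique(y_train)
            self.kdes = {
                c: [gaussian_kde(X_train[y_train == c, j])
                    for j in range(X_train.shape[1])]
                for c in self.classes
            }

        def score(self, x):
            # Average log-density of the observed attributes under the
            # best-fitting class; low values suggest the instance is poorly
            # represented by the training data. Missing attributes (NaN)
            # are skipped, so the score is still defined with partial data.
            observed = [j for j in range(len(x)) if not np.isnan(x[j])]
            if not observed:
                return np.nan
            per_class = []
            for c in self.classes:
                logdens = [np.log(self.kdes[c][j](x[j])[0] + 1e-300)
                           for j in observed]
                per_class.append(np.mean(logdens))
            return max(per_class)

    # Usage: instances far from all training classes receive low scores.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(5, 1, (100, 3))])
    y = np.array([0] * 100 + [1] * 100)
    scorer = ReliabilityScorer(X, y)
    print(scorer.score(np.array([0.1, -0.2, 0.3])))       # near training data: higher score
    print(scorer.score(np.array([20.0, np.nan, 21.0])))   # far from training data, one value missing: lower score

In this toy version, an unknown class, a systematic train/test mismatch, or an out-of-range instance all manifest as low scores, which is the intuition the SRS formalizes within the SWIF(r) generative framework.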
Funding Information
  • NIH (R01GM118652)
  • NIH (R35GM139628)
  • Wimmer Family Foundation
  • NIH (5T32GM007601)