Data strip mining for the virtual design of pharmaceuticals with neural networks

Abstract
A novel neural-network-based technique, called "data strip mining," extracts predictive models from data sets that have a large number of potential inputs and comparatively few data points. The methodology uses neural network sensitivity analysis to determine which predictors are most significant in the problem. In neural network sensitivity analysis, each input to a trained network is varied in turn over its entire range while all other inputs are held constant, in order to determine its effect on the output. The least sensitive variables are iteratively removed from the input set. At each iteration, model cross-validation uses multiple splits of the data into training and validation sets to estimate the model's ability to predict the output for data points not used during training. Eliminating variables through neural network sensitivity analysis and estimating performance through model cross-validation allow the analyst to reduce the number of inputs and improve the model's predictive ability at the same time. This paper illustrates the technique using a cartoon problem from classical physics. It then demonstrates its effectiveness on a pair of challenging problems from combinatorial chemistry, each with over 400 potential inputs. For these data sets, variable selection by neural network sensitivity analysis outperformed other variable selection methods, including forward selection and a genetic algorithm.
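The following is a minimal sketch of the loop the abstract describes, using scikit-learn's MLPRegressor as a stand-in for the paper's neural network. The function names, hyperparameters, and stopping rule are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

def input_sensitivities(model, X, n_steps=25):
    """Vary one input at a time over its observed range while holding the
    other inputs at their means; sensitivity = spread of the model output."""
    base = X.mean(axis=0)
    sens = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        grid = np.tile(base, (n_steps, 1))
        grid[:, j] = np.linspace(X[:, j].min(), X[:, j].max(), n_steps)
        preds = model.predict(grid)
        sens[j] = preds.max() - preds.min()
    return sens

def strip_mine(X, y, min_features=3, cv=5):
    """Iteratively drop the least sensitive input, keeping the feature
    subset with the best cross-validated score (illustrative criterion)."""
    kept = list(range(X.shape[1]))
    best_score, best_subset = -np.inf, kept[:]
    while len(kept) >= min_features:
        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        # Cross-validation over multiple train/validation splits estimates
        # predictive ability on data not used during training.
        score = cross_val_score(net, X[:, kept], y, cv=cv).mean()
        if score > best_score:
            best_score, best_subset = score, kept[:]
        net.fit(X[:, kept], y)
        sens = input_sensitivities(net, X[:, kept])
        kept.pop(int(np.argmin(sens)))  # remove the least sensitive input
    return best_subset, best_score

In this sketch, the returned subset is simply the feature set with the best mean cross-validation score seen during the elimination loop; the paper itself describes the selection criterion only in general terms.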