Abstract
Summary form only given. In classifier neural nets, each input variable usually has some stand-alone predictive capability with respect to a predicted class outcome. In medicine, receiver operating characteristic (ROC) methodology is used to employ such classifier variables optimally. We view the output value of a classifier net as a composite test value, or composite index, that reflects the combined influence of all individual predictive input variables, and we apply ROC methodology to this composite index to monitor and evaluate neural net development, to re-calibrate net output for prevalence imbalances between the data used in developing the nets and the data encountered in the application environment, and to adjust net output for cost-gain considerations. We discuss two different ways of adjusting output for prevalence and cost-gain considerations. The first optimizes global performance, i.e. overall accuracy and efficiency across the entire application population. The second optimizes local performance, i.e. accuracy and efficiency for individual events in the population. Since neural nets can be designed to predict outcomes for individual patients, this second type of optimization is important: it can potentially tailor diagnoses and patient-management decisions to the individual patient. Such customization is particularly important when classification is less than perfect, as is usually the case even with neural nets.
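The global adjustment described above can be illustrated with a small sketch (not the authors' implementation; all names and the toy data are illustrative). Treating the net's output as a composite index, we sweep candidate decision thresholds along the ROC curve and pick the one maximizing expected utility under assumed application-population prevalence and cost-gain weights:

```python
def best_threshold(scores, labels, prevalence,
                   gain_tp=1.0, gain_tn=1.0, cost_fp=1.0, cost_fn=1.0):
    """Return (threshold, expected_utility) for the operating point that
    maximizes expected utility:
        prevalence * (TPR*gain_tp - FNR*cost_fn)
        + (1 - prevalence) * (TNR*gain_tn - FPR*cost_fp)
    scores: composite-index outputs of the net; labels: 1 = positive class.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best_t, best_u = None, float("-inf")
    for t in sorted(set(scores)):            # each score is a candidate cut-off
        tpr = sum(s >= t for s in pos) / len(pos)   # sensitivity
        tnr = sum(s < t for s in neg) / len(neg)    # specificity
        u = (prevalence * (tpr * gain_tp - (1 - tpr) * cost_fn)
             + (1 - prevalence) * (tnr * gain_tn - (1 - tnr) * cost_fp))
        if u > best_u:
            best_t, best_u = t, u
    return best_t, best_u

# Toy example: a well-separated composite index at 50% prevalence.
t, u = best_threshold([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1], prevalence=0.5)
```

Re-weighting `prevalence`, `cost_fp`, and `cost_fn` to match the application environment rather than the development data is the re-calibration step; the local (per-patient) optimization discussed in the abstract would instead condition these quantities on the individual case.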
