A simulation study of the model evaluation criterion MMRE

Abstract
The mean magnitude of relative error, MMRE, is probably the most widely used evaluation criterion for assessing the performance of competing software prediction models. One purpose of MMRE is to assist us in selecting the best model. In this paper, we have performed a simulation study demonstrating that MMRE does not always select the best model. Our findings cast some doubt on the conclusions of any study of competing software prediction models that uses MMRE as a basis for model comparison. We therefore recommend not using MMRE to evaluate and compare prediction models. At present, we do not have a universal replacement for MMRE. In the meantime, we recommend using a combination of theoretical justification of the proposed models together with the other metrics proposed in this paper.
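For reference, MMRE is conventionally defined as the mean of the absolute relative errors, |actual - predicted| / actual, over a set of projects. The sketch below is illustrative only (the data values are made up and are not taken from the paper's simulation); it shows how MMRE would be computed for two hypothetical competing models.

```python
def mmre(actuals, predictions):
    """Mean magnitude of relative error: mean of |actual - predicted| / actual."""
    return sum(abs(a - p) / a for a, p in zip(actuals, predictions)) / len(actuals)

# Hypothetical effort data (e.g., person-hours) and two competing models' estimates.
actuals      = [100, 250, 400, 800]
model_a_pred = [120, 230, 360, 900]   # modest over- and under-estimates
model_b_pred = [ 60, 150, 500, 820]   # strong under-estimates on the small projects

# A lower MMRE does not by itself guarantee the better model, which is the
# concern the paper's simulation study examines.
print(mmre(actuals, model_a_pred))
print(mmre(actuals, model_b_pred))
```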
