Abstract
The law of likelihood explains how to interpret statistical data as evidence. Specifically, it gives the discipline of statistics a precise and objective measure of the strength of statistical evidence supporting one probability distribution vis-à-vis another. That measure is the likelihood ratio. But evidence, even when properly interpreted, can be misleading: observations can constitute genuinely strong evidence supporting one distribution when the other is true. What makes statistical evidence valuable to science is that this cannot occur very often. Here we examine two bounds on the probability of observing strong misleading evidence. One is a universal bound, applicable to every pair of probability distributions. The other, much smaller, applies in large samples to all pairs of distributions within fixed-dimensional parametric models. The second bound comes from examining how the probability of strong misleading evidence varies as a function of the alternative value of the parameter. We show that in large samples, for a very wide class of models, a single curve describes how this probability first rises and then falls as the alternative moves away from the true parameter value. We also show that this large-sample curve, and the bound that its maximum value represents, apply to profile likelihood ratios for one-dimensional parameters in fixed-dimensional parametric models, but do not apply to the estimated likelihood ratios obtained by replacing the nuisance parameters with their global maximum likelihood estimates.
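The abstract states both bounds without formulas; the following is a minimal sketch, under the standard likelihood-ratio setup, of where they come from. The normal-mean illustration is supplied here for orientation and is not quoted from the abstract itself. Write $f_1$ and $f_2$ for the two densities and $k > 1$ for the threshold defining strong evidence. The universal bound is an application of Markov's inequality, since the likelihood ratio has expectation 1 under $f_1$:
\[
  P_{f_1}\!\left(\frac{f_2(X)}{f_1(X)} \ge k\right)
  \;\le\; \frac{1}{k}\, E_{f_1}\!\left[\frac{f_2(X)}{f_1(X)}\right]
  \;=\; \frac{1}{k}\int f_2(x)\,dx
  \;=\; \frac{1}{k}.
\]
The normal-mean model makes the shape of the large-sample curve explicit. For $X_1,\dots,X_n$ i.i.d.\ $N(\theta_1,\sigma^2)$ and alternative $\theta_2 = \theta_1 + \Delta$, a direct calculation with the log likelihood ratio gives
\[
  P_{\theta_1}\!\left(\frac{f_{\theta_2}(X)}{f_{\theta_1}(X)} \ge k\right)
  \;=\; \Phi\!\left(-\frac{\sqrt{n}\,|\Delta|}{2\sigma}
                    - \frac{\sigma \ln k}{\sqrt{n}\,|\Delta|}\right),
\]
which first rises and then falls as $|\Delta|$ grows, peaking at $\sqrt{n}\,|\Delta|/\sigma = \sqrt{2\ln k}$ with maximum value $\Phi\!\left(-\sqrt{2\ln k}\right)$, roughly $0.021$ for $k = 8$, well below the universal bound $1/k = 0.125$.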