Abstract
‘If the first button is buttoned wrongly, the whole vest sits askew.’

The main object of this paper is to propound and discuss a new indifference rule for the prior probabilities in the theory of inverse probability. Being invariant in form on transformation, this new rule avoids the mathematical inconsistencies associated with the classical rule of ‘uniform distribution of ignorance’ and yields results which, particularly in certain critical extreme cases, do not appear to be unreasonable. Such a rule is, of course, a postulate and is not susceptible of proof; its object is to enable inverse probability to operate as a unified principle upon which methods may be devised for allowing a set of statistics to tell their complete and unbiased story about the parameters of the distribution law of the population from which they have been drawn, without the introduction of any knowledge beyond and extraneous to the statistics themselves. The forms appropriate for the prior probabilities in certain other circumstances are also discussed, including the important case where the unknown parameter is a probability, or proportion, for which it is desired to allow for prior bias. Before proceeding to the main purpose of the paper, however, it is convenient to provide some background to the subject. Reference is first made to certain modern writers to indicate how the problem with which inverse probability is concerned occupies a central place in the foundations of scientific method and in modern philosophy. In quoting from these writers I am not to be taken as suggesting that they necessarily support the inverse probability approach to the problem. The next section of the paper contains some brief comments on the direct statistical methods which have been devised in recent times to side-step induction and inverse probability, and this is followed by a few remarks on the various definitions of probability.
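The inconsistency alluded to above may be indicated by a simple sketch; the particular transformation $\phi=\theta^2$ is chosen here only for illustration and is not drawn from the paper itself. If ignorance about a parameter $\theta$ on $(0,1)$ is expressed, after the classical rule, by the uniform prior $p(\theta)=1$, then ignorance about $\phi=\theta^2$ ought equally to be expressed by a uniform prior on $\phi$; but the change-of-variables rule gives

\[
  p(\phi) \;=\; p(\theta)\left|\frac{d\theta}{d\phi}\right| \;=\; \frac{1}{2\sqrt{\phi}}, \qquad 0<\phi<1,
\]

which is not uniform, so the two expressions of the same ignorance contradict one another. A prior whose form is preserved under such transformations escapes this particular difficulty, which is the sense in which the new rule is said to be invariant in form.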