Abstract
Studies of diagnostic accuracy are the most commonly performed evaluations of diagnostic tests. They are carried out using decision matrix tables from which sensitivities, specificities, predictive values and other ratios are calculated and compared. Various recoverable pitfalls and limitations of this method have been reported; this study reports further limitations of its use as a statistical analytical tool. A decision protocol and formulae are presented to show how the sensitivities and specificities of tests are compared in order to reach a decision. The study also shows how special tables can be constructed for the four results of comparative diagnostic tests (true positive, true negative, false positive and false negative), and cautions against the use of some 2 x 2 contingency tables. Procedures for using these special tables and formulae to compare sensitivity and specificity and to derive confidence intervals for their difference are presented. It is also shown how a single inference can be drawn from diagnostic test performance to determine which test is better.
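For orientation, the quantities compared in the abstract follow the standard definitions from a decision matrix of true positives (TP), false negatives (FN), true negatives (TN) and false positives (FP); the interval shown below is the conventional Wald interval for the difference of two independent sensitivities, given here only as a hedged sketch, since the paper's special tables and formulae may use a different (e.g. paired) construction:

\[
\mathrm{Se} = \frac{TP}{TP + FN}, \qquad
\mathrm{Sp} = \frac{TN}{TN + FP},
\]
\[
(\mathrm{Se}_1 - \mathrm{Se}_2) \;\pm\; z_{1-\alpha/2}
\sqrt{\frac{\mathrm{Se}_1(1-\mathrm{Se}_1)}{n_1} + \frac{\mathrm{Se}_2(1-\mathrm{Se}_2)}{n_2}},
\]

where \(n_1\) and \(n_2\) are the numbers of diseased subjects evaluated by each test; an analogous interval applies to the difference in specificities, with the counts of non-diseased subjects in place of \(n_1\) and \(n_2\).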