Evaluation of Procedures for Adjusting Problem-Discovery Rates Estimated From Small Samples
- 1 December 2001
- journal article
- Published by Informa UK Limited in International Journal of Human–Computer Interaction
- Vol. 13 (4), 445-479
- https://doi.org/10.1207/s15327590ijhc1304_06
Abstract
There are 2 excellent reasons to compute usability problem-discovery rates. First, an estimate of the problem-discovery rate is a key component for projecting the required sample size for a usability study. Second, practitioners can use this estimate to calculate the proportion of discovered problems for a given sample size. Unfortunately, small-sample estimates of the problem-discovery rate suffer from a serious overestimation bias. This bias can lead to serious underestimation of required sample sizes and serious overestimation of the proportion of discovered problems. This article contains descriptions and evaluations of a number of methods for adjusting small-sample estimates of the problem-discovery rate to compensate for this bias. A series of Monte Carlo simulations provided evidence that the average of a normalization procedure and Good-Turing (Jelinek, 1997; Manning & Schütze, 1999) discounting produces highly accurate estimates of usability problem-discovery rates from small sample sizes.
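The combined adjustment described in the abstract can be sketched in code. This is a minimal illustration, not the article's definitive procedure: the normalization term `(p - 1/n)(1 - 1/n)` and the Good-Turing discount `p / (1 + N1/N)` follow commonly cited formulations of this method, and the choice of `N` as the number of distinct observed problems (rather than total problem occurrences) is an assumption. The sample-size projection uses the standard `1 - (1 - p)^n` discovery model mentioned in the abstract's first point.

```python
import math

def adjust_discovery_rate(matrix):
    """Average of a normalization adjustment and Good-Turing discounting.

    `matrix` holds one row per participant and one column per problem
    (1 = participant encountered the problem, 0 = did not).
    Formulas are a sketch of the method named in the abstract; the exact
    definitions used in the article may differ.
    """
    n = len(matrix)                                   # participants
    counts = [sum(col) for col in zip(*matrix)]       # hits per problem
    counts = [c for c in counts if c > 0]             # observed problems only
    total_hits = sum(counts)
    p_est = total_hits / (n * len(counts))            # raw (biased) estimate
    once = sum(1 for c in counts if c == 1)           # problems seen exactly once
    # Good-Turing discount; divisor len(counts) (distinct problems) is an assumption
    p_gt = p_est / (1 + once / len(counts))
    # Normalization (deflation) adjustment
    p_norm = (p_est - 1 / n) * (1 - 1 / n)
    return (p_gt + p_norm) / 2

def projected_sample_size(p, goal=0.90):
    """Smallest n with 1 - (1 - p)^n >= goal, per the standard discovery model."""
    return math.ceil(math.log(1 - goal) / math.log(1 - p))
```

For example, with four participants and three observed problems, two of which were seen only once, the adjusted rate is noticeably lower than the raw estimate, and the projected sample size for 90% discovery grows accordingly.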