Abstract
The reliability of a computerized analysis system (CAS) for determining muscle burst onset was compared with that of subjective assessments by three trained examiners. A sample of 154 randomly selected, full-wave rectified and filtered electromyographic (EMG) recordings was evaluated using a test-retest paradigm. Reliability was measured with percentages of agreement, Pearson product-moment correlations, analyses of variance (ANOVAs), and intraclass correlation coefficients (ICCs). Between-rater agreement, which included the computerized EMG assessments, was only 23%. Within-rater agreement and Pearson correlation coefficients were perfect for the CAS. The trained examiners' within-rater assessments averaged only 51% agreement, although their test-retest correlations were high (r = .78 to .82). All ICCs were statistically significant, ranged from .46 to .60, and tended to be higher when the CAS onset determinations were deleted from the analysis. The ANOVAs revealed that the trained examiners were more consistent with one another than with the CAS assessments of the EMG recordings. This finding, however, may be facility-specific, and its generalization to other examiners is limited. In contrast to the trained examiners, the CAS was free of variations in judgment, ensured perfect reproducibility of trial assessments, and was highly useful for analyzing multichannel EMG recordings. Although the CAS ensures perfect reliability, determining its validity still requires visual inspection of the trial data.
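The abstract does not include the authors' analysis code, so the following is a minimal sketch of how the named reliability measures could be computed. It assumes one onset time per rater per trial, exact-match agreement (the tolerance parameter `tol_ms` is a hypothetical addition), and the Shrout and Fleiss ICC(2,1) form; the abstract does not specify which ICC variant was used.

```python
# Illustrative sketch only: the data layout (one onset time per rater per
# trial), the agreement tolerance, and the ICC(2,1) form are assumptions
# made for this example, not the authors' published method.
import numpy as np

def percent_agreement(a, b, tol_ms=0.0):
    """Percentage of trials on which two sets of onset times agree (within tol_ms)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.mean(np.abs(a - b) <= tol_ms) * 100.0

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_trials, k_raters) array of onset times.
    """
    x = np.asarray(ratings, float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)  # per-trial means
    col_means = x.mean(axis=0)  # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # trials mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # raters mean square
    sse = np.sum((x - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                       # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: onset times (ms) for 5 trials from two raters.
rater_a = [120.0, 95.0, 140.0, 88.0, 132.0]
rater_b = [122.0, 95.0, 141.0, 88.0, 130.0]
print(percent_agreement(rater_a, rater_b))           # exact-match agreement, %
print(np.corrcoef(rater_a, rater_b)[0, 1])           # Pearson product-moment r
print(icc_2_1(np.column_stack([rater_a, rater_b])))  # ICC(2,1)
```

Under this scheme, a CAS rating the same trial twice yields identical onset times by construction, giving 100% within-rater agreement and r = 1.0, which is consistent with the perfect CAS test-retest results reported above.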