Abstract
Various school systems are developing proficiency tests that are conceptualized as representing a variety of skills, with one or more items per skill. This paper discusses how certain recent technical advances might be extended to the analysis of such tests. In contrast to previous analyses, errors at the item level are included, and it is shown that including these errors implies that a substantially longer test may be needed. One approach to this problem is described, and directions for future research are suggested.