Quality Management of Student-Student Evaluations

Abstract
Classes delivered via the World Wide Web (WWW) can draw on large amounts of hypermedia, and they can be designed to present course work in small, orderly steps. Many learning theorists hypothesize that it is important to provide timely feedback, which acts as a reinforcer when answers are good and as a corrective measure when answers are inadequate. However, it may not be practical for an instructor to give timely feedback on every submission by every student in a class. One possible solution is to combine peer-to-peer evaluations with timely computer-generated reports to help the instructor manage such a course. The peer evaluations may replace some or all of the traditional “grading” done by the professor. This process can contribute to higher developmental levels of understanding, and students' collaborative work skills may be honed by the requirements of the course. We hypothesize that instructors can adopt graphical methods of data presentation and quality improvement to monitor the peer evaluation process in a timely and adequate fashion. Three such methods were applied to a class at Washington State University. Pseudo R-charts were used to flag exercise submissions whose peer comment scores varied widely. Pseudo X-bar charts helped identify exercise answers with unusually low average comment scores. Finally, relative frequency histograms were used to compare the frequency of questions asked with the frequency of questions answered, categorized using Bloom's taxonomy. These tools were used during the class and provided valuable input to the instructor.
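The pseudo X-bar and R charts described above might be computed along the following lines. This is a minimal sketch, not the paper's actual procedure: the submission names and scores are hypothetical, each submission's set of peer scores is treated as one subgroup, and the standard Shewhart chart factors for a subgroup size of four are assumed.

```python
# Hypothetical peer comment scores (1-5 scale); each submission received
# four peer evaluations, so each forms a subgroup of size n = 4.
scores = {
    "submission_1": [4, 5, 4, 5],
    "submission_2": [2, 2, 2, 2],  # unusually low average -> X-bar signal
    "submission_3": [1, 5, 1, 5],  # widely varying comments -> R signal
    "submission_4": [4, 4, 5, 4],
}

# Standard Shewhart control-chart factors for subgroup size n = 4.
A2, D3, D4 = 0.729, 0.0, 2.282

# Subgroup statistics: mean (X-bar chart) and range (R chart) per submission.
means = {k: sum(v) / len(v) for k, v in scores.items()}
ranges = {k: max(v) - min(v) for k, v in scores.items()}

# Grand averages give the center lines of the two charts.
xbar_bar = sum(means.values()) / len(means)
r_bar = sum(ranges.values()) / len(ranges)

# Control limits derived from the chart factors.
xbar_ucl, xbar_lcl = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar
r_ucl, r_lcl = D4 * r_bar, D3 * r_bar

# Signals for the instructor: low average scores, widely varying scores.
low_mean = [k for k, m in means.items() if m < xbar_lcl]
wide_range = [k for k, r in ranges.items() if r > r_ucl]
print("unusually low average comment scores:", low_mean)
print("widely varying comment scores:", wide_range)
```

In this sketch, a point below the X-bar chart's lower control limit flags an exercise answer that peers scored unusually low, while a point above the R chart's upper control limit flags a submission on which peer opinions diverged sharply, the two situations the abstract says the charts were used to track.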
