Comparing Methods of Learning Clinical Prediction from Case Simulations

Abstract
Feedback to physicians about how they use information in making judgments can improve the quality of their judgments, but questions remain about which types of feedback are most effective. The authors conducted a controlled study of feedback in 60 medical students learning to predict the risk of cardiovascular death based on the presence or absence of five risk factors. After a pretest of 40 cases abstracted from patient records, the students worked through 173 computer-simulated cases and a posttest of 40 patient cases. The students received no feedback, probability feedback (the correct probability of cardiac death for each case), cognitive feedback (the correct cue weights compared with their own weights derived from the previous set of cases), or both types of feedback. Students who received probability feedback markedly improved in both base-rate calibration and discrimination. Those who received only cognitive feedback showed no improvement over controls on any of the measures of learning. All subjects were highly consistent in their weightings. The superiority of probability feedback contrasts with previous findings that cognitive feedback is essential for mastery of multiple-cue probability learning tasks. The information on cue-outcome relationships given by cognitive feedback may be more useful when those relationships are complex and the combining rule is not known, whereas the precise outcome information provided by probability feedback is more useful when the combining rule is known and the cue-outcome relationships are straightforward. Thus, the optimal method of learning depends on the nature of the task.

Key words: risk factors; computer simulations; heart disease. (Med Decis Making 1992;12:213-221)
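To make the two feedback types concrete, the following minimal sketch illustrates how they might be computed for one block of simulated cases. It is not the authors' software: the cue values, case counts, "true" weights, and the simple least-squares weighting scheme are all assumptions for illustration. Cognitive feedback here is the student's derived cue weights shown next to the correct weights; probability feedback is simply the correct probability revealed for each case.

```python
# Illustrative sketch only -- assumed data and a least-squares weighting scheme,
# not the study's actual cases or analysis.
import numpy as np

rng = np.random.default_rng(0)

n_cases, n_cues = 30, 5                       # one feedback block of simulated cases
cues = rng.integers(0, 2, size=(n_cases, n_cues)).astype(float)  # 5 risk factors, present/absent

# Hypothetical linear model relating the cues to risk of cardiac death.
true_weights = np.array([0.10, 0.08, 0.15, 0.05, 0.12])
base_rate = 0.05
correct_prob = np.clip(base_rate + cues @ true_weights, 0.0, 1.0)

# A student's probability judgments for the same cases (simulated with noise and a bias).
student_prob = np.clip(correct_prob + rng.normal(0, 0.10, n_cases) + 0.05, 0.0, 1.0)

def cue_weights(x, judgments):
    """Least-squares weights (with intercept) of the judgments on the cues."""
    design = np.column_stack([np.ones(len(x)), x])
    coef, *_ = np.linalg.lstsq(design, judgments, rcond=None)
    return coef[1:]                           # drop the intercept; one weight per cue

# Cognitive feedback: the student's derived weights alongside the correct ones.
print("student weights:", np.round(cue_weights(cues, student_prob), 3))
print("correct weights:", np.round(cue_weights(cues, correct_prob), 3))

# Probability feedback: reveal the correct probability for each judged case.
for i in range(3):                            # first few cases only
    print(f"case {i}: judged {student_prob[i]:.2f}, correct {correct_prob[i]:.2f}")

# Crude calibration-in-the-large check: mean judged vs. mean correct probability.
print("mean judged:", round(float(student_prob.mean()), 3),
      "mean correct:", round(float(correct_prob.mean()), 3))
```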