The Feasibility of Sharing Simulation-Based Evaluation Scenarios in Anesthesiology

Abstract
In a prospective Israeli study using validated scenarios from a multi-institutional United States (US) study, we assessed the feasibility of sharing simulation-based evaluation tools internationally despite differences in language, education, and anesthesia practice. Thirty-one Israeli junior anesthesia residents each performed four simulation scenarios. The training sessions were videotaped, and performance was assessed by two independent raters using two validated scoring systems (Long and Short Forms). Subjects scored 37 to 95 (mean ± SD, 70 ± 12) of 108 possible points on the Long Form; Short Form scores ranged from 18 to 35 (28.2 ± 4.5) of 40 possible points. Scores above 70% of the maximum were achieved by 61% of participants, compared with only 5% in the original US study. Eighty percent of participants rated the scenarios as very realistic (grade 4 on a 1–4 scale). Reliability of the original assessment tools was demonstrated by internal consistencies (Cronbach α) of 0.66 for the Long Form and 0.75 for the Short Form; the corresponding values in the original study were 0.72–0.76 and 0.71–0.75. Reliability did not change when a revised Israeli version of the scoring was used. Interrater reliability, measured by Pearson correlation, was 0.91 for the Long Form and 0.96 for the Short Form (P < 0.01). The high realism ratings given to the scenarios and the similar reliability of the original assessment tools support the feasibility of using simulation-based evaluation tools developed in the US in Israel. The higher scores achieved by the Israeli residents may reflect the fact that most Israeli residents are immigrants with previous training in anesthesia.
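
For reference, the reliability statistics cited above follow their standard textbook definitions (the formulas below are not reproduced from the study, and the item counts and variances of the Long and Short Forms are not given in this abstract):

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_t^2}\right),
\qquad
r = \frac{\sum_{j=1}^{n}(x_j - \bar{x})(y_j - \bar{y})}{\sqrt{\sum_{j=1}^{n}(x_j - \bar{x})^2 \sum_{j=1}^{n}(y_j - \bar{y})^2}},

where k is the number of scoring items on a form, \sigma_i^2 and \sigma_t^2 are the variances of item i and of the total score, and x_j, y_j are the two raters' scores for resident j.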