Evidence based practice in postgraduate healthcare education: A systematic review
Open Access
- 26 July 2007
- journal article
- review article
- Published by Springer Science and Business Media LLC in BMC Health Services Research
- Vol. 7 (1), 119
- https://doi.org/10.1186/1472-6963-7-119
Abstract
Background: Training in Evidence-Based Practice (EBP) has been widely implemented throughout medical school and residency curricula. The aim of this study was to systematically review studies assessing the effectiveness of EBP teaching in improving the knowledge, skills, attitudes and behavior of postgraduate healthcare workers, and to describe the instruments available to evaluate EBP teaching.

Methods: The design is a systematic review of randomized, non-randomized, and before-after studies. The data sources were MEDLINE, the Cochrane Library, EMBASE, CINAHL and ERIC, searched from 1966 to 2006. Main outcomes were knowledge, skills, attitudes and behavior towards EBP. Standardized effect sizes (E-S) were calculated and categorized as small (E-S < 0.2), small to moderate (E-S 0.2 to 0.5), moderate to large (E-S 0.51 to 0.79), or large (E-S > 0.79). The reliability and validity of the instruments used to evaluate education were assessed. Excluded were studies that were not original, were performed in medical students, focused on prescribing practices or specific health problems, were theoretical reviews of individual EBP components, addressed continuing medical education, or tested the effectiveness of implementing guidelines.

Results: Twenty-four studies met the inclusion criteria. E-S could be calculated for 15 outcomes within 10 studies; the E-S ranged from 0.27 (95% CI: -0.05 to 0.59) to 1.32 (95% CI: 1.11 to 1.53). Studies assessing skills, behavior and/or attitudes had a "small to moderate" E-S. Only 1 of the 2 studies assessing knowledge had an E-S of 0.57 (95% CI: 0.32 to 0.82), and 2 of the 4 studies that assessed total test score outcomes had a "large" E-S. Twenty-two instruments were used, but only 10 had 2 or more types of validity or reliability evidence.

Conclusion: Small improvements in knowledge, skills, attitudes or behavior are noted when measured alone. A large improvement in skills and knowledge in EBP is noted when they are measured together as a total test score. Very few studies used validated tests.
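The effect-size categories defined in the Methods can be expressed as a simple lookup. This is an illustrative sketch, not part of the review; the function name and the use of the absolute value are assumptions, while the thresholds are taken verbatim from the abstract.

```python
def categorize_effect_size(es: float) -> str:
    """Map a standardized effect size (E-S) to the category labels
    used in the review: small (< 0.2), small to moderate (0.2-0.5),
    moderate to large (0.51-0.79), large (> 0.79)."""
    es = abs(es)  # assumption: magnitude is what is categorized
    if es < 0.2:
        return "small"
    elif es <= 0.5:
        return "small to moderate"
    elif es <= 0.79:
        return "moderate to large"
    else:
        return "large"
```

For example, the smallest reported E-S of 0.27 falls in the "small to moderate" band, while the largest, 1.32, is "large".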