Getting to the Bottom Line: A Method for Synthesizing Findings Within Mixed-method Program Evaluations

Abstract
Evaluators who are concerned more with pragmatics than with competing epistemologies have brought multi- and mixed-method evaluations into common practice. Program evaluators commonly use multiple methods and mixed data to capture both the breadth and depth of information pertaining to the evaluand, and to strengthen the validity of findings. However, multiple or mixed methods may yield incongruent results, and evaluators may find themselves reporting seemingly conflicting findings to program staff, policy makers, and other stakeholders. Our purpose is to offer a method for synthesizing findings within multi- or mixed-method evaluations to reach defensible (primarily summative) evaluation conclusions. The proposed method uses a set of criteria and analytic techniques to assess the worth of each data source or type and to establish what each says about program effect. Once findings are expressed on a common scale, simple arithmetic allows synthesis across data sources or types. The method should prove a useful tool for evaluators.
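
The abstract describes rating each data source's worth and its indicated program effect on a common scale, then combining the ratings arithmetically. The article's specific criteria, scale, and formula are not given here, so the following is only an illustrative sketch of one such synthesis (a worth-weighted average); all weights, the scale, and the example data are hypothetical, not taken from the article.

```python
def synthesize(findings):
    """Worth-weighted average of per-source effect ratings.

    findings: list of (worth_weight, effect_rating) pairs, where
    effect_rating is on a common scale, e.g. -1 = clearly harmful,
    0 = no detectable effect, +1 = clearly effective.
    """
    total_weight = sum(w for w, _ in findings)
    if total_weight == 0:
        raise ValueError("all sources have zero weight")
    return sum(w * e for w, e in findings) / total_weight

# Hypothetical mixed-method findings: (worth weight, effect rating)
findings = [
    (3, 0.8),   # outcome survey: high worth, strong positive effect
    (2, 0.2),   # interviews: moderate worth, weak positive effect
    (1, -0.5),  # observations: low worth, slight negative effect
]
overall = synthesize(findings)  # weighted average across sources
print(round(overall, 2))
```

Here the incongruent qualitative and quantitative findings are reconciled into a single bottom-line rating, with higher-worth sources contributing more to the conclusion.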
