Abstract
Including all relevant material (good, bad, and indifferent) in meta-analysis admits the subjective judgments that meta-analysis was designed to avoid. Several problems arise in meta-analysis: regressions are often non-linear; effects are often multivariate rather than univariate; coverage can be restricted; bad studies may be included; the data summarised may not be homogeneous; grouping different causal factors may lead to meaningless estimates of effects; and the theory-directed approach may obscure discrepancies. Meta-analysis may not be the one best method for studying the diversity of fields for which it has been used.

Why do we undertake systematic reviews of a given field? The most important reason is perhaps that we are concerned about a particular theory and wish to know how the evidence for and against it stacks up. There are also practical reasons: single studies often use small numbers of subjects, and basing our estimates of effect sizes on large numbers of studies drastically narrows the fiducial limits around those estimates (a point illustrated below).

Systematic reviews can be of several different kinds: traditional reviews, often not very systematic and frequently biased; meta-analyses, including (we hope) all relevant material, good, bad, and indifferent, and leading to an estimate of effect size[1-3]; best-evidence synthesis[4]; and the hypothetico-deductive approach,[5] in which the effort is directed at evaluating the evidence for and against a given theory, in an attempt to solve the problem of why contradictory results appear rather than simply averaging often incompatible data.

Critics may object to my statement that meta-analysis involves material good, bad, and indifferent, but consider the study by Smith et al (discussed in more detail later), which numbered among its authors the originator of the term.[6] The authors complained about the subjectivity that had led previous reviewers of studies assessing the effects of psychotherapy …
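The practical argument about fiducial limits can be made concrete with a worked formula. The sketch below assumes the conventional fixed-effect, inverse-variance pooling model; this is an illustrative assumption, not necessarily the method used in any of the studies cited, and it is included only to show why combining many studies narrows the limits around a pooled effect size.

\[
\hat{\theta} \;=\; \frac{\sum_{i=1}^{k} w_i\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i \;=\; \frac{1}{\mathrm{SE}_i^{2}},
\qquad
\mathrm{SE}(\hat{\theta}) \;=\; \frac{1}{\sqrt{\sum_{i=1}^{k} w_i}},
\]

so the 95% fiducial (confidence) limits are \(\hat{\theta} \pm 1.96\,\mathrm{SE}(\hat{\theta})\). If the \(k\) pooled studies have roughly comparable precision, \(\mathrm{SE}(\hat{\theta}) \approx \mathrm{SE}/\sqrt{k}\); under this assumption, quadrupling the number of comparable studies roughly halves the width of the limits, which is the practical attraction of pooling noted above.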