Sometimes when I give talks, I like to use this catchphrase: For almost all of human history, our major problem was a lack of information, but over the past half century the problem has become an overabundance of information. Not only can you access multiple opinions on any topic, but the scientific evidence can seem to be all over the place. For any social or health problem humans experience, there are typically hundreds of studies. Making things worse, the studies can seem to contradict one another.
So people have come up with a solution. Rather than simply summarizing research findings in narrative form (remember what you used to do for a high school or college term paper?), researchers conduct systematic evidence-based reviews, in which they use sophisticated methods to bring together and evaluate the dozens, hundreds, or even thousands of articles on a topic. (There’s a good summary of the methods for systematic reviews on the Bandolier site.)
When thinking about evidence-based reviews, you have to decide whether you agree with one basic proposition. This proposition holds that the findings of sound scientific research studies provide more credible evidence about solving human problems than personal opinion, anecdotes, or “gut feelings.” Not everyone believes this (or at least not all the time). For those who do agree (as we do at Evidence-Based Living), what is required is a systematic review of the research evidence, leading to guidelines for program development and for use in the actual programs we offer the audiences we serve.
The authors of a systematic review will tell you exactly what methods they used to identify articles for the review, how the articles were critically assessed, and how the results were synthesized across studies. Then the systematic review itself is peer-reviewed at a scientific journal, providing even more scrutiny of its findings. In some cases, the authors will use highly technical mathematical methods to synthesize the findings of studies, producing what is called a meta-analysis.
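To give a flavor of what those mathematical methods do, here is a minimal sketch of one common meta-analytic technique: fixed-effect, inverse-variance pooling, where each study’s effect estimate is weighted by how precise it is. The numbers below are purely hypothetical, made up for illustration; real meta-analyses involve many more steps (heterogeneity checks, random-effects models, publication-bias diagnostics).

```python
def pooled_effect(studies):
    """Combine study results by inverse-variance weighting.

    studies: list of (effect_estimate, standard_error) tuples.
    Studies with smaller standard errors (usually larger samples)
    get proportionally more weight in the pooled estimate.
    """
    weights = [1.0 / se ** 2 for _, se in studies]
    total = sum(weights)
    pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / total
    pooled_se = (1.0 / total) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies: (effect size, standard error).
studies = [(0.30, 0.10), (0.10, 0.20), (0.25, 0.15)]
effect, se = pooled_effect(studies)
print(f"pooled effect: {effect:.3f}, standard error: {se:.3f}")
```

Note how the pooled standard error is smaller than that of any single study: combining studies yields a more precise overall estimate than any one of them alone, which is exactly the payoff of a meta-analysis.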
A systematic review has many benefits over the kind of review that simply summarizes a bunch of studies. You’ve seen this kind of review, which usually runs something like: “Smith and Wesson found this, but see Abbott and Costello for a different result” or “Although most research shows this finding, there are a number of studies that fail to support it.” The systematic review looks at why the studies differ, and can exclude those studies that have inadequate samples or methods. And by looking at many studies, it allows us to make general conclusions, even though participants and study settings might be different.
Let’s take one example of how a systematic review is different from other reviews. If you read statements by groups advocating one perspective, they usually cite just the research articles that support their position. The hallmark of a systematic review, on the other hand, is a search for all articles on a topic. They go to great lengths to find every study done, so all can be evaluated. For this reason, systematic reviews are usually done by teams, since it’s rare that an individual has the time to find all the available research. By looking at all studies, a systematic review can come to conclusions like: “All studies using method X found this result, but studies using method Y did not.”
Systematic reviews can be disappointing, because they often conclude that the existing research isn’t sufficient to support a firm answer. But that in itself can be useful, especially if there’s one published study that has gotten a lot of attention in the media but isn’t supported by other research.
The best library of systematic reviews has been described in a previous post: The Cochrane Collaboration. But there are plenty of systematic reviews published each year from other sources. An example is in our post on antidepressants. If you want the most definitive evidence available as to whether a program or practice works, look to systematic reviews.