Evidence-based programming: What does it actually mean?

Anyone who loves detective novels (like I do) winds up being fascinated by evidence. I remember discovering the Sherlock Holmes stories as a teenager, reading how the great detective systematically used evidence to solve perplexing crimes. Holmes followed the scientific method: gathering a large body of evidence, deducing several possible explanations, and then finding the one that best fit the facts of the case. As everyone knows, often the “evidence-based” solution was very different from what common sense told the other people involved in the case.

In our efforts to solve human problems, we also search for evidence, but the solutions rarely turn up in such neat packages. Whether it’s a solution to teen pregnancy, drug abuse, family violence, poor school performance, wasteful use of energy, or a host of other problems – we wish we had a Sherlock Holmes around to definitively tell us which solution really works.

Over the past decade, efforts have grown to systematically take the evidence into consideration when developing programs to help people overcome life’s challenges. But what does “evidence-based” really mean?

Take a look at these three options: Which one fits the criteria for an evidence-based program?

1. A person carefully reviews the literature on a social problem. Based on high-quality research, she designs a program that follows the recommendations and ideas of researchers.

2. A person creates a program to address a problem. He conducts an evaluation of the program in which participants rate their experiences in the program and their satisfaction with it, both of which are highly positive.

3. An agency creates a program to help its clients. Agency staff run the program and collect pretest and post-test data on participants and a small control group. The group who did the program had better outcomes than the control group.

If you answered “None of the above,” you are correct. Number 3 is closest, but still doesn’t quite make it. Although many people don’t realize it, the term “evidence-based program” has a very clear and specific meaning.

To be called “evidence-based,” the following things must happen:

1. The program is evaluated using an experimental design. In such a design, people are randomly assigned to either a treatment group (these folks get the program) or a control group (these folks don’t). When the program is done, both groups are compared. This design helps us be more certain that the results came from the program, and not some other factor (e.g., certain types of people decided to do the program, thus biasing the results). Sometimes this true experimental design isn’t possible, and a “quasi-experimental” design is used (more on that in a later post). Importantly, the program results should be replicated in more than one study. (A toy sketch of this treatment-versus-control comparison appears after this list.)

2. The evaluation studies are submitted to peer review by other scientists, and often are published in peer-reviewed journals. After multiple evaluations, the program is often submitted to a federal agency or another scientific organization that endorses the program as evidence-based.

3. The program is presented in a manual so that it can be implemented locally, as close as possible to the way the program was designed. This kind of “treatment fidelity” is very important to achieve the demonstrated results of the program.
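For readers who like to see the logic in concrete form, here is a minimal Python sketch of the comparison described in point 1: random assignment, then a treatment-versus-control comparison of outcomes. Everything in it is invented for illustration; the sample size, the 0-to-100 outcome scale, and the five-point “program effect” are all hypothetical, and a real evaluation would add significance testing and much more.

```python
import random
import statistics

# A toy sketch (all numbers invented): 200 people are randomly
# assigned to a treatment group (gets the program) or a control
# group (doesn't); afterward we compare the groups' average outcomes.
random.seed(42)  # make the example reproducible

people = list(range(200))
random.shuffle(people)                         # the random assignment step
treatment, control = people[:100], people[100:]

def outcome(got_program):
    """Simulated post-program score on a 0-100 scale."""
    score = random.gauss(60, 10)               # natural person-to-person variation
    return score + (5 if got_program else 0)   # assume a 5-point true effect

treatment_scores = [outcome(True) for _ in treatment]
control_scores = [outcome(False) for _ in control]

diff = statistics.mean(treatment_scores) - statistics.mean(control_scores)
print(f"Treatment mean: {statistics.mean(treatment_scores):.1f}")
print(f"Control mean:   {statistics.mean(control_scores):.1f}")
print(f"Estimated program effect: {diff:.1f} points")
```

The key idea the sketch illustrates: because assignment was random, the two groups should be alike on average in every other respect, so a persistent difference in outcomes can credibly be attributed to the program itself rather than to who chose to participate.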

As you might already be thinking, a lot of issues come up when you consider implementing an evidence-based program. On the one hand, they have one enormous advantage: The odds are that they will work. That is, you can be reasonably confident that if implemented correctly, the program will achieve the results it says it will. A big problem, on the other hand, is that a program must meet local needs, and an evidence-based program may not be available on the topic you are interested in.

We’ll come back to these issues in later posts. In the meantime, I recommend this good summary prepared by extension staff at the University of Wisconsin. In addition, I’d suggest viewing this presentation by Jutta Dotterweich from Cornell’s Family Life Development Center, on “Planning for Evidence-Based Programs.” And check out our web links for some sites that register and describe evidence-based programs.

Evidence-based systematic reviews: As close to certainty as it gets

Sometimes when I give talks, I like to use this catchphrase: For almost all of human history, our major problem was a lack of information, but over the past half century the problem has become an overabundance of information. Not only can you access multiple opinions on any topic, but the scientific evidence can seem to be all over the place. For any social or health problem humans experience, there are typically hundreds of studies. Making things worse, the studies can seem to contradict one another.

So people have come up with a solution. Rather than simply summarizing research findings in narrative form (remember what you used to do for a high school or college term paper?), researchers conduct systematic evidence-based reviews, in which they use sophisticated methods to bring together and evaluate the dozens, hundreds, or even thousands of articles on a topic. (There’s a good summary of the methods for systematic reviews on the Bandolier site.)

When thinking about evidence-based reviews, you have to decide whether you agree with one basic proposition: that the findings of sound scientific research studies provide more credible evidence about solving human problems than personal opinion, anecdotes, or “gut feelings.” Not everyone believes this (or at least not all the time). For those who do agree (as we do at Evidence-Based Living), the logical next step is a systematic review of the research evidence, leading to guidelines for program development and to their use in actual programs with the audiences we serve.

The authors of a systematic review will tell you exactly what methods they used to identify articles for the review, how the articles were critically assessed, and how the results were synthesized across studies. Then the systematic review itself is peer-reviewed at a scientific journal, providing even more scrutiny of its findings. In some cases, the authors will use highly technical mathematical methods to synthesize the findings of studies, producing what is called a meta-analysis.
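For the curious, the arithmetic at the heart of the simplest kind of meta-analysis is less mysterious than it sounds. Here is a minimal Python sketch, with entirely invented numbers, of fixed-effect inverse-variance pooling; real meta-analyses go much further (random-effects models to handle differences among studies, checks for publication bias, and so on).

```python
import math

# Toy illustration (numbers invented) of the core arithmetic behind
# the simplest, fixed-effect meta-analysis: each study's effect
# estimate is weighted by the inverse of its variance, so larger and
# more precise studies count for more in the pooled result.
studies = [
    # (effect estimate, standard error) -- hypothetical values
    (0.30, 0.15),
    (0.45, 0.20),
    (0.10, 0.10),
]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
print(f"Approximate 95% CI: [{low:.2f}, {high:.2f}]")
```

The intuition behind the design: a study’s standard error measures its uncertainty, so weighting each estimate by 1/SE² lets large, precise studies dominate the pooled result while small, noisy ones contribute less.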

A systematic review has many benefits over the kind of review that simply summarizes a bunch of studies. You’ve seen this kind of review, which usually runs something like: “Smith and Wesson reported one finding, but see Abbott and Costello for a different one” or “Although most research shows this finding, a number of studies fail to support it.” The systematic review looks at why the studies differ, and can exclude those studies that have inadequate samples or methods. And by looking at many studies, it allows us to draw general conclusions, even though participants and study settings might be different.

Let’s take one example of how a systematic review is different from other reviews. If you read statements by groups advocating one perspective, they usually cite just the research articles that support their position. The hallmark of a systematic review, on the other hand, is a search for all articles on a topic. They go to great lengths to find every study done, so all can be evaluated. For this reason, systematic reviews are usually done by teams, since it’s rare that an individual has the time to find all the available research. By looking at all studies, a systematic review can come to conclusions like: “All studies using method X found this result, but studies using method Y did not.”

Systematic reviews can be disappointing, because they often conclude that the existing research isn’t sufficient to answer the question. But that in itself can be useful, especially if there’s one published study that has gotten a lot of attention in the media but isn’t supported by other research.

The best library of systematic reviews has been described in a previous post: the Cochrane Collaboration. But there are plenty of systematic reviews published each year from other sources. An example is in our post on antidepressants. If you want the most definitive evidence available as to whether a program or practice works, look to systematic reviews.

Food Revolution or Evidence-Based Solutions?

I tuned in to Jamie Oliver’s Food Revolution the other night. I’m not a lover of reality shows, but, in this case, my curiosity got the best of me. For those of you who haven’t watched TV in the past few months, Food Revolution is a show that documents the antics of celebrity chef Jamie Oliver as he rides into the “fattest city in the US” and turns the population (especially the school kids) into healthy eaters. All this in slick, sensationalistic, sixty-minute segments!

As we all know, childhood obesity is taking a terrible toll on our kids. There’s no doubt that a crisis of this magnitude requires us to enact policy changes and programs aimed at addressing the problem. But do programs like Oliver’s Food Revolution really work? How do educators, concerned citizens, and policy makers know which programs will give us the best return on our investment?

John Cawley, a professor in the College of Human Ecology’s Policy Analysis and Management department, has recently published a study that addresses this question. Cawley, an economist, examined recent studies of several programs to reduce obesity, and found that CATCH (Coordinated Approach to Child Health), a multistate program that teaches elementary schoolchildren how to eat well and exercise regularly, was the most cost-effective. The study also found that many other popular programs were less effective and much more costly than CATCH. Cawley’s study can be found here.

Cawley, who has served on the Institute of Medicine’s committee to prevent childhood obesity, says “It’s a bit of a Wild West, anything-goes environment when it comes to creating anti-obesity programs and policies. With limited resources, it would be counterproductive to rush into programs that are not cost-effective and won’t provide the greatest return on investment.”

So, what does any of this have to do with Oliver’s Food Revolution? It suggests that policy makers need to look beyond the glitz when they consider which programs to invest in. It’s important to investigate which programs are “evidence-based” and which are merely entertainment. Food Revolution has not been rigorously evaluated, and a preliminary study by the West Virginia University Health Research Center suggests it had few positive impacts along with a negative impact on school meal participation and milk consumption.

In the end, as with most persistent societal challenges, the obesity epidemic is a complex problem best addressed by concerned citizens and policy makers who are committed to finding the best evidence-based solutions. And, unfortunately, it’ll probably take us longer than the sixty-minute segments of a reality TV show to fix the problem.
