Evidence-based programming: What does it actually mean?

Anyone who loves detective novels (like I do) winds up being fascinated by evidence. I remember discovering the Sherlock Holmes stories as a teenager, reading how the great detective systematically used evidence to solve perplexing crimes. Holmes followed the scientific method, gathering a large amount of evidence, deducing several possible explanations, and then finding the one that best fit the facts of the case. As everyone knows, the “evidence-based” solution was often very different from what common sense told the other people involved in the case.

In our efforts to solve human problems, we also search for evidence, but the solutions rarely turn up in such neat packages. Whether it’s a solution to teen pregnancy, drug abuse, family violence, poor school performance, wasteful use of energy, or a host of other problems – we wish we had a Sherlock Holmes around to definitively tell us which solution really works.

Over the past decade, efforts have grown to systematically take the evidence into consideration when developing programs to help people overcome life’s challenges. But what does “evidence-based” really mean?

Take a look at these three options: Which one fits the criteria for an evidence-based program?

1. A person carefully reviews the literature on a social problem. Based on high-quality research, she designs a program that follows the recommendations and ideas of researchers.

2. A person creates a program to address a problem. He conducts an evaluation in which participants rate their experiences with the program and their satisfaction with it; both ratings are highly positive.

3. An agency creates a program to help its clients. Agency staff run the program and collect pretest and post-test data on participants and a small control group. The group who did the program had better outcomes than the control group.

If you answered “None of the above,” you are correct. Number 3 is closest, but still doesn’t quite make it. Although many people don’t realize it, the term “evidence-based program” has a very clear and specific meaning.

To be called “evidence-based,” the following things must happen:

1. The program is evaluated using an experimental design. In such a design, people are randomly assigned either to a treatment group (these folks get the program) or to a control group (these folks don’t). When the program is done, the two groups are compared. This design helps us be more certain that the results came from the program, and not from some other factor (e.g., certain types of people deciding to do the program, thus biasing the results). Sometimes this true experimental design isn’t possible, and a “quasi-experimental” design is used (more on that in a later post). Importantly, the program’s results should be replicated in more than one study. (For a concrete illustration of the treatment-versus-control logic, see the sketch after this list.)

2. The evaluation studies are submitted to peer review by other scientists, and often are published in peer-reviewed journals. After multiple evaluations, the program is often submitted to a federal agency or another scientific organization that endorses the program as evidence-based.

3. The program is presented in a manual so that it can be implemented locally, as close as possible to the way the program was designed. This kind of “treatment fidelity” is very important to achieve the demonstrated results of the program.
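Here is a minimal sketch, in Python, of the treatment-versus-control logic in point 1. Everything in it – the sample size, the “motivation” trait, the 5-point program effect – is invented purely for illustration, not data from any real evaluation.

```python
import random
import statistics

random.seed(42)  # fixed seed so this invented illustration is reproducible

# 200 hypothetical applicants; each has a pre-existing trait (say, motivation)
# that affects outcomes regardless of the program.
applicants = [random.gauss(50, 10) for _ in range(200)]

# Random assignment: shuffle, then split in half. Because assignment ignores
# motivation, the two groups end up balanced on it, on average.
random.shuffle(applicants)
treatment, control = applicants[:100], applicants[100:]

# Invented outcomes: everyone scores near their motivation level, and we
# pretend the program adds about 5 points.
treatment_scores = [m + 5 + random.gauss(0, 5) for m in treatment]
control_scores = [m + random.gauss(0, 5) for m in control]

effect = statistics.mean(treatment_scores) - statistics.mean(control_scores)
print(f"Estimated program effect: {effect:.1f} points (the true effect was 5)")
```

Because assignment ignores everything about the applicants, pre-existing differences wash out across the two groups on average – which is exactly what lets us attribute the remaining difference in outcomes to the program itself.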

As you might already be thinking, a lot of issues come up when you consider implementing an evidence-based program. On the one hand, they have one enormous advantage: The odds are that they will work. That is, you can be reasonably confident that if implemented correctly, the program will achieve the results it says it will. A big problem, on the other hand, is that a program must meet local needs, and an evidence-based program may not be available on the topic you are interested in.

We’ll come back to these issues in later posts. In the meantime, I recommend this good summary prepared by extension staff at the University of Wisconsin. In addition, I’d suggest viewing this presentation by Jutta Dotterweich from Cornell’s Family Life Development Center, “Planning for Evidence-Based Programs.” And check out our web links for some sites that register and describe evidence-based programs.

Evidence-based systematic reviews: As close to certainty as it gets

Sometimes when I give talks, I like to use this catchphrase: For almost all of human history, our major problem was a lack of information, but over the past half century the problem has become an overabundance of information. Not only can you access multiple opinions on any topic, but the scientific evidence can seem to be all over the place. For any social or health problem humans experience, there are typically hundreds of studies. Making things worse, the studies can seem to contradict one another.

So people have come up with a solution. Rather than simply summarizing research findings in narrative form (remember what you used to do for a high school or college term paper?), researchers conduct systematic evidence-based reviews, in which they use sophisticated methods to bring together and evaluate the dozens, hundreds, or even thousands of articles on a topic. (There’s a good summary of the methods for systematic reviews on the Bandolier site.)

When thinking about evidence-based reviews, you have to decide whether you agree with one basic proposition: that the findings of sound scientific research studies provide more credible evidence about solving human problems than personal opinion, anecdotes, or “gut feelings.” Not everyone believes this (or at least not all the time). For those who do agree (as we do at Evidence-Based Living), what is required is a systematic review of the research evidence, leading to guidelines for program development and their use in actual programs with the audiences we serve.

The authors of a systematic review will tell you exactly what methods they used to identify articles for the review, how the articles were critically assessed, and how the results were synthesized across studies. Then the systematic review itself is peer-reviewed at a scientific journal, providing even more scrutiny of its findings. In some cases, the authors will use highly technical mathematical methods to synthesize the findings of studies, producing what is called a meta-analysis.
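To give a flavor of those “highly technical mathematical methods”: the simplest version, a fixed-effect meta-analysis, pools each study’s effect estimate weighted by the inverse of its variance, so more precise studies count more. Here is a minimal sketch in Python; the study names and numbers are invented purely for illustration.

```python
import math

# Hypothetical effect estimates (e.g., standardized mean differences)
# and their variances, from five made-up studies.
studies = [
    ("Study A", 0.30, 0.02),
    ("Study B", 0.45, 0.05),
    ("Study C", 0.10, 0.01),
    ("Study D", 0.55, 0.08),
    ("Study E", 0.25, 0.03),
]

# Inverse-variance weighting: a study's weight is 1 / variance.
weights = [1.0 / var for _, _, var in studies]
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate

print(f"Pooled effect: {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```

Real meta-analyses go well beyond this – random-effects models, heterogeneity tests, publication-bias checks – but inverse-variance pooling is the core idea.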

A systematic review has many benefits over the kind of review that simply summarizes a bunch of studies. You’ve seen this kind of review, which usually runs something like: “Smith and Wesson found this, but see Abbott and Costello for a different finding” or “Although most research shows this finding, a number of studies fail to support it.” The systematic review looks at why the studies differ, and can exclude studies with inadequate samples or methods. And by looking at many studies, it allows us to draw general conclusions, even though participants and study settings might be different.

Let’s take one example of how a systematic review is different from other reviews. If you read statements by groups advocating one perspective, they usually cite just the research articles that support their position. The hallmark of a systematic review, on the other hand, is a search for all articles on a topic. They go to great lengths to find every study done, so all can be evaluated. For this reason, systematic reviews are usually done by teams, since it’s rare that an individual has the time to find all the available research. By looking at all studies, a systematic review can come to conclusions like: “All studies using method X found this result, but studies using method Y did not.”

Systematic reviews can be disappointing, because they often conclude that the existing research isn’t sufficient to support a firm answer. But even that can be useful, especially if a single published study has gotten a lot of media attention but isn’t supported by other research.

The best library of systematic reviews has been described in a previous post: the Cochrane Collaboration. But there are plenty of systematic reviews published each year from other sources. An example is in our post on antidepressants. If you want the most definitive evidence available as to whether a program or practice works, look to systematic reviews.

Evidence-based practice with children and adolescents: A great resource

Let’s say you have a long lunch hour (hey, it’s spring, so why not take, say, 90 minutes?). You could put that time to good use reading an excellent publication on evidence-based practice and what it means for kids. This was published a little over a year ago, but only recently came to my attention. I’m sorry I didn’t see it sooner, because it helps answer a lot of questions about what “evidence-based” really means – whether or not you happen to be interested in children.

It’s the American Psychological Association’s Disseminating Evidence-Based Practice for Children & Adolescents, available here.

The report begins with a wake-up call:

The prevalence of children’s behavioral disorders is well documented, with 10 to 20% of youth (about 15 million children) in the United States meeting diagnostic criteria for a mental health disorder. Many more are at risk for escalating problems with long-term individual, family, community, and societal implications.

It then moves to a nice summary of the varying contexts in which children’s problems arise and the systems for dealing with them. It also uses an inclusive definition of Evidence-Based Practice (EBP), looking at EBP as a way of moving tested practices into real-world settings. However, the authors also emphasize the importance of integrating these approaches with practice expertise. Evidence-based interventions for children and youth are critically important so that practitioners can draw on programs with “track records” – that is, longitudinal data on short-term and long-term outcomes showing that the program reduces problems or symptoms.

The report highlights four “guiding principles” for evidence-based approaches with children and youth:

1. Children and adolescents should receive the best available care based on scientific knowledge and integrated with clinical expertise in the context of patient characteristics, culture, and preferences. Quality care should be provided as consistently as possible with children and their caregivers and families across clinicians and settings.

2. Care systems should demonstrate responsiveness to youth and their families through prevention, early intervention, treatment, and continuity of care.

3. Equal access to effective care should cut across age, gender, sexual orientation, and disability, inclusive of all racial, ethnic, and cultural groups.

4. Effectively implemented EBP requires a contextual base, collaborative foundation, and creative partnership among families, practitioners, and researchers.

All points worth thinking about!

If you don’t have time for the entire report, some interesting sections are: a review of the history of the “evidence-based” concept (for all of us who wonder where this came from all of a sudden), a good discussion of definitions, and a review of what the evidence shows about prevention programs.

Happy reading!

Scientific Fact-Checking is a Click Away: The Amazing Cochrane Collaboration

There’s a famous scene in the film Annie Hall, where Woody Allen is standing in line in a movie theater. Behind him, a pretentious professor is loudly proclaiming his opinions about the famous media thinker Marshall McLuhan. Allen’s character reaches the boiling point and from behind a film poster produces Marshall McLuhan himself, who proclaims to the pompous intellectual: “You know nothing of my work! How you got to teach a course in anything is totally amazing!” Woody tells the camera: “Boy, if life were only like this!”

We all wish that we had an impeccable source of information like that at our fingertips, especially when it comes to research on human health and well-being. Imagine if you were in a debate – at work or with family and friends – about an issue pertaining to health. What if you could pull up a website and say: “I have the definitive scientific opinion right here!”

Actually, you can. It’s called the Cochrane Collaboration. I urge you to make the first of what I am sure will be many visits today. It is the true mother lode for objective scientific evidence on hundreds of issues relevant to mental and physical health and human development. You really can know what science has to say about many issues.

In the Cochrane Collaboration, teams of scientific experts from around the world synthesize the research information and issue reports offering guidance for what both professionals and the general public should do. It’s a non-profit, entirely independent organization, and that lets it provide up-to-date, unbiased information about the effects of health care practices and interventions.

The site is organized so you can, free of charge, get the abstract of any Cochrane review, clearly written in layperson’s language. These abstracts can be used to help answer your clients’ questions and in any situation where it helps to show the scientific consensus on an issue. They even have podcasts of the reviews that you can download.

The number and scope of reviews are mind-boggling, and the Cochrane reviews take a very broad view of health – so you are sure to find ones relevant to your work.

The media are taking notice of the Cochrane Collaboration, in part because these objective reviews can help figure out what our health care system should be paying for — a nice report appeared in Sharon Begley’s Newsweek blog.

So hey – why are you still here and not looking at the reviews? The easiest place to start is on the review page, where you can search for topics or just browse through the reviews.

What the heck is evidence-based extension?

Evidence-based this and evidence-based that: Every field seems to be talking about practices and programs that are evidence-based. The term is popping up everywhere, from medicine, to social work, to education, to physical therapy, to nursing. Where, you may well ask, does Cooperative Extension fit in? In an article appropriately titled “Evidence-Based Extension” in the Journal of Extension, Rachel Dunifon and colleagues explain it for you.

http://www.joe.org/joe/2004april/a2.php

The authors note that an evidence-based approach “entails a thorough scientific review of the research literature, the identification of the most effective interventions or strategies, and a commitment to translating the results into guidelines for practice.” Their point? That “extension can improve its use of research-based practice and also inform and advance the ongoing evidence-based work occurring in the scientific community.”

Got 10 minutes? Brush up on your “research-readiness.”

Everyone knows it’s important to be “ready” to read and understand research reports, and to be able to evaluate research findings for use on the job. But how can we give our understanding of research evidence a quick tune-up?

There’s an easy solution. Cornell Professor Rachel Dunifon (Department of Policy Analysis and Management) and Laura Colosi have prepared a set of “briefs” that take about 10 minutes each to read. They cover critically important basics of using and understanding research (geared to Cooperative Extension personnel but relevant to human service workers in any field), and are useful even to those of us who consider ourselves already “research ready.”

Here’s the site: http://www.parenting.cit.cornell.edu/research_briefs.html

Topics include:

How to Read A Research Article. This brief provides information on how to navigate through academic research articles, and also emphasizes the importance of staying up to date on the research in your chosen field of work.

Resources for Doing Web Research. This brief is designed to provide educators with the tools needed to conduct web-based research effectively. It provides instructions on how to obtain scholarly research via the web, along with links to longer resource guides on assessing the value of online information.

Designing an Effective Questionnaire. This research brief provides some basic ideas on how to best write a questionnaire and capture the information needed to assess program impact.

What’s the Difference? “Post then Pre” and “Pre then Post.” This brief highlights the strengths and weaknesses of two popular evaluation designs, lists possible criteria for choosing between them, and explains the importance of reducing threats to validity when conducting an evaluation.

Measuring Evaluation Results with Microsoft Excel. This brief illustrates one method for calculating mean scores among responses to evaluation instruments, and provides educators with a tutorial on how to perform basic functions using Microsoft Excel.
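If you would rather script the calculation described in that last brief than do it in a spreadsheet, here is a minimal sketch in Python of the same idea: computing the mean score for each question on an evaluation form. The questions and ratings below are invented for illustration.

```python
import statistics

# Invented evaluation data: each row is one participant's 1-5 ratings
# for three questions on a post-program survey.
responses = [
    {"useful": 5, "clear": 4, "recommend": 5},
    {"useful": 4, "clear": 4, "recommend": 4},
    {"useful": 3, "clear": 5, "recommend": 4},
    {"useful": 5, "clear": 3, "recommend": 5},
]

# Mean score per question -- the same kind of summary the Excel brief computes.
for question in responses[0]:
    mean = statistics.mean(r[question] for r in responses)
    print(f"{question}: {mean:.2f}")
```

Either way you compute them, simple mean scores like these are often all you need for a first look at program impact.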

Happy reading – I think you will find these briefs very useful roadmaps in the sometimes confusing task of applying research findings to your work. Are there any other topics you’d like this kind of information on? If so, post a comment!
