When studies collide: Making sense of contradictory research findings

I know blogs are supposed to be current – otherwise, what’s the point of posting entries that get archived after a few weeks? However, every once in a while I come across a resource from a year or two back that is so useful I feel the need to share it. Such is the case with this article from the New York Times Science Times. It shows how a journalist can do a superlative job of helping the public understand the complexities of science.

NYT Science Times published an invaluable special issue in 2008 entitled “Decoding Your Health.” The issue responded to the huge amount of medical information available now to consumers on the web, in the press, and in the doctor’s office. The articles are very helpful in “decoding” all this information, and deciding what is useful and what isn’t.

One particular article, however, really grabbed me: “Searching for Clarity: A Primer on Medical Studies.” I’ve rarely seen such a good job of laying out the kinds of studies we should trust, and how medical evidence accumulates to create guidelines for what people should do.

The article takes an example that could serve as the poster child for the dilemmas consumers face. In the 1990s, everyone was enthusiastic about the idea that the antioxidant beta carotene, which is found in certain fruits and vegetables (such as carrots, squash, apricots, and green peppers), could be good for your health. And this idea was backed up by some animal and observational studies suggesting that beta carotene protected against cancer. Supplement makers had a field day selling beta carotene capsules.

Then it happened: results were published from three large, very well-done clinical trials, in which people were randomly assigned to take beta carotene or a placebo. These findings showed that beta carotene supplementation not only failed to prevent disease, but might even place people at greater risk of cancer.

If you were watching TV back then, you may remember seeing Frankie Avalon in a commercial (for you youngsters, Frankie was a 1950s teen idol with such hits as “Venus” and “De De Dinah”). As the article notes, he sat in front of a big pile of papers that said “beta carotene works,” and a tiny pile representing the three studies showing it doesn’t. The message: Who are you going to believe?

The answer is: the clinical trials. The article lays it out clearly, showing that there are three fundamental principles that make a study more definitive:

  • You have to compare like with like: “the groups you are comparing must be the same except for one factor — the one you are studying.”
  • The bigger the group studied, the more reliable its conclusions. The article makes a very helpful point: scientific studies don’t come up with a single number; instead, they come up with an estimate and a margin of error (for example, a 10 to 20 percent reduction in risk). Larger numbers = greater certainty (a quick numerical sketch follows this list).
  • And the finding should be plausible. There should be some supporting evidence for the finding, so that it doesn’t come out of nowhere.
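
To make that last pair of points concrete, here is a minimal sketch in Python (my own illustration, not something from the Times article; the 15 percent risk figure and the sample sizes are invented) showing how the margin of error around an estimated risk shrinks as a study grows:

```python
# A rough illustration of "larger numbers = greater certainty": the margin
# of error around an estimated proportion shrinks roughly with the square
# root of the sample size.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

observed_risk = 0.15  # hypothetical 15% event rate observed in a study group
for n in (100, 1_000, 10_000):
    moe = margin_of_error(observed_risk, n)
    print(f"n = {n:>6,}: estimate {observed_risk:.0%} +/- {moe:.1%} "
          f"(95% CI {observed_risk - moe:.1%} to {observed_risk + moe:.1%})")
```

With 100 participants the interval runs from roughly 8 to 22 percent; with 10,000 it narrows to about 14 to 16 percent, which is why a handful of large, well-done trials can outweigh a much bigger pile of weaker studies.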

This is a good article to pass along when you are presenting scientific findings that contradict deeply held beliefs. It shows that when it comes to research on health, more studies aren’t necessarily better – it’s having the right kinds of studies.

Behave! Using the science of behavior change

There are some problems we can’t do much about — hurricanes and earthquakes, for example. But a vast number of the things that make life tough — and sometimes miserable — relate to the choices human beings make and the way we behave. For this reason, a whole science of behavior change has grown up, focusing both on theoretical models and on empirical studies of how to change damaging human behaviors, ranging from smoking to crime to overeating to taking excessive risks.

A very helpful new article reviews models to promote positive behavior change that are highly relevant to people designing or implementing interventions. The authors note that getting individuals to make lasting changes in problem behaviors is no easy matter. They synthesize various models of behavior change “to provide a more comprehensive understanding of how educators can promote behavior change among their clientele.”

The authors apply their framework to the issue of financial management. Very interesting reading, available here.

(While you’re at it, take a look at other issues of this free on-line journal, The Forum for Family and Consumer Issues, published by North Carolina State University Extension — many interesting articles related to program development and evaluation.)

Is it okay to tinker? Evidence-based programs and “fidelity”

There is a lot to be said in favor of using evidence-based programs. They have been rigorously tested, and for that reason we can be pretty sure they will have the effects we want them to have. But often an agency or community educator will find an evidence-based program, try it out, and then want to tweak it in one way or another. The program may seem like it doesn’t quite fit your audience, or you might feel like replacing one component with something else, or skipping part of the program.

If you do that, is the program still “evidence-based”? How much can you change a program without making it less effective? The term scientists use is “fidelity” – that is, the faithfulness with which a practitioner implements a program. If you implement a program with fidelity, you deliver it pretty much as it is written, without changing its core components.

What some people do, however, is adaptation – making changes in the program so it fits their clientele or the organization they work in. This isn’t necessarily a bad thing; adaptation may be needed to fit a program into a given time frame, to accommodate people of different cultures or languages, or even just to give educators more “ownership” of the curriculum. If a program is changed too much, however, its effectiveness can be weakened. (The University of Wisconsin Extension has a helpful fact sheet that differentiates between “acceptable” and “risky” adaptations of programs.)

A recent talk given at Cornell addresses these issues in a very interesting and informative way. The wonderful Cornell Human Development Multimedia Website offers a video of Lori Rollen presenting on “Making Informed Adaptations to Evidence-based Sex and HIV Education Programs.”

(While you are there, take a look at the dozens of other videos with speakers discussing their research. It is an amazing site. I take the occasional lunch at my desk and use this site to catch up on what’s new in the world of research on human development.)

Lori helps you think through when adaptation of a program is a good idea and when it isn’t. She uses a clear “green light, yellow light, red light” system to show when it’s okay to adapt, when you should be cautious, and when it’s best to leave the program just as it is. And the programs she reviews on sex and HIV education are interesting in and of themselves.

Have you had any experience in adapting programs? We’re interested to hear from you.

Local Foods: Research and policy reviewed in new resource

I love the Ithaca Farmers Market. It’s a regular Sunday ritual in our household to drive down to the market’s home on the shore of Cayuga Lake, listen to local musicians, have breakfast courtesy of the baked-goods booth, and of course fill our reusable bags with local produce. And we’re not the only ones: the “buy local” movement is growing rapidly across the country, based on the idea that we can reduce energy use and enjoy fresher food by purchasing items grown near our home towns.

For those of you interested in research and evidence-based policy on this topic, I recommend the most recent issue of Choices Magazine, published by the Agricultural and Applied Economics Association. Unlike some other journals, Choices Magazine is available on-line, free of charge. The issue — Local Food—Perceptions, Prospects, and Policies — presents survey data, review articles, and policy analyses about local food from a variety of perspectives.

One question taken up by several authors is: What does “local” mean, exactly? Although “local food” is typically defined along the lines of a “geographic production area that is circumscribed by boundaries and in close proximity to the consumer,” the article by Michael S. Hand and Stephen Martinez shows that consensus stops there.

I found the article by Yuko Onozaka, Gretchen Nurse, and Dawn Thilmany McFadden among the most interesting. They conducted a national survey to better understand the underlying factors that motivate consumers to buy local food, and they also looked at how these motivations vary among buyers who shop in different market venues.

Why do people buy local food? Somewhat surprisingly, they found the major motivation to be an interest in health benefits, followed by several “altruistic” reasons, like supporting the local economy and helping local farmers.

Overall, the take-home message is that most consumers think highly of locally grown products, and there is a large and growing market for food grown close to home. And hey, it gets people like me out of the house on Sunday morning!

Research re-imagined at USDA: New “Roadmap” published

The venerable U.S. Department of Agriculture (USDA) has pioneered agricultural research for more than a century (see related post). Over the past several years, the USDA has been reshaping its research priorities and funding programs, in part through the creation of the new National Institute of Food and Agriculture (NIFA). NIFA’s mission is to “advance knowledge for agriculture, the environment, human health and well-being” through funding research, education, and extension projects.

USDA has just published a “Roadmap for USDA Science” that is worthwhile reading. It calls for new approaches to foster robust food, agricultural, and natural resource science.

The report begins in an interesting way. It asks us to:

Imagine a world in which…

  …Radically improved children’s diets and nutrition slash long-term health care costs in the United States;

  …Farmers, ranchers, and forest landowners are recognized as significant contributors to large and sustainable reductions in global greenhouse gases;  

  …Farmers in sub-Saharan Africa have easy, affordable access to new seeds and animal breeds so well adapted to local conditions and so resilient to changing conditions that they feed five times as many people domestically and eliminate persistent hunger;  

  …Trends in availability of high-quality water and new options for watershed management outpace increasing demand for water even as climate change alters the geography of water resources; and

  …Technologically advanced production, processing, and foodborne pathogen detection methods make food product recalls nonexistent.  

Farfetched? Not at all, say the authors of the Roadmap. They believe that these goals are achievable through the kind of science the USDA will now promote.

Among other things, the Roadmap calls for a focus on a limited number of “outcome-driven priorities,” cooperation with other agencies and institutions, concentration on both fundamental science and extension, and a “rejuvenation” of the USDA competitive grant system.

All in all, a very interesting read.

Drugs, Medicare, and the older consumer: Economics to the rescue

Okay, let’s have a show of hands. First, how many of you have a relative or someone you care about who is age 65 or older? Thanks.

Now, how many of you tried to help one of these beloved relatives or friends understand and choose a plan under Medicare Part D, the prescription drug benefit for older Americans? Thanks again.

My final question: How many of you who tried to help someone understand their options under Medicare Part D sighed, wept, and eventually wanted to pound your head against the wall in an attempt to lose consciousness? I thought so.

I had this experience myself, trying to help my 80-year-old mother-in-law decide which program was best for her. I’m a gerontologist, for heaven’s sake, and I tore out what little hair I have left trying to figure out what her best option was.

To the rescue comes a highly innovative and effective translational research project, led by Cornell Professor Kosali Simon (Department of Policy Analysis and Management). An economist, Prof. Simon has applied her expertise to this real-world problem, helping people in New York and across the country make this complex and important decision.

Medicare Part D, passed in 2003, is the federal program that subsidizes the costs of prescription drugs for people on Medicare (the federal health insurance program for Americans 65 and over). Some people were basically going broke paying for prescription drugs, and the federal government stepped in.

It sounds good, but here’s the problem: it is extraordinarily difficult to understand the coverage. A beneficiary has to choose among dozens of plans, which include dizzying combinations of deductibles and co-payments, and use different terminology for what they cover.

That’s the problem Prof. Simon took on. She had spent her career studying things like the economics of state regulation of private health insurance markets for small employers. But then she ran an exercise in one of her classes in which students looked at Medicare Part D. Their work led her to become interested in the topic, and she began to do research on it.

Then she got in touch with psychologist Joe Mikels (Cornell Department of Human Development), who studies how older people make decisions. Together, they used psychological theory and experimental methods to examine, in a lab setting, how difficult older people find it to choose a plan as the number of Medicare Part D options increases. Using econometric methods and data on plan enrollment, she also studied whether seniors actually benefit from a broader choice of plan offerings.

But here’s where it gets really interesting. Prof. Simon saw that there was practical value in helping older people understand the differences in medication coverage between plans. She used her data to create guides that help people choose the right plan by examining how it covers their actual medications, rather than simply going by the general marketing materials mailed to older people.

Working with Project Manager Robert Harris, an experienced pharmacist, she has expanded the reach of the program in many different ways. Based on the research evidence, they have created a variety of materials such as pocket guides to Medicare Part D, posters, counter cards for pharmacies, customized mailings to residents of nursing homes, and an email newsletter and website with thousands of hits per month. 

All of this is very nicely summarized on her project web site CURxED, which I recommend you visit not just for the information, but as a great example of how complex information can be disseminated on the web.

Prof. Simon summed up the translational research approach very well when she told me: “It is very rewarding to be able to use the same data I collect for my research in ways that are practically useful to actual human beings being served by the program I study.”

Evidence-based programming: What does it actually mean?

Anyone who loves detective novels (as I do) winds up being fascinated by evidence. I remember discovering the Sherlock Holmes stories as a teenager, reading how the great detective systematically used evidence to solve perplexing crimes. Holmes followed the scientific method, gathering together a large amount of evidence, deducing several possible explanations, and then finding the one that best fit the facts of the case. As everyone knows, often the “evidence-based” solution was very different from what common sense told the other people involved in the case.

In our efforts to solve human problems, we also search for evidence, but the solutions rarely turn up in such neat packages. Whether it’s a solution to teen pregnancy, drug abuse, family violence, poor school performance, wasteful use of energy, or a host of other problems – we wish we had a Sherlock Holmes around to definitively tell us which solution really works.

Over the past decade, efforts have grown to systematically take the evidence into consideration when developing programs to help people overcome life’s challenges. But what does “evidence-based” really mean?

Take a look at these three options: Which one fits the criteria for an evidence-based program?

1. A person carefully reviews the literature on a social problem. Based on high-quality research, she designs a program that follows the recommendations and ideas of researchers.

2. A person creates a program to address a problem. He conducts an evaluation of the program in which participants rate their experiences in the program and their satisfaction with it, both of which are highly positive.

3. An agency creates a program to help its clients. Agency staff run the program and collect pre-test and post-test data on participants and a small control group. The group who did the program had better outcomes than the control group.

If you answered “None of the above,” you are correct. Number 3 is closest, but still doesn’t quite make it. Although many people don’t realize it, the term “evidence-based program” has a very clear and specific meaning.

To be called “evidence-based,” the following things must happen:

1. The program is evaluated using an experimental design. In such a design, people are assigned randomly to the treatment group (these folks get the program) or a control group (these folks don’t). When the program is done, the two groups are compared. This design helps us be more certain that the results came from the program, and not from some other factor (e.g., certain types of people deciding to do the program, thus biasing the results). Sometimes a true experimental design isn’t possible, and a “quasi-experimental” design is used (more on that in a later post). Importantly, the program results should be replicated in more than one study. (A toy simulation of this logic appears just after this list.)

2. The evaluation studies are submitted to peer review by other scientists, and often are published in peer-reviewed journals. After multiple evaluations, the program is often submitted to a federal agency or another scientific organization that endorses the program as evidence-based.

3. The program is presented in a manual so that it can be implemented locally, as close as possible to the way the program was designed. This kind of “treatment fidelity” is very important to achieve the demonstrated results of the program.
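
As a companion to point 1 above, here is a toy simulation in Python (the numbers are entirely made up and have nothing to do with any real program) of why random assignment lets us credit a difference in outcomes to the program itself rather than to the kinds of people who sign up:

```python
# Toy randomized-trial simulation: a coin flip, not participants' own
# characteristics, decides who gets the program, so the gap in average
# outcomes reflects the program's effect plus sampling noise.
import random

random.seed(42)

def outcome(got_program: bool) -> float:
    """Imaginary outcome score: natural variation plus a small program benefit."""
    return random.gauss(50, 10) + (5 if got_program else 0)

treatment_scores, control_scores = [], []
for _ in range(200):  # 200 imaginary participants
    if random.random() < 0.5:  # random assignment
        treatment_scores.append(outcome(got_program=True))
    else:
        control_scores.append(outcome(got_program=False))

treatment_mean = sum(treatment_scores) / len(treatment_scores)
control_mean = sum(control_scores) / len(control_scores)
print(f"treatment group mean: {treatment_mean:.1f}")
print(f"control group mean:   {control_mean:.1f}")
print(f"estimated program effect: {treatment_mean - control_mean:.1f}")
```

Because chance alone determines who lands in which group, the two groups are comparable on average, and the estimated effect hovers around the built-in five-point benefit (give or take sampling noise). In a real evaluation the “true” effect isn’t built in, of course; the design is what lets us infer it.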

As you might already be thinking, a lot of issues come up when you consider implementing an evidence-based program. On the one hand, they have one enormous advantage: The odds are that they will work. That is, you can be reasonably confident that if implemented correctly, the program will achieve the results it says it will. A big problem, on the other hand, is that a program must meet local needs, and an evidence-based program may not be available on the topic you are interested in.

We’ll come back to these issues in later posts. In the meantime, I recommend this good summary prepared by extension staff at the University of Wisconsin. In addition, I’d suggest viewing this presentation by Jutta Dutterweich from Cornell’s Family Life Development Center on “Planning for Evidence-Based Programs.” And check out our web links for some sites that register and describe evidence-based programs.

Teen Sex and Pregnancy: Evidence from Systematic Reviews

Having just posted on systematic reviews, let’s take a look at some recent examples on a topic of importance in contemporary society: sexual activity and pregnancy on the part of teenagers. We tend to throw up our hands about this problem, but systematic evidence-based reviews show that some intervention programs actually work, and others don’t. Take a look at these as examples of how systematic reviews work and what they can tell us.

A Cochrane Collaboration review team examined the literature on teen pregnancy prevention, looking at studies of primary pregnancy prevention carried out in a variety of settings. Findings from a total of 41 randomized, controlled trials were synthesized. The review team found that programs combining educational and contraceptive interventions were effective in preventing teen pregnancy.

A systematic review was also conducted of programs to promote condom use among teens. This study is a good example of a review that concluded there was insufficient evidence to be definitive. Although many individual intervention studies showed modest effects, the authors noted that the quality of most of the studies was poor. So in this case, we can’t really be sure that interventions to promote condom use work.

What about abstinence? Our friendly neighborhood Cochrane reviewers have taken this on, too. They conducted a systematic review of abstinence-only HIV prevention programs and found no evidence that such programs protected against sexually transmitted diseases. Nor did the programs affect the frequency of unprotected intercourse, the frequency of intercourse overall, the number of sexual partners, the age of sexual initiation, or condom use.

Evidence-based systematic reviews: As close to certainty as it gets

Sometimes when I give talks, I like to use this catchphrase: For almost all of human history, our major problem was a lack of information, but over the past half century the problem has become an overabundance of information. Not only can you access multiple opinions on any topic, but the scientific evidence can seem to be all over the place. For any social or health problem humans experience, there are typically hundreds of studies. Making things worse, the studies can seem to contradict one another.

So people have come up with a solution. Rather than simply summarizing research findings in narrative form (remember what you used to do for a high school or college term paper?), researchers conduct systematic evidence-based reviews, in which they use sophisticated methods to bring together and evaluate the dozens, hundreds, or even thousands of articles on a topic. (There’s a good summary of the methods for systematic reviews on the Bandolier site.)

When thinking about evidence-based reviews, you have to decide whether you agree with one basic proposition. This proposition holds that the findings of sound scientific research studies provide more credible evidence about solving human problems than personal opinion, anecdotes, or “gut feelings.” Not everyone believes this (or at least not all the time). For those who do agree (as we do at Evidence-Based Living), what is required is a systematic review of the research evidence, leading to guidelines for program development and their use in actual programs with the audiences we serve.

The authors of a systematic review will tell you exactly what methods they used to identify articles for the review, how the articles were critically assessed, and how the results were synthesized across studies. Then the systematic review itself is peer-reviewed at a scientific journal, providing even more scrutiny of its findings. In some cases, the authors will use highly technical mathematical methods to synthesize the findings of studies, producing what is called a meta-analysis.
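
To give a flavor of the arithmetic behind a meta-analysis, here is a minimal sketch of simple fixed-effect, inverse-variance pooling (the three studies and their numbers are invented for illustration; real meta-analyses involve much more, including checks for heterogeneity and publication bias):

```python
# Fixed-effect (inverse-variance) pooling: each study's effect estimate is
# weighted by 1 / SE^2, so larger, more precise studies count for more.
import math

# Hypothetical studies: (effect estimate, standard error)
studies = [(0.20, 0.10), (0.35, 0.15), (0.10, 0.08)]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")
```

The pooled estimate sits closest to the largest, most precise study, and its confidence interval is narrower than any single study’s, which is exactly what makes a well-done synthesis more informative than the studies taken one at a time.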

A systematic review has many benefits over the kind of review that simply summarizes a bunch of studies. You’ve seen this kind of review, which usually runs something like: “Smith and Wesson found this finding, but see Abbot and Costello for a different finding” or “Although most research shows this finding, there are a number of studies that fail to support it.” The systematic review looks at why the studies differ, and can exclude those studies that have inadequate samples or methods. And by looking at many studies, it allows us to make general conclusions, even though participants and study settings might be different.

Let’s take one example of how a systematic review is different from other reviews. If you read statements by groups advocating one perspective, they usually cite just the research articles that support their position. The hallmark of a systematic review, on the other hand, is a search for all articles on a topic. They go to great lengths to find every study done, so all can be evaluated. For this reason, systematic reviews are usually done by teams, since it’s rare that an individual has the time to find all the available research. By looking at all studies, a systematic review can come to conclusions like: “All studies using method X found this result, but studies using method Y did not.”

Systematic reviews can be disappointing, because they often conclude that the existing research isn’t sufficient to give a definitive answer. But that in itself can be useful, especially if there’s one published study that has gotten a lot of attention in the media but isn’t supported by other research.

The best library of systematic reviews has been described in a previous post: the Cochrane Collaboration. But there are plenty of systematic reviews published each year from other sources. An example is in our post on antidepressants. If you want the most definitive evidence available as to whether a program or practice works, look to systematic reviews.

Scientific Fact-Checking is a Click Away: The Amazing Cochrane Collaboration

There’s a famous scene in the film Annie Hall, where Woody Allen is standing in line in a movie theater. Behind him, a pretentious professor is loudly proclaiming his opinions about the famous media thinker Marshall McLuhan. Allen’s character reaches the boiling point and from behind a film poster produces Marshall McLuhan himself, who proclaims to the pompous intellectual: “You know nothing of my work! How you got to teach a course in anything is totally amazing!” Woody tells the camera: “Boy, if life were only like this!”

We all wish that we had an impeccable source of information like that at our fingertips, especially when it comes to research on human health and well-being. Imagine if you were in a debate – at work or with family and friends – about an issue pertaining to health. What if you could pull up a website and say: “I have the definitive scientific opinion right here!”

Actually, you can. It’s called the Cochrane Collaboration. I urge you to make the first of what I am sure will be many visits today. It is the true mother lode for objective scientific evidence on hundreds of issues relevant to mental and physical health and human development. You really can know what science has to say about many issues.

In the Cochrane Collaboration, teams of scientific experts from around the world synthesize the research information and issue reports offering guidance for what both professionals and the general public should do. It’s a non-profit, entirely independent organization, and that lets it provide up-to-date, unbiased information about the effects of health care practices and interventions.

The site is organized so you can, free of charge, read the abstract of any Cochrane review, clearly written in layperson’s language. These abstracts can be used to help answer your clients’ questions and in any situation where it helps to show the scientific consensus on an issue. They even have podcasts of the reviews that you can download.

The number and scope of the reviews are mind-boggling, and the Cochrane reviews take a very broad view of health, so you are sure to find some relevant to your work.

The media are taking notice of the Cochrane Collaboration, in part because these objective reviews can help figure out what our health care system should be paying for — a nice report appeared in Sharon Begley’s Newsweek blog.

So hey – why are you still here and not looking at the reviews? The easiest place to start is on the review page, where you can search for topics or just browse through the reviews.

What the heck is evidence-based extension?

Evidence-based this and evidence-based that: every field seems to be talking about practices and programs that are evidence-based. The term is popping up everywhere, from medicine, to social work, to education, to physical therapy, to nursing. Where, you may well ask, does Cooperative Extension fit in? In an article in the Journal of Extension appropriately titled “Evidence-Based Extension,” Rachel Dunifon and colleagues explain it for you.

http://www.joe.org/joe/2004april/a2.php

The authors note that an evidence-based approach “entails a thorough scientific review of the research literature, the identification of the most effective interventions or strategies, and a commitment to translating the results into guidelines for practice.” Their point? That “extension can improve its use of research-based practice and also inform and advance the ongoing evidence-based work occurring in the scientific community.”

Got 10 minutes? Brush up on your “research-readiness.”

Everyone knows it’s important to be “ready” to read and understand research reports, and to be able to evaluate research findings for use on the job. But how can we give our understanding of research evidence a quick tune-up?

There’s an easy solution. Cornell Professor Rachel Dunifon (Department of Policy Analysis and Management) and Laura Colossi have prepared a set of “briefs” that take about 15 minutes each to read. They cover critically important basics of using and understanding research (geared to Cooperative Extension personnel but relevant to human service workers in any field), and are useful even to those of us who consider ourselves already “research ready.”

Here’s the site: http://www.parenting.cit.cornell.edu/research_briefs.html

Topics include:

How to Read A Research Article. This brief provides information on how to navigate through academic research articles, and also emphasizes the importance of staying up to date on the research in your chosen field of work.

Resources for Doing Web Research. This brief is designed to provide educators with the tools needed to conduct web-based research effectively. Instructions on how to obtain scholarly research via the web are provided, in addition to links to longer resource guides on assessing the value of information on the web.

Designing an Effective Questionnaire. This research brief provides some basic ideas on how to best write a questionnaire and capture the information needed to assess program impact.

What’s the Difference? “Post then Pre” and “Pre then Post.” This brief highlights the strengths and weaknesses of two popular evaluation designs, lists possible criteria for choosing a design, and notes the importance of reducing threats to validity when conducting an evaluation.

Measuring Evaluation Results with Microsoft Excel. This brief illustrates one method for calculating mean scores among responses to evaluation instruments, and provides educators with a tutorial on how to perform basic functions using Microsoft Excel.

Happy reading – I think you will find these briefs very useful roadmaps in the sometimes confusing task of applying research findings to your work. Are there any other topics you’d like this kind of information on? If so, post a comment!
