The state of preschool

Preschool is important to children’s development – the evidence on that is clear.  But since preschool is not required and often not offered by local school systems, not all families have access to quality preschool programs.  [Read more…]

New evidence about the federal food stamps program

Nearly 45 million Americans receive help purchasing food each year through the Supplemental Nutrition Assistance Program (SNAP), commonly called food stamps.  Here on EBL, we’ve written about the federal program in the past, specifically how it helps keep families out of poverty. [Read more…]

Medical studies: Evidence you can trust?

Evidence-based Living is built around the idea that scientific study should guide our lives – in decisions we make for our families, in community initiatives, and of course in choosing medical treatments.

A new review this month in the Journal of Oncology raises important questions about the validity of medical studies.  The report reviewed 164 trials of breast cancer treatments – including chemotherapy, radiation and surgery – conducted from 1995 to 2011.

It concluded that most of the studies were clouded by overemphasizing the benefits of the treatment or minimizing potential side effects.

For example, they reported on 92 trials which had a negative primary endpoint – which essentially means the treatment was not found to be effective for the main goal of the study. In 59 percent of those trials, a secondary end point – another goal – was used to suggest the experimental therapy was actually beneficial.

And only 32 percent of the studies reported severe or life-threatening side effects in the abstract – so medical professionals who only scan the abstract could easily miss them. Studies that reported a positive primary endpoint – meaning the treatment was effective for the problem that researchers were targeting – were less likely to report serious side effects.

What does all of this mean?

Elaine Wethington, a medical sociologist at the College of Human Ecology, says the review reveals some important findings about medical studies.

“I would speculate that the findings are due to at least three processes,” she explained.

“First, trial results should be published even if the primary outcome findings are negative, but it can be difficult to find a journal that will publish negative findings,” she said. “As a result, there is a tendency to focus on other outcomes that are secondary in order to justify the work and effort.

“Second, presentation of findings can be influenced by a variety of conflicts of interest. There is a lot of published evidence – and controversy – that scientific data collection and analysis can be affected by the source of funding, private versus public.

“Third, this could also be explained as a problem in scientific peer review.  Reviewers and editors could insist that this type of bias in reporting be controlled,” Wethington said.

In short, she sees the publication of this review as an important step in improving the scientific review process.

The science of political campaigns, Part 2

Next month, President Barack Obama and Republican presidential candidate Mitt Romney will face off on national television on four separate occasions to share their ideas for governing America and explain why voters should choose them.

While it’s not clear how many Americans make their voting decisions based on the debates, we do know they are an important part of the campaign. So I was thrilled to find some evidence on how to consider the candidates’ responses critically.

Todd Rogers, a behavioral psychologist at Harvard University, is among a growing group of researchers applying social science to issues affecting political campaigns. (We’ve written about his work on get-out-the-vote phone calls.)  He wanted to address the issue of how candidates respond when they are asked a question they don’t want to answer – and whether the public notices when politicians dodge a question by talking about a different topic instead.

Rogers and his colleague Michael Norton, an associate professor at the Harvard Business School, designed a study to determine under what conditions people can get away with dodging a question, and under what conditions listeners can detect what’s happening.

In their study, published in the Journal of Experimental Psychology, they recorded a speaker answering a question about universal healthcare. Then they paired that answer with three separate questions: the original question about health care, one about illegal drug use and another about terrorism.  They showed the three question-and-answer pairings to separate groups of people and asked them to rate the truthfulness of the speaker.

Their research found that when the question and answer sounded somewhat similar – such as when the speaker was asked about drug use but responded about healthcare – the audience rated the speaker as trustworthy.  (In fact, most of the people who heard the answer about illegal drug use couldn’t even remember the question.) But when the answer very clearly addressed a different topic – such as when the speaker was asked about health care but responded about terrorism – the audience detected the dodge.

In another part of the study, Rogers and Norton used the same questions and answers, but posted the question on the screen for some viewers. They found that viewers who saw the question posted on the screen while the speaker answered were more than twice as likely to detect a dodge, even in subtle cases.

Rogers advocates for posting the questions on the screen during the presidential debates, although he concedes it’s unlikely to happen this year.

You can hear an interview with Rogers and learn about other research on political campaigning in last week’s episode of NPR’s Science Friday.

Missing data: The Achilles heel of systematic reviews

If you’re a regular reader of EBL, you know we’re huge fans of systematic reviews – studies in which researchers use sophisticated methods to bring together and evaluate the dozens, hundreds, or even thousands of articles on a topic.

We value these analyses because they collect all of the information available and then look at why and how each study differs. By looking at so many studies, researchers can make general conclusions, even though participants and study settings might be different.
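The mechanics of that pooling are never spelled out in these posts, but one common approach – fixed-effect, inverse-variance weighting – is easy to sketch. All of the numbers below (the effect sizes and their variances) are hypothetical, purely for illustration:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect meta-analysis: weight each study's effect estimate
    by the inverse of its variance, so more precise studies count more."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    standard_error = math.sqrt(1.0 / sum(weights))
    return pooled, standard_error

# Three hypothetical trials of the same treatment (effect = mean difference)
est, se = pooled_effect(effects=[0.30, 0.10, 0.25],
                        variances=[0.04, 0.01, 0.09])
```

Here the second, most precise trial pulls the pooled estimate toward its own smaller effect – which is exactly the point: a systematic review weighs studies by how trustworthy their estimates are, rather than giving each study one vote.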

So we took a great interest this week in a series of studies in the British Medical Journal making the case that many medical studies are never published, and are therefore missing from systematic reviews and from the decision-making of doctors and patients.

One of the studies found that fewer than half of the clinical trials funded by the National Institutes of Health from 2005 to 2008 were published in peer-reviewed journals within 30 months of study completion, and only 68 percent were published at all.

Another examined trials registered at the federal web site during 2009. Of the 738 studies registered and subject to mandatory reporting guidelines (per the rules of the U.S. Food and Drug Administration), only 22 percent reported results within one year.  (It’s interesting to note that trials of medicines in the later stages of development and those funded by the drug industry were more likely to have results reported.)

A third study re-analyzed 41 systematic reviews of nine different medicines, this time including unpublished clinical trial data from the FDA in each analysis.  For 19 of the systematic reviews, the addition of unpublished data led to the conclusion that the drug was not as effective as originally shown. For 19 other reviews, the additional data led to the conclusion that the drug was more effective than originally shown.

Dr. Harlan Krumholz, a cardiologist at Yale and an internationally respected expert in outcomes research, summarized the issue in his Forbes magazine blog, including some of the reasons that data goes unreported. (Among them, researchers may not be happy with the results or may shift focus to a new study. And medical journals may not be receptive to negative results.)

Whatever the reasons, the take-home message seems to be that researchers and publishers need to do a better job getting all of the information out in the public domain so that doctors and patients can truly make informed decisions.

More evidence supporting the systematic review

Frequent EBL readers are well aware of the importance we put on systematic reviews, studies that synthesize many articles on a given topic and draw a conclusion about what the body of evidence shows.

So we were excited this week to stumble across a paper funded by the Milbank Memorial Fund and the U.S. Centers for Disease Control extolling the virtues of the systematic review for improving health across populations – especially for our policymakers.

The paper includes case studies on a wide range of topics — underage drinking, tobacco use and traffic safety interventions, to name a few.

And it draws the following conclusions about systematic reviews, in general:

  • Policymakers should feel confident about the findings of systematic reviews because, by definition, they help reduce the bias often present in single studies.
  • Systematic reviews help policymakers work efficiently and reduce the influence of outside interests.
  • Researchers in all fields must make strategic efforts to publicize and implement review findings. (Here at EBL, we’re doing our best in this area!)
  • Enhancing the “literacy” of decision makers and the public about the strengths and weaknesses of different types of evidence can help improve population health policy.

So there you have it: More evidence in support of the systematic review.  The next time you’re thinking about making a health decision, consider checking the body of evidence. Just Google “systematic review” along with the topic you’re interested in and see what you can find.

Randomized, controlled designs: The “gold standard” for knowing what works

You’re having trouble sleeping one night, so you finally give up and turn on the TV. It’s 2 AM, so instead of actual programs, much of what you get are infomercials. As you flip through these slick “infotainment” shows, you hear enthusiastic claims about the effectiveness of diet pills, exercise equipment, and a multitude of other products.

You will soon see that almost every commercial uses case studies and testimony of individuals for whom the product has supposedly worked. “I lost 50 pounds,” exults a woman who looks like a swimsuit model. “I got ripped abs in 30 days,” crows a man who, well, also looks like a swimsuit model.

The problem is that this kind of case study and individual testimony is essentially worthless in deciding whether a product or program works, mainly because it’s very hard to disprove case study evidence. Look at the infomercials – the products seem to have worked for some people, but what about all the people who failed? And how do we know that the people who lost weight, for example, wouldn’t have done so without buying the product?

So case studies and testimonials aren’t worth much because they don’t give us the kind of comparative information needed to rule out alternative explanations.

To the rescue come experiments using randomized, controlled designs (RCDs). Such experiments are rightly called the “gold standard” for knowing whether a treatment will work. In an RCD, we create a test so that one explanation necessarily disconfirms the other. Think of it like a football game. Both teams can’t win, and one eventually beats the other. It’s the same with science: our knowledge can only progress if one explanation can knock out another.

The main weapon in our search for truth is the control-group design.  Using control groups, we test a product or program (called the “treatment”) against a group that doesn’t get whatever the treatment is.

Case studies simply don’t have the comparative information needed to prove that a particular treatment is better than another one, or better than just doing nothing. And that’s important because of the “placebo effect.” It turns out that people tend to report that a treatment has helped them, whether or not any actual therapy is delivered. In medicine, placebo effects are very strong, and in some cases (like drugs for depression) placebos have occasionally been found to work more effectively than the drugs.

So what is a randomized, controlled design? There are four components of RCDs:

1. There is a treatment to be studied (like a program, a drug, or a medical procedure).

2. There is a control condition. Sometimes this is a group that doesn’t get any treatment at all. Often it is a group that gets some other kind of treatment, or a smaller amount of the same treatment.

3. Now here’s the key point: The participants must be randomly assigned to treatment or control groups. It is critical that nobody – not the researchers, not the people in the experiment – can participate in the decision about which group people fall into. Some kind of randomization procedure is used to put people into groups – flipping a coin, using a computer, or some other method. This is the only way we can make sure that the people who get the intervention will be similar to those who do not.

4. There must be carefully defined outcome measures, and they must be measured before and after the treatment occurs.
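Component 3, random assignment, is simple enough to sketch in a few lines of code. The participant IDs and the fixed seed below are hypothetical, used only to make the example reproducible:

```python
import random

def randomize(participants, seed=None):
    """Shuffle the participant list and split it in half, so that
    chance alone decides who lands in the treatment group."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# 100 hypothetical participant IDs, assigned 50/50
groups = randomize(range(1, 101), seed=2012)
```

Because neither the researchers nor the participants influence the split, any pre-existing differences between the two groups are due to chance alone.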

Lots of the bogus claims you see on TV and elsewhere look only at people who used the product. Without the control group, however, we can’t know if the participants would have gotten better with no treatment at all, or with some other treatment.

Catherine Greeno, in an excellent article on this topic, sums up why we need to do RCDs if we want to understand if something really does or doesn’t work. She puts it this way:

  • We study a treatment compared to a control group because people may get better on their own.
  • We randomly assign to avoid the problem of giving worse off people the new treatment because we think they need it more.
  • We measure before and after the treatment so that we have measured change with certainty, instead of relying on impressions or memories.
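Greeno’s three points boil down to one back-of-the-envelope calculation: measure everyone before and after, then subtract the control group’s average change from the treatment group’s. All of the numbers below are made up for illustration:

```python
from statistics import mean

def change_vs_control(pre_t, post_t, pre_c, post_c):
    """Treatment effect = average change in the treatment group minus
    average change in the control group, which subtracts out any
    improvement that would have happened on its own."""
    treatment_change = mean(post - pre for pre, post in zip(pre_t, post_t))
    control_change = mean(post - pre for pre, post in zip(pre_c, post_c))
    return treatment_change - control_change

# Hypothetical weight-loss trial (weights in pounds; negative = lost weight)
effect = change_vs_control(
    pre_t=[200, 190, 210], post_t=[190, 182, 198],
    pre_c=[205, 188, 215], post_c=[201, 185, 212],
)
```

Notice that the control group lost a little weight on its own; without it, the treatment would look about three pounds more effective than it really is.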

So when you are wondering whether a therapy, treatment, exercise program, or product is likely to work, keep those three little words in mind: Randomized, Controlled Design!

New federal diet guidelines follow the evidence

Here at EBL, we’ve discussed how difficult it is to figure out what nutrition advice to follow, especially when so much health and nutrition advice in the media relies on anecdotes and simplistic inferences from single studies.

For those looking for real evidence about what to eat, there’s some good news.  The federal government has issued new dietary guidelines based on an extensive evidence-based review.

The U.S. Departments of Agriculture and Health and Human Services appointed 13 nationally-recognized experts in nutrition and health to review the scientific literature on how nutrition impacts health and disease prevention.

The experts worked with a new resource – USDA’s Nutrition Evidence Library, a clearinghouse of systematic reviews designed to inform federal nutrition policy. (You can read more about the process the panel used to create the new nutrition guidelines by clicking here.) The library employs post-graduate level researchers with experience in nutrition or public health to build its content.  The researchers analyze peer-reviewed articles to build bodies of evidence, develop conclusion statements and describe research recommendations.  It’s an EBL dream! 

So what do the new guidelines recommend? 

The entire report from the committee of experts is more than 400 pages long, with specific advice on everything from energy balances to food safety.  Government officials distilled this report into 112 pages of dietary guidelines, and 23 recommendations for the general population. Among them are:

  • Focus on consuming nutrient-dense foods and beverages.
  • Reduce daily sodium intake to less than 2,300 milligrams (about 1 teaspoon).
  • Limit the consumption of foods that contain refined grains, especially refined grain foods that contain solid fats, added sugars and sodium.
  • Eat a variety of vegetables, especially dark-green and red and orange vegetables, and beans and peas.
  • Consume at least half of all grains as whole grains. Increase whole-grain intake by replacing refined grains with whole grains.
  • Increase the amount and variety of seafood consumed by choosing seafood in place of some meat and poultry.

As you can imagine, the EBL team is thrilled that the government is using systematic reviews to make national diet recommendations.  They’re worth reading to see if you can improve your own diet.  Even small changes can make a big difference when you consider the evidence.

Do gun control laws prevent violence?

Gun control laws are in the media spotlight once again in the wake of the Arizona shooting that killed six people and injured 13, including U.S. Rep. Gabrielle Giffords.  Already, the Arizona Legislature has introduced two new bills that would loosen gun controls on college campuses. But what do we really know about gun control laws?  Is there evidence that they reduce violence?

As unsatisfying as it sounds, the answer is that we just don’t know.  One of the only systematic reviews available on this topic was published by the Community Guide, a resource at the U.S. Centers for Disease Control for evidence-based recommendations on improving public health.  It reviewed more than 40 studies on gun control laws ranging from bans to restrictions to waiting periods.  (You can read a summary of the report here.)

The conclusion:  “The evidence available from identified studies was insufficient to determine the effectiveness of any of the firearms laws reviewed singly or in combination.” 

Essentially, the review concludes that there is a lack of high-quality studies that evaluate specific gun control laws.  One challenge is that information about guns and who owns them is limited to protect the privacy of firearms owners.

So what do we know about firearms in the U.S.?

We know that firearms are present in about one-third of U.S. households, and that there are handguns in about half of those homes.

We also have a National Violent Death Reporting System, which collects information from death certificates, medical examiner reports and police reports in 19 states. According to the reporting system, 66 percent of all murders and 51 percent of suicides are committed with guns.  But that doesn’t tell us much – like whether the murders and suicides would have occurred by other means or whether, under stricter gun control laws, the perpetrators would have found a way to obtain guns illegally.

The bottom line is that researchers and government officials need to step up to conduct more research and find a proven way to prevent gun violence from taking the lives of innocent citizens.

How often do scientists cite previous research in their studies?

You’ve heard us tout the benefits of systematic reviews over and over again here at Evidence-based Living.  The truth is they are the best way to evaluate the real evidence available on any topic because they use sophisticated methods to evaluate the dozens – or even hundreds – of research-based articles available.

They’re also essential for scientists conducting their own research because one of the main premises of scientific study is that new discoveries build on previous conclusions.

So we were disappointed to see an article in the New York Times last week discussing how few research studies cite preceding studies on the same topic. 

The article discussed a study published in the Annals of Internal Medicine that reviewed 227 systematic reviews of medical topics.  In total, the reviews included 1,523 trials published from 1963 to 2004. For each clinical trial, the investigators asked how many of the other trials published before it on the same topic – and included with it in the meta-analysis – were cited. They found that fewer than 25 percent of preceding trials were cited.

The results shocked study co-author Dr. Steven N. Goodman of Johns Hopkins University School of Medicine.

“No matter how many randomized clinical trials have been done on a particular topic, about half the clinical trials cite none or only one of them,” he told the New York Times. “As cynical as I am about such things, I didn’t realize the situation was this bad.”

The lack of previous citations could lead to all sorts of problems – from wasted resources to incorrect conclusions, the study concluded.

Here at Evidence-based Living, we’d like to see citations for systematic reviews and previous trials in most scientific articles.
