Medical studies: Evidence you can trust?

Evidence-based Living is built around the idea that scientific study should guide our lives – in decisions we make for our families, in community initiatives, and of course in choosing medical treatments.

A new review this month in the Journal of Oncology raises important questions about the validity of medical studies. The report reviewed 164 trials of breast cancer treatments – including chemotherapy, radiation and surgery – conducted from 1995 to 2011.

It concluded that most of the studies were clouded by an overemphasis on the benefits of the treatment or a minimizing of its potential side effects.

For example, the authors reported on 92 trials with a negative primary endpoint – which essentially means the treatment was not found to be effective for the main goal of the study. In 59 percent of those trials, a secondary endpoint – another goal – was used to suggest the experimental therapy was actually beneficial.

And only 32 percent of the studies reported severe or life-threatening side effects in the abstract – meaning medical professionals who scan only the summary of a report could easily miss them. Studies that reported a positive primary endpoint – meaning the treatment was effective for the problem researchers were targeting – were less likely to report serious side effects.

What does all of this mean?

Elaine Wethington, a medical sociologist at the College of Human Ecology, says the review reveals some important findings about medical studies.

“I would speculate that the findings are due to at least three processes,” she explained.

“First, trial results should be published even if the primary outcome findings are negative, but it can be difficult to find a journal that will publish negative findings,” she said. “As a result, there is a tendency to focus on other outcomes that are secondary in order to justify the work and effort.

“Second, presentation of findings can be influenced by a variety of conflicts of interest. There is a lot of published evidence – and controversy — that scientific data collection and analysis can be affected by the source of funding, private versus public.

“Third, this could also be explained as a problem in scientific peer review.  Reviewers and editors could insist that this type of bias in reporting be controlled,” Wethington said.

In short, she sees the publication of this review as an important step in improving the scientific review process.

Citizen scientists: The new research corps

More often than ever before, people from all walks of life –  from retired senior citizens to young families – are helping scientists collect data that support research projects. This movement of “citizen science” has flourished over the past decade as technology has advanced, allowing volunteers to share information with researchers quickly and accurately.

In fact, there are several interesting examples of “citizen science” here at Cornell University, including a survey of backyard birds and a project called Yardmap that encourages homeowners to map their yards so that researchers can better understand the habitat available to birds.

This month, a group of researchers from the United Kingdom published a review that details exactly how “citizen science” is working, including summaries of projects across the globe, interviews with scientists who use these data, and a guide to best practices for conducting these types of projects. The review reached some interesting conclusions. Among them:

  • The motivation for citizen scientists varies greatly. Successful projects tend to take into account participants’ interests, skill sets and expectations.
  • Getting feedback from volunteers is an important component of a successful project and is achieved through a wide variety of media, including social media and face-to-face interactions.
  • Technologies such as GPS and smart phones have made it easier for citizens to share accurate data, but relying on these devices excludes those who don’t have access to them.

Cornell gerontologist Karl Pillemer is a proponent of “citizen science” for people in their 60s, 70s and 80s. He has conducted research that found that older adults who get involved in creating a sustainable society and conserving natural resources are not only helping the environment, they are also helping themselves.

“Research shows that citizen science activities provide a wonderful opportunity to achieve two goals at once: Adding to our knowledge about areas important to quality of life for people, while also providing opportunities for rewarding and meaningful activity,” he said. “And citizen science activities can be adapted for any life course stage, from elementary school students to retirees.”

In short, projects that use citizen volunteers to collect data are an important part of environmental research today, and understanding the best practices for this type of research is important.

The facts on Social Security

More than 75 years ago, the U.S. government created Social Security, the federal insurance program that provides benefits to individuals and their families who can no longer work because of disability, retirement or death. The program is complex, and its details are often debated among politicians.

Earlier this year, the Economic Policy Institute and the National Academy of Social Insurance published a guide that explains the facts about the Social Security program to young people. The document includes detailed, evidence-based explanations of Social Security’s history, beneficiaries, financing, and shortfalls. It pulls data from the Office of the Chief Actuary of the Social Security Administration, Congressional Budget Office, the Employee Benefits Research Institute, and the Center for Retirement Research.

Here’s a sampling of interesting facts from the document:

  • In 2012, about 159 million individuals – or 94 percent of the American workforce – worked in Social Security-covered employment. (Those not covered include government employees covered by other insurance programs, farm workers who do not meet minimum work requirements, and students.)
  • Approximately 55 million Americans received Social Security benefits in 2011. Seventy percent were retirees; 19 percent were disability beneficiaries and 11 percent were survivors of deceased workers.
  • Without Social Security income, it is estimated that nearly half of senior citizens would be living in poverty. Instead, fewer than 10 percent of seniors live in poverty.
  • Because the U.S. population is aging and people are living longer, the Social Security program is projected to run up a deficit. The projected shortfall is 2.67% of taxable earnings over the next 75 years.
  • There are a variety of ways to compensate for the deficit including raising taxes, expanding coverage, investing in equities, increasing the retirement age and reducing cost-of-living increases.

The guide concludes that Social Security fulfills an important need in our society as an insurance program for American workers.  To learn more about Social Security benefits and about how your payroll taxes are used, it’s worth checking out this evidence-based document.

A roadmap: How to use research to help people

The idea of translational research initially sprang from the field of medicine, where doctors and scientists have teamed up to move laboratory discoveries more rapidly into clinical settings to help patients improve their health and recover from ailments.

Since its beginnings several decades ago, researchers working in other disciplines have latched onto the idea of translation. Now a new book offers models for social and behavioral scientists who want to transfer their findings into real world settings.

The book – “Research for the Public Good: Applying the Methods of Translational Research to Improve Human Health and Well-Being” – includes chapters by experts in the fields of psychology, child development, public policy, sociology, gerontology, geriatrics and economics that offer road maps for translating research into policies and programs that improve the well-being of individuals and communities. It  is co-edited by Cornell professors Elaine Wethington and Rachel Dunifon.

The book grew out of the second Biennial Urie Bronfenbrenner Conference on translational research, held at Cornell and attended by leading experts in the social sciences and medical fields.

“Translational research has gained prominence in biomedical research, where there’s an emphasis on speeding lab findings into practice,” Wethington told the Cornell Chronicle. “It also goes back to the work of Urie Bronfenbrenner and his colleagues, however, who were ahead of their time with an ecological approach to human development that brought together research, policy and practice. This book defines the term in that context and provides practical insights for doing translational research.”

Graduate students and early-career scientists unfamiliar with translational research methods should find the book valuable, Wethington said. “There is a surge of interest in the field right now, so the book should be a great resource,” she said.

A clearinghouse of education evidence

Parents across the nation send their children to public schools with the confidence that principals and teachers are providing an environment where children can learn, grow and thrive.

We hear so much in the news about ways to improve our education system – especially in this presidential election year, when candidates are offering proposals and counter-proposals to fix our schools.

But is there any evidence about what really works? As a parent of young children, I see our schools as one important place where I want evidence-based guidelines put in place.

The best place I’ve found for evidence-based information on education is called the What Works Clearinghouse, an initiative by the U.S. Department of Education that conducts systematic reviews on education research to provide educators with the information they need to make evidence-based decisions.

The project is a true treasure trove of information, with research reviews on a myriad of topics including dropout prevention, school choice, early childhood education and student behavior, to name just a few.

On a recent cruise through the site, several topics piqued my interest.

I’m certainly going to share this amazing resource with my son’s teachers, and use it to gather information about the curricula he’ll be learning in elementary school. As a parent, it’s a relief to know there’s a place to look for reliable, evidence-based information on education.

Missing data: The Achilles heel of systematic reviews

If you’re a regular reader of EBL, you know we’re huge fans of systematic reviews – studies in which researchers use sophisticated methods to bring together and evaluate the dozens, hundreds, or even thousands of articles on a topic.

We value these analyses because they collect all of the information available and then look at why and how each study differs. By looking at so many studies, researchers can make general conclusions, even though participants and study settings might be different.

So we took a great interest this week in a series of studies in the British Medical Journal making the case that many medical studies aren’t published, and therefore missing from systematic reviews and the decision-making processes of doctors and patients.

One of the studies found that fewer than half of the clinical trials funded by the National Institutes of Health from 2005 to 2008 were published in peer-reviewed journals within 30 months of study completion, and only 68 percent were published at all.

Another examined trials registered at the federal web site during 2009. Of the 738 studies registered and subject to mandatory reporting guidelines (per the rules of the U.S. Food and Drug Administration), only 22 percent reported results within one year.  (It’s interesting to note that trials of medicines in the later stages of development and those funded by the drug industry were more likely to have results reported.)

A third study re-analyzed 41 systematic reviews of nine different medicines, this time including unpublished clinical trial data from the FDA in each analysis.  For 19 of the systematic reviews, the addition of unpublished data led to the conclusion that the drug was not as effective as originally shown. For 19 other reviews, the additional data led to the conclusion that the drug was more effective than originally shown.
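To see why unpublished trials can swing a review's conclusion, here is a minimal sketch of the inverse-variance pooling used in a fixed-effect meta-analysis. All of the effect sizes and standard errors below are invented for illustration; they are not from the BMJ studies discussed above.

```python
# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# Effect sizes and standard errors are made-up illustration values.

def pooled_effect(effects, std_errors):
    """Inverse-variance weighted average of study effect sizes."""
    weights = [1.0 / se ** 2 for se in std_errors]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, effects)) / total

# Published trials only: the drug looks clearly beneficial.
published = pooled_effect([0.40, 0.35, 0.50], [0.10, 0.12, 0.15])

# Adding two unpublished trials with near-null results pulls the
# pooled estimate down, just as adding FDA data did in some reviews.
all_trials = pooled_effect([0.40, 0.35, 0.50, 0.02, -0.05],
                           [0.10, 0.12, 0.15, 0.11, 0.14])

print(round(published, 3), round(all_trials, 3))
```

With these invented numbers the pooled effect drops by roughly 40 percent once the unpublished trials are included, which is the pattern about half of the re-analyzed reviews showed.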

Dr. Harlan Krumholz, a cardiologist at Yale and an internationally respected expert in outcomes research, summarized the issue in his Forbes magazine blog, including some of the reasons that data go unreported. (Among them, researchers may not be happy with the results or may shift focus to a new study. And medical journals may not be receptive to negative results.)

Whatever the reasons, the take-home message seems to be that researchers and publishers need to do a better job getting all of the information out in the public domain so that doctors and patients can truly make informed decisions.

More evidence supporting the systematic review

Frequent EBL readers are well aware of the importance we put on systematic reviews, studies that synthesize many articles on a given topic and draw a conclusion about what the body of evidence shows.

So we were excited this week to stumble across a paper funded by the Milbank Memorial Fund and the U.S. Centers for Disease Control extolling the virtues of the systematic review for improving health across populations – especially for our policymakers.

The paper includes case studies on a wide range of topics — underage drinking, tobacco use and traffic safety interventions, to name a few.

And it draws the following conclusions about systematic reviews, in general:

  • Policymakers should feel confident about the findings of systematic reviews because, by definition, they help reduce the bias often present in single studies.
  • Systematic reviews help policymakers work efficiently and reduce the influence of outside interests.
  • Researchers in all fields must make strategic efforts to publicize and implement review findings. (Here at EBL, we’re doing our best in this area!)
  • Enhancing the “literacy” of decision makers and the public about the strengths and weaknesses of different types of evidence can help improve population health policy.

So there you have it: More evidence in support of the systematic review. The next time you’re thinking about making a health decision, consider checking the body of evidence. Just Google “systematic review” along with the topic you’re interested in and see what you can find.

Video feature: Q&A on decision-making

Most of us have seen it before.  Maybe it was a neighborhood boy riding his bike down the middle of the road, or a group of girls performing stunts on the diving board at the local pool. Whatever the circumstance, it’s fairly common knowledge that young people don’t always make the best decisions. In fact, it’s a topic we’ve written about here on EBL.  But given the stakes, it’s one worth revisiting.

Earlier this month, Cornell professor Valerie Reyna — an expert in decision-making  — was featured in a new video on the topic.  In it, Reyna explains the science behind decision-making in adolescents, as well as how the neuroscience of decision-making plays a role in other areas of our lives including health care and memory.

It’s certainly worth a watch!

What does the evidence say about risk communication?

The U.S. Food and Drug Administration has published a new report that’s right up our alley. It’s called Communicating Risks and Benefits: An Evidence-Based User’s Guide.

The introduction offers an explanation of evidence-based health communications that we believe should be the standard for all organizations, from corporations to government agencies to universities.

“…Sound communications must be evidence-based in two related ways. One is that communications should be consistent with the science — and not do things known not to work nor ignore known problems. The second is communications should be evaluated — because even the best science cannot guarantee results. Rather, the best science produces the best-informed best guesses about how well communications will work. However, even these best guesses can miss the mark, meaning that they must be evaluated to determine how good they are and how they can be improved.”

The report goes on to address the concept of communicating risks and benefits across a wide range of fields – in health provider settings, news coverage and corporate communications, to name a few – and offers practical tips about using evidence in all sorts of communications.

Cornell’s own Valerie Reyna, whom we’ve written about before, authored Chapter 12 about communicating risks and benefits to people of all ages, and her work is extensively quoted in other chapters of the report.

The report is chock-full of useful recommendations.  Among them are:

  • Health professionals should receive specific training on how to communicate the risks and benefits of medical procedures and medicines.
  • Provide information along with explanations of its meaning to help consumers make good decisions.
  • Test the readability of health care messages to ensure they use plain language.

If you work in the field of health care, this report is a must-read!

Randomized, controlled designs: The “gold standard” for knowing what works

You’re having trouble sleeping one night, so you finally give up and turn on the TV. It’s 2 AM, so instead of actual programs, much of what you get are infomercials. As you flip through these slick “infotainment” shows, you hear enthusiastic claims about the effectiveness of diet pills, exercise equipment, and a multitude of other products.

You will soon see that almost every commercial uses case studies and testimony of individuals for whom the product has supposedly worked. “I lost 50 pounds,” exults a woman who looks like a swimsuit model. “I got ripped abs in 30 days,” crows a man who, well, also looks like a swimsuit model.

The problem is that this kind of case study and individual testimonial is essentially worthless for deciding whether a product or program works, because it’s very hard to disprove. Look at the infomercials – the products seem to have worked for some people, but what about all the people for whom they failed? And how do we know that the people who lost weight, for example, wouldn’t have done so without buying the product?

So case studies and testimonials aren’t worth much because they don’t give us the kind of comparative information needed to rule out alternative explanations.

To the rescue come experiments using randomized, controlled designs (RCDs). Such experiments are rightly called the “gold standard” for knowing whether a treatment works. In an RCD, we create a test so that one explanation necessarily disconfirms the other. Think of it like a football game: both teams can’t win, and one eventually beats the other. It’s the same with science – our knowledge can only progress if one explanation can knock out another.

The main weapon in our search for truth is the control-group design. Using control groups, we test a product or program (called the “treatment”) against a group that doesn’t receive it.

Case studies simply don’t have the comparative information needed to prove that a particular treatment is better than another one, or better than doing nothing at all. And that’s important because of the “placebo effect”: people tend to report that a treatment has helped them whether or not any actual therapy was delivered. In medicine, placebo effects are very strong, and in some cases (like drugs for depression) placebos have occasionally been found to work more effectively than the drugs themselves.

So what is a randomized, controlled design? There are four components of an RCD:

1. There is a treatment to be studied (like a program, a drug, or a medical procedure).

2. There is a control condition. Sometimes this is a group that doesn’t get any treatment at all. Often it is a group that gets some other treatment, but of a different kind or in a smaller amount.

3. Now here’s the key point: The participants must be randomly assigned to the treatment or control group. It is critical that nobody – not the researchers, not the people in the experiment – can participate in the decision about which group people fall into. Some kind of randomization procedure is used to put people into groups – flipping a coin, using a computer, or some other method. This is the only way we can make sure that the people who get the intervention will be similar to those who do not.

4. There must be carefully defined outcome measures, and they must be measured before and after the treatment occurs.
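The "using a computer" option in component 3 can be as simple as shuffling the participant list. Here is a minimal sketch using only Python's standard library; the participant names are hypothetical placeholders.

```python
# A minimal sketch of random assignment to treatment and control groups.
import random

def randomize(participants, seed=None):
    """Shuffle participants and split them evenly into two groups."""
    rng = random.Random(seed)  # fixed seed only so the example is repeatable
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

participants = [f"person_{i}" for i in range(20)]
treatment, control = randomize(participants, seed=42)
print(len(treatment), len(control))  # → 10 10
```

Because chance alone decides who lands in each group, neither the researchers nor the participants can steer sicker (or more motivated) people toward one arm of the study.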

Lots of the bogus claims you see on TV and elsewhere look only at people who used the product. Without the control group, however, we can’t know if the participants would have gotten better with no treatment at all, or with some other treatment.

Catherine Greeno, in an excellent article on this topic, sums up why we need RCDs if we want to understand whether something really works. She puts it this way:

  • We study a treatment compared to a control group because people may get better on their own.
  • We randomly assign to avoid the problem of giving worse off people the new treatment because we think they need it more.
  • We measure before and after the treatment so that we have measured change with certainty, instead of relying on impressions or memories.
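Greeno's three points can be sketched as one small calculation: measure both groups before and after, then compare the average change. The symptom scores below are invented (lower is better); the point is that the estimated effect is the difference between the two groups' changes, not the treatment group's improvement alone.

```python
# A sketch of before/after comparison in an RCD, with invented scores.

def mean(xs):
    return sum(xs) / len(xs)

def average_change(before, after):
    """Mean within-person change from the pre- to post-measurement."""
    return mean([a - b for b, a in zip(before, after)])

# Treatment group: symptom scores fall a lot after the program...
treatment_change = average_change(before=[8, 7, 9, 6], after=[4, 3, 5, 4])

# ...but the untreated control group also improves a little on its own.
control_change = average_change(before=[8, 6, 9, 7], after=[7, 6, 8, 6])

# The estimated effect is the difference in changes between groups.
effect = treatment_change - control_change
print(effect)  # → -2.75
```

Here the treatment group improves by 3.5 points but the control group improves by 0.75 on its own, so only the 2.75-point difference can be credited to the treatment.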

So when you are wondering whether a therapy, treatment, exercise program, or product is likely to work, keep those three little words in mind: Randomized, Controlled Design!
