How often do scientists cite previous research in their studies?

You’ve heard us tout the benefits of systematic reviews over and over again here at Evidence-based Living. The truth is, they are the best way to evaluate the evidence available on any topic, because they use sophisticated methods to weigh dozens of research-based articles at once.

They’re also essential for scientists conducting their own research because one of the main premises of scientific study is that new discoveries build on previous conclusions.

So we were disappointed to see an article in the New York Times last week discussing how few research studies cite preceding studies on the same topic. 

The article discussed a study published in the Annals of Internal Medicine that reviewed 227 systematic reviews of medical topics.  In total, the reviews included 1523 trials published from 1963 to 2004. For each clinical trial, the investigators asked how many of the other trials, published before it on the same topic and included with it in the meta-analysis, were cited. They found that fewer than 25 percent of preceding trials were cited.
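To make that calculation concrete, here is a minimal sketch of the kind of bookkeeping the investigators describe – for each trial, what fraction of the earlier trials in the same meta-analysis does it cite? The data and names below are hypothetical, purely for illustration; they are not from the Annals study.

```python
# Hypothetical illustration of the citation-coverage calculation described above.
trials = [
    # (trial id, publication year, ids of earlier trials it cites)
    ("A", 1990, set()),
    ("B", 1995, {"A"}),
    ("C", 2000, {"A"}),
    ("D", 2004, {"B"}),
]

for trial_id, year, cited in trials:
    preceding = {t for t, y, _ in trials if y < year}
    if preceding:
        coverage = len(cited & preceding) / len(preceding)
        print(f"Trial {trial_id} cites {coverage:.0%} of its {len(preceding)} preceding trials")
```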

The results shocked study co-author Dr. Steven N. Goodman of the Johns Hopkins University School of Medicine.

“No matter how many randomized clinical trials have been done on a particular topic, about half the clinical trials cite none or only one of them,” he told the New York Times. “As cynical as I am about such things, I didn’t realize the situation was this bad.”

The lack of previous citations could lead to all sorts of problems – from wasted resources to incorrect conclusions, the study concluded.

Here at Evidence-based Living, we’d like to see citations for systematic reviews and previous trials in most scientific articles.

Video games: Helpful or harmful to the brain?

It’s January, the month when children across the country spend hours in front of the television playing with the millions of video game consoles sold over the holidays. In fact, you probably know a “gamer” yourself. According to the Entertainment Software Association, more than 68 percent of American households play computer or video games.

We hear often about studies demonstrating that too much screen-time – whether television, video games or computers – is associated with attention problems in children.  But it turns out there are some benefits to playing video games, too.

A cadre of researchers in cognitive science, psychology and neuroscience is building a body of evidence showing that video gaming (in moderation, of course) helps improve attention, vision, multitasking and other cognitive skills.

A systematic review by researchers in the University of Rochester’s Department of Brain and Cognitive Sciences found that playing action video games significantly reduces reaction times without sacrificing accuracy across a variety of tasks, including searching for a letter in a field of other letters and indicating the direction of an arrow while ignoring arrows pointing in other directions.

Another study found that video games help improve contrast sensitivity, or the ability to see subtle shades of gray.

“And this is a skill that comes in very handy if you’re driving in fog,” explained Daphne Bavelier, a cognitive researcher at the University of Rochester, who spoke to reporters from National Public Radio for a recent story. “Seeing the car ahead of you is determined by your contrast sensitivity. We looked at the effect of playing action games on this visual skill of contrast sensitivity, and we’ve seen effects that last up to two years.”

Lauren Sergio of York University in Toronto used functional brain scans to find that skilled gamers mainly use an area of the brain specialized for planning, attention and multitasking, meaning that they don’t activate as much of their brain to do complex tasks with their hands. Non-gamers, in contrast, predominantly use an area called the parietal cortex, the part of the brain specializing in visual-spatial functions.

“The non-gamers had to think a lot more and use a lot more of the workhorse parts of their brains for eye-hand coordination,” she says. “Whereas the gamers really didn’t have to use that much brain at all, and they just used these higher cognitive centers to do it.”

In fact, employers including hospitals, the U.S. armed services and many police departments are using video games to help doctors, soldiers and police officers work on skills they use in their jobs every day.

The bottom line: video games, played in moderation, actually help kids develop some important life skills. Just make sure to set a timer, or find another way to limit screen-time.

Your flu vaccine will help…a little

It’s the time of year when everyone is lining up for the annual flu vaccine. Doctors’ offices and employers are holding special clinics, and even many drug stores are offering a poke in the arm to prevent influenza this winter. But do these vaccines actually work?

A systematic review of the literature says they do, a little bit.

The Cochrane Collaboration (one of our favorite resources here at EBL) reviewed 50 reports of the benefits of the influenza vaccine, including 40 randomized-controlled trials involving more than 70,000 people.

Before I explain the results, here’s a little background on the flu: There are more than 200 different viruses that cause influenza and flu-like illness, all with similar symptoms including fever, headache, cough and body aches. It is difficult for vaccine manufacturers to know which of these viruses will be circulating in any given year. The World Health Organization does its best to predict what type of flu will be prevalent in a given season, and then recommends which viral strains should be included in vaccines each year.

Under ideal conditions – meaning that the vaccine completely matches the active flu viruses – 33 healthy adults need to be vaccinated to avoid one person coming down with the flu. But the vaccine rarely matches the active flu viruses entirely. In more realistic conditions where the vaccine partially matches the active flu viruses, 100 people need to be vaccinated to avoid one set of influenza symptoms.
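For readers who like to see where a figure such as “33 people” comes from, the number needed to vaccinate is simply the reciprocal of the absolute risk reduction. Here is a minimal sketch; the risk figures are illustrative placeholders, not numbers taken from the Cochrane review.

```python
# Number needed to vaccinate (NNV) = 1 / absolute risk reduction.
# The example risks below are placeholders, not figures from the review.

def number_needed_to_vaccinate(risk_unvaccinated, risk_vaccinated):
    """How many people must be vaccinated to prevent one case of flu."""
    absolute_risk_reduction = risk_unvaccinated - risk_vaccinated
    return 1 / absolute_risk_reduction

# If, say, 4% of unvaccinated and 1% of vaccinated adults caught the flu:
print(round(number_needed_to_vaccinate(0.04, 0.01)))  # prints 33
```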

None of the studies showed that vaccines reduced the number of people hospitalized for the flu. Also, studies show the vaccine caused one case of Guillain-Barré syndrome, a neurological condition that can lead to paralysis, for every one million vaccinations.

The bottom line:  The flu vaccine will reduce your chances of getting sick this winter, but provides no guarantees of completely avoiding the flu.

Evidence-Based Elections: If the House changes over, is it the President’s fault?

In all of the hubbub about the upcoming elections, Evidence-Based Living had to ask: Is there any research evidence that might help us interpret what’s going on? (And, of course, we always scratch our heads about why there isn’t more discussion of research evidence on something so important.)

One of the few enlightening discussions I’ve seen comes in an article by Jonathan Chait. Chait notes the endless debate over “Did Obama Lose the 2010 Elections?” that is roiling media discussions this week.

Folks on the left say Obama’s responsible because he 1) didn’t stick more closely to progressive principles, and 2) didn’t more aggressively tout the Democrats’ accomplishments. People on the right argue that Obama’s responsible because he 1) is out of step with what the country wants, and 2) has moved too far to the left.

But the blame in either direction hinges on one question: What if the predicted election results are simply, well, normal? That is, what if the ruling party losing seats in the midterm election is a predictable phenomenon, rather than someone’s (Obama’s, the Democrats’, the media’s, etc.) fault? Of course, if this were the case, major news organizations would have nothing to discuss and pundits would be out of a job. Still, it’s worth considering.

This points us to an analysis by Douglas Hibbs, professor of political economy, in a just-published report from the Center for Public Sector Research. Hibbs, like a good scientist, makes clear that his model isn’t designed to specifically predict the elections, but rather to explain midterm House election outcomes in terms of systematic predetermined and exogenous factors.

Based on prior research, Hibbs tells us there are three fundamental factors that predict midterm elections:

1) the number of House seats won by the party in power in the previous election

2) the margin of votes by which the party in power’s candidates won in the prior presidential election

3) the average growth rate of per capita real disposable personal income during the congressional term (a measure of economic prosperity).

Plugging the available data into this model, Hibbs predicts the Democrats will lose about 45 seats. In other words, based on the model alone, we would expect the Democrats to lose control of the House even if the President made no difference at all. And most forecasts show the Democrats losing about this many seats (or 5-10 more, depending on which electoral prediction web sites you look at).
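To show roughly how a model like this works (this is not Hibbs’s actual specification or data), you could fit an ordinary least-squares regression of midterm seat change on the three factors using past elections and then plug in the current values. A sketch with made-up numbers:

```python
# Rough sketch of a three-factor midterm seat-change model.
# All numbers below are invented for illustration; they are NOT Hibbs's data.
import numpy as np

# Columns: seats held by the president's party, prior presidential vote margin (%),
# and per-capita real disposable income growth over the term (%).
X = np.array([
    [258, 7.3, 0.5],
    [232, 2.4, 2.1],
    [223, 5.6, 3.0],
    [206, 8.5, 1.2],
    [245, 9.7, 2.6],
    [211, 0.5, 1.8],
])
y = np.array([-52, -30, -8, -26, -48, -5])  # seats gained or lost at the midterm

# Ordinary least squares with an intercept term
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

current = np.array([1, 255, 7.3, 0.8])  # hypothetical current-cycle values
print("predicted seat change:", round(float(current @ coef), 1))
```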

Hibbs provides the necessary caveats about his work not being definitive. But it is certainly strong enough to make us ask: Where’s the science behind a lot of the political debate and punditry? The evidence-based perspective encourages us to be careful in attributing cause and effect where none may exist.

Exciting news for pregnant women: One cup of Joe is safe

There’s exciting news in our family: we’re expecting another child to arrive sometime around mid-March. My husband and I are thrilled! The development also brings along a multitude of research topics to make sure I’m keeping up with the latest evidence on having a healthy pregnancy.

One of the first things that caught my eye was a note from my doctor’s office about caffeine intake: basically one (normal-sized) cup of coffee a day is safe for the baby.

That was news to me! I do enjoy wrapping my hands around a steamy cup of dark roast every day. Last time I was pregnant, only two years ago, I cut out caffeine altogether because studies had linked caffeine with low birth weight. But – as often happens – the available evidence has changed.

Earlier this year, the American College of Obstetricians and Gynecologists issued guidelines that recommend less than 200 mg of caffeine a day for pregnant women. (An 8-ounce cup of brewed, drip coffee averages 137 mg of caffeine.)
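If you want to check the arithmetic against your own mug, it’s straightforward (the 137 mg figure is the average cited above; actual caffeine content varies quite a bit by brew):

```python
# How many average cups of drip coffee fit under the 200 mg/day guideline?
DAILY_LIMIT_MG = 200
MG_PER_8OZ_CUP = 137  # average cited above; real-world content varies by brew

cups_under_limit = DAILY_LIMIT_MG / MG_PER_8OZ_CUP
print(f"{cups_under_limit:.1f} cups")  # about 1.5, so one cup is comfortably under the limit
```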

“Finally, we have good evidence to show that having a cup of coffee a day is fine and it poses no risk to the fetus,” Dr. William H. Barth Jr., chairman of the committee on obstetric practice and chief of the division of maternal-fetal medicine at Massachusetts General Hospital in Boston, told U.S. News and World Report.

As for more than that, the jury is still out. The Cochrane Collaboration says that more work needs to be done to determine exactly how much caffeine is safe for a fetus. Its researchers found only one study that met the collaboration’s inclusion criteria and provided relevant data: a trial in Denmark in which women less than 20 weeks pregnant were randomly assigned to drink caffeinated or decaffeinated instant coffee. The study found that drinking three cups of coffee a day in early pregnancy had no effect on birth weight, preterm births or growth restriction.

For now, I’ll take my one cup a day. It’s just enough to help me keep up with our two-year-old during those late-afternoon periods of low energy.

Evidence-Based Living Never Takes a Vacation: Resistance to Science

While I was hanging out with my large and boisterous family on the Massachusetts shore this week, the conversation turned to people’s resistance to scientific information. This is not actually all that surprising, because my extended family includes an unusual number of individuals who either are or were practicing scientists. Indeed, the gathering included several psychologists, a research dietician, a sociologist, two young budding researchers (one studying mood disorders, the other conducting research in a business school), a physician, and a historian.

Discussions emerged about issues like barefoot running (see previous post) and athletes’ use of steroids (this trumped our usual Yankees versus Red Sox debate for a while). My niece Julianna then posed the following question: Why are people so resistant to scientific evidence on some issues? Indeed, why does their resistance often approach the first-grade tactic of putting fingers in the ears and singing “I can’t hear you”? Several family members noted that when they have suggested, in the course of an argument, that the scientific evidence be consulted, they get responses like: “I don’t care, I just know this is right.”

Of course, scientists haven’t left a topic like that alone. There is a body of research about why individuals reject even what the scientific community views as fundamental facts. An interesting article by Yale psychologists Paul Bloom and Deena Skolnick Weisberg provides a useful review. They begin by noting the prevalence of erroneous beliefs, including the curious finding from a Gallup poll that one-fifth of Americans believe that the Sun revolves around the Earth.

Bloom and Weisberg suggest that a primary reason “people resist certain scientific findings, then, is that many of these findings are unnatural and unintuitive.” Further, science involves asserted information (so we believe that Abraham Lincoln was a U. S. president, even though we can’t validate that information personally). There are few scientific findings we can validate directly – e.g., whether vaccines cause autism, whether natural selection operates, or whether repressed memories exist.

In sum, the data Bloom and Weisberg review suggest that people resist science when:

  • Scientific claims clash with intuitive expectations
  • Scientific claims are contested within society
  • A non-scientific alternative explanation exists that is based in common sense and is championed by people who are believed to be trustworthy and reliable.

A recent study published in the Journal of Applied Social Psychology provides additional explanation. In a series of experiments, Geoffrey Munro found that when presented with scientific information that contradicts their beliefs, people invoke the “impotence of science” hypothesis; that is, they argue that the topic is one science can’t effectively study.

When people have very strong beliefs about a topic, research has shown that scientific evidence that is inconsistent with the beliefs has little impact in changing them. But even more problematic, Munro’s research suggests that this inconsistency between beliefs and scientific conclusions actually reduces people’s overall faith in science.  

All this presents interesting challenges for proponents of evidence-based living. We need not only to get scientific information out to the public, but also to develop a much better understanding of how beliefs create resistance to information that might improve people’s lives.

New Evidence: TV time leads to attention problems

There is another piece of evidence supporting a long-standing belief among child development experts: Too much TV time is associated with attention problems in youth. The newest piece of proof comes from a study conducted by researchers at Iowa State University and published this month in the journal Pediatrics.

The new research found that children who exceeded the two hours per day of screen time recommended by the American Academy of Pediatrics – either in TV-watching or video games – were 1.5 to 2 times more likely to have attention problems in school.

The study followed third-, fourth- and fifth-grade students as well as college-aged students for more than one year. Over that time, participants’ average time using television and video games was 4.26 hours per day, well below the national average of 7.5 hours per day reported in other studies.

Study author Douglas Gentile, an associate professor of psychology at Iowa State, explained the phenomenon for a report in Science Daily.

“Brain science demonstrates that the brain becomes what the brain does,” he said. “If we train the brain to require constant stimulation and constant flickering lights, changes in sound and camera angle, or immediate feedback, such as video games can provide, then when the child lands in the classroom where the teacher doesn’t have a million-dollar-per-episode budget, it may be hard to get children to sustain their attention.”

This phenomenon again raises a question for professionals who coordinate youth intervention programs: What can be done to capture the attention of youth who are so captivated by electronic media? The answer, most likely, is to meet them somewhere in their world.

– Sheri Hall

When studies collide: Making sense of contradictory research findings

I know blogs are supposed to be current – otherwise, what’s the point of posting entries that get archived after a few weeks? However, every once in a while I come across a resource from a year or two back, which is so useful I feel the need to share it. Such is the case with this article from the New York Times Science Times. It shows how a journalist can do a superlative job of helping the public understand the complexities of science.

NYT Science Times published an invaluable special issue in 2008 entitled “Decoding Your Health.” The issue responded to the huge amount of medical information available now to consumers on the web, in the press, and in the doctor’s office. The articles are very helpful in “decoding” all this information, and deciding what is useful and what isn’t.

One particular article, however, really grabbed me: “Searching for Clarity: A Primer on Medical Studies.” I’ve rarely seen such a good job of laying out the kinds of studies we should trust, and how medical evidence accumulates to create guidelines for what people should do.

The article takes an example that could serve as the poster child for the dilemmas consumers face. In the 1990s, everyone was enthusiastic about the idea that the antioxidant beta carotene, which is found in certain fruits and vegetables (such as carrots, squash, apricots, and green peppers), could be good for your health. And this idea was backed up by some animal and observational studies suggesting that beta carotene protected against cancer. Supplement makers had a heyday selling beta carotene capsules.

Then it happened: results were published from three large, very well-done clinical trials, in which people were randomly assigned to take beta carotene or a placebo. These findings showed that beta carotene supplementation not only didn’t prevent disease, but it might even place people at greater risk of cancer.

If you were watching TV back then, you may remember seeing Frankie Avalon on a commercial (for you youngsters, Frankie was a 50’s teen idol with such hits as “Cupid,” “De-De-Dinah,” and “Tuxedo Junction”). As the article notes, he sat in front of a big pile of papers that said “beta carotene works,” and a tiny pile representing the three studies showing it doesn’t. The message: Who are you going to believe?

The answer is: the clinical trials. The article lays it out clearly, showing that there are three fundamental principles that make a more definitive study:

  • You have to compare like with like: “the groups you are comparing must be the same except for one factor — the one you are studying.”
  • The bigger the group studied, the more reliable its conclusions. The article makes a very helpful point: scientific studies don’t come up with a single number; instead, they come up with a margin of error (for example, a 10 to 20 percent reduction in risk). Larger numbers = greater certainty, as the sketch after this list illustrates.
  • And the finding should be plausible. There should be some supporting evidence for the finding, such that it doesn’t come out of nowhere.
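To see the “larger numbers = greater certainty” point in action, here is a minimal sketch of the standard normal-approximation margin of error for an estimated proportion; the estimate and sample sizes are arbitrary, chosen only to show how the interval narrows as the group grows.

```python
# The 95% margin of error on an estimated proportion shrinks as the sample grows.
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the normal-approximation 95% confidence interval."""
    return z * math.sqrt(p * (1 - p) / n)

p_hat = 0.15  # say a study estimates a 15% rate or risk reduction
for n in (50, 500, 5000):
    print(f"n = {n:5d}: 15% +/- {margin_of_error(p_hat, n):.1%}")
```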

This is a good article to pass along when you are presenting scientific findings that contradict deeply held beliefs. It shows that when it comes to research on health, more studies aren’t necessarily better – what matters is having the right kinds of studies.

Chemical exposure and health: Excellent science-based resource

I recently discovered the excellent blog “New Voices for Research,” and I recommend it to you. The blog is produced by Research!America, an organization that advocates for health research and includes many universities and other science-based groups. There is a section on the web site called “For the Public” that has a number of terrific resources, including a set of fact sheets demonstrating the way research saves lives and money (each sheet covers a specific health problem, ranging from suicide to pain to global health).

But back to the blog. The writers take on important issues, but do so in reader-friendly blog style. They are doing a series of posts called “Chemical Exposures and Public Health,” which so far has covered the following topics:

Part 1 – From Interest to Passion
Part 2 – An Environmental Health Risk
Part 3 – Lead: A Regulatory Success Story
Part 4 – Something My Body Needs Anyway?
Part 5 – Obesity’s Elephant: Environmental Chemicals
Part 6 – Why Our Approach to Toxicology Must Change

These posts are a good way to get people interested in this critically important topic, and one about which it is sometimes hard to find reliable information.

Chocolate and depression: The study vs. the media

I’m always on the lookout for good studies that are misinterpreted by the media (see here and here for examples). Why is this important? Because those of us whose profession it is to translate research findings for the public tend to get smacked upside the head by media misrepresentations. The public gets so used to dueling research findings that they become skeptical about things we are really certain about (e.g., climate change).

If you read your newspaper or watched TV in the last week or so, you may have seen media reports on the relationship between chocolate and depression. Now I love chocolate, and I’m not ashamed to admit it. I spent a year living next to Switzerland, and I can name every brand produced in that country (and I had the extra pounds to show it).

So I got concerned when I read headlines like these:

Chocolate May Cause Depression

Chocolate Leads to Depression?

Depressed? You Must Like Chocolate

It was a matter of minutes for us to find the original article in the Archives of Internal Medicine. (The abstract is free; unless you have access to a library, you have to pay for the article.)  It’s clearly written, sound research. And it absolutely does not say that chocolate leads to depression (whew!). Indeed, the authors acknowledge that the study can’t tell us that at all.

The research used a cross-sectional survey of 931 subjects from San Diego, California, in which people were asked about both their chocolate consumption and their depressive symptoms. “Cross-sectional” means the survey takes place at a single time point. This is distinguished from a longitudinal survey, in which the same people are measured at two or more time points. Why is that important here?

Here’s why. What epidemiologists call “exposure” – that is, whatever might cause the problem (in this case, chocolate) – is measured at the same point in time as the outcome (in this case, depression). For that reason, we can’t be sure whether the chocolate preceded the depression, or the depression preceded the chocolate. They both are assessed at the same time. So we can never be sure about cause and effect from this kind of study.

Now, a longitudinal study is different. The advantage of a longitudinal study is that you can detect changes over time. In this case, you could establish depression and chocolate consumption levels at Time 1, and keep measuring them as they continued over time. For example, if some people who weren’t depressed at Time 1 started eating chocolate and became depressed at a later point, we have stronger evidence of cause and effect.
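Here is a small simulation that makes the point; the numbers are entirely made up. Two opposite causal stories – depression driving chocolate cravings, or chocolate driving depression – produce essentially the same cross-sectional correlation, so a single-time-point survey cannot tell them apart.

```python
# Two opposite causal stories, one indistinguishable cross-sectional snapshot.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 931  # same sample size as the survey, just for flavor

# Story A: depression drives chocolate consumption
depression_a = rng.normal(size=n)
chocolate_a = 2.0 * depression_a + rng.normal(size=n)

# Story B: chocolate consumption drives depression
chocolate_b = rng.normal(size=n)
depression_b = 2.0 * chocolate_b + rng.normal(size=n)

# A cross-sectional survey sees only the association at one point in time:
print(np.corrcoef(chocolate_a, depression_a)[0, 1])  # roughly 0.89 either way
print(np.corrcoef(chocolate_b, depression_b)[0, 1])
```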

As good scientists, the authors acknowledge this fact. They note that depression could stimulate cravings for chocolate as a mood-enhancer, that chocolate consumption could contribute to depression, or that a third factor (unknown at this point) could lead to both depression and chocolate consumption (my own pet theory: Valentine’s Day!).

In the interest of full disclosure, some of the media did get it right, like WebMD’s succinct “More Chocolate Means More Depression, or Vice Versa.” But because some media sources jump to the most “newsworthy” (some might say sensationalist) presentation, there’s no substitute for going back to the actual source.

Finally, let me say that there is only one way to really establish cause and effect: a randomized, controlled trial. One group gets chocolate, one doesn’t, and we look over time to see who gets more depressed.

Sign me up for the chocolate group!

Agricultural Extension: The Model for Health Reform?

Atul Gawande is a rare mix: A practicing surgeon who is also a wonderful writer. In thinking about our health care crisis and reform, he started looking for models in American history that have worked to transform systems. In a recent article in the New Yorker entitled “Testing, Testing,” he found his model in a surprising place: Agricultural Extension. His treatment of early success of the extension system makes for fascinating reading (and for those of us working in the system, a nice pat on the back!).

Gawande notes that our health care system lags behind other countries but costs an astronomical amount. He asks: What have we gained by paying more than twice as much for medical care as we did a decade ago? Not much, because the system is fragmented and disorganized. To control costs, the new health reform bill proposes to address many problems through pilot programs: basically, a number of small-scale experiments.

Lest this approach seem absurdly inadequate, Gawande shows that it has worked before – in agriculture. He takes us back to the beginning of the 20th century, when agriculture looked a lot like the current health care system. About 40% of a family’s income was spent on food, and farming tied up half the U. S. workforce. Policymakers realized that for the country to become an industrial power, food costs had to be reduced so consumer spending could move to other economic sectors, and more of the workforce needed to shift to other industries to build economic growth.

As Gawande sums it up,

The inefficiency of farms meant low crop yields, high prices, limited choice, and uneven quality. The agricultural system was fragmented and disorganized, and ignored evidence showing how things could be done better. Shallow plowing, no crop rotation, inadequate seedbeds, and other habits sustained by lore and tradition resulted in poor production and soil exhaustion. And lack of coordination led to local shortages of many crops and overproduction of others.

Unlike other countries, the U. S. didn’t pursue a top-down, national solution. But government didn’t stay uninvolved either. Gawande tells the intriguing story of Seaman Knapp, the original agricultural extension pioneer. Sent by USDA to Texas as an “agricultural explorer,” he persuaded farmers one-by-one to try scientific methods, using a set of simple innovations (e.g., deeper plowing, application of fertilizer). As other farmers saw the successes (and in particular, that the farmers using extension principles made more money), they bought into the new practices.

Extension agents began to set up demonstration farms in other states, and the program was off and running. In 1914, Congress passed the Smith-Lever Act, which established the Cooperative Extension Service. By 1930 there were more than 750,000 demonstration farms.

The rest is, as they say, history. Agricultural experiment stations were set up in every state to pilot new methods and disseminate them. Data were provided to farmers so they could make better-informed planning decisions.

And it worked. Gawande sums up:

What seemed like a hodgepodge eventually cohered into a whole. The government never took over agriculture, but the government didn’t leave it alone, either. It shaped a feedback loop of experiment and learning and encouragement for farmers across the country. The results were beyond what anyone could have imagined.

Gawande profiles Athens, Ohio agricultural extension educator Rory Lewandowski, showing that the system performs the same vital functions it did a hundred years ago. Gawande suggests that the health care system can’t be fixed by one piece of legislation. It will take efforts at the local level that involve “sidestepping the ideological battles, encouraging local change, and following the results.” Impossible, people say? Not really, since it’s been done before – in agricultural extension.

Exercise and health: Another media mix-up

Is there anything more aggravating than when the media take a sound research study and distort the findings just to attract attention? (Okay, this season’s American Idol and the “five-dollar footlong” jingle are probably more aggravating, but still . . .) And it’s even worse when the public may take the incorrect message and change their behavior as a result. I’m thinking we should sponsor a contest for the worst reporting (stay tuned).

So take this article from the London Daily Mail. The headline: “Fitness flop? It’s all down to the genes, say researchers.” The first line of the article carries on the same theme: “Spent hours sweating it out in the gym but don’t feel any fitter? Blame your parents.” The article was then picked up by other sources and reported as fact (for example, by Fox News). Much of the reporting seems to suggest that some people shouldn’t bother exercising, as the cartoon accompanying the Daily Mail article suggests.

We at Evidence-Based Living, of course, had to track down the original article and take a look (here’s the reference). Now, a lot of the article is close to unintelligible to the lay person (here’s one for you: “Target areas for the SNP selection was defined as the coding region of the gene 269 plus 20kb upstream of the 5’ end and 10 kb downstream of the 3’ end of the gene.”). However, the major finding is pretty straightforward.

One important indicator of fitness is oxygen uptake, and exercise such as running and biking can increase your ability to take in oxygen. This is commonly referred to as “aerobic fitness.” In the study, however, intense exercise didn’t improve oxygen uptake for about 20% of people. All the subjects (around 600) completed a cycling exercise program. On average, aerobic capacity improved by around 15 percent, but in approximately 20 percent of those studied, improvement was minimal (5 percent or less). The failure to improve was related to specific genes. The study will have practical value because doctors may be able to tailor special programs to people who don’t respond to exercise.

All in all, a nice study. However, given the extensive media coverage, your take-home message could easily be: Why exercise? In fact, there is still every reason to hit the gym or track, or get on the bike several times a week. First and foremost, let’s turn it around and note that 80% of people DID improve their aerobic capacity. What the misleading headlines and coverage don’t tell you is that for most of us, exercise works, and works well.

And even if you are in the minority, the study only looked at a couple of outcomes. Exercise has multiple other benefits, from weight loss to improved mood to increased flexibility to a reduced risk of osteoporosis. The excellent evidence-based medicine blog Bandolier summarizes the benefits of exercise concisely.

So for those of you working to promote healthy behaviors like exercise, make sure people know that it is still definitely good for them. And for everyone involved in disseminating research to the public: Let’s remember to keep a skeptical eye on media one-liners about scientific findings, especially as they relate to human health. The reality is almost always more complicated, and there’s no excuse for relying only on the press rather than going to the source of the information.
