To spray or not to spray?

Lyme disease – an infectious disease spread by ticks that thrive in wooded areas – is on the rise in the Northeast. The disease can be debilitating if undiagnosed, causing chronic fatigue, joint pain and neurological problems.

As a mom, it’s a real worry for me. My kids are outside every day, often on trails or in wooded areas. I check them daily for ticks, but one would be easy to miss.

This year, I’ve often debated the risks and benefits of using bug spray with other parents. On one hand, there is clear evidence that the insect repellent DEET – or N,N-diethyl-meta-toluamide – effectively repels ticks. But on the other hand, there are documented cases where DEET has led to health problems, including skin reactions, hallucinations and seizures.

So I went hunting for more sweeping analyses of what the evidence says about DEET. The Journal of Family Practice provided a good summary of two systematic reviews on the use of DEET in children. Both found the risk of adverse reactions was low – about 0.1 percent of children exposed experienced an adverse reaction – and that there was no clear dose-dependent relationship between exposure and the severity of the reaction.

The U.S. Centers for Disease Control and Prevention maintains that DEET doesn’t present health concerns if it’s used according to the instructions, including not applying it to open wounds, under clothing, or near the eyes or mouth.

As a mother, though, the narrative reports of small children hospitalized for seizures and neurological problems – even though it’s a very small number of cases over decades – stick in my mind. So we use bug spray with DEET sparingly. If I know the kids will be in woods or fields where tick populations are higher, I’ll give them a light spray – always followed by a bath that night to wash off all of the spray. Even though the evidence shows DEET is safe, I still feel uneasy about this issue.

What about you? Are you comfortable using bug spray on a regular basis?

How your working environment impacts your health

Adopting a healthy lifestyle can be tough these days, especially for parents working hard to make ends meet. Yes, there are gyms and organic grocery stores, on-demand yoga and healthy cooking magazines. But for working parents, long hours and irregular schedules can make it difficult to eat healthily and exercise.

A cadre of researchers at Cornell’s College of Human Ecology is working on this problem, conducting research and pulling together the best evidence to help families exercise more and eat healthier.

Among them is nutritional sciences professor Carole Devine, who has created and evaluated a program that helps change workplace environments to support physical activity and healthy eating.

The program, called Small Steps are Easier Together, is an active collaboration between Cornell faculty, Cooperative Extension educators and worksite leadership teams across New York; pilot studies have been conducted at 23 sites since 2006. Worksites create wellness leadership teams, which work with Cornell researchers to implement evidence-based strategies – like creating walking groups, posting maps, and offering more fruit and vegetable options in the cafeteria – to increase walking and promote healthier eating.

The most recent analysis of the program included 188 participants at 10 rural worksites. It found the percentage of sedentary women declined from 42 percent to 26 percent, and a total of 35 percent of the women moved to a higher activity level.

Devine is also pulling together the evidence on how working conditions impact food decisions for families at home and on the job.

Her research has found that the stress of a busy job limits parents’ ability to serve healthy meals, leading them to fall back on quicker, less healthy options such as fast food. She’s investigated a variety of coping strategies, such as negotiating a more flexible work schedule and teaming up with a neighbor to take turns preparing meals.

Devine’s work highlights the connections between work environments and health, and provides some evidence-based strategies to improve public health.

Missing data: The Achilles heel of systematic reviews

If you’re a regular reader of EBL, you know we’re huge fans of systematic reviews – studies in which researchers use sophisticated methods to bring together and evaluate the dozens, hundreds, or even thousands of articles on a topic.

We value these analyses because they collect all of the available information and then look at why and how each study differs. By looking at so many studies, researchers can draw general conclusions, even when participants and study settings differ.

So we took great interest this week in a series of studies in the British Medical Journal making the case that many medical studies aren’t published and are therefore missing from systematic reviews and from the decision-making of doctors and patients.

One of the studies found that fewer than half of the clinical trials funded by the National Institutes of Health from 2005 to 2008 were published in peer-reviewed journals within 30 months of study completion, and only 68 percent were published at all.

Another examined trials registered at the federal website ClinicalTrials.gov during 2009. Of the 738 registered studies subject to mandatory reporting guidelines (per the rules of the U.S. Food and Drug Administration), only 22 percent reported results within one year. (It’s interesting to note that trials of medicines in the later stages of development, and those funded by the drug industry, were more likely to have results reported.)

A third study re-analyzed 41 systematic reviews of nine different medicines, this time including unpublished clinical trial data from the FDA in each analysis.  For 19 of the systematic reviews, the addition of unpublished data led to the conclusion that the drug was not as effective as originally shown. For 19 other reviews, the additional data led to the conclusion that the drug was more effective than originally shown.

Dr. Harlan Krumholz, a cardiologist at Yale and an internationally respected expert in outcomes research, summarized the issue in his Forbes magazine blog, including some of the reasons that data go unreported. (Among them: researchers may not be happy with the results or may shift focus to a new study, and medical journals may not be receptive to negative results.)

Whatever the reasons, the take-home message seems to be that researchers and publishers need to do a better job getting all of the information out in the public domain so that doctors and patients can truly make informed decisions.

“You can’t say, ‘You can’t play.’”

Over her decades in the classroom, author and veteran kindergarten teacher Vivian Paley noticed a disturbing trend among her students: Each year, some children developed the power to create the games, make the rules, and decide who was allowed to play and who would be left out.

So Paley decided to make a new rule in her classroom: “You can’t say, ‘You can’t play.’” Paley documented the children’s reactions to the new rule with audio recordings. (You can hear some of them in an episode of the public radio show This American Life.)

The following year, Paley’s rule was expanded to her entire school. She’s written a book on the experiment. And, since then, educators across the country have adopted the rule and studied its implications.  My own son’s preschool subscribes to the rule, so I thought I’d do a little digging to find out what the research says about it.

While there is no meta-analysis available to date on “You can’t say, ‘You can’t play,’” studies have shown the rule improves social acceptance among kindergarteners. The non-profit research center Child Trends implemented an intervention program among 144 kindergarteners that used storytelling and group discussion to help children become more aware of the different ways they may exclude their peers and learn to act in more accepting ways. Their study found that children in the program felt more accepted by their peers compared to the control group.

Another study investigated teachers’ perceptions of inclusive play for young children. It found that programs implementing the rule must involve training and ongoing support to help teachers communicate the rule to students and deal with problems that emerge as students struggle with inclusive play.

On the whole, I’m impressed with the data available on “You can’t say, ‘You can’t play.’”  It seems to be a positive way to teach young children about social acceptance and diversity.  This is one area, though, where I’d love to see some more comprehensive research or a literature review to clarify all of the benefits to our children.

How do I know if a program works? A “CAREful” approach

I was recently giving a talk on intervention research and I was asked: “How do I tell whether the evidence for a particular program is good or not?” I often talk with practitioners in various fields who are struggling with exactly what “evidence-based” means. They will read “evidence” about a program that relies only on whether participants liked it, or they will see an article in the media that recommends a treatment based on a single study. What should you look for when you are deciding: Is the evidence on this program good or not?

I came across a very helpful way of thinking about this issue in the work of educational psychologist Joel R. Levin. He developed the acronym “CAREful research,” which sums up what needs to be done when drawing conclusions from intervention research.

In Levin’s “CAREful” scheme, he identifies four basic components of sound intervention studies.

Comparison – choosing the right comparison group for the test of the intervention. Usually, there needs to be a group that does not receive the program being studied, so researchers can see whether the program works relative to that group. A program description should explain how the comparison was done and why it is appropriate.

Again and again – The intervention program needs to be replicated across multiple studies; one positive finding isn’t enough.

Relationship – There has to be a relationship between the intervention and the outcome. That is, the intervention has to affect the outcome variables. That may seem simple, but it’s important; the program has to have a positive effect on important outcomes, or why should you use it?

Eliminate – The other possible explanations for an effect have to be eliminated, usually through random assignment to experimental and control groups and sound statistical analysis.

Levin and colleagues sum up the CAREful scheme:

“If an appropriate Comparison reveals Again and again evidence of a direct Relationship between an intervention and a specified outcome, while Eliminating all other competing explanations for the outcome, then the research yields scientifically convincing evidence of the intervention’s effectiveness.”

To see a good example of an evidence-based approach to intervention that reflects this kind of CAREful research, take a look at the PROSPER program, which takes a similar approach to youth development programs.

So when you are looking at intervention programs, “Be CAREful”: Applying these four criteria for good research can help you decide what works and what doesn’t.
