An article from the Associated Press went largely unnoticed last week. Arturo Casadevall, professor of microbiology at Albert Einstein College of Medicine, set out to unravel the mystery behind 2,047 retractions of published scientific studies. To his surprise, and that of many researchers, the No. 1 reason for those retractions was fraud. But it doesn’t necessarily take the pressure of academic tenure to get poor research out the door; sometimes it’s simply poor (or deceitful) reporting. Consider the following cases of misinterpreted research, and tips on how to outsmart the masses:
Tip # 1: Read the METHODS. Is this how you would’ve conducted the study? How do they “define” the issue?
Example 1: Is My Diet Soda Making Me Fat? The real answer is… “we don’t know.” The problem with epidemiology is its broad, overarching view of the human population. At times this is beneficial, but in the case of one highly touted study from the University of Texas Health Science Center at San Antonio, this choice of measurement misses a vital point. The study led many to believe that soft drinks made with artificial sweeteners are a greater contributor to obesity than those made with caloric sweeteners. People who’ve long claimed “I don’t touch the Diet stuff” had their brief moment of pride. But the problem lies in the observational, longitudinal design: participant waist circumference was tracked over time (in this case, 10 years), a design that can’t account for participants who switched to diet soft drinks precisely because they were gaining weight.
Example 2: Many of us remember the Stanford Study, as if no other existed, that recently created a stir in the nutrition community. Mark Bittman long avoided commenting on the study, until a month later when he released an entire column simply titled, “That Flawed Stanford Study.” Somehow every researcher knew immediately what he was talking about. The study concluded that there is “no strong evidence that organic foods are significantly more nutritious than conventional foods.” Mark Bittman’s response was well overdue, but his message was clear: “the study narrowly defines ‘nutritious’ as containing more vitamins,” he wrote, arguing that consumption of organic foods reduces exposure to pesticides and antibiotics, and “that’s largely why people eat organic foods.” His message didn’t come soon enough; the study quickly erupted into a premature “I told you so” Twitter frenzy, leaving many organic believers to sit their friends down and explain the true benefits of organic for themselves.
Tip # 2: Read the DISCUSSION. Does it coincide with what the news story is reporting?
Example: BPA Levels Tied to Obesity in Youth. Seeing this, you might consider buying your son or daughter a Nalgene plastic water bottle, free of the bisphenol-A compound. But this is just another case of poor reporting. A term such as “tied” is rarely used in scientific research, precisely because of its ambiguity about cause and effect. While this article makes the bold claim that BPA is a risk factor for childhood obesity, it fails to mention that obese children may consume more packaged foods, thus leading to higher BPA levels in their urine. Unsurprisingly, a letter to the editor in the New York Times summed up this same argument one week later.
Tip # 3: Read the section on LIMITATIONS. What are some of the shortcomings of the study? Are they large enough to raise an eyebrow?
Example: Calories from Soft Drinks Directly Contribute to Obesity. Directly? Possibly the only term worse than “tied,” “directly” is more often found in mathematics than in research dealing with human beings and confounding variables. Too bad this study on schoolchildren neglects to mention that 26% of the participants did not complete the study. Even researchers hailed for their “rigorously designed randomized, controlled trials” can’t avoid poor retention rates in studies involving children, which of course can have a drastic effect on study results.
The trouble with research is that researchers know perfectly well when their design is flawed. More often than not, however, they have no choice in which tools, criteria, and measurements they use, because they are dealing with new ideas, limited funding, and time constraints. So the study goes on, because bad research is better than no research. Journalists even remotely knowledgeable on the topic would disagree. As for the studies that slip past that scrutiny: it’s in your hands.