5 Facts You Should Know about Nutritional Research
Wednesday, May 7, 2014

Another day, another nutrition headline… Those newspapers and 24-hour cable channels have to fill their space with something of interest. Yet the sum of nutritional research, viewed through a scientific lens, looks very different from the cultivated spin of 20-second reporting spots. What are we to take from the news we see each day? How can we bring a more critical understanding to the hype in the headlines? How do we process conflicting study results and the seemingly endless stream of shock-inducing health warnings? Below are 5 facts to keep in mind next time you hear the words, “New research indicates…”

Our window into nutrition research is created by journalists, not scientists.

While good news organizations rely on credible sources, they're inevitably vying for ratings and readership, which means there's more to the agenda than transferring clear, usable information to audiences (unless it has high entertainment value). Reporters love to point out conflicting research conclusions, such as 2012’s New York Daily News headline, “Egg yolks are almost as bad for your heart as smoking, study says,” followed by several egg-supporting reports, including the May 6, 2014 Business Insider piece, “10 Scientific Reasons You Should Be Eating More Eggs.” They’re not doing this entirely to confuse us, though. The studies being reported often follow the same back-and-forth conclusions described by the researchers themselves. It’s like a game of “telephone”: each time the message gets passed along, the verbiage or tone can distort the true meaning and overall context.

Many use surveys that inevitably result in inaccuracies.

One of the cost-saving ways to do research on dietary patterns is to avoid housing and feeding hundreds or thousands of study subjects. To conduct research without those pricey conditions, investigators often rely on simple dietary recall surveys or food frequency questionnaires to gather the information for analysis. Researchers understand, however, that these surveys are subject to participant “self-editing” and capture true dietary habits incompletely. Such questionnaires leave too much room for under-reporting of food intake and over-reporting of activity habits, and they also depend largely on the skill of the interviewer to elicit the most accurate information.

For example, when the National Health and Nutrition Examination Surveys are conducted, participants are asked to fill out questionnaires or recall what they ate in the past 24 hours. They are then asked to repeat the exercise several years later. Researchers assess all the surveys for trends and patterns, including how dietary choices differed from the first survey to the last (again, several years later). If you had a cheeseburger the day before your first research interview but a salad before your second, it’s assumed you greatly improved your eating habits and lifestyle. In reality, assessing just two days over a span of years cannot tell a very detailed story, but it’s cheaper than having a researcher follow you around all the time.
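To make that concrete, here’s a tiny simulation sketch in Python. All the numbers in it are invented for illustration (the habitual average and the day-to-day swing are assumptions, not survey data); the point is simply how little two sampled days can say about a long-term average:

```python
import random

random.seed(42)

# Hypothetical person: "true" habitual intake of 2,200 kcal/day,
# but individual days swing widely around that mean.
habitual_average = 2200   # assumed long-term mean, kcal/day
daily_swing = 600         # assumed day-to-day standard deviation

# Three years of (simulated) daily intake.
days = [random.gauss(habitual_average, daily_swing) for _ in range(365 * 3)]

# A recall survey samples just two single days, years apart.
first_recall = days[0]    # "What did you eat yesterday?" at enrollment
second_recall = days[-1]  # the same question, years later

print(f"True habitual average:   {sum(days) / len(days):,.0f} kcal/day")
print(f"Two-day recall estimate: {(first_recall + second_recall) / 2:,.0f} kcal/day")
```

Run it a few times with different seeds and the two-day estimate bounces around by hundreds of calories, while the true average barely moves. That volatility is exactly what the cheeseburger-then-salad comparison is built on.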

Most only demonstrate association but cannot explain causation.

The largest nutrition studies are often conducted using the methods described above and are examples of observational research: simply watching large groups of people, without introducing an “intervention” (such as giving some people a drug or supplement and others a placebo), to see whether any patterns or correlations emerge. Observational studies are helpful for guiding future, more detailed research, but they are often used to draw conclusions that imply a cause-and-effect relationship without much explanation of why or how a certain result occurs. For example, if researchers using the simple cheeseburger and salad example above noticed that people who ate cheeseburgers also reported feeling more depressed, there may be an association between lack of salad and sadness – or could it mean sad people are more likely to eat cheeseburgers and happy people eat salads? Either way, a correlation between one food choice and an observed characteristic doesn’t mean there’s a direct cause-and-effect relationship going on. How do you think the news headline would sound in this case? “New research shows eating cheeseburgers could lead to depression.”
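For the curious, here’s a minimal sketch of how that trap shows up in data. Everything in it is made up: an unobserved “stress” factor drives both cheeseburger eating and depression scores, while the burgers themselves have zero effect in the simulation. The correlation appears anyway:

```python
import random

random.seed(0)

# Hypothetical confounder demo: "stress" raises both the chance of
# eating cheeseburgers and the depression score. There is no burger
# term in the depression formula at all.
n = 10_000
rows = []
for _ in range(n):
    stress = random.random()                        # unobserved confounder
    eats_burgers = random.random() < 0.2 + 0.6 * stress
    depression = 2 * stress + random.gauss(0, 0.3)  # burgers play no role
    rows.append((eats_burgers, depression))

burger_scores = [d for b, d in rows if b]
other_scores = [d for b, d in rows if not b]
print(f"Mean depression score, burger eaters: {sum(burger_scores) / len(burger_scores):.2f}")
print(f"Mean depression score, everyone else: {sum(other_scores) / len(other_scores):.2f}")
# The gap in the printout is real, but the cause is stress, not the burger.
```

An observational study sees only the two columns in `rows` – the burger habit and the score – and can’t tell this scenario apart from one where cheeseburgers really do cause sadness.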

When studies highlight direct cause and effect relationships, they often use specific populations (that may not reflect your age, gender, or situation).

To more fully understand direct cause-and-effect relationships in nutrition, researchers opt for randomized, controlled intervention trials, which are generally much smaller, more rigorous, and more costly than observational studies. “Randomized” describes the method of randomly assigning study participants to different groups (e.g., pill vs. placebo, cheeseburger vs. salad) to minimize bias and spread other participant characteristics evenly across the groups. For example, if one intervention group contained all the smokers and the other none, it wouldn’t be fair or accurate to attribute differences between the groups to diet.

“Controlled” refers to the fact that researchers control as many variables as possible to reduce the influence of non-dietary differences between the study groups, such as age, gender, other known diseases, and habits like smoking or alcohol use. For this reason, some of the most accurate cause-and-effect randomized, controlled trials (RCTs) study very specific subsets of the population and introduce only one measurable difference, or intervention, between the groups. While the data from such studies can be strong, the design limits how far the results can be applied to other population sub-groups. For example, many sports nutrition studies are conducted on trained male athletes aged 18-24, but the results may be marketed to the masses, including older adults and kids. (Imagine college fitness enthusiasts looking for free food or a small amount of extra cash signing up for an advertised study at their university.)
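Here’s a small illustrative sketch of what randomization buys you (the smoker rate and group sizes are assumed numbers, not from any real trial): split a pool at random, and nuisance traits like smoking end up roughly balanced between groups on their own:

```python
import random

random.seed(7)

# Hypothetical pool: 1,000 participants, about 25% of whom smoke.
participants = [{"smoker": random.random() < 0.25} for _ in range(1000)]

# Random assignment: shuffle, then split down the middle.
random.shuffle(participants)
intervention, control = participants[:500], participants[500:]

# The smoker rate lands close to 25% in BOTH groups, so any outcome
# difference is easier to pin on the intervention itself.
for name, group in [("intervention", intervention), ("control", control)]:
    rate = sum(p["smoker"] for p in group) / len(group)
    print(f"Smokers in {name} group: {rate:.1%}")
```

With large enough groups, the same balancing happens for traits the researchers never thought to measure, which is the real power of random assignment.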

Many conclusions sound impressive because they’re reported using relative numbers instead of absolutes.

Relative and absolute reporting are very different, and reporting one way versus the other can make data appear much more serious or significant. In a previous post, Headline Games of Health Research, I used the example of vegetarians’ heart attack risk being 32% lower than meat eaters’ as an inflated “relative risk” calculation. In absolute terms, for every 100 meat eaters observed, about 6.8 suffered heart attacks, while for every 100 vegetarians observed, only about 4.6 did. The 6.8% vs. 4.6% is an absolute difference of just 2.2 percentage points, but because that 2.2-point gap is about a third of 6.8, the vegetarians’ rate comes out 32% lower (flip the baseline and the meat eaters’ rate is nearly 50% higher than the vegetarians’). So, which conclusion (and headline) sounds more compelling: 2.2% more meat eaters had heart attacks – or vegetarians were 32% less likely to suffer one? Using absolute numbers usually takes the shock value out of health headlines. Relative numbers make for a better story but not necessarily the best nutrition education.
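If you want to check the arithmetic yourself, the whole “headline game” fits in a few lines (the two rates are the ones quoted above; everything else is plain division):

```python
# Rates from the example above: 6.8% of meat eaters and 4.6% of
# vegetarians had heart attacks.
meat_eater_rate = 0.068
vegetarian_rate = 0.046

absolute_diff = meat_eater_rate - vegetarian_rate
relative_reduction = absolute_diff / meat_eater_rate   # vegetarians vs. meat eaters
relative_increase = absolute_diff / vegetarian_rate    # meat eaters vs. vegetarians

print(f"Absolute difference:         {absolute_diff:.1%} (percentage points)")
print(f"Vegetarians' risk is lower:  {relative_reduction:.0%}")
print(f"Meat eaters' risk is higher: {relative_increase:.0%}")
# Same 2.2-point gap, three different headlines: 2.2%, 32%, or 48%,
# depending on which number you divide by.
```

Notice that the relative numbers depend entirely on the choice of baseline, which is why the same data can be spun in either direction.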

Thanks for reading, everyone. Do you follow the health headlines? Which have made you stop and scrutinize? Offer up your questions and feedback.

Written by Paul Kriegler - Corporate Registered Dietitian

This article is not intended for the treatment or prevention of disease, nor as a substitute for medical treatment, nor as an alternative to medical advice. Use of recommendations in this and other articles is at the choice and risk of the reader.

Article originally appeared on LifeTime WeightLoss (http://www.lifetime-weightloss.com/).
See website for complete article licensing information.