The Dirty Little Secret About Diet Research


Let’s face it: most diet research sucks.

There, I’ve said it. Now I’ll show you why.

To do so, we have to briefly (and, I promise, painlessly) discuss one or two fundamentals of research design. Only then can you truly appreciate how misleading, inadequate, often irrelevant, and sometimes dangerous much of the nutrition research you hear about in the news really is.

How to Do a Randomized, Controlled Study

Let’s say I’m a drug company and I want to find out if the new blood pressure drug my company has been working on actually lowers blood pressure in humans.

So I design the following study: I take a group of people. I make sure they are as “identical” as people can be—i.e., “30-year-old non-smoking men from the Northeast with no previous health issues but moderately high blood pressure.”

In other words, I match the subjects for age, sex, medical history, and so on: all the things that could plausibly skew the results, or at least all the things I can think of.

I don’t really care how these folks might differ in their television viewing habits, or in how much they like iPhones, but I do want to make sure they’re similar on any measure that could plausibly affect blood pressure. So I make sure they’re all non-smokers, not overweight, don’t have pre-existing heart disease, have similar stress levels, aren’t taking any other medications, and anything else I can think of.

(If you’re thinking this is pretty hard to do, you’re right; it’s next to impossible. But it’s the research “ideal,” and people who get most of it right publish better research than those who get less of it right.)

So let’s agree that what we’re trying to do here is “match” our subjects, to make sure they’re as similar as possible, like a human equivalent to lab rats with identical genes bred in an identical environment. Yes, yes, I know it’s impossible, but you need to understand why that’s the goal, why “sameness” of subjects is important. And it’s because of what we’re about to do next.

Which is to randomly assign these very similar subjects to one of two groups.

For the length of the study, both groups live identical lives, eat identical food, and sleep identical hours, with one exception and one exception only: group one gets the blood pressure medication while group two gets a placebo (basically an inert, look-alike pill).

If there were any significant differences between the two matched groups in actual blood pressure—like if the blood pressure medicine group had significantly lower blood pressure at the end of the study than the placebo group—we’d have a darn good reason to assume that the blood pressure med was the cause. We tested the hypothesis that “this blood pressure medication lowers blood pressure better than could be predicted by chance,” and, in this hypothetical study, we confirmed it. The drug did indeed perform as hoped.
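Just to make that logic concrete, here is a minimal sketch, in Python, of the hypothetical trial above. Every number in it is made up for illustration (the sample size, the size of the drug effect, the noise), but the skeleton is the real thing: randomize matched subjects to drug or placebo, then ask whether the between-group difference is bigger than chance alone would predict.

```python
# Minimal sketch of the hypothetical blood pressure trial described above.
# All numbers (sample size, effect size, noise) are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 100
baseline = rng.normal(150, 8, size=n)          # matched subjects' systolic BP (mmHg)

# Random assignment: half the subjects get the drug, half get the placebo
order = rng.permutation(n)
drug_ids, placebo_ids = order[: n // 2], order[n // 2:]

# Hypothetical outcomes: the drug lowers BP by ~10 mmHg on average; the placebo does nothing
drug_end = baseline[drug_ids] - rng.normal(10, 5, size=drug_ids.size)
placebo_end = baseline[placebo_ids] - rng.normal(0, 5, size=placebo_ids.size)

# Two-sample t-test: is the difference between groups bigger than chance would predict?
t_stat, p_value = stats.ttest_ind(drug_end, placebo_end)
print(f"drug mean: {drug_end.mean():.1f}  placebo mean: {placebo_end.mean():.1f}  p = {p_value:.4f}")
```

If the p-value is tiny, the only thing that differed between the groups was the pill, so the pill gets the credit. That is the whole point of randomizing matched subjects.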

Now let me tell you how most of the studies involving diet that you hear about in the media resemble that study about as much as West Virginia resembles West Hollywood.

Enter Epidemiology

The vast majority of the studies that make it to the mainstream media are epidemiological studies, which work like this: You take a lot of data from large populations and then you see what things go with what things.

You notice, for example, that in countries where they eat a lot of fiber, there is a lower incidence of colon cancer. Or that people who have lower levels of vitamin D tend to have higher rates of MS. Or that diabetes incidence exploded upward under the Clinton administration. Or that people who eat more saturated fat have higher total cholesterol.

(Whether these correlations matter at all, and what they actually mean, if anything, is a topic for a different day. Now we’re just talking about the data, not whether or not they’re clinically important.)

So epidemiology is terrific for observing things, for noticing what’s found together, and for its prime purpose, which is to generate hypotheses. The idea that smoking causes lung cancer came out of epidemiology. Epidemiologists consistently noticed higher rates of lung cancer among smokers, an interesting observation mainly because it led to the hypothesis that cigarette smoking causes lung cancer. That hypothesis was then tested rigorously, time and time again, in study after study, and (unlike cholesterol) there is little controversy about the result: it is considered true that cigarettes wildly increase your risk for lung cancer.

But here’s what happens with epidemiology and diet studies.

Data will show that, for example, over a period of 25 years, saturated fat consumption went up in a population and so did cholesterol. Now, that should generate a hypothesis—i.e. that saturated fat consumption raises cholesterol. That hypothesis can now be tested clinically in a variety of settings (see the blood pressure medication example, above).

(And we would probably find that saturated fat consumption does raise serum cholesterol but by raising HDL and the harmless LDL-A particles while lowering the harmful LDL-B particles, ultimately improving your lipid profile! But I digress.)
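To see how cheap that kind of correlation is on its own, here is a hedged little sketch with made-up numbers: any two quantities that both drift upward over 25 years will correlate strongly, whether or not either one has anything to do with the other. Which is exactly why the observation can only suggest a hypothesis; it can’t settle one.

```python
# Made-up numbers: two series that both trend upward over 25 years will
# correlate strongly whether or not one has anything to do with the other.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(25)

sat_fat = 30 + 0.4 * years + rng.normal(0, 1, size=25)        # hypothetical grams/day
cholesterol = 200 + 1.2 * years + rng.normal(0, 4, size=25)   # hypothetical mg/dL
iphones = 5 + 2.0 * years + rng.normal(0, 3, size=25)         # hypothetical and obviously irrelevant

print(np.corrcoef(sat_fat, cholesterol)[0, 1])   # strongly positive
print(np.corrcoef(iphones, cholesterol)[0, 1])   # just as strongly positive
```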

The point is that the epidemiological observation generates something that can now be tested.

But that’s not what happens.

What happens is that these observational studies become the basis of health policy. They don’t generate hypotheses that can be tested and either proven or disproven; they generate the assumption of cause and effect, which is reinforced by the media and becomes the basis of public health policy.

Egg Eaters Have Higher Rates of Suicide

Take the made-up headline, “Egg eaters have higher rates of suicide, study finds.” Stuff like this comes out every single day. (I’m just waiting for the inevitable CNN story on how “higher intakes of saturated fat” are “associated” with “higher rates of gang violence.”) Even if you were absent for Critical Thinking 101 in school, you should immediately see the problems with this kind of association study.

First of all, there are zillions of variables and gazillions of associations. Is saturated fat, for example, a “marker” for the Western diet? And what else is in that Western diet? Is saturated fat consumption in a country a “marker” for wealthier nations, and if so, what else is going on in those wealthy nations? More stress? More tobacco? More pollution? Less sleep? Less fiber? Who knows? It would take a computer twelve times the size of IBM’s legendary Deep Blue to sort out all the confounding variables, the things that could account for the associations observed. Here’s a famous example taught in every statistics class: Yellow Finger Syndrome.

Yellow Finger Syndrome

There is a statistically significant positive correlation between a noticeable yellowing on the fingertips and lung cancer. For years, those with a strange yellowing on their fingertips developed lung cancer at a much higher rate than those who did not have yellowish fingertips. Beginning statistics students were taught this association to illustrate the concept of a confounding variable. The confounding variable in this case is smoking. Smoking is associated with both lung cancer and with yellow fingers. Yellow fingers don’t cause lung cancer, even though they are frequently found together (correlated).
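Here is a minimal sketch of that story with made-up probabilities: smoking drives both the yellow fingers and the lung cancer, so the two show up together in the data even though neither causes the other. Stratify by the confounder (look within smokers and within non-smokers separately) and the “link” disappears.

```python
# Made-up probabilities: smoking causes both yellow fingers and lung cancer,
# so the two are correlated overall, but not within smokers or non-smokers.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

smoker = rng.random(n) < 0.30                            # 30% of people smoke (invented)
yellow = rng.random(n) < np.where(smoker, 0.60, 0.02)    # smoking stains fingers
cancer = rng.random(n) < np.where(smoker, 0.15, 0.01)    # smoking causes lung cancer

print("overall:          ", np.corrcoef(yellow, cancer)[0, 1])                    # clearly positive
print("among smokers:    ", np.corrcoef(yellow[smoker], cancer[smoker])[0, 1])    # roughly zero
print("among non-smokers:", np.corrcoef(yellow[~smoker], cancer[~smoker])[0, 1])  # roughly zero
```

Stratifying like this (or adjusting for the confounder statistically) is exactly the kind of correction researchers rely on, and it only works for the confounders you thought to measure in the first place.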

Researchers love to think they’re very sophisticated and have all kinds of statistical magic to perform on the data to rule out this kind of “confounding.” I think they’re overly optimistic. I’ve seen association studies miss the most obvious connections and fail to account for many other plausible ones. There’s also a good deal of confirmation bias in research: people frequently find what they look for and what they expect to find, paying close attention to any correlations that support their hypothesis and throwing out the many that don’t. (Colin Campbell’s “The China Study,” anyone?)

The Fabulous Punch Line You’ve All Been Waiting For

There are very few writers in the health-and-wellness space whom I admire more than Denise Minger. No one I know of can debunk a study better, which is all the more remarkable because she does it with the kind of style, wit, and writing chops rarely seen outside the essays of Merrill Markoe. And she does all this armed with nothing but absolutely ironclad data, which she is happy to show you.

On a recent trip around internet-land, I came across a chart she made a couple of years ago while writing about much the same thing I’m writing about today: the craziness of drawing conclusions, and making health policy, from epidemiological, observational studies.

I’ll let the graph speak for itself. It seems to me it’s perfect evidence that Facebook has been really bad for cholesterol levels. And since we already “know” cholesterol causes heart disease, it seems an open-and-shut prescription.

Wanna wipe out heart disease? Shut down Facebook.

I’ll let you enjoy this little masterpiece from Denise Minger without further comment from me.

After all, none is needed.

Published June 3, 2013


10 Comments

  1. Val@holistic mindbody healing June 4, 2013 at 5:45 am - Reply

    This reminds me of the calcium and weight loss study that was conducted a few years ago. I live near the town of Calcium. Study participants had to take in a certain amount of calcium AND walk 10,000 steps a day. Now, that is more exercise than most of those people were doing. I know one participant who, even though she rode her bicycle miles a day, still made sure her pedometer hit 10,000. The study concluded that calcium promotes weight loss. Does it? I really don’t know, because of the exercise requirement that was also part of the study, which we never hear about.

  2. Barb Kloepping June 4, 2013 at 7:22 am - Reply

    Awesome article and so true... that’s why I like Jonny.

  3. Paula June 4, 2013 at 9:22 am - Reply

    Thanks so much for eloquently expressing my thoughts. I am getting ready to embark on a small research project involving burst training and estrogen levels. The biggest problem is putting together the correct patient population. Wish me luck!

  4. Mark June 4, 2013 at 5:33 pm - Reply

    This is great stuff :) Obviously, Justin Bieber is good for our health and Facebook is not. I can only imagine some of the other “cause and effect” correlations that could be explored. Fun post, thanks!

  5. Beth Christoffersen June 4, 2013 at 6:17 pm - Reply

    Awesome Jonny! Thanks for the insightful message.

  6. Tracy Kolenchuk June 4, 2013 at 10:32 pm - Reply

    There is another reason diet research sucks big time: no one bothers to measure healthiness. In theory, it’s simple. Select 100 random people. Record their current diets. Measure their healthiness. Put them on a specified diet for a minimum of 6 weeks. Measure their healthiness again.

    Of course, the hard part of that “simple” is “measure their healthiness.” We have nothing close to an understanding, much less agreement, on how we might measure healthiness, so instead we measure trash-talk like BMI, cholesterol levels, and bone density.

    It is a serious error to think that a single measurement, even one like weight or cholesterol, is actually representative of healthiness. It is relatively easy, for example, to cause weight loss, which seems like success, while reducing healthiness, which is, frankly, failure.
    to your health, tracy

  7. Dianne Boulton June 5, 2013 at 5:07 am - Reply

    I totally agree with this article. I would just like to add that there is also an issue with how the media reports scientific studies. I did a subject at university called Science and the Media. It was very eye-opening. During some of my research I found an article from a science writer who talked about how many press releases and journals they received each week. It was huge, and he also talked about the way university PR departments reported results to the media. There is so much pressure on academics and universities to show significant results that a scientist who may not have made the claims his university released to the media may be accused of wrongdoing. I also agree with the comment about confirmation bias. One of the first things we learned about in Science, Technology and Society studies was “theory-laden observation,” i.e. no observation can ever be totally unaffected by the observer and their beliefs. Great article.

  8. Doug Truter June 5, 2013 at 5:53 pm - Reply

    If one were to correlate death with where people die, it would be found that many people die in a bed. Without causation, would it be safe to conclude that it is best to avoid beds? [Perhaps in a few situations ‘yes’, but in most situations ‘no’.]

    • Dr. Jonny Bowden June 21, 2013 at 12:11 am - Reply

      ah, this one is almost as good as my storks and babies correlation! Love it!

      warmly
      jb

  9. Loyd Ball August 2, 2013 at 6:00 pm - Reply

    Compared to a typical American diet, the DASH (Dietary Approaches to Stop Hypertension) eating plan has been shown to significantly lower blood pressure in individuals with hypertension, as well as in those with normal blood pressure. The DASH diet emphasizes fruits, vegetables, whole grains, poultry, fish, nuts, and low-fat dairy products, and compared to the usual American diet, it is markedly higher in potassium and calcium, modestly higher in protein, and lower in total fat, saturated fat, and cholesterol. In the initial DASH trial, sodium levels were kept constant throughout the study in order to better evaluate the effects of other dietary components. The more recent DASH-sodium trial compared the DASH diet with a typical American diet at three levels of salt intake: low (2.9 grams/day), medium (5.8 grams/day, recommended by U.S. dietary guidelines), and high (8.7 grams/day, typical U.S. intake). At each level of salt intake, individuals on the DASH diet had lower SBP and DBP compared to individuals on the typical American diet. This blood pressure reduction was observed in individuals with hypertension and in those with normal blood pressure. The combination of the DASH diet and reduction in salt had an additive effect, lowering blood pressure more than either intervention alone.
