Good diet trumps genetic risk of heart disease

I posted last week about “epigenetics” — the idea that, while the genes you’re born with are unchangeable, environmental influences can dictate which of your genes are turned “on” or “off.” A few days later, I saw a mention of this PLoS Medicine study in Amby Burfoot’s Twitter feed. It’s not an epigenetic study, but it again reinforces the idea that the “destiny” imprinted in your genes is highly modifiable by how you live your life.

The study mines the data from two very large heart disease studies, analyzing 8,114 people in the INTERHEART study and 19,129 people in the FINRISK prospective trial. They looked at a particular set of DNA variations that increase your risk of heart attack by around 20%. Then they divided up the subjects based on their diet, using a measure that essentially looked at either their raw vegetable consumption or their fresh vegetable, fruit and berry consumption. Here’s what the key INTERHEART data looked like:

Breaking it down:

  • The squares on the right represent the “odds ratio,” where the farther you are to the right (i.e. greater than one), the more likely you are to have a heart attack.
  • The top three squares represent the people who ate the least vegetables, and the bottom three squares are those who ate the most vegetables.
  • Within each group of three, GG are the people with the “worst” gene variants for heart attack risk, AG are in the middle, and AA are the people with the least risk.

So if we look at the top group first, we see exactly what we’d expect: the people with the bad genes are about twice as likely to suffer a heart attack as the people with the good genes. But if you look at the middle group (i.e. eat more vegetables), the elevated risk from bad genes is down to about 30%. And in the group eating the most vegetables, there’s essentially no difference between the good and bad genes.
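If you want to see mechanically what an odds ratio is, here’s a minimal Python sketch with made-up counts (the actual INTERHEART numbers aren’t reproduced in this post, so treat this purely as an illustration of the arithmetic, not the study’s data):

    # Illustrative only: invented counts, not the actual INTERHEART data.
    # The odds ratio compares the odds of a heart attack in one group
    # (say, GG carriers) against a reference group (say, AA carriers).

    def odds_ratio(cases_a, controls_a, cases_b, controls_b):
        """Odds of the event in group A divided by the odds in group B."""
        return (cases_a / controls_a) / (cases_b / controls_b)

    # Hypothetical low-vegetable stratum: GG carriers at roughly double the odds
    print(odds_ratio(60, 440, 30, 470))   # ~2.1

    # Hypothetical high-vegetable stratum: the genotype gap essentially vanishes
    print(odds_ratio(31, 469, 30, 470))   # ~1.04

That’s all the stratified plot is showing: the same genotype comparison gives a very different odds ratio depending on which diet group you compute it in.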

How does this work? The researchers don’t know — partly because no one’s even sure exactly how the bad gene variants cause higher risk. (There are some theories, e.g. that it affects the structure of your veins and arteries.) But the practical message is pretty clear: if you eat your veggies, you don’t have to worry about this particular aspect of your genetic “destiny.”

A few thoughts on the paleo diet

I was asked in an interview a few days ago for my take on the “paleo diet,” and I figured I may as well share those thoughts here. I’ll start by saying that I’m not an expert in this area — these are just my impressions from the outside! For anyone who’s interested in the scientific rationale behind it, there’s a very comprehensive review paper that was published earlier this year and is freely available online. Anyway, a few scattered thoughts:

It’s not a diet, it’s a lifestyle. I’ve seen this sentiment expressed on a number of paleo-oriented blogs, and I think it’s a very important point. If you want to argue that humans are uniquely adapted to the paleolithic environment because that’s where we spent the most time, it’s meaningless to just consider one part of that environment. If you spend the day sitting on your couch watching TV, then picking up the phone and ordering an authentic ancestral meal from McPaleo’s isn’t going to make you healthy. The review paper focuses on the following key elements of the paleolithic environment:

  • regular sun exposure for vitamin D
  • plenty of sleep, in synch with light/dark cycles
  • lots of physical activity!
  • no exposure to pollution
  • fresh, unprocessed food
  • short bouts of acute stress (tiger!) rather than chronic stress

All of this stuff sounds great — I’m absolutely in favour of every element of this lifestyle.

Plants vs. animals. In the fantasies of some people, going paleo means you get to eat enormous Fred Flintstone-style chunks of meat — for every meal. Not quite: here’s a passage from a paper published in the British Journal of Nutrition last year:

[I]n contrast to common belief, hunting probably played a less dominant role from a nutritional point of view compared with gathering, and on average, it makes up 35% of the subsistence base for present-day worldwide hunter–gatherers, independent of latitude or environment.

This is a point picked up by David Katz in an article earlier this summer: you’re still going to eat, as Michael Pollan would say, “mostly plants.”

The evils of wheat and dairy, and the pace of evolution. Okay, this is where I believe we start to drift away from well-supported science and into the realm of unsupported hypotheses. The basic idea is that, since humans only started farming about 11,000 years ago, our genome hasn’t had time to adapt to these foods. Moreover, grains like wheat actually contain “antinutrients” that hinder proper digestion and cause chronic inflammation — in everyone, not just those with celiac disease or gluten sensitivity. These claims are not widely accepted — or at least, I personally don’t find the evidence convincing.

Beyond that, 11,000 years — or 366 human generations — is actually quite a long time. As a result, for example, it’s well understood that the gene variant that allows adults to keep digesting milk was selected through evolutionary pressure in populations that domesticated cows. The review paper I mentioned above notes this as a “key exception” to what they otherwise claim is the rule that humans haven’t had time to adapt to agriculture. I, on the other hand, would view it as “key evidence” that humans have had time to adapt to agriculture. Obviously, not all modern humans can digest milk — and those who can’t shouldn’t drink it! But I see no evidence that those who can drink milk should avoid it. Same goes for wheat: it’s certainly true that some people can’t process it adequately, but I’m not convinced that it’s full of antinutrients that are secretly poisoning the rest of us.

Overall, as Stephan Guyenet pointed out in his discussion of the review paper, the evidence seems to support the idea that “the main detrimental change was not the adoption of agriculture, but the more recent industrialization of the food system.” In other words, the diet we should be seeking to emulate is pre-1850, not pre-10,000 BC — which, not coincidentally, once again sounds a lot like Michael Pollan’s advice: don’t eat anything your grandparents wouldn’t recognize as food.

Post-exercise refuelling: all at once, or spread out?

We’ve all heard about the post-exercise “window” for refuelling to maximize recovery and adaptation: you need to take in carbs and protein within 0.5-2 hours. But does the timing really matter for building muscle? A new study from Stuart Phillips’ group at McMaster University compared two tactics for post-workout protein intake. One group took 25 grams of whey protein immediately after a set of leg-extension exercises; the other group received the same 25 grams of whey protein in ten 2.5-gram doses, one every 20 minutes for 200 minutes. They measured “muscle protein synthesis” — basically a very accurate way of assessing how well you’re stimulating muscle growth after a single bout, rather than having to run the experiment for several months to actually see muscle growth — and found that it was much higher in the group that took their protein all at once. After six hours, protein synthesis was elevated by 193% in the single-shot group and just 121% in the prolonged group.
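To make the two protocols concrete, here’s a small sketch (the 25-gram total, the 2.5-gram dose size and the 20-minute spacing come from the study description above; everything else is just illustration):

    # Sketch of the two dosing protocols compared in the study.
    BOLUS = [(0, 25.0)]                             # 25 g of whey immediately after exercise
    PULSED = [(t, 2.5) for t in range(0, 200, 20)]  # 10 x 2.5 g, one dose every 20 min for 200 min

    def cumulative(schedule, minute):
        """Total grams of protein ingested by a given minute."""
        return sum(dose for start, dose in schedule if start <= minute)

    for m in (0, 60, 120, 200):
        print(m, cumulative(BOLUS, m), cumulative(PULSED, m))
    # Both protocols end up at the same 25 g; only the delivery rate differs,
    # which is what lets the comparison isolate absorption speed from total dose.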

The question this study was actually seeking to answer relates to the difference between whey protein (which is absorbed quickly) and casein (which is absorbed more slowly: the 2.5 grams of whey every 20 minutes was chosen to mimic the absorption pattern of casein). The problem is that if you compare two different proteins in a study, you’re changing a bunch of factors at once — not just the absorption timing, but also things like the amount of leucine, a branched-chain amino acid thought to be key for muscle growth. Since both groups here received the same 25 grams of whey (and thus identical amounts of leucine), the design isolates absorption rate, and the results suggest that the rapid availability of the protein is what makes the difference.

Practical takeaway: this was a muscle protein synthesis study, not a training study, so you have to take the results cautiously. But it does suggest that if you’re trying to build muscle, taking in a big dose (i.e. 25 grams) of protein as soon as possible is preferable to snacking over the course of a few hours. It also fits with previous findings suggesting that whey (found in dairy products) has some advantages over other protein sources.

Fact-checking the backlash against recent salt studies

Look, I agree that the role of salt in food is complicated. It’s not that I think salt has no possible effect on health, and that people should just eat as much as they want. But I do think the reaction to recent studies questioning salt orthodoxy has been ridiculous and closed-minded. I agree entirely with a recent statement on Yoni Freedhoff’s excellent Weighty Matters blog, made in a discussion of a recent Scientific American article on salt:

So while I think healthy debate is in fact healthy, I would have thought that magazines like Scientific American, and many of the intelligent commentators on this and other blogs, would in fact do their due diligence to read and critically appraise studies, before getting on any particular bandwagon.

The thing is, I think SciAm did do its due diligence, and many of its critics didn’t. The most widely linked response to the recent salt studies comes from the Harvard School of Public Health, which posted a piece called “Flawed Science on Sodium from JAMA: Why you should take the latest sodium study with a huge grain of salt.” It wastes no time in asserting that the conclusions of the latest JAMA study (which I blogged about here) are “most certainly wrong.”

Why should we conclude that the JAMA study is wrong? Harvard doesn’t try to explain the results (which found that a measurement of sodium intake wasn’t linked to blood pressure, hypertension, or heart disease in 3,681 healthy adults over a 7.9-year period). Instead, they offer some possible ways that random error could have crept into the results, such as:

  • the study was too small to support its conclusions, with just 3,681 subjects;
  • the study used 24-hour urine collection to assess sodium intake, which just provides a snapshot in time;
  • the study didn’t account for the fact that people who are tall and/or active eat more food (and thus salt) but have lower risk of heart disease.

Okay, fair enough. Getting good epidemiological data on salt consumption and health outcomes is very difficult, and this study certainly would have been better if it had included a million people and kept them in boxes for 20 years to prevent any confounding factors. Presumably that’s what the salt-is-bad studies did, right? It certainly sounds that way, according to the Harvard article:

Furthermore, the study’s findings are inconsistent with a multitude of other studies conducted over the past 25 years that show a clear and direct relationship between high salt intakes and high blood pressure, and in turn, cardiovascular disease risk. (4-10)

Conveniently, the (4-10) refers to links to these studies — the strongest evidence Harvard could marshal to prove that salt is dangerous. So what happens if we actually bother to read and critically appraise these excellent studies — perhaps using the same standards they’re applying to the JAMA study?

Uh-oh. This Intersalt study uses 24-hour urine excretion (“unreliable,” according to Harvard). This BMJ study only had 3,126 subjects, smaller than the JAMA study. This AIM study used 24-hour urine and only had 2,974 subjects — and not only that, it found no significant relationship between sodium levels and heart disease. (They tried to salvage the “right” answer by saying there was a “nonsignificant trend” — imagine if the JAMA study had been so brazen!) This NEJM study only had 412 participants, and based its primary conclusion on a comparison of a regular, high-salt diet with a low-salt version of the DASH diet, which “emphasizes fruits, vegetables, and low-fat dairy products, includes whole grains, poultry, fish, and nuts, contains only small amounts of red meat, sweets, and sugar-containing beverages, and contains decreased amounts of total and saturated fat and cholesterol.” Sounds like a fair comparison to me!

Okay, seriously. There’s no doubt that salt has an effect on blood pressure. That’s just basic chemistry. But does it have a clinically significant effect? The DASH study I mentioned above found that cutting salt intake by about 55% (good luck with that in the real world, and feel free to donate your taste buds to science, since you won’t be needing them) reduced systolic and diastolic blood pressure by 6.7 and 3.5 mmHg respectively. For comparison, to go from stage 1 hypertension to normal, you’d have to reduce systolic pressure by a minimum of 20 mmHg. So if eliminating more than half the salt in your diet is able to (barely) move the needle on blood pressure, isn’t it reasonable to question whether dramatic society-wide efforts to reduce salt consumption even in healthy people are rational and useful? And given these small effects, isn’t it plausible that in a real-world epidemiological study of healthy (non-hypertensive) people (like the JAMA study), sodium intake might have no bearing on subsequent health outcomes? Why would such a finding be “most certainly wrong”?
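Here’s that back-of-the-envelope comparison spelled out, using the figures quoted above — a sanity check on the argument, not a clinical calculation:

    # Figures from the DASH-sodium comparison discussed above.
    systolic_drop = 6.7        # mmHg reduction from cutting salt intake by ~55%
    diastolic_drop = 3.5       # mmHg reduction
    stage1_to_normal = 20.0    # mmHg: minimum systolic drop from stage 1 hypertension to normal

    print(systolic_drop / stage1_to_normal)   # ~0.34
    # Even a drastic salt cut closes only about a third of the systolic gap,
    # which is why a modest real-world reduction might not show up in outcome studies.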

The point is that applying double standards to evaluate studies doesn’t serve science, and it doesn’t serve the public interest. This latest JAMA study appears to me to be no better and no worse than the studies used to justify the “war on salt,” so promptly dismissing it because of its conclusions (rather than its methodology) is lazy at best, and dishonest at worst.

Final note: I still find it interesting that Walter Willett (the key voice in the Harvard School of Public Health article dissected above) himself published findings showing that salt intake in the U.S. essentially hasn’t changed over the last 50 years, while hypertension has risen dramatically. I’m still not sure how he explains this, if salt is such a key driver of blood pressure.

The Australian Paradox: less sugar, more obesity

I’ve been debating whether to blog about this study since I received an e-mail about it from the Canadian Sugar Institute last week. In general, information from food marketing agencies is pretty suspect. Take, for example, this press release from “Pistachio Health” about a new study, which also went out last week:

[…] Additionally, pistachios – also known as the “Skinny Nut” – are shown to be a “mindful snack” in terms of taking longer to eat and requiring the snacker to slow down and be more conscious of what has been consumed. […]

Yeah, everybody calls them the “Skinny Nut.” Riiiight. Now I really believe that the information you’re sending me is impartial…

Anyway, I’ve decided to blog about this study — a look at sugar consumption and obesity rates in Australia, the U.S. and the U.K. between 1980 and 2003 — because the information is interesting. It’s by two very well-respected Australian researchers (one of whom, for the record — Jennie Brand-Miller — is a lecturer at the University of Sydney’s medical school, where my wife is studying). It’s in a peer-reviewed journal, Nutrients. And as far as I can tell from the disclosures in the paper, it wasn’t in any way funded by the sugar industry: it was a master’s project supervised by the two authors. The only reason the sugar lobby is e-mailing it around is that — as you’ll see — they like the results.

The full text of the study is actually available online, so I’m not going to dissect every detail. But the key result is very simple: unlike in the U.S., where sugar consumption has been climbing, per capita consumption of refined and added sugars actually declined by 16% in Australia between 1980 and 2003. During the same period, rates of obesity tripled. Here’s the sugar data:

Now, population data like this always raises lots of questions. The paper discusses the various ways of estimating sugar consumption, along with their pros and cons, and also breaks down sub-categories like sweetened beverages and so on. Without getting bogged down in all that, I think the important point is — as we should all know by now — that putting two graphs side by side and saying “Hey, they have the same shape! Graph A must have caused Graph B!” is not good science. The recent debate about Robert Lustig’s “sugar is toxic” crusade has involved a lot of this sort of analysis: added sugar intake has increased in the U.S. and so has obesity, ergo A caused B. But if the trend really is the opposite in Australia (and if anyone can suggest reasons why the data above shouldn’t be trusted, please chime in below!), then those arguments are considerably weakened.
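As an aside on the “two graphs, same shape” fallacy: any two time series that simply drift over the same period will correlate strongly, causation or not. Here’s a toy sketch with invented numbers (nothing below comes from the actual Australian or U.S. data):

    # Toy illustration: two made-up series that merely trend in opposite directions.
    import numpy as np

    years = np.arange(1980, 2004)
    falling_series = 100 - 0.7 * (years - 1980) + np.random.normal(0, 2, years.size)
    rising_series = 10 + 0.8 * (years - 1980) + np.random.normal(0, 2, years.size)

    print(np.corrcoef(falling_series, rising_series)[0, 1])   # strongly negative, around -0.95
    # The correlation is nearly guaranteed by the trends alone, so matching (or
    # mismatching) graph shapes say very little about causation by themselves.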