Time-trials, body weight and allometric scaling in cycling

If you want to be a great sprint cyclist, you need to be able to produce enormous bursts of power — so being big and muscular helps. If you want to cycle up mountains, on the other hand, you need great relative power — power divided by body weight — since every extra pound is deadweight that you have to haul upwards. But what about the middle ground? How does weight affect your performance in, say, a flat 40-km time trial?

Studies dating back to the 1980s have suggested that you need to use “allometric” scaling of body weight to get the best prediction of performance in a 40-km time trial. Start by performing a graded peak power output (PPO) test, which is basically like a VO2max test; your PPO is the average power you maintain over the final minute before reaching failure. PPO on its own is a great predictor of how you’ll do in a 16-km time trial. Divide PPO by your body weight, and you have a great predictor of how you’ll do in a mountain race. And here’s the interesting part: divide PPO by your weight raised to the power of 0.32, and you have a great predictor of how you’ll do in a 40-km time trial.
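To make the three scaling options concrete, here’s a minimal sketch in Python (my own illustration, not code from Swain’s or Lamberts’s work; the rider numbers are invented):

```python
# The three candidate predictors discussed above (illustrative sketch only).
def predictors(ppo_watts: float, mass_kg: float) -> dict:
    """Return raw, ratio-scaled, and allometrically scaled peak power output."""
    return {
        "ppo": ppo_watts,                               # tracks flat 16-km TT performance
        "ppo_per_kg": ppo_watts / mass_kg,              # tracks uphill performance
        "ppo_allometric": ppo_watts / mass_kg ** 0.32,  # tracks 40-km TT performance
    }

# Hypothetical riders with identical power-to-weight ratios but different sizes:
print(predictors(400, 80))  # larger rider: 5.0 W/kg, higher allometric score
print(predictors(350, 70))  # smaller rider: 5.0 W/kg, lower allometric score
```

In other words, at the same watts per kilogram, the allometric score favours the bigger rider on flat ground.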

This idea was first proposed by David Swain back in 1987, but hasn’t been tested much — which is why a new study just posted online at the British Journal of Sports Medicine, from Rob Lamberts and his colleagues at the University of Cape Town, put it to the test with 45 trained male cyclists. Here are some of the key results:

It’s pretty clear that the bottom graph (power divided by weight to the power of 0.32) provides a much better fit to the data than power (top) or power divided by weight (middle). So this is a useful piece of data for performance monitoring. But left unanswered is the question: why 0.32? Is this just an empirical number that happens to capture the tradeoffs between having more muscle and carrying more weight in exercise lasting about an hour? Or is there some physical or physiological explanation?

Fact-checking the backlash against recent salt studies

Look, I agree that the role of salt in food is complicated. It’s not that I think salt has no possible effect on health, or that people should just eat as much as they want. But I do think the reaction to recent studies questioning salt orthodoxy has been ridiculous and closed-minded. In principle, I agree entirely with this recent statement from Yoni Freedhoff’s excellent Weighty Matters blog, discussing a recent Scientific American article on salt:

So while I think healthy debate is in fact healthy, I would have thought that magazines like Scientific American, and many of the intelligent commentators on this and other blogs, would in fact do their due diligence to read and critically appraise studies, before getting on any particular bandwagon.

The thing is, I think SciAm did do its due diligence, and many of its critics didn’t. The most widely linked response to the recent salt studies comes from the Harvard School of Public Health, which posted a piece called “Flawed Science on Sodium from JAMA: Why you should take the latest sodium study with a huge grain of salt.” It wastes no time in asserting that the conclusions of the latest JAMA study (which I blogged about here) are “most certainly wrong.”

Why should we conclude that the JAMA study is wrong? Harvard doesn’t try to explain the results (which found that measured sodium intake wasn’t linked to blood pressure, hypertension, or heart disease in 3,681 healthy adults over a 7.9-year period). Instead, they offer some possible ways that random error could have crept into the results, such as:

  • the study was too small to support its conclusions, with just 3,681 subjects;
  • the study used 24-hour urine collection to assess sodium intake, which just provides a snapshot in time;
  • the study didn’t account for the fact that people who are tall and/or active eat more food (and thus salt) but have lower risk of heart disease.

Okay, fair enough. Getting good epidemiological data on salt consumption and health outcomes is very difficult, and this study certainly would have been better if it had a million people in it and kept them in boxes for 20 years to prevent any confounding factors. Presumably that’s what the salt-is-bad studies did, right? It certainly sounds that way, according to the Harvard article:

Furthermore, the study’s findings are inconsistent with a multitude of other studies conducted over the past 25 years that show a clear and direct relationship between high salt intakes and high blood pressure, and in turn, cardiovascular disease risk. (4-10)

Conveniently, the (4-10) refers to links to these studies — the strongest evidence Harvard could marshal to prove that salt is dangerous. So what happens if we actually bother to read and critically appraise these excellent studies — perhaps using the same standards they’re applying to the JAMA study?

Uh-oh. This Intersalt study uses 24-hour urine excretion (“unreliable,” according to Harvard). This BMJ study only had 3,126 subjects, smaller than the JAMA study. This AIM study used 24-hour urine and only had 2,974 subjects — and not only that, it found no significant relationship between sodium levels and heart disease. (They tried to salvage the “right” answer by saying there was a “nonsignificant trend” — imagine if the JAMA study had been so brazen!) This NEJM study only had 412 participants, and based its primary conclusion on a comparison of a regular, high-salt diet with a low-salt version of the DASH diet, which “emphasizes fruits, vegetables, and low-fat dairy products, includes whole grains, poultry, fish, and nuts, contains only small amounts of red meat, sweets, and sugar-containing beverages, and contains decreased amounts of total and saturated fat and cholesterol.” Sounds like a fair comparison to me!

Okay, seriously. There’s no doubt that salt has an effect on blood pressure. That’s just basic chemistry. But does it have a clinically significant effect? The DASH study I mentioned above found that cutting salt intake by about 55% (good luck with that in the real world, and feel free to donate your taste buds to science, since you won’t be needing them) reduced systolic and diastolic blood pressure by 6.7 and 3.5 mmHg respectively. For comparison, to go from stage 1 hypertension to normal, you’d have to reduce systolic pressure by a minimum of 20 mmHg. So if eliminating more than half the salt in your diet is able to (barely) move the needle on blood pressure, isn’t it reasonable to question whether dramatic society-wide efforts to reduce salt consumption even in healthy people are rational and useful? And given these small effects, isn’t it plausible that in a real-world epidemiological study of healthy (non-hypertensive) people (like the JAMA study), sodium intake might have no bearing on subsequent health outcomes? Why would such a finding be “most certainly wrong”?
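For what it’s worth, the arithmetic behind that comparison is easy to check; here’s a tiny sketch (my own, using only the figures cited above and the standard 140/120 mmHg systolic cutoffs):

```python
# Back-of-the-envelope check (my own sketch, not from any of the cited papers).
systolic_drop_from_salt = 6.7   # mmHg drop from cutting salt intake by ~55% (DASH comparison above)
drop_needed_for_normal = 20.0   # mmHg: stage 1 hypertension (>=140 systolic) down to "normal" (<120)

fraction = systolic_drop_from_salt / drop_needed_for_normal
print(f"A ~55% salt cut delivers about {fraction:.0%} of the reduction needed")  # -> about 34%
```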

The point is that applying double standards to evaluate studies doesn’t serve science, and it doesn’t serve the public interest. This latest JAMA study appears to me to be no better and no worse than the studies used to justify the “war on salt,” so promptly dismissing it because of its conclusions (rather than its methodology) is lazy at best, and dishonest at worst.

Final note: I still find it interesting that Walter Willett (the key voice in the Harvard School of Public Health article dissected above) himself published findings showing that salt intake in the U.S. essentially hasn’t changed over the last 50 years, while hypertension has risen dramatically. I’m still not sure how he explains this, if salt is such a key driver of blood pressure.

Getting fitter doesn’t make you sweat more after all

The fitter you get, the more you sweat during exercise, in order to dissipate heat more quickly. That’s the conventional wisdom among scientists, and I’ve certainly repeated it many times here and elsewhere. So I was surprised to see a new study posted online in the American Journal of Physiology, from Ollie Jay and his colleagues at the University of Ottawa’s Thermal Ergonomics Laboratory, that contradicts this conventional wisdom. The results suggest that your sweat rate simply depends on how much physical work you’re doing and how much skin surface area you have. Previous studies have been confounded by the fact that fitter people are able to do more physical work (thus generating more heat and sweating more in response) at the same relative effort level.

Let’s say I’m running at a given intensity (say 60% of VO2max) that corresponds to 6:00/km. In order to move my legs, my body is burning a combination of carbs and fat, producing heat as a metabolic byproduct. In order to dissipate that metabolic heat, I’ll sweat a certain amount.

Now let’s say I accelerate to 5:00/km (so I’m at 70% of VO2max). I’m moving my legs faster, so I generate more metabolic heat, and in response, I sweat more than at the slower pace.

The question is: what happens if I go away and train for a year, and improve my fitness so that running at 5:00/km (the faster speed) now corresponds to 60% of VO2max (the lower intensity)? How much will I sweat compared to my untrained state? Will it depend on my intensity, or my speed? The current conventional wisdom says it’ll depend on intensity: running at 60% of VO2max will produce the same amount of sweat whether that means 6:00/km (unfit) or 5:00/km (fit). But Jay’s new study found the opposite: I’d sweat the same amount at 5:00/km regardless of whether that pace corresponds to 70% of VO2max (unfit) or 60% of VO2max (fit).

Confused yet? In actual fact, the study took a slightly different approach, comparing two groups matched for body mass and surface area but with dramatically different aerobic fitness (VO2max): one group averaged 40.3 mL/kg/min, the other 60.1 mL/kg/min. The researchers had both groups perform cycling tests, fixing either the relative intensity (e.g. 60% of VO2max) or the metabolic heat production, and found that sweat rates depended on heat production, not aerobic fitness.
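As a rough illustration of the heat-balance logic behind that design, here’s a minimal sketch in Python (my own, not the study’s calculations; the constants are textbook approximations and the example rider numbers are invented): sweat requirements scale with absolute heat production, which is why fixing heat production rather than %VO2max is the key comparison.

```python
# Minimal heat-balance sketch (illustrative only).
O2_ENERGY_J_PER_L = 20900    # approx. energy released per litre of O2 consumed
LATENT_HEAT_J_PER_G = 2426   # heat removed per gram of sweat that evaporates

def heat_production_watts(vo2_l_per_min, external_work_watts=0.0):
    """Metabolic heat production = metabolic rate minus external work on the pedals."""
    metabolic_rate = vo2_l_per_min * O2_ENERGY_J_PER_L / 60.0   # J/s, i.e. watts
    return metabolic_rate - external_work_watts

def required_sweat_g_per_h(heat_watts, dry_heat_loss_watts=0.0):
    """Sweat needed per hour (assuming full evaporation) to dissipate the remaining heat."""
    evaporative_requirement = max(heat_watts - dry_heat_loss_watts, 0.0)
    return evaporative_requirement / LATENT_HEAT_J_PER_G * 3600.0

# Hypothetical example: two cyclists at the same absolute workload (3.0 L/min of O2,
# ~200 W on the pedals) produce the same heat and thus need similar sweat rates,
# even if that workload is 60% of VO2max for one rider and 75% for the other.
heat = heat_production_watts(3.0, external_work_watts=200.0)
print(round(heat), "W of heat ->", round(required_sweat_g_per_h(heat)), "g of sweat per hour")
```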

There is one important caveat, though: the study was conducted in relatively comfortable conditions of 26 C (79 F) and 26% relative humidity:

Maximal sweating capacity and subjective tolerance to the heat are no doubt improved by aerobic fitness, and therefore individuals with a high VO2peak would certainly have a distinct advantage during exercise at a fixed heat production in a physiologically uncompensable (i.e. hot and humid) environment.

So under “normal” conditions, the amount you sweat depends only on how much physical work you’re doing (and how big you are). But if the conditions are so hot that it’s impossible for you to dissipate all your metabolic heat through sweating, less fit people will hit their maximum sweat rate earlier than fit people.

Micro-exercise and the shortest possible (useful) workout

This week’s Jockology column in the Globe and Mail takes a look at “micro-exercise”: what is the smallest bout of exercise that actually offers health benefits?

Exercise generally obeys the normal rules of mathematics. You can replace one 40-minute workout with two 20-minute bouts, or even four 10-minute bouts, and get roughly the same health benefits. But beyond that, the rules break down: Exercise in bouts lasting less than 10 minutes simply doesn’t count.

At least, that’s what exercise physiologists and public-health authorities have been telling us for years.

But influential groups such as the American College of Sports Medicine are now reconsidering the value of ultra-short bouts of activity, and a new Canadian study suggests that the gradual accumulation of “incidental physical activity” – sweeping the floor, taking the stairs – in bouts as short as one minute can also contribute to your cardiovascular fitness level… [READ THE WHOLE ARTICLE]

The column focuses on the findings of a recent study by Ashlee McGuire and Bob Ross at Queen’s University. For more details on that study, check out Ashlee’s guest post describing the results over at Obesity Panacea. Also, the print version of the column was accompanied by Trish McAlaster’s graphic, which hasn’t yet been posted online [UPDATE: now it’s posted here]. Unfortunately, it doesn’t really fit in this blog’s format, but nonetheless:

I’m reasonably confident that this is the first mention of the caloric expenditures involved in butchering small animals to make it into the Globe!