
Archive for July, 2011

The Australian Paradox: less sugar, more obesity

July 13th, 2011

I’ve been debating whether to blog about this study since I received an e-mail about it from the Canadian Sugar Institute last week. In general, information from food marketing agencies is pretty suspect. For example, this press release from “Pistachio Health” that also went out last week about a new study:

[…] Additionally, pistachios – also known as the “Skinny Nut” – are shown to be a “mindful snack” in terms of taking longer to eat and requiring the snacker to slow down and be more conscious of what has been consumed. […]

Yeah, everybody calls them the “Skinny Nut.” Riiiight. Now I really believe that the information you’re sending me is impartial…

Anyway, I’ve decided to blog about this study — a look at sugar consumption and obesity rates in Australia, the U.S. and the U.K. between 1980 and 2003 — because the information is interesting. It’s by two very well-respected Australian researchers (one of whom, for the record — Jennie Brand-Miller — is a lecturer at the University of Sydney’s medical school, where my wife is studying). It’s in a peer-reviewed journal, Nutrients. And as far as I can tell from the disclosures in the paper, it wasn’t in any way funded by the sugar industry: it was a master’s project supervised by the two authors. The only reason the sugar lobby is e-mailing it around is that — as you’ll see — they like the results.

The full text of the study is actually available online, so I’m not going to dissect every detail. But the key result is very simple: unlike in the U.S., where sugar consumption has been climbing, per capita consumption of refined and added sugars actually declined by 16% in Australia between 1980 and 2003. During the same period, rates of obesity tripled. Here’s the sugar data:

Now, population data like this always raises lots of questions. The paper discusses the various ways of estimating sugar consumption, along with their pros and cons, and also breaks down sub-categories like sweetened beverages and so on. Without getting bogged down in all that, I think the important point is — as we should all know by now — putting two graphs side-by-side and saying “Hey, they have the same shape! Graph A must have caused Graph B!” is not good science. The recent debate about Robert Lustig’s “sugar is toxic” crusade has involved a lot of this sort of analysis: added sugar intake has increased in the U.S. and so has obesity, ergo A caused B. But if the trend really is the opposite in Australia (and if anyone can suggest reasons why the data above shouldn’t be trusted, please chime in below!), then those arguments are considerably weakened.

Training one limb reduces soreness in the other limb

July 11th, 2011

This one surprised me. It’s a new study from the University of Exeter, just published online in the European Journal of Applied Physiology, about DOMS (delayed onset muscle soreness), the exact causes of which have been hotly debated for years (including a section in Cardio or Weights, I might add).

Here’s what they did: 15 volunteers did a hard biceps workout with one arm, emphasizing eccentric rather than concentric contractions in order to produce more post-workout soreness. Two weeks later, they did the same workout; half of them (okay, 7) did the workout with the same arm as before, while the other half did the workout with the opposite arm. An hour after each workout, and then again 24 and 48 hours later, the researchers measured a series of parameters related to soreness, including loss of strength, perceived soreness, and resting arm angle.

As expected, due to what’s known as the “repeated bout effect,” the amount of post-workout soreness was less after the second workout than after the first workout. What’s weird is that it was less even when the subjects did the workout with the opposite arm!

Apparently, this effect has been observed in one previous study, though the results weren’t quite as clear. And it fits with other results showing that training your right arm (for example) can lead to strength gains in your left arm. In that case, it’s not that the muscles in your left arm get bigger — instead, it’s neural adaptations. As your brain learns to send “contract!” signals more effectively to your right arm, it does so symmetrically, so some of the benefits transfer to your left arm. Something similar appears to be happening with the post-workout soreness:

Data from the present study, therefore, provide limited evidence that the neural adaptations that provide protection from EIMD [exercise-induced muscle damage] following a second bout of exercise are likely to be centrally [i.e. in the brain] mediated.

A clue as to how this might work comes from the EMG data they took of muscle activity during the workouts. Eccentric muscle contractions preferentially recruit fast-twitch muscle fibres, which thus sustain greater damage than slow-twitch fibres. As a result, during the second of two exercise bouts you automatically use a higher proportion of slow-twitch fibres — and that shows up as a change in the average frequency of muscle activity measured by EMG, which decreases by 20-30% because of the greater slow-twitch contribution. In the new experiment, this is exactly what the researchers found: the mean EMG frequency decreased in the second bout, no matter which arm the subjects used. The brain seems to have learned from bitter experience that it should recruit fewer fast-twitch fibres, and it applies that lesson to both arms.
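For what it’s worth, the EMG measure in question is just the power-weighted average of the signal’s frequency spectrum. Here’s a minimal sketch of how such a “mean power frequency” can be computed (my own generic illustration with synthetic signals, not the authors’ analysis pipeline):

    import numpy as np
    from scipy.signal import welch

    def mean_power_frequency(signal, fs):
        # Power-weighted average frequency of the signal's power spectrum
        freqs, power = welch(signal, fs=fs, nperseg=1024)
        return np.sum(freqs * power) / np.sum(power)

    fs = 2000  # Hz; an assumed, typical surface-EMG sampling rate
    t = np.arange(0, 5, 1 / fs)
    rng = np.random.default_rng(0)
    # Toy "first bout" signal with more high-frequency content, and a
    # "second bout" signal shifted toward lower frequencies
    bout1 = np.sin(2 * np.pi * 120 * t) + 0.3 * rng.standard_normal(t.size)
    bout2 = np.sin(2 * np.pi * 80 * t) + 0.3 * rng.standard_normal(t.size)
    print(mean_power_frequency(bout1, fs))  # higher
    print(mean_power_frequency(bout2, fs))  # lower, as in the second bout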

So what’s the practical value of this? If you have an injury that immobilizes one arm or leg, this suggests that training the opposite limb, with a focus on eccentric contractions, can help protect the bad limb from muscle damage once you start rehabilitation exercises. And hey, it’s also just a pretty cool piece of trivia.

A universal law of decline for running, swimming… and chess?!

July 10th, 2011

Which do you lose first as you get older: speed or endurance? Swimming speed or running speed? Athletic ability or cognitive ability? There have been dozens of studies investigating these questions, often with conflicting results. But all these processes of decline follow the same basic mathematical rules, according to a new study in the journal Age by researchers in France (press release here; full text freely available here).

The abilities of all living things follow a basic pattern: (1) you start at zero when you’re born; (2) as you get bigger, stronger and more experienced, your abilities increase exponentially; (3) as your body starts to age and wear out, an exponential decline kicks in; (4) you end up back at zero when you die. During your peak years, the balance between the rising and falling exponential curves gives you a brief plateau.
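To make that shape concrete, here’s a toy version of the idea: a rising exponential for development combined with a falling exponential for decline, with made-up parameters chosen purely for illustration (this is my sketch, not the authors’ fitted model):

    import math

    def performance(age, a=10.0, b=0.20, c=0.005, d=0.12):
        # a, b, c, d are invented values that give a plausible shape
        growth = a * (1 - math.exp(-b * age))   # (2) rises, then plateaus near a
        decline = c * (math.exp(d * age) - 1)   # (3) negligible early, dominant late
        return max(0.0, growth - decline)       # (1) and (4): zero at both ends

    for age in range(0, 71, 10):
        print(age, round(performance(age), 2))

Run it and you get a curve that starts at zero, plateaus through the 20s and 30s, and then falls off steeply: the biphasic pattern the authors describe.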

Now, this seems like a fairly bland set of statements. After all, they’re not taking into account loss of muscle mass with aging, and shortening of telomeres, and loss of tendon elasticity and so on. They’re just wrapping all the details into a general curve. And in fact, their argument is even bigger than that: this curve doesn’t just describe your 100-metre dash time. It describes the life cycle of individual cells; of human beings; and of species in general. They claim.

To explore this theory, the researchers studied the career trajectories of more than 11,200 individual track athletes, swimmers and chess players. Here’s some sample data: track is at the top (Ato Boldon, 100m, blue; Sandie Richards, 400m, red), swimming is in the middle (Pieter van den Hoogenband, 100m free, blue; Martina Moravcova, 200m free, black), and chess is at the bottom (Jonathan Simon Speelman in blue, Jan Timman in black).

Convinced? Yeah, me neither, to be honest. But statistically, they claim that this model accounts for 91.7% of the variance at the individual level, and 98.5% at the species level. For “species level,” they looked at age-class bests. The data below shows curves for (top) men’s 100m free, women’s 200m free and women’s 400m run; (middle) best chess performance by age; (bottom) men’s marathon. The marathon graph is the most interesting, because it shows the two exponentials — rising and falling — that are added together to produce the model.

So what does this all mean? I’ve got to be honest: I found the paper pretty hard to understand. (Fortunately, the full text is freely available online, so you don’t have to rely on my attempts to decipher it!) I don’t think we’re supposed to look at those graphs and assume that Ato Boldon is going to be either dead or paralyzed when he turns 64. Rather, I think this model is intended to help us understand how sport (and other human capabilities) as a whole might evolve in the future. As the researchers conclude:

The study of the world records progression and top performances revealed a plateau in a majority of studied events. We extended the studied data and the model to a broader context: the development of physiological performance in the process of ageing. This questions the upcoming evolution of the biphasic pattern presented here: will the phenotypic expansion continue, plateau or decrease? Do we have the ability to maintain our development in a sustainable way?

Good questions — but of course, they don’t know the answers. Let’s see what Usain Bolt does this year and next, then we’ll talk.

[BONUS: Here’s a link to the appendix where they list the fitting parameters for the 11,200 careers they analyzed. Interesting tidbit: the average ages of peak performance for different events. E.g. for running: 23.3 for men’s 10,000m, 31.6 for men’s marathon. That’s a big gap!]

A tablespoon of water helps the exercise go on

July 7th, 2011

Neat study from some researchers in Greece, published online last month at Medicine & Science in Sports & Exercise, suggesting that the sensation of cool, wet water flowing down your throat may be more important for athletic performance than the actual hydration the water provides.

Hang on just a sec, you say. Doesn’t this sound a bit like the “mouth-rinsing” experiments that have caused such a stir over the past few years, where subjects get a boost from swishing some sports drink in their mouth and then spitting it out? Well, it’s similar… but different. The fluid in the mouth-rinsing experiments contains carbohydrate, and the evidence suggests (very strongly) that we have previously unknown carbohydrate sensors in our mouths. If you’re exercising at an intensity and duration where carbohydrate availability is a potential concern, then when your brain detects incoming carbohydrate, it lets you go faster/harder/farther — even if you then trick it by spitting the carbs out.

The new experiment only used water. The researchers took 10 endurance-trained males and put them through a two-hour protocol (alternating cycling and walking in a heat chamber) that dehydrated them by 1.9% of their body mass. Then they did a ~20-minute cycle to exhaustion under one of three different conditions (each subject tried all three, separated by at least a week): either no water permitted at all, 25 mL of water to drink every five minutes, or 25 mL of water to rinse their mouth with and then spit out every five minutes.

Note that 25 mL is a very small amount of water — it works out to about 1.7 tablespoons. After 20 minutes, they’d have received a total of 100 mL, which weighs 100 grams. In comparison, the pre-test dehydration protocol had taken away about 1,500 grams on average.
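The arithmetic, in case you want to check it (assuming the standard US tablespoon of about 14.8 mL):

    ML_PER_TBSP = 14.79                # US tablespoon, in mL
    dose_ml = 25
    doses = 20 // 5                    # one dose every 5 minutes over ~20 minutes
    total_ml = dose_ml * doses         # 100 mL, i.e. about 100 g of water
    print(round(dose_ml / ML_PER_TBSP, 1))  # ~1.7 tablespoons per dose
    print(round(100 * total_ml / 1500))     # replaces only ~7% of the sweat loss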

The results: the subjects lasted 17.7 minutes in the no-water condition, 18.7 minutes with the water rinse, and 21.9 minutes with water drinking. All the physiological measurements (heart rate, lactate) plus perceived exertion were the same across the three conditions at the end of the test:

[T]he efficacy of water suggests that probably the sensation of swallowing along with the cool sense in the digestive tract can motivate moderately dehydrated subjects and lead to an increase in performance… There is evidence that drinking itself activates the oropharyngeal receptors which in turn, influence significantly fluid balance, thermoregulation and possibly exercise performance.

In other words, in addition to a carbohydrate warning system in the mouth, the brain also seems to gather information about incoming fluid from the throat. This makes sense: it allows us to adjust and moderate behaviour based on incoming fluid and energy rather than waiting for the fluid and energy to be processed and distributed throughout the body. But it leaves us susceptible to being tricked by sneaky scientists.

This isn’t the first time this idea has been investigated. The discussion section of the paper describes a really cool study:

Moreover, in a classic study published by Figaro and Mack, subjects performed three identical dehydration protocols followed by 75 min of rehydration consisting of 1) ad libitum drinking (control), 2) infusion of a similar volume of water directly into the stomach via a nasogastric tube (infusion) and 3) ad libitum drinking with simultaneous extraction of ingested fluid via a nasogastric tube (extraction). The researchers found reflex inhibition of AVP [vasopressin, a hormone that controls the regulation of fluids in the blood] and thirst in control and extraction but not during infusion, suggesting that oropharyngeal reflexes modulate thirst and the secretion of AVP.

Translation: this isn’t just a placebo effect. Drinking water by swallowing it down your throat tells your body that water is coming, and various systems throughout the body respond accordingly. If you’re rehydrated without swallowing, your body doesn’t realize the fluid is coming (at least not immediately), and doesn’t respond. And if you rehydrate by swallowing water but immediately have an equal amount of water sucked out your nose (is this a cool experiment, or what!), then your body still thinks it got rehydrated.

So what’s the message here? Well, it seems that rinsing and spitting doesn’t do everything. But the principle is the same: late in a race, when you don’t want to be downing a full bottle of anything, taking a few swallows of a drink can still trigger your carb response (in the mouth) and your hydration response (in the throat).

Testing your max heart rate in 30 seconds

July 5th, 2011

It’s widely known that the old “220 minus your age” equation isn’t very good at determining your maximum heart rate. So what’s better? A new study from the University of Hawaii, recently published online at the Journal of Strength and Conditioning Research, tested nine different prediction equations, along with a surprisingly accurate way of determining your true max heart rate in about 30 seconds (sort of).

The study looked at 96 volunteers with an average age of about 22. That’s the first caveat: these results are only relevant for people in that general age group. And the volunteers were phys ed students, which means they’re likely to be more physically active than the general population.

I’ll start with the less interesting part of the study. Here’s the data from the equations they tested for all 96 subjects:

“CHRmax” is basically the “real” max heart rate. So they conclude that Gellish2 (191.5 – 0.007*age^2) and Fairbarn (201 – 0.63*age for women, 208 – 0.80*age for men) are the most accurate for this population. Maybe so — although trying to fit a function of age to data taken from subjects who are all virtually the same age seems a little weak to me. But the bigger problem is something they themselves note in their introduction:

The most commonly used Fox equation [i.e. 220 minus age] has been reported to have an SD [standard deviation] between 10 and 12 b/min. Thus, when estimating HRmax using the Fox equation, approximately 66% of the population should fall within +/-10 beats of the actual HRmax, but for the remaining population, the actual HRmax could differ by as much as 12–20 b/min or more.

I don’t know why they’re saying this is a property of the Fox equation. This is a fundamental property of human physiology: heart rate varies between individuals, so ANY equation based only on age will be more than 10 beats off for at least a third of the population. So for practical exercise prescription purposes, who cares which equation is more accurate?
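Still, for reference, here’s roughly what the competing equations predict at a few ages: a quick sketch using the formulas as quoted in the paper (Fox, Gellish2 and Fairbarn), mostly to show that the differences between them are modest compared to that ±10-beat individual spread.

    def fox(age):
        return 220 - age

    def gellish2(age):
        return 191.5 - 0.007 * age ** 2

    def fairbarn(age, sex):
        # sex: "M" or "F"
        return 208 - 0.80 * age if sex == "M" else 201 - 0.63 * age

    for age in (22, 40, 60):
        print(age, fox(age), round(gellish2(age), 1),
              round(fairbarn(age, "M"), 1), round(fairbarn(age, "F"), 1))

At age 22, the four predictions span roughly 187 to 198 beats, a spread comparable to the individual variation around any single equation.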

Much more interesting is the way they calculate HRmax. They did the usual graded treadmill test to exhaustion to determine “true” max for 25 of their volunteers. Then they tested two other protocols. One was the Wingate test, which is basically 30 seconds all-out on an exercise bike. It was a crappy predictor, more than 10 beats below the actual average.

The second test was really simple: they had the subjects sprint as hard as they could for 200 metres on a standard track, with a running start. That’s it. And they measured their heart rate during the sprint. The average from the treadmill test was 190.0; the average determined from the 200m sprint (which takes a little over 30 seconds for most people) was 190.1. Pretty darn good.

Now, there are some caveats. The fact that the averages were close doesn’t mean everyone got identical values on the two tests. In fact, the “mean absolute error” was 5.8 bpm — but since the treadmill test was higher about half the time and the sprint was higher the other half, the averages balanced out.

Also, they didn’t do just one 200m sprint. They actually did two, separated by at least three days. Each individual sprint, on average, underestimated HRmax: the first sprint produced an average of 187.9, and the second an average of 186.3. So a single sprint wasn’t as reliable at getting right up to HRmax — but when people got two tries, most of them seem to have nailed at least one. Presumably if you gave them five tries (spread over several weeks), you’d get an even higher average max value. Of course, the same is true of the treadmill test: not everyone will execute it perfectly, so if you do it twice, the average value will probably creep up a bit. But what this study tells us is this: for this group of subjects (and remember, these are young phys ed students capable of sprinting 200 metres all out without pulling up lame halfway), give them two cracks at sprinting 200 metres, take the highest heart rate they produce, and you’ll have a very good estimate of HRmax. It’s a heck of a lot cheaper and quicker than a graded exercise test — and a billion times more useful than any equation based only on your age.
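In code form, the resulting field protocol boils down to almost nothing (my paraphrase of the study’s takeaway, not a formula from the paper):

    def hrmax_from_sprints(peak_heart_rates):
        # Peak HR recorded during each all-out 200m sprint, with attempts
        # separated by at least three days. A sprint can only undershoot
        # the true max, so the best estimate is the highest value observed.
        return max(peak_heart_rates)

    print(hrmax_from_sprints([188, 192]))  # -> 192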

Jockology: exercising in the heat

July 3rd, 2011

This week’s Jockology column in the Globe and Mail is a round-up of a few recent studies on exercise in hot weather: how the brain slows you down more than the body; how acclimatization does (and doesn’t) work; and how cooling your palms can make your workout feel easier.

[…] “Slowing down in the heat could be a subconscious regulation to protect us from damage, such as heat stroke,” explains University of Bedfordshire researcher Paul Castle, the lead author of the study.

In other words, you don’t slow down because your body has reached some critical temperature. Instead, your brain slows you down to prevent you from ever reaching that critical temperature. It’s a subtle difference – but as the cyclists in the study discovered, it means that our physical “limits” are more negotiable than previously thought… [READ THE WHOLE ARTICLE]

Canoeing the Lièvre River in Quebec

July 2nd, 2011

Another bit of self-promotion: my travel piece on canoeing the Lièvre River in Quebec is running in this weekend’s New York Times Travel section:

WE had two choices: stop and haul our canoes and gear along a four-mile trail running parallel to the river, or paddle into the canyon, with its 19 sets of rapids and near vertical walls. Either way, the decision would be final.

It was our fourth day on the Lièvre River in Quebec, about 170 miles north of Ottawa, and the current, gentle at first, was getting pushier with each tributary we paddled past. The land around us, thickly blanketed with pines and dotted with occasional cedar and birch, rose steep and rocky on both sides of the river ahead of us. From around the bend came the muffled roar of angry water… [READ ON]

It was a great trip — and a great incentive to keep working at some upper-body strength instead of just running (those whitewater canoes are wayyyy harder to portage than the ultralight Kevlars I used to use for trips in Algonquin and Temagami). My biggest regret about being in Australia for the summer is that I’m going to miss this year’s canoe trip.


8.5 hours of sleep a night boosts speed and shooting average

July 2nd, 2011

It’s more or less an annual ritual: Cheri Mah of the Stanford Sleep Disorders Clinic and Research Laboratory releases some results that show how getting more sleep improves performance in Stanford varsity athletes. I’ve written before about her results for tennis players and swimmers. This year’s data, published in the journal Sleep, focused on basketball players, who were asked to aim to increase their time in bed to 10 hours a night:

Participants shot 10 free throws from 15 feet, making an average of 7.9 shots at baseline and 8.8 shots at the end of the sleep extension period. They also attempted 15 three-point field goals, making an average of 10.2 shots at baseline and 11.6 shots after sleep extension. The timed sprint [which improved from 16.2 to 15.5 seconds] involved running from baseline to half-court and back to baseline, then the full 94-foot length of the court and back to baseline.

None of these sleep studies were randomized or controlled, so we can’t take the data too seriously. (Particularly in sports like swimming, we’d expect to see improvements from early season to late season even without a change in sleeping habits.) Still, it’s interesting stuff.

What differentiates the basketball data from some of the earlier studies is that sleep time was measured objectively using actigraphs (basically watch-sized devices that monitor movement at night). So we know that the basketball players managed to increase their actual time asleep (as opposed to just time in bed) to just under 8.5 hours a night, an increase of 110.9 minutes from baseline. That’s a big difference — and it’s a lot of sleep, considerably more than most people even aim for.

Mah offers these tips:

  • Prioritize sleep as a part of your regular training regimen.
  • Extend nightly sleep for several weeks to reduce your sleep debt before competition.
  • Maintain a low sleep debt by obtaining a sufficient amount of nightly sleep (seven to nine hours for adults, nine or more hours for teens and young adults).
  • Keep a regular sleep-wake schedule, going to bed and waking up at the same times every day.
  • Take brief 20-30 minute naps to obtain additional sleep during the day, especially if drowsy.

This is all eat-your-vegetables kind of advice. I mean, we all know sleep is important — but sometimes it’s good to be reminded that the results show up objectively.

Is leading a race stupid? Some 1500m championship data

July 1st, 2011

‘Tis the season for championship track racing — and with it, the annual moaning about slow, tactical middle-distance races. At both the U.S. and Canadian national championships last weekend, the men’s 1500-metre races went very slowly (until, late in the race, they suddenly started going very fast). On the message boards, people launched into the usual criticisms of everyone who didn’t win, saying that they should have taken the lead and made the race faster from the start — much like Christin Wurth-Thomas did in the U.S. women’s 1500m. (The fact that Wurth-Thomas, who had the fastest seed time, was passed by three women in the final straightaway and thus failed to qualify for the World Championships seems lost on these critics.)

Anyway, as I always do, I got sucked into the debate too, in a thread on tnfnorth. Given that USATF results now offer complete splits for every lap of every race, it’s possible to do a much more detailed analysis of tactics than used to be possible. Out of interest, I looked at the three semifinal heats of the USATF men’s 1500m. There were three intermediate splits (at 300m, 700m and 1100m) taken in each race, which means a total of nine intermediate leads recorded. Seven men filled these nine leading spots; none of them qualified for the final.

Of course, this data didn’t convince anyone. Just a fluke, they said. So I’ve decided to take it a bit further. I looked back at World Championship results between 1997 (the earliest year for which intermediate leaders are listed in the results) and 2009 (the most recent championship). Here is how the first-lap leaders (after 400m) fared in the 23 quarter-final heats of the men’s 1500m in that timespan:

As expected, there’s a full range of results — leading the first lap doesn’t guarantee either success or failure. But there’s a pretty pronounced tilt toward the right-hand side of the graph: only 35% of first-lap leaders managed to either hang on in the top six or take their race out fast enough to get a time qualifier, while the other 65% failed to move on to the semis.

Now, the tactics in qualifying races (where the goal is simply to place within the top N) are obviously different than those in finals (where the goal is to place as highly as possible, with every place counting). There’s not as much data for the finals, but here they are nonetheless:

Let me emphasize that race tactics are enormously complicated, dictated by the individual’s physiology, psychology, abilities relative to the rest of the field, weather conditions, and so on. I have beliefs of varying strengths regarding many of these factors — but I don’t believe this data answers any specific questions. It does, however, give a snapshot of how all these factors play out in the real world when they’re mixed together. And it suggests (to me, at least) that taking the lead during the first lap of a championship men’s 1500m race rarely ends well.