Archive for December, 2011

Desi Davila, and learning to increase your carb intake

December 31st, 2011

Great in-depth profile of Desiree Davila in the current Runner’s World, leading up to the U.S. Olympic Marathon Trials next month. One passage that caught my eye, referring back to the 2008 Trials:

Davila ran her plan, clocking 5:48 mile splits. At mile 21, she was eight seconds behind eventual third-place finisher, Blake Russell. “And then I just completely fell apart,” Davila says.

It was a fueling issue. As a track runner, competing in the 1500, the 5000, and the 10,000, Davila never had to take fluids. More to the point, she couldn’t. When she tried, everything came up. “I thought, Well, I don’t want to lose breakfast, too, so I’ll just stop drinking fluids on the course.”

That doesn’t work over 26.2 miles. Or at least not for her. She struggled to cross in 2:37:50, for 13th place.

The fueling issue would be addressed—directly. During long workouts, Davila would force herself to drink. Her system, well, rejected it. “It was actually kind of disgusting,” she says. But week after week, her body eventually adapted. “Gross,” she says, “but necessary.”

Every time I write about carbohydrate intake during long endurance races (e.g. here), I get comments from people who say “Well, that may be true for the subjects in that study, but unfortunately that doesn’t work for me. My stomach can’t handle that.” Good thing Davila didn’t just accept that as an unchangeable fact of life.

Is lactate threshold a reproducible measurement?

December 31st, 2011

I posted earlier this week about a study that found that the amount of lactate in your blood at threshold doesn’t predict endurance performance. This doesn’t mean that lactate measurements are useless, I pointed out:

It just means that a single lactate measurement in isolation is meaningless: you have to make repeated measurements and track your progress relative to your personal baseline, in order to eliminate the effects of individual variation.

Well, a new study in the British Journal of Sports Medicine actually calls that last statement into question. If you make repeated measurements of lactate threshold, are those measurements repeatable enough to detect small changes in fitness? Many, many studies have examined this question, with the general conclusion that “yes, they’re repeatable.” But most of these studies have only repeated the measurement twice or at most three times, which is hardly sufficient to look for variability.

So researchers from Massey University decided to run a study in which 11 fit subjects did at least six lactate tests each, to see how consistent the results were. The goal here was to make the measurements as identical as possible, so they strictly controlled diet, time of day, and training conditions, all of which have been shown to influence lactate values. (Coaches: do you control these factors when you test your athletes?)

Of course, there are many different markers you can look for in lactate tests, so the researchers chose seven of the most common markers: Rest+1, 2.0 mmol/L, 4.0 mmol/L, D-max, nadir, lactate slope index, and visual turnpoint.

These results indicate that only the D-max marker has good reproducibility and that it alone can identify small but meaningful changes in training status with sufficient statistical power.

Expressed in terms of the coefficient of variation (which is just the standard deviation divided by the mean), the visual turnpoint was by far the worst, with a variation of 51.6%, while D-max came in at just 3.8%. The other markers fell between 5.9% and 12.6%. What does this mean in terms of cycling power? The researchers ran some numbers to show that even a fitness change corresponding to 70 watts (i.e. improving from 55 to 47 minutes in a 40 km time trial) wouldn’t be reliably detected by most of the lactate markers.
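If you want to get a feel for what a given coefficient of variation means for your own testing, the calculation is straightforward. Here’s a minimal Python sketch, using made-up D-max power outputs rather than anything from the study:

    import statistics

    def coefficient_of_variation(values):
        """CV (%) = standard deviation divided by the mean."""
        return 100 * statistics.stdev(values) / statistics.mean(values)

    # Hypothetical D-max power outputs (watts) from six repeated lactate tests
    # on one athlete -- illustrative numbers only, not data from the study.
    dmax_watts = [250, 272, 258, 268, 262, 255]

    cv = coefficient_of_variation(dmax_watts)
    mean_power = statistics.mean(dmax_watts)
    print(f"CV = {cv:.1f}%")

    # Rough rule of thumb: a change much smaller than about twice the typical
    # variation can't be confidently attributed to a real change in fitness.
    print(f"Smallest change worth trusting: roughly {2 * cv / 100 * mean_power:.0f} W")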

Tim Noakes, in his accompanying commentary, draws the following conclusions:

[T]hey conclude that unrealistically large changes in power output would have to occur before it can be claimed with certainty that training has produced a real change in an individual’s blood lactate concentrations during exercise. These findings should encourage sober reflection among that large group of exercise scientists who use blood lactate concentrations to guide athletes’ training.

I’m inclined to be a little less negative. After all, coaches and athletes can likely settle for somewhat less rigid definitions of what change can be considered “significant.” As long as they understand that the measurement is fallible and subject to variation, it might still be a useful tool for monitoring fitness. (Is it useful for prescribing training paces? That’s a whole different question.)

When is VO2max not max?

December 30th, 2011

Last spring, I had the opportunity to visit the sports science research group at the University of Cape Town. While I was there, I heard about some very surprising new research raising questions about the definition of VO2max. Since the research hadn’t yet been published, I agreed not to write about it; that paper has now been published in the January issue of the British Journal of Sports Medicine, so here goes.

A little background to start. The concept of VO2max — the absolute limit on how much oxygen you can deliver to your exercising muscles — is controversial these days, because it implies a physical limit on endurance performance. That idea, entrenched for the last century, has been challenged recently by researchers led by Cape Town’s Tim Noakes, whose “Central Governor Theory” argues that we never actually reach our ultimate physical limits — instead, our brains hold us back to protect us.

The new issue of BJSM actually contains eight different papers that could be interpreted as supporting Noakes’s basic thesis. And Noakes himself has an introductory article that offers a good overview of the debate and why it matters, for those who aren’t familiar with it. The full text of that intro is freely available here. Here’s Noakes’s somewhat oversimplified summary of the prevailing view:

In 1923, Nobel Laureate Archibald V Hill developed the currently popular model of exercise fatigue. According to his understanding, fatigue develops in the exercising skeletal muscles when the heart is no longer able to produce a cardiac output which is sufficient to cover the exercising muscles’ increased demands for oxygen. This causes skeletal muscle anaerobiosis (lack of oxygen) leading to lactic acidosis. The lactic acid so produced then ‘poisons’ the muscles, impairing their function and causing all the symptoms we recognise as ‘fatigue’.

We already know for sure that lactic acid doesn’t “poison” the muscles. But what about the idea of VO2max?

According to Hill, during the period between progressive exercise and exhaustion, whole body oxygen consumption reached a maximum value – the maximum oxygen consumption (VO2max) – ‘beyond which no effort can drive it’.

Two studies in the new issue of BJSM appear to show that VO2max isn’t actually “maximal” — you can get a higher value. As Noakes argues:

Had Hill shown this in 1923, he could not have concluded that maximal exercise performance is controlled by a limiting cardiac output. Instead a more complex explanation is required to explain why athletes always terminate exercise before they reach an ultimate oxygen limitation.

Okay, so much for the intro. The study, by Fernando Beltrami and his colleagues in Cape Town, introduces a new VO2max protocol. VO2max is usually tested with an incremental design: you get on a treadmill (or an exercise bike), and the speed/workload gets higher/harder every minute or so until you reach failure. At some point before you fail, the amount of oxygen you’re using reaches a plateau. Beltrami’s test is a decremental protocol: you start at a speed/workload slightly higher than what you were able to reach in a conventional incremental test, and then the speed is progressively reduced.
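To make the difference between the two protocols concrete, here’s a rough sketch of what the speed schedules might look like. The step size, stage count, and starting speeds are illustrative guesses, not the actual parameters from Beltrami’s paper:

    def incremental_protocol(start_kmh=8.0, step_kmh=0.5, stages=20):
        """Conventional incremental test: speed rises each stage until the runner fails.
        (Hypothetical step size and stage count, not the paper's protocol.)"""
        return [start_kmh + i * step_kmh for i in range(stages)]

    def decremental_protocol(peak_incremental_kmh, bump_kmh=0.5, step_kmh=0.5, stages=20):
        """Decremental test: start slightly above the incremental peak, then slow down."""
        start = peak_incremental_kmh + bump_kmh
        return [max(start - i * step_kmh, 0.0) for i in range(stages)]

    # Example: a runner who topped out at 18 km/h on the incremental test would
    # start the decremental test at 18.5 km/h and get slower each stage.
    print(incremental_protocol()[:5])      # [8.0, 8.5, 9.0, 9.5, 10.0]
    print(decremental_protocol(18.0)[:5])  # [18.5, 18.0, 17.5, 17.0, 16.5]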

The subjects in the study all did a series of tests, as shown below. The key test was on visit number 4, when the experimental group did their decremental test:

Now, why would you expect a different result from a decremental test instead of an incremental test?

We reasoned that if subjects knew beforehand that the test would become progressively easier the longer it continued, the possibility was that any biological controls directing the termination of exercise might be relaxed, thus allowing the achievement of a VO2max higher than that achieved with conventional INC.

In other words, if the plateau observed in conventional VO2max tests is mediated by the brain in some way, rather than being purely physical, then it might be possible to change the plateau. And here are the VO2max values produced in those multiple tests by the two groups:

Sure enough, the peak VO2 reached in the decremental test was 4.4% higher (p=0.004) than in the incremental test. Strangely, it stayed at this new higher value in the subsequent incremental test — even though there were no related physiological changes: heart rate, breathing rate, and ventilation at VO2max were the same in the different protocols. So what’s going on?

Emotional stress can affect blood flow during exercise and stimulation of sympathetic cholinergic fibres are thought to promote arteriolar vasodilatation and to induce changes in metabolism, producing a switch from aerobic metabolism to increased oxygen-independent glycolytic pathways… We propose the interesting possibility that an anticipatory difference in perception of the future workload might impact the sympathetic or parasympathetic drives and lead to differences in the metabolic response during exercise.

That’s just speculation. But what’s not speculation is that the subjects in this study did conventional VO2max tests and produced reproducible plateaus; then they did another test that simply changed the order of the speeds, and produced higher VO2max values. Whatever is happening here, it’s not tenable to argue that the VO2max values measured in conventional incremental tests represent some absolute physical limit on the body’s ability to deliver oxygen to working muscles.

Altitude babies and epigenetics

December 28th, 2011

Steve Magness has a fascinating post on his blog about a neat new study in the January issue of the Journal of Applied Physiology. The researchers took a bunch of rats whose ancestors had been living at the Bolivian Institute for Altitude Biology, which sits 3,600 metres above sea level. Half the rats were placed in a room with “enhanced oxygen” to mimic sea level from one day before birth to 15 days after birth; the other half simply grew up normally at 3,600 metres. Then the researchers followed the rats for the rest of their lives to see whether this “postnatal” exposure to low oxygen affected their development.

This study fits into the recent burst of research into epigenetics — the idea that early environmental influences can produce lasting changes in gene expression. And sure enough, there were significant differences between the rats who grew up at high altitude continuously (HACont) and those who got two weeks of sea-level oxygen (HApNorm):

As you can see, the high-altitude rats had higher hemoglobin and hematocrit long after the two-week exposure period (32 weeks is roughly middle-aged in rats). Among many other differences, the altitude babies also had bigger hearts and used less oxygen. All of this sounds pretty good for endurance athletes — which is why Steve wrote:

I always joke with my friends that whenever I have kids, I’m going to stick them at altitude during pregnancy and right after just to develop super altitude adapted kids…

But there’s a caveat. Here’s the survival data for the two types of rat:

In fact, the researchers make the overall conclusion that low oxygen levels in the crucial weeks after birth are a bad thing:

We conclude that exposure to ambient hypoxia during postnatal development in [high altitude] rats has deleterious consequences on acclimatization to hypoxia as adults.

So you have to be careful what you wish for on your kids’ behalf. Either way, though, it’s clear that environment alone can produce profound, lifelong changes in physiology — producing group traits that we once might have mistakenly attributed to genetics.

Lactate at threshold doesn’t predict performance

December 27th, 2011

I was at a conference on fatigue a few months ago where one of the speakers was Mike Lambert, a well-known sports science researcher from Tim Noakes’s group at the University of Cape Town. One of the questions at the end of his talk was about the use of lactate monitoring; his answer was something along the lines of “We refuse to measure lactate, because we don’t believe it offers any useful predictive information.” As a result, the UCT sports science unit doesn’t do much work with certain teams like the South African swim team, because the swim coaches are convinced that lactate testing offers important feedback.

A new paper just published online in the European Journal of Applied Physiology reminded me of that discussion. Researchers in Austria performed a whole series of different incremental and maximal tests on 62 volunteers to look for patterns. The basic finding was that the amount of lactate in the blood at “maximal lactate steady state” (MLSS: the point where you’re producing and clearing lactate at the same rate) isn’t correlated with how fast or fit you are.

This isn’t the first study to make this observation. But previous studies have used relatively homogeneous groups, which makes it hard to detect any relationship in the first place. With VO2max, for example, you can safely bet that someone with a VO2max of 75 will perform better on any endurance task than someone with a VO2max of 35. But if you take a group of people whose VO2max values are all clustered between 60 and 70, then VO2max becomes a very poor predictor of performance.
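The statistical issue here is range restriction: the narrower the spread of the predictor, the weaker the correlation you will observe, even when the underlying relationship is strong. A quick simulation with entirely invented numbers makes the point:

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated population: VO2max spread from 35 to 75, with performance loosely
    # tied to it. All numbers are invented purely to illustrate range restriction.
    vo2max = rng.uniform(35, 75, 5000)
    performance = vo2max + rng.normal(0, 5, vo2max.size)  # arbitrary noise

    def r(x, y):
        return np.corrcoef(x, y)[0, 1]

    print(f"Full range (35-75): r = {r(vo2max, performance):.2f}")  # strong correlation

    narrow = (vo2max >= 60) & (vo2max <= 70)
    print(f"Restricted (60-70): r = {r(vo2max[narrow], performance[narrow]):.2f}")  # much weaker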

In this case, the study subjects ranged from sedentary (0 hours per week of sports or exercise) to very fit athletes training up to 24.5 hours per week. Their power output on the bike at MLSS ranged from 100 to 302 watts. But despite this wide range, it was still impossible to predict anyone’s power output from their lactate concentration at MLSS.

So does this mean Lambert is right and lactate is useless? Not necessarily. It just means that a single lactate measurement in isolation is meaningless: you have to make repeated measurements and track your progress relative to your personal baseline, in order to eliminate the effects of individual variation.

Pre-race carbs influence marathon pace

December 23rd, 2011

Cool field study on carbohydrate loading that I missed when it came out in the International Journal of Sports Performance over the summer, but just noticed on Amby Burfoot’s Twitter feed. Researchers enrolled 257 runners preparing for the 2009 London Marathon in a five-week online study where they entered all sorts of details about their training and diet leading up to the race. The subjects had an average age of 39, and an average finishing time of 4:34.

Needless to say, there were many factors that predicted running time: gender, BMI and training being the most obvious! The most interesting was nutrition:

In addition, although individual differences in race day diet did not strongly influence the marathon performances of recreational athletes, the amount of carbohydrate ingested during the day before race-day was identified as a significant and independent predictor of running speed. Furthermore, those runners who ingested more than 7 g carbohydrate per kg body mass during the day before the event ran faster in general and also maintained their running speed to a greater extent than those participants who consumed lower quantities of carbohydrate.

This may remind you of a study I blogged about a few months ago, showing correlations between in-race carbohydrate intake and Ironman finishing time. In this case, the better predictor is day-before carb intake, not in-race carb intake — perhaps not surprising, since a marathon is much shorter than an Ironman triathlon. Here’s the most interesting data:

The most obvious question here is: Is this correlation or causation? It’s certainly plausible — in fact, it’s probable — that the most serious runners who’ve trained best are also those who realize they should eat a lot of carbs. To address this, the researchers did a matched-pair analysis. There were 30 runners who consumed more than 7 g/kg of carbs. From the rest of the subjects, the researchers extracted another 30 runners pair-matched to the carb eaters so that there was no statistical difference in age, BMI, training data, and marathon experience between those two groups of 30.

In the graph above, the open squares are the carb eaters, and the open circles are the matched group that ate fewer carbs. The finding remains the same: the runners who ate fewer carbs ran slower — and perhaps more importantly, their speed declined more sharply during the race, particularly between 35 and 40K.

Non-randomized observational studies like this need to be treated with caution, needless to say. This isn’t “proof” that eating carbohydrates the day before the race makes you faster. But it certainly fits with our current understanding of endurance physiology, and it offers a tangible target for midpack marathoners: 7 grams of carbohydrate per kilogram of bodyweight (a number that conveniently agrees with studies that have found that a single day at 10 g/kg is enough to fully max out your glycogen stores).
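For anyone who wants to turn that target into actual numbers, the arithmetic is trivial. Here’s a tiny helper; the 7 g/kg figure comes from the study, and the body masses are just examples:

    def carb_target_grams(body_mass_kg, grams_per_kg=7.0):
        """Day-before-race carbohydrate target, using the study's 7 g/kg threshold."""
        return body_mass_kg * grams_per_kg

    # A 70 kg runner would aim for about 490 g of carbohydrate the day before the race;
    # a 60 kg runner, about 420 g.
    print(carb_target_grams(70))  # 490.0
    print(carb_target_grams(60))  # 420.0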

Men’s marathon: how much faster is it getting, and why?

December 22nd, 2011

Marathoners are getting faster — that’s no secret. Here’s the progression of the fastest men’s marathon in each year between 1969 and 2010:

But looking at the leading time is a somewhat narrow approach, since it just reflects the freakish talents of one individual in each year. So here’s data, from a new analysis by researchers from the University of Milan in the Journal of Strength and Conditioning Research, showing the average of the top 200 times for the same time period:

So the basic trends are similar, but not identical. Look at the last three years, for example. From the top time, you might think marathoners are getting slower; but the second graph shows clearly that — as anyone paying attention to the marathon lately can attest — the depth of fast performances has been increasing steadily and sharply. I’ll bet the 2011 numbers will continue that trend.

The average data also shows a few inflection points where the rate of improvement has changed: rapid improvement until 1983, then a leveling off until about 1997; times dropped again until 2003, stalled briefly for a few years, and have been falling steadily since.

So what explains the changes? We can speculate about the role of money, science, and training… But as Amby Burfoot pointed out in his take on this study, it’s hard to get away from this stat:

[I]n 1997, East Africans nabbed just 29 percent of the top 200 times. For 2010, the corresponding figure was 84 percent.

One other interesting nugget: over this time period, the top time improved by about 5 seconds per year, while the average of the top 200 improved by about 10 seconds per year. So this means that (a) competitive depth is improving faster than the front of the pack, and (b) if you extrapolate those linear trends naively, in another 50 years or so the 100th-ranked marathoner in the world will be faster than that year’s top-ranked marathoner (which is, of course, impossible, and a good reminder of the limits of this kind of extrapolation).
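For what it’s worth, the back-of-the-envelope extrapolation looks like this. The 2010 gap between the #1 time and the top-200 average is a hypothetical figure plugged in for illustration, not a number from the paper:

    def years_to_crossover(gap_seconds, top_rate_s_per_yr=5.0, depth_rate_s_per_yr=10.0):
        """Naive linear extrapolation: years until the top-200 average catches the #1
        time, if the gap closes at (depth_rate - top_rate) seconds per year."""
        return gap_seconds / (depth_rate_s_per_yr - top_rate_s_per_yr)

    # Assume (hypothetically) a gap of about four minutes in 2010:
    print(years_to_crossover(4 * 60))  # 48.0 -- i.e. "another 50 years or so"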

Salt intake, and why taste is like a dog whistle

December 21st, 2011

I was reading a profile of British historian Lucy Worsley in the New Yorker last night, in which the writer (Lauren Collins) takes part in the authentic re-enactment of the meal eaten by King George on February 6, 1789:

THEIR MAJESTIES DINNER

Soupe barley

4 chickens roasted

3 pullets minced and broiled

7 3/4 mutton collop pyes

6 perch boiled

2 breasts of lamb a la pluck

2 salmic of ducks

13 loin veal smort

(And a partridge in a pear tree, presumably.) The article is both fascinating and funny, but the culinary payoff after an enormous amount of work ends up being a bit anticlimactic:

If every age has its sounds and smells, it also has its flavors. The taste of 1789 can be a dog whistle to modern palates… “You’ve had your tongue burnt off by a Mexican chili, and you’ve been eating sugar cookies since you’ve been able to stand [says Marc Meltonville, co-head of the Historic Kitchens team;] if something’s subtle, sweetened with rose petals, how are you going to be able to taste it?”

This made me think of the long-running and bitter debate about the “right” amount of salt consumption, and how tastes are formed. Just this morning, the New York Times reported on a neat new study suggesting that the amount of salt you’re fed as an infant determines your taste for it in later life. But taste for salt is also somewhat plastic. When my wife and I started eating together, her taste for salt was dramatically higher than mine. Now, a few years later, our tastes have sort of met in the middle. She no longer adds as much salt to food as she used to; but when I visit my parents, I find that I now need to add salt to dishes that I loved for many years with no added salt.

And I still struggle to reconcile all this with the widespread message that we’re eating wayyy more salt now than we used to (and thus salt is responsible for the current epidemic of hypertension). As I wrote last year about a study by Harvard’s Walter Willett:

He and a colleague reviewed studies between 1957 and 2003 that measured sodium excretion in urine — a very accurate way of determining salt intake that gets around the difficulties in figuring out exactly how much salt is in your food. They found two main things: (a) sodium intake averaged about 3,700 mg per person per day, which is way higher than the upper recommended limit of 2,300; and (b) it essentially hasn’t changed in the half-century studied.

Interestingly, these results agree almost exactly with similar reviews of studies from 33 different countries: salt intake is high, and it hasn’t changed in recent memory.

And Henry VIII, according to Worsley in the New Yorker piece, ate 20 grams of salt each day!

How neutrophils boost (or weaken) your immune system after exercise

December 20th, 2011

Exercise boosts your immune system — up to a point. A neat new paper in Medicine & Science in Sports & Exercise digs a little deeper into this complicated relationship between exercise and immune function. Specifically, it looks into the response of neutrophils:

When infection occurs, neutrophils rapidly migrate to the infection site (chemotaxis) and ingest the pathogens (phagocytosis).

So how does exercise affect these neutrophils? Well, that depends on what kind of exercise you do. For regular, moderate exercise (“CME,” or “chronic moderate exercise,” consisting of 30 minutes of moderate cycling daily for two months), here are the results:

“DT” is “detraining.” So you can clearly see that regular, moderate exercise boosts the ability of the neutrophils to get to infection sites quickly (chemotaxis) and attack the bad guys (phagocytosis). And in fact, the neutrophils are still ultra-alert for a couple of months after you stop training. In addition, the researchers found that regular exercise extended the life of the neutrophils.

On the other hand, “acute severe exercise” (an incremental test to exhaustion) had more mixed effects. Chemotaxis was enhanced, but phagocytosis wasn’t, and the lifespan of the neutrophils was shortened — not so good for immune function.

So is this a surprise? Not really — it’s been clear for a long time that exercise has a J-shaped influence on immune function. Some is good, more is better, but beyond a certain point, too much is bad. Run a marathon, you’ll have a slightly elevated risk of catching a cold (or at least suffering from some sort of respiratory symptoms) afterward. But studies like this are needed to understand what exactly is happening in the body, so that eventually we’ll have a better idea of exactly where the curve in the J starts — and possibly figure out some ways to extend the sweet spot of the curve.

Standing desks, sedentary behaviour, and the need for motion

December 19th, 2011

My Jockology column in this week’s Globe and Mail takes a look at the surge of interest in standing desks:

Now that we’ve accepted the surprising truth about sedentary behaviour – that sitting at a desk all day wreaks havoc on your health, no matter how much you exercise before or after work – the standing desk is having a moment. Desk jockeys everywhere are rising up.

The cashiers of the world, meanwhile, must be scratching their heads.

“Ask anyone who works in a shop whether they feel good standing all day, or whether they need to periodically sit,” says Alan Hedge, who directs the Human Factors and Ergonomics program at Cornell University in Ithaca, N.Y.

Indeed, prolonged standing has been linked to a long list of health problems over the years: most commonly varicose veins, but also night cramps, clogged arteries, back pain and even (according to one study) “spontaneous abortions” – enough to make you think twice before throwing away your chair. But striking the right balance in your cubicle isn’t necessarily about the furniture, researchers say – it’s about how you use it… [READ THE WHOLE ARTICLE]

 
