
Archive for September, 2011

Cadence data redux

September 18th, 2011

Last week I posted some data about my running cadence at different running paces, which sparked plenty of interesting discussion here and at several other sites including Pete Larson’s Runblogger, Amby Burfoot’s Peak Performance and Brian Martin’s Running Technique Tips. All of those folks also sent me some data on their own cadence-vs-pace curves, so I just wanted to share the updated graph:

Without rehashing the whole discussion from last week, the key point I take away from this is that cadence changes as a function of pace (and in a fairly predictable manner, at that). The runners shown here vary dramatically in age, morphology, speed, running shoe preference, running style and probably many other parameters — and as a consequence, at any given pace they have different cadences.

Some might argue that, if all of us took a course to learn the “perfect” form, our cadences would converge toward similar values. That’s an interesting debate — but not the one I’m focused on here. Because even if we did all have the same cadence at 5:00/km, this data suggests very strongly to me that we’d have a faster cadence at 4:30/km, and an even faster cadence at 4:00/km. The moral: any discussion of cadence, whether of an individual or a group, is meaningless without implicitly or explicitly considering pace.
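One quick way to see the "cadence is a function of pace" claim in numbers is to fit a straight line to cadence-vs-speed measurements. Here's a minimal sketch; the data points are invented for illustration (in the same ballpark as the curves discussed above), not the actual posted data:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    a = mean_y - b * mean_x
    return a, b

# Speed in km/h (5:00/km = 12 km/h, 4:00/km = 15 km/h), cadence in steps/min.
# Hypothetical measurements for one runner:
speeds = [10.0, 12.0, 13.3, 15.0, 17.1]
cadences = [162, 168, 173, 180, 188]

a, b = linear_fit(speeds, cadences)

def predict_cadence(speed_kmh):
    """Predicted cadence (steps/min) at a given speed, from the fitted line."""
    return a + b * speed_kmh
```

The positive slope `b` is the whole point: comparing two runners' cadences only makes sense at matched speeds, because each runner's cadence slides up and down this line as pace changes.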

A few thoughts on the paleo diet

September 17th, 2011

I was asked in an interview a few days ago for my take on the “paleo diet,” and I figured I may as well share those thoughts here. I’ll start by saying that I’m not an expert in this area — these are just my impressions from the outside! For anyone who’s interested in the scientific rationale behind it, there’s a very comprehensive review paper that was published earlier this year and is freely available online. Anyway, a few scattered thoughts:

It’s not a diet, it’s a lifestyle. I’ve seen this sentiment expressed on a number of paleo-oriented blogs, and I think it’s a very important point. If you want to argue that humans are uniquely adapted to the paleolithic environment because that’s where we spent the most time, it’s meaningless to just consider one part of that environment. If you spend the day sitting on your couch watching TV, then picking up the phone and ordering an authentic ancestral meal from McPaleo’s isn’t going to make you healthy. The review paper focuses on the following key elements of the paleolithic environment:

  • regular sun exposure for vitamin D
  • plenty of sleep, in sync with light/dark cycles
  • lots of physical activity!
  • no exposure to pollution
  • fresh, unprocessed food
  • short bouts of acute stress (tiger!) rather than chronic stress

All of this stuff sounds great — I’m absolutely in favour of every element of this lifestyle.

Plants vs. animals. In the fantasies of some people, going paleo means you get to eat enormous Fred Flintstone-style chunks of meat — for every meal. Not quite: here’s a passage from a paper published in the British Journal of Nutrition last year:

[I]n contrast to common belief, hunting probably played a less dominant role from a nutritional point of view compared with gathering, and on average, it makes up 35% of the subsistence base for present-day worldwide hunter–gatherers, independent of latitude or environment.

This is a point picked up by David Katz in an article earlier this summer: you’re still going to eat, as Michael Pollan would say, “mostly plants.”

The evils of wheat and dairy, and the pace of evolution. Okay, this is where I believe we start to drift away from well-supported science and into the realm of unsupported hypotheses. The basic idea is that, since humans only started farming about 11,000 years ago, our genome hasn’t had time to adapt to these foods. Moreover, grains like wheat actually contain “antinutrients” that hinder proper digestion and cause chronic inflammation — in everyone, not just those with celiac disease or gluten sensitivity. These claims are not widely accepted — or at least, I personally don’t find the evidence convincing.

Moreover, 11,000 years — or 366 human generations — is actually quite a long time. As a result, for example, it’s well understood that the gene that allows humans to digest milk was selected through evolutionary pressure in populations that domesticated cows. The review paper I mentioned above notes this as a “key exception” to what they otherwise claim is the rule that humans haven’t had time to adapt to agriculture. I, on the other hand, would view it as “key evidence” that humans have had time to adapt to agriculture. Obviously, not all modern humans can digest milk — and those who can’t shouldn’t drink it! But I see no evidence that those who can drink milk should avoid it. Same goes for wheat: it’s certainly true that some people can’t process it adequately, but I’m not convinced that it’s full of antinutrients that are secretly poisoning the rest of us.

Overall, as Stephan Guyenet pointed out in his discussion of the review paper, the evidence seems to support the idea that “the main detrimental change was not the adoption of agriculture, but the more recent industrialization of the food system.” In other words, the diet we should be seeking to emulate is pre-1850, not pre-10,000 BC — which, not coincidentally, once again sounds a lot like Michael Pollan’s advice: don’t eat anything your grandparents wouldn’t recognize as food.

The Talk Test vs. lactate and ventilatory thresholds

September 16th, 2011

Figuring out how hard to push is one of the great challenges in exercise. Personally, I’m a big fan of relying on perceptual methods (“how hard does this feel?”) rather than seemingly objective approaches like heart rate or lactate level. Certainly for competitive athletes, learning to interpret your body’s cues is a crucial step to being able to pace yourself properly in a race. But perceived exertion can be pretty tricky for beginners — which is why simple tricks like the “Talk Test” can be very helpful.

In its most basic form, the Talk Test is pretty simple: if you can talk in complete sentences, you’re below threshold. If you can’t talk, you’re above threshold. If you’re in the middle — you can say a few words at a time — you’re pretty close to threshold. So what is this “threshold” we’re talking about? Ah, that’s where it gets complicated. As exercise gets more intense, your body may or may not pass through several thresholds related to breathing rate, lactate accumulation in the blood, and other physiological parameters. The precise definition of these thresholds — and their very existence, in some cases — is hotly debated. As a crude simplification, threshold pace corresponds to the fastest pace you can sustain aerobically, which usually turns out to be the pace you’d hold in a race lasting about an hour.
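The rule of thumb above can be written out as a toy lookup. This is just my own framing of the Talk Test logic, not a protocol from the study:

```python
def talk_test(speech):
    """Map talking ability during exercise to a rough intensity zone.

    speech: 'full_sentences', 'few_words', or 'cannot_talk'.
    """
    zones = {
        "full_sentences": "below threshold (comfortable aerobic zone)",
        "few_words": "near threshold (roughly lactate threshold)",
        "cannot_talk": "above threshold (hard, unsustainable)",
    }
    return zones[speech]
```

The middle case is the interesting one: as the study discussed below found, the "a few words at a time" zone lines up fairly well with lactate threshold.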

All of this is by way of introduction to a new study from researchers at the University of New Hampshire, published in the Journal of Sports Sciences, that compared the exercise intensity at various points in the Talk Test to the exercise intensity at the ventilatory and lactate thresholds. Here’s the data, expressed in terms of heart rate and VO2:

“Negative Talk Test” means the subjects couldn’t talk comfortably; “positive Talk Test” means they could; “equivocal Talk Test” was in the middle. It’s clear that this middle zone corresponds pretty closely to lactate threshold. This is a bit surprising, since you’d expect ventilatory threshold — when breathing gets significantly harder — to be more closely tied to talking ability. But it’s convenient, because people care a lot more about lactate threshold than ventilatory threshold.

So how do we use this information? Here’s a basic “training zone pyramid” that I included in a Jockology column on pacing last year, based on research by Carl Foster and others about the typical training patterns of endurance athletes:

So most of your training should be below threshold — a common mistake beginners make, since they’re so unfit that even easy efforts feel hard, is to push above threshold on every bout of exercise. And some of your training should be at threshold — and I’d bet many competitive runners would badly fail the Talk Test during what they claim are “tempo runs” at threshold! On the other hand, casually spinning the wheels of an exercise bike while reading a magazine is unlikely to do much for you, as the press release from the UNH researchers points out:

“If you are beginning an exercise program and can still talk while you’re exercising, you’re doing OK,” Quinn says. “But if you really want to improve, you’ve got to push a little bit harder.”


Why neither “normal” nor minimalist running shoes will disappear

September 15th, 2011

Which is better: a pill that 75% of the population can take, which produces wonderful benefits for 25% of those who take it; or a pill that 25% of the population can take, which produces wonderful benefits for 75% of those who take it? I was pondering this question while re-reading Peter Vigneron’s long, thoughtful piece about running form, from the June issue of Runner’s World. This passage, in particular, made me think:

Perhaps—and this, too, is speculative—the modern cushioned running shoe makes running easy for the modern runner. This seems like a good thing. Should millions of runners suddenly decide to change their form and then find that running is no longer a manageable activity, it would be a tragedy. The solution to an imperfect state of affairs ought not make things worse—it should not produce more injured, unhappy runners.

One of the common narratives you hear a lot these days is that modern running shoes are the product of an insidious corporate campaign to sell us useless shoes that effectively enslave us by weakening our feet. I find this conspiracy-theory stuff quite tiresome — shoes may or may not be good for us, and of course shoe companies want to sell us anything they can, but I have no doubt that the origin of what Pete Larson calls the “pronation paradigm” was well-intentioned. The simple but often overlooked point is that the shoes caught on. I’d bet that, in 10 years, the recent fad for “toning shoes” will be all but forgotten since they simply don’t work. But running shoes have had remarkable staying power — perhaps because, as Vigneron says, “the modern cushioned running shoe makes running easy for the modern runner.” Or at least some modern runners, some of the time.

Let’s say we accept that, in a perfect world, the barefoot running style is optimal for humans. What if in our postlapsarian modern society, a large proportion of us are simply not equipped to make that transition after decades of sedentary, shod living? Or we can make the transition, but it requires the careful, patient, dedicated, slightly obsessive six-month transition period that barefoot advocates scrupulously recommend? Given the staggeringly high numbers of people who can’t be bothered to do any physical activity, even so much as a brisk walk, despite the overwhelming evidence that it’s the single best thing they could do for all aspects of their physical and mental health, I suspect that the barriers to successful barefoot running will always limit it to a fairly small subset of the population.

So that’s what my opening question was about: what if barefoot running is fantastic for a small segment of the population, while running shoes are hit-and-miss but accessible to a much larger portion of the population? What’s the “right” answer to how we proceed? Obviously I chose my sample numbers carefully (so that both versions of the pill help 18.75% of the population, if anyone’s checking the math), but I wonder what those numbers are in reality. How many people can barefoot running reach? How “bad” are normal shoes? In the end, the numbers don’t really matter — because there will always be some part of the population that can succeed with one approach but not the other.
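For anyone checking the math on the opening thought experiment, the arithmetic looks like this (the percentages are the ones chosen in the post):

```python
# Pill A: 75% of the population can take it; it helps 25% of those takers.
# Pill B: 25% of the population can take it; it helps 75% of those takers.
helped_a = 0.75 * 0.25  # fraction of the whole population helped by pill A
helped_b = 0.25 * 0.75  # fraction of the whole population helped by pill B

# Both pills help the same 18.75% of the total population -- but they help
# different (and only partially overlapping) groups of people.
```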

Full-body compression makes your heart work harder

September 13th, 2011

Quick look at a study just posted in the European Journal of Applied Physiology, from researchers at the University of Otago in New Zealand. They investigated the effects of full-body compression garments (Skins) on cyclists, looking in particular at three outcomes:

  1. Did it make the cyclists faster?
  2. What effect did it have on their body temperature?
  3. What effect did it have on their cardiovascular workload?

To separate the effects of compression from the effects of wearing a full-body suit in reasonably warm temperatures (24 C), the subjects each did three trials: a control trial in gym shorts; a trial with “properly fitting” Skins; and a trial with oversize Skins. The results:

  1. No difference in cycling performance
  2. Skin temperature was higher by 0.5-0.9 C during exercise when wearing compression gear, but core temperature was unaffected.
  3. Their hearts had to work about 5% harder with the compression gear on, and they finished with a heart rate 4-7% higher than in the control condition.

The authors make their skepticism clear pretty much from the start: the first sentence of the abstract is “Sporting compression garments are used widely during exercise despite little evidence of benefits.” They make several interesting points in the paper — for instance, the vast majority of “evidence” cited for increased venous flow and reduced venous pooling comes from studies of people (generally with some sort of circulatory condition) at rest. Do the same findings apply during exercise? It may be that the “calf muscle pump” — the squeezing of the calf that shoots blood back toward the heart, which supposedly gets a boost from compression socks — is already acting at maximal capacity during vigorous exercise.

Bottom line from this study: the garments didn’t really make much difference (the mild changes in temperature and cardiovascular function, though negative, weren’t enough to be a big issue). The authors are careful to note that the study has nothing to do with whether compression garments help recovery. But as far as wearing them during exercise, this certainly doesn’t change my opinion that wearing a full-body speed-suit while jogging on a hot summer day (and I see plenty of people doing that here in Sydney!) may look cool, but doesn’t do anything for your performance.

All exercise performances are sub-maximal

September 12th, 2011

Another interesting pacing study, with many similarities to the one I blogged about last week, published once again in Medicine & Science in Sports & Exercise. Cyclists are asked to do a series of 2,000-metre time trials in a pseudo-virtual reality set-up. Most of them they perform solo, but in one of the trials they race against a virtual competitor (who, unbeknownst to them, is actually programmed to exactly mimic their own previous trial). The result is obvious: competition improves performance, so they’re able to beat their doppelganger and race significantly faster.

What’s interesting is how they manage to beat their previous performance. Throughout the race, the power generated from aerobic sources is essentially identical in all the different trials. But the power from anaerobic sources is significantly higher in the “racing” scenario during the second half of the race (during the last 90 seconds or so, in other words).

What does this mean?

Consequently, it has been argued that all exercise performances are sub-maximal, since they are terminated before there is a catastrophic metabolic or cardio-respiratory failure, and that a physiological ‘reserve’ capacity will always remain. The ergogenic effects of the [head-to-head] competition might therefore result from the central influence of some motivational or dissociative effect enabling the use of a greater degree of the physiologic ‘reserve’ capacity.

That’s actually quite a powerful statement: “all exercise performances are sub-maximal.” If the stakes are raised sufficiently, you can always squeeze out a little extra. I think most of us grow up knowing this intuitively, but at some point — after we start learning about VO2max and lactate threshold and so on — it’s often forgotten.

Even Kenyans stride slowly

September 10th, 2011

Just for fun, following up on yesterday’s post on running cadence, I did a little YouTube surfing to find footage of fast Kenyans running slowly. Because the question I’m interested in isn’t: “Do fast runners take quick strides?” I think that’s reasonably well established. The trickier — and I’d argue more relevant — question is: “Do fast runners take quick strides when they’re running slowly?”

The best example I found was this 10-minute clip posted in 2007 by Toby Tanser, which shows all sorts of footage of Kenyan runners at different speeds:

Now, if you spend a little time with a stopwatch, you quickly find that when the runners are shuffling along slowly, they tend to have a slow cadence in the 160s, and when they’re running fast, their cadence tends to be above 180. But that doesn’t really answer the question, because it’s not necessarily the same runner. So I’ve cued the video to 2:49, where you see a clip of Hilda Kibet (1:08 half-marathoner, 2:24 marathoner) jogging slowly, and then another clip of her running quickly around the track. My measurements:

Jogging slowly: 18 strides in 6.7 seconds ≈ 161 steps per minute

Running fast: 16 strides in 5.0 seconds = 192 steps per minute

I realize this is pretty scanty data! And I also realize that there’s a fairly extreme difference between how slowly she’s shuffling in the first clip, and how quickly she’s hauling in the second clip. But that’s the whole point: you can’t talk about cadence without considering speed.
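If you want to repeat this with a stopwatch, the conversion is just steps counted over a timed interval, scaled up to a minute. A minimal helper (the example counts below are hypothetical):

```python
def cadence_spm(steps, seconds):
    """Convert steps counted over a timed interval to steps per minute."""
    return steps / seconds * 60

easy_jog = cadence_spm(27, 10.0)   # hypothetical count: ~162 steps/min
tempo_run = cadence_spm(30, 10.0)  # hypothetical count: 180 steps/min
```

One thing to watch when timing video: be consistent about whether you're counting steps (every footstrike) or strides (every other footstrike), since a stride is two steps.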

The problem with 180 strides per minute: some personal data

September 8th, 2011

My wife is out of town at the moment, which means I’m doing lots of running on my own. Plenty of time to ponder the meaning of life — and, when I get tired of that, to count my footsteps. Sparked by interesting discussions with the likes of Pete Larson from Runblogger and Dave Munger from Science-Based Running, I’ve been wondering what my own cadence is like — particularly in light of widespread belief in the magic of 180 strides per minute. Over the past few weeks, I counted strides for 60-second intervals at a variety of paces. Here’s what I found:

Most surprising to me was (a) how consistent my cadence was when I repeated measurements at the same pace, and (b) how much it changed between paces: from 164 to 188, with every indication that it would decrease further at slower paces and increase further at faster paces. This certainly confirms what Max Donelan, the inventor of a “cruise control” device for runners that adjusts speed by changing your cadence, told me earlier this year: contrary to the myth that cadence stays relatively constant at different speeds, most runners control their speed through a combination of cadence and stride length.
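Donelan's point comes down to one line of arithmetic: speed is cadence times stride length, so a runner can modulate either or both. A sketch, with hypothetical stride lengths chosen so the numbers roughly reproduce the 164-at-5:00/km and 188-at-3:00/km endpoints above:

```python
def speed_m_per_s(cadence, stride_m):
    """Running speed from cadence (steps/min) and stride length (m per step)."""
    return cadence / 60 * stride_m

# Hypothetical stride lengths for illustration:
slow = speed_m_per_s(164, 1.22)  # ~3.33 m/s, i.e. about 5:00/km
fast = speed_m_per_s(188, 1.77)  # ~5.55 m/s, i.e. about 3:00/km
```

Going from the slow example to the fast one, cadence rises by about 15% while stride length rises by about 45% — both contribute, which is exactly the "combination of cadence and stride length" claim.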

So the next question is: am I a freak, running with a “bad” slow cadence at slower paces, but a “good” quick cadence at faster paces? To find out, I plotted my data on top of the data from one of the classic papers on this topic, by Peter Weyand:

The graph is a little busy, but if you look closely, you’ll find that my data is slightly offset from the Weyand data, but has essentially identical slope. So compared to a representative example of Weyand’s subjects, I have a slightly quicker cadence and shorter stride at any given speed, but my stride changes in exactly the same way as I accelerate. So I’m not a freak: the fact that my cadence increased from 164 to 188 as I accelerated from 5:00/km to 3:00/km is exactly consistent with what Weyand observed.

One key point: I’ve highlighted two “speed zones.” One is the pace at which typical Olympic distance races, from the 1,500 metres to the marathon, are run. This is where Jack Daniels made his famous observations that elite runners all seemed to run at 180 steps per minute (which corresponds to 1.5 strides per second on the left axis). The other zone is what I’ve called, tongue-in-cheek, the “jogging zone,” ranging from about 4:30 to 7:00 per kilometre. This latter zone is where most of us spend most of our time. So does it really make sense to take a bunch of measurements in the Olympic zone, and from that deduce the “optimal stride rate” for the jogging zone?
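A quick unit check on that parenthetical, since the steps-versus-strides conversion trips people up: one stride is two steps, so steps per minute gets divided by 2 (steps to strides) and by 60 (minutes to seconds):

```python
def strides_per_sec(steps_per_min):
    """Convert cadence in steps/min to strides/s (one stride = two steps)."""
    return steps_per_min / 2 / 60
```

So Daniels's 180 steps per minute is indeed 1.5 strides per second on the graph's left axis.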

This isn’t just a question of “Don’t try to do what the elites do.” If Daniels or anyone else had measured my cadence during a race, it would have been well above 180. But at jogging paces, it’s in the 160s. I strongly suspect the same is true for most elite runners: just because we can videotape them running at 180 steps per minute during the Boston Marathon doesn’t mean that they have the same cadence during their warm-up jog. In fact, that’s a pretty good challenge: can anyone find some decent video footage of Kenyan runners during one of their famously slow pre-race warm-up shuffles? I’d love to get some cadence data from that!

Of course, this doesn’t mean I don’t think stride rate is important. I definitely agree with those who suggest that overstriding is probably the most widespread and easily addressed problem among recreational runners. But rather than aspiring to a magical 180 threshold, I agree with Wisconsin researcher Bryan Heiderscheit, whose studies suggest that increasing your cadence by 5-10% (if you suspect you may be overstriding) is the way to go.

[UPDATE: Make sure to check out the interesting discussion in the comments section! Also, Amby Burfoot did his own cadence test and posted the data. I’ve added it to the graph below to show how it compares to my own and Weyand’s data. Feel free to try it out on your next run, and I’ll add your data to the graph too!]


Jonah Lehrer on marshmallows and executive function

September 8th, 2011

In the comments section of last week’s post on delayed gratification and the Marshmallow Test, Seth Leon pointed out a really interesting article by Jonah Lehrer (first published in the Wall Street Journal but mirrored on his excellent blog, Frontal Cortex) that discusses ways in which you can improve your focus and impulse control — how to boost your performance on the Marshmallow Test, in other words:

The key is strengthening what psychologists call “executive function,” a collection of cognitive skills that allow us to exert control over our thoughts and impulses. When we resist the allure of a sweet treat, or do homework instead of watch television, or concentrate for hours on a difficult problem, we are relying on these lofty mental talents. What we want to do in the moment, and what we want to want, are often very different things. Executive function helps to narrow the gap. […]

But here’s the good news: Executive function can be significantly improved, especially if interventions begin at an early age. In the current issue of Science, Adele Diamond, a neuroscientist at the University of British Columbia, reviews the activities that can reliably boost these essential mental skills. The list is surprisingly varied, revolving around activities that are both engaging and challenging, such as computer exercises involving short-term memory, tae-kwon-do, yoga and difficult board games.

The whole article (which isn’t very long) is worth a read, as is Lehrer’s previous post, which describes in considerably more detail the history and implications of the marshmallow studies.

The finishing kick is in your head, not your legs

September 7th, 2011

Another cool study showing that your brain always holds back a little energy even during “maximal” effort — and that you can access this reserve during your finishing kick. This one comes from Northumbria University in the UK, published in Medicine & Science in Sports & Exercise, and it’s fairly straightforward. Nine trained cyclists each do three 4,000-metre time trials on a stationary bike hooked up to a pseudo-virtual-reality computer system:

  1. a baseline trial where they go as fast as they can;
  2. a “race” where they compete against an avatar representing their baseline performance;
  3. another “race” where they compete against an avatar which they’re told represents their baseline performance, but is actually going 2% faster (the second and third trials were given in random order to avoid learning effects).

The results: as you might expect, when racing against their previous performance, the cyclists were able to eke out a little extra energy, finishing 1.0% faster on average. But crucially, when they were deceived into competing against a faster avatar, they managed an even bigger boost, improving their time by 1.7%! Interestingly, an earlier study that tried something similar but gave feedback that was off by 5% produced the opposite result, because the cyclists were tricked into going out too fast and eventually crashed — so this isn’t an unlimited technique that will allow you to travel at the speed of light.
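To put those percentages in concrete terms: on a hypothetical six-minute baseline (the 1.0% and 1.7% gains are the figures reported above; the baseline time is invented):

```python
baseline = 360.0  # seconds -- a hypothetical 4,000 m baseline time

accurate_race = baseline * (1 - 0.010)   # racing their true previous ride
deception_race = baseline * (1 - 0.017)  # racing the secretly 2%-faster avatar

# On this hypothetical baseline, the honest avatar is worth ~3.6 seconds,
# and the deceptive one ~6.1 seconds -- a meaningful gap in a 4 km race.
```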

On the surface, these results aren’t really that surprising. Knowing how the human body (and mind) work, that’s pretty much what we’d expect. But it’s important to realize that this conflicts with the conventional understanding of how physiological constraints limit our performance. Whatever factors determined the baseline finishing times, they clearly weren’t absolute physiological limits, because the cyclists were able to beat them a few days later.

Further analysis of the data shows that in the deception trial, the cyclists had to start supplying more anaerobic power in the final 10 percent of the race in a desperate attempt to keep up with their supercharged rival. Here’s the graph of aerobic and anaerobic power contributions in the three trials (baseline, accurate and deception):

This graph sheds some interesting light on a longstanding debate about the origins of the “finishing kick,” which is a pretty much universal phenomenon in endurance races lasting longer than a few minutes. Why are we able to accelerate at the end, when we should be at our most tired? The conventional answer is that we’ve been relying primarily on aerobic energy throughout the race, but as the finish line approaches, we can mobilize anaerobic sources — the same ones we’d use to sprint 100 metres — and exhaust them just as we cross the line. The “alternate” explanation is that the brain has been limiting exertion in order to preserve homeostasis, but permits us to access some of those reserves as we approach the finish line (with the implicit promise that we’ll then stop and allow the body to recover).

It’s certainly true that the extra power needed for the finishing kick comes from anaerobic energy sources. But it’s also clear that, in the baseline trial and even in the “accurate” competition trial, the cyclists didn’t fully exhaust their anaerobic energy stores. Why not? The answer can lie only in the brain.

So what’s the practical takeaway? Well, I suppose if you can convince your real-life competitors to run 2% faster than normal without telling you, that would help! But realistically, I think this is a situation where knowledge is, literally, power. When you approach the finish of a race, you DO have energy remaining, despite what your mind and body are telling you. Believing that beyond a shadow of a doubt is, I believe, the first step to accessing it.