More on altitude training research
Yesterday’s post about Carsten Lundby’s altitude study sparked some fantastic discussion in the comments section, on Twitter, and over e-mail. I really appreciate everyone who took time to share their thoughts and expertise, and I’d just like to follow up with a few thoughts of my own.
When a study like this comes along that contradicts the “conventional wisdom,” there are many possible ways to respond. One good response is to look for flaws in the study, to figure out if there’s some logical reason that it contradicts previous findings. At the other end of the spectrum, there are responses like this one, from the comments section of the previous post:
This study has a lot of holes in it, especially since some of the “best” physiologists state that LHTL works. One bogus study cannot change the work that guys like David Martin and the Australian Institute of Sport (AIS) have performed.
With all due respect, the study “has holes in it” if there’s a problem with its methodology or design, not just because someone says it does. One study certainly can refute the work that others have done if the new study is correct and the others are flawed. That’s how science works: it doesn’t care what your name is or where you work. (Speaking of which, it’s no coincidence that this particular commenter works for a company that manufactures and sells altitude tents!)
Another commenter asked about individual (rather than average) responses. This is an excellent question, since it has long been hypothesized that there are “responders” and “non-responders” to altitude training. Here are the individual responses in hemoglobin mass for the altitude group:
On the surface, this looks like exactly that pattern: five “significant” responders (above the dashed line, which represents the typical error of the measuring apparatus), three who got significantly worse, and two basically unchanged. But let’s look also at the placebo group:
Once again (though with fewer subjects), we have some individuals responding “significantly” in both directions to the placebo stimulus, and some staying unchanged. Though the small sample size makes comparisons difficult, the scatter of individual results looks pretty similar in both cases (and was statistically indistinguishable).
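To make that classification rule concrete, here’s a minimal sketch of how “responders” get counted against the typical error of the measurement. The numbers below are made up purely for illustration (they are not the study’s data), and the 2% threshold is an assumption standing in for the dashed line in the figures:

```python
# Hypothetical illustration (NOT the study's actual data): classify each
# athlete's percent change in hemoglobin mass relative to the typical
# error of the measurement, i.e. the dashed line in the plots.

TYPICAL_ERROR = 2.0  # percent; assumed value for the measurement's typical error

def classify(delta_pct, typical_error=TYPICAL_ERROR):
    """Label a change as responder / negative / unchanged."""
    if delta_pct > typical_error:
        return "responder"
    if delta_pct < -typical_error:
        return "negative"
    return "unchanged"

# Made-up percent changes for ten athletes, chosen to mirror the
# 5 / 3 / 2 pattern described in the post.
altitude_deltas = [4.1, 3.2, 2.5, 2.2, 2.1, -2.3, -2.8, -3.5, 0.4, -0.9]

labels = [classify(d) for d in altitude_deltas]
print(labels.count("responder"),
      labels.count("negative"),
      labels.count("unchanged"))  # prints: 5 3 2
```

The point of the placebo comparison is that you can run this same tally on the control group and get a similar-looking spread, which is why individual “responses” alone can’t settle the question.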
So what do we conclude from this study? As I said in the previous post, this was an exquisitely careful study with an excellent design. That means we can place very high confidence (relative to previous altitude studies) in its evaluation of the specific conditions it tested. And that’s the rub. The researchers held certain conditions constant, such as oxygen levels, time exposed to hypoxia, and training stimulus. But what if the training stimulus was inappropriate (too hard? too easy?)? What if the athletes had insufficient iron stores (despite being given daily iron supplements)? What if being confined to their rooms for 16 hours a day caused negative adaptations?
These are all possibilities — and they’re all possibilities considered by the researchers themselves in their discussion in the paper. No one — not me, not the researchers — is saying “altitude training is a scam.” But what they (and I) are saying is that, if you take a fairly conventional live-high-train-low paradigm as executed in the study (4 weeks, 3,000m/1,000m, continuing essentially the same training plan that you were doing at sea level, etc.), don’t assume that you’re automatically going to get the results you’re looking for. There are clearly some other variables at play that need to be controlled. Elite coaches and athletes have some pretty strong ideas about what these additional variables are. And if I worked for an altitude tent company, I’d spend a little less time mouthing off about “bogus studies,” and a little more time trying to nail down exactly what those variables are.