THANK YOU FOR VISITING SWEATSCIENCE.COM!
As of September 2017, new Sweat Science columns are being published at www.outsideonline.com/sweatscience. Check out my bestselling new book on the science of endurance, ENDURE: Mind, Body, and the Curiously Elastic Limits of Human Performance, published in February 2018 with a foreword by Malcolm Gladwell.
- Alex Hutchinson (@sweatscience)
Back in the late 1990s, I was training under the guidance of Harry Wilson, the coach who steered Steve Ovett to Olympic gold and world records at 1,500m and the mile. Harry was an interesting mix of old-school traditionalist and cutting-edge training buff. Instead of prescribing a set amount of rest between hard intervals (two minutes, say), he liked to wait until the athlete’s heart rate had returned to a given value (generally 120 bpm for me). Being a young technophile, I would wear my heart-rate monitor for these workouts in order to have instant feedback. But Harry never really trusted this newfangled technology, so I would stand there between each interval while Harry jammed his fingers into my jugular, listening to my pulse himself until it had slowed to his satisfaction. [EDIT: An astute reader points out to me that you take your pulse from the carotid artery, not the jugular vein. My apologies for any misunderstanding!]
I bring this up because, while I was browsing through the pre-prints of the Scandinavian Journal of Medicine & Science in Sports yesterday, I noticed an article by researchers at South Africa’s University of Cape Town, including Tim Noakes, on using “heart rate recovery” to monitor training fatigue. The gist is as follows: 14 cyclists took part in a four-week high-intensity training program that included two interval sessions (eight repetitions of four minutes hard, with 90 seconds recovery) each week. Immediately after the final hard interval of each session, the researchers recorded how much the athlete’s heart rate decreased in the next 60 seconds.
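For readers who track their own workouts, the metric itself is simple to compute from a heart-rate trace. Here's a minimal sketch (my own illustration, not code from the study): the drop from the heart rate at the end of the final hard interval to the heart rate 60 seconds into recovery.

```python
# Hypothetical sketch of the metric described above: heart-rate recovery
# over 60 seconds (HRR60), the drop in bpm from the end of the final
# interval to one minute later. Function name and sampling assumptions
# are my own, not from the paper.

def hrr60(hr_samples, sample_rate_hz=1.0):
    """Heart-rate drop (bpm) over the first 60 s of recovery.

    hr_samples: heart-rate readings (bpm), starting at the end of the
    final hard interval; sample_rate_hz: readings per second.
    """
    idx_60s = int(60 * sample_rate_hz)
    if idx_60s >= len(hr_samples):
        raise ValueError("need at least 60 s of recovery data")
    return hr_samples[0] - hr_samples[idx_60s]

# Example: HR falls from 185 bpm toward 140 bpm over the first minute
recovery = [185 - min(45, 0.9 * t) for t in range(0, 91)]
print(hrr60(recovery))  # 45.0
```

A larger HRR60 means the heart rate fell further in that minute, which is the "better recovery" the researchers were tracking session by session.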
After the four-week training period was finished, the researchers divided the subjects into two groups: those whose heart rates had recovered more and more quickly throughout the study, and those whose heart rates had recovered more and more slowly. The hypothesis was that getting better at recovery indicated the subjects were adapting to the training, while getting worse would be a sensitive indicator that they were overtraining. To test this, the subjects rode a 40-km time trial, and the researchers compared the results to a similar time trial the subjects had ridden at the start of the study. Sure enough, the group that was recovering better rode faster, increasing power by 8.0%, compared to the slower-recovering group, which improved power by only 3.8%.
This study is part of a larger project investigating the role of heart rate recovery, so it will be interesting to see the remainder of the results when they appear. Monitoring overtraining — the failure to recover from a heavy training load, essentially — is much more of an art than a science, so having some objective tools to use would be really helpful to endurance athletes. (And I’m sure it’ll work better with heart-rate monitors than using a finger to the jugular.)
UPDATE: Jim Ley posted the following comment, which I thought was worth responding to in the main post:
I’m not sure I understand how that study can say that the results are due to “overtraining,” and not the fact that the group whose heart rates were recovering fastest were those who were responding to that training the best.
All it’s shown is that there are two groups, one of which has improved power and HR recovery time – themselves a linked part of fitness. And another group which improved slightly less. To assume it’s overtraining is not borne out by the evidence you’ve said, but I didn’t bother looking up the actual study.
First of all, I was a bit cavalier in using the term “overtraining,” which generally has specific clinical significance that goes beyond simply failing to recover from a series of workouts. In this case, I’m simply talking about “training more than is optimal, so that performance declines rather than improves.” So how do we know this is happening, rather than it simply being a case of one group responding better to the training?
I’ve inserted a graph from the paper (at right), which shows the progress of the two groups through the eight training sessions. One group improves monotonically; the other improves initially, but after about three weeks they don’t just plateau, they actually regress. In the absence of other physiological data, we can’t guarantee that excess fatigue is the culprit (it could be, say, that everyone in that group decided to go on a crash diet after workout five). But the simplest and most likely explanation is that they’re training harder than they’re able to recover from.
From a practical point of view, if you were training under a constant workload and started to see a decline like that, the logical thing to do would be to cut back the intensity, volume, and/or frequency of your workload, in order to stop the decline. And that’s the point of this study: trying to provide a simple objective measurement that can tip you off that you’re no longer adapting optimally to the stress-recovery cycle of your training.
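As a rough illustration of how that tip-off could work in practice, here's a sketch (my own, not from the study) that logs HRR after each interval session and raises a flag when the metric has declined for several sessions in a row. The function name and the three-session window are illustrative assumptions.

```python
# Hypothetical monitoring sketch: flag when heart-rate recovery (HRR, in
# bpm) has fallen in each of the last few interval sessions, suggesting
# the athlete is no longer adapting to the training load. The window
# size is an arbitrary choice for illustration.

def hrr_trend_warning(hrr_by_session, window=3):
    """True if HRR declined across each of the last `window` sessions."""
    if len(hrr_by_session) < window + 1:
        return False  # not enough sessions to judge a trend
    recent = hrr_by_session[-(window + 1):]
    return all(later < earlier for earlier, later in zip(recent, recent[1:]))

# Improving through session 5, then regressing (cf. the graph above)
sessions = [30, 32, 33, 35, 36, 34, 32, 30]
print(hrr_trend_warning(sessions))  # True
```

In this toy example the warning fires once the decline has persisted for three sessions, which is the kind of objective signal that might prompt you to cut back volume or intensity before performance suffers.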