This came up in the comments of a previous post, but I thought it was worth a post of its own. I was trying to explain the dangers of the “Streetlight Effect,” and came across this article from Discover magazine by David Freedman, the author of Wrong, which does a very nice job of explaining it:
The fundamental error here is summed up in an old joke scientists love to tell. Late at night, a police officer finds a drunk man crawling around on his hands and knees under a streetlight. The drunk man tells the officer he’s looking for his wallet. When the officer asks if he’s sure this is where he dropped the wallet, the man replies that he thinks he more likely dropped it across the street. Then why are you looking over here? the befuddled officer asks. Because the light’s better here, explains the drunk man.
How does this relate to scientific research? It points out that researchers tend to focus on the quantities they can measure (i.e. where the light is good), regardless of whether those quantities are the most important ones. An example:
A bolt of excitement ran through the field of cardiology in the early 1980s when anti-arrhythmia drugs burst onto the scene. Researchers knew that heart-attack victims with steady heartbeats had the best odds of survival, so a medication that could tamp down irregularities seemed like a no-brainer. The drugs became the standard of care for heart-attack patients and were soon smoothing out heartbeats in intensive care wards across the United States.
But in the early 1990s, cardiologists realized that the drugs were also doing something else: killing about 56,000 heart-attack patients a year. Yes, hearts were beating more regularly on the drugs than off, but their owners were, on average, one-third as likely to pull through. Cardiologists had been so focused on immediately measurable arrhythmias that they had overlooked the longer-term but far more important variable of death.
This particular example brings to mind last month’s discussion about the increased incidence of arrhythmias among elite cross-country skiers. Should we be worried about those arrhythmias? Or should we focus on the “more important variable of death,” since studies of the same skiers found that more skiing led to longer life?
More generally, the Streetlight Effect is one of the key reasons why extrapolating “real world advice” from lab studies has such a low batting average. Last week, I blogged about an interview with Asker Jeukendrup, a sports scientist who I have a tremendous amount of respect for. But I disagreed with one of his comments: “I think if the physiological changes are there, the performance must ultimately follow.” All it takes is a stroll down the supplements aisle of a pharmacy or health-food store to remind me that promising physiological changes don’t always translate into measurable health or performance benefits.
My partner, who has a PhD in chemistry, says the same thing about cholesterol-lowering and blood-pressure-lowering drugs. It’s like manually adjusting your thermometer and expecting the temperature to drop. You need to focus on the actual cause as opposed to just the symptom.
“It’s like manually adjusting your thermometer and expecting the temperature to drop.”
Ha! I like that analogy — think I’ll start using it. 🙂
Hi Alex
I think I’m with you here, but I just want to thrash it out a bit.
1. Beta Alanine
So the initial studies showed that it had only physiological benefits, but recent ones have shown performance gains. So is this not an example of how Jeukendrup’s statement holds: “If the physiological changes are there, the performance must ultimately follow”?
2. Glycogen Depletion – Relevant Performance Studies
So, back to my main interest. Rather than this being a “streetlight” case, I think it is more a case of searching for the needle in the haystack: it’s there, it’s just going to take a long time to find it. In addition to that, they are simply not using the appropriate exercise protocols to properly investigate the performance gains. The adaptations are long-term, I believe, requiring more than just 4-6 week studies. Secondly, the performance measures need to be conducted over much longer distances than they are currently testing; in terms of fat adaptation, it really comes into play in ultra-distance events: 100 km, 12-hour, and 24-hour races. Finally, nutrition needs to be tailored to enhance the fat adaptations.
So, do we have a trial where subjects train in the fasted state for six months, with the appropriate nutrition strategies, with performance then measured over a 12- or 24-hour endurance race? No, we don’t, and logistically we probably never will. As a result, do you think we can conclude that fasted-state training has no impact on performance?
@Barry:
Just to clarify, I’m not saying that performance changes don’t exist! Of course there are interventions that produce both physiological and performance changes — beet juice is another example. The point is that physiological changes are a NECESSARY but not SUFFICIENT condition to guarantee the presence of performance changes. Saying that “beta alanine produces performance changes, so therefore performance changes follow physiological changes” is like saying “I saw a gray horse the other day, so therefore all horses are gray.”
As for the difficulty of “proving” performance changes: absolutely, of course I agree. This is the classic challenge for sports science: for athletes, an improvement of ~0.5% is a “worthwhile” change — but unless the study is very large and well-designed, that magnitude of change will lie within the range of error. So it’s vital to recognize that the absence of evidence (of performance changes) is NOT the same as evidence of absence.
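To put rough numbers on that point, here’s a minimal simulation sketch in Python. The figures are illustrative assumptions (a true 0.5% improvement, roughly 1.5% trial-to-trial variation in performance, and 10 athletes per group), not values taken from any particular study:

```python
# Rough simulation of the "small but worthwhile effect" problem:
# can a small study reliably detect a true 0.5% performance change?
# All numbers below are illustrative assumptions, not from any study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_effect = 0.005    # assumed true improvement: 0.5% faster
within_sd = 0.015      # assumed trial-to-trial variation: ~1.5%
n_per_group = 10       # a typical small sports-science study
n_sims = 2000

detected = 0
for _ in range(n_sims):
    # Performance expressed as a fraction of baseline time (lower = faster)
    control = rng.normal(1.0, within_sd, n_per_group)
    treated = rng.normal(1.0 - true_effect, within_sd, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        detected += 1

print(f"Chance of detecting a real 0.5% improvement: {detected / n_sims:.0%}")
```

With numbers like these, a real 0.5% effect clears p < 0.05 only a small fraction of the time, which is exactly why a null result from a small study can’t be read as evidence of absence.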
But at the same time — and this is the point I’m trying to make — the absence of evidence isn’t the same as evidence, either! Right now, we don’t know whether glycogen-depleted training improves performance in a practically meaningful way. No one has shown anything remotely like that. And just because it’s very, very hard to prove it doesn’t mean that the normal rules of evidence don’t apply!
Now, from a practical perspective, the evidence may be strong enough that it’s worth trying. I know you’re doing it, and I know a number of international-class athletes who are incorporating it in their training. That’s entirely reasonable: since we don’t have “proof” one way or the other, we have to make decisions based on the information we’ve got. But in doing so, we shouldn’t forget that we’re operating on the basis of inference, not evidence.
Old study, but see below for the relationship between heart rate variability and mortality:
Kleiger RE, Miller JP, Bigger JT, Moss AJ, Multicenter Post-Infarction Research Group. Decreased heart rate variability and its association with increased mortality after acute myocardial infarction. Am J Cardiol. 1987;59:256–262.
Alex
I guess it’s a case of speculation, assumption, and N=1 in the sports science world. Makes scientific interpretation very tricky!
@Barry: Yes, lots of art to go with the science! To me, the key is being honest (with ourselves and others) and not claiming that something is “proven” when it isn’t. There’s no shame in educated guesses in situations where that’s the best we’ve got.