Archive

Archive for October, 2011

Yoga vs. stretching for lower back pain

October 31st, 2011

I tend to post a lot about studies that find no benefits from traditional static stretching. Does that mean stretching has no benefits? No — it just means that the benefits are hard to quantify. So to be fair and balanced, I figured I should mention this recent study from the Archives of Internal Medicine, which suggests that stretching may be helpful for lower back pain (press releases here and here).

The study was actually designed to test whether yoga helps back pain. The researchers compared a 12-week yoga program to 12 weeks of stretching classes (chosen to have a similar level of physical exertion) and to 12 weeks of reading a self-care book. Both yoga and stretching were better than reading the book at improving pain and function; there were no differences between yoga and stretching.

Now, I can’t help pointing out that the study isn’t immune to placebo effects. The assessments of pain and function were done with telephone interviews, and relied on subjective reports from the patients. And let’s be honest: the suckers who were randomized into the “self-care book” group knew darn well that they got the short end of the stick! So I don’t view this as strong evidence of a mechanistic relationship between stretching and back pain (i.e. that the back pain is caused by tightness in some specific muscle, and stretching releases the pressure to eliminate the pain). But that’s kind of beside the point. The stretching made people feel better — and for a very simple, low-cost, low-risk, non-invasive intervention (unlike, say, surgery), that’s a good enough outcome.

Another reason for morning workouts: UV and cancer

October 29th, 2011

Runners and cyclists (and walkers and open-water swimmers and so on) spend a lot of time outdoors. Which is great — for me, that’s one of the big attractions! Still, that’s a lot of UV exposure, which is a bit worrying. But a neat and surprising study from researchers at the University of North Carolina, published in the Proceedings of the National Academy of Sciences, suggests that UV exposure in the morning is much less damaging than an identical dose of UV exposure later in the afternoon. This has nothing to do with cloud cover or sunlight intensity — it’s all about the body’s circadian rhythms.

The problem with UV light is that it damages your DNA; your body fights this ongoing damage by trying to repair the DNA. The levels of a key protein responsible for this repair process fluctuate during the day, with a maximum early in the morning and a minimum late in the afternoon. In contrast, the process of DNA replication, which can cause the errors in damaged DNA to spread, is slowest in the morning and fastest in the afternoon. So UV damage in the morning should be less likely to spread and more quickly repaired; in the afternoon, it’s the opposite. Here’s an illustration from the study’s press release (you’ll notice that mice, which are nocturnal, have exactly the reverse pattern):

So does this effect have any real practical significance? Well, the researchers tested it on mice. They exposed two groups of mice to identical doses of UV radiation, one at 4 a.m. and the other at 4 p.m. The morning exposure group was five times more likely to develop skin cancer than the afternoon exposure group. (Remember that mice have the opposite cycle compared to humans, so that means morning is the best time to be exposed for humans.)

Is this sufficient evidence to tell people to switch up their exercise patterns? Not really. The researchers are now planning to directly measure DNA repair rates in human volunteers at various times, which would add another plank of evidence. For now, the timing of my workouts (which are, in fact, mostly in the morning) is dictated by lots of other factors — but it’ll make me worry a little less about the tan lines that I develop even when I’m running super-early in the morning. Now I just need a study that tells me that vitamin D production is maximized by morning sun exposure, and I’ll be all set!

Power Balance bracelets in placebo-controlled experiment

October 28th, 2011

I’m embarrassed to even report on this study — but just in case there are still any Power Balance believers out there, researchers at the University of Texas at Tyler have just published a placebo-controlled, double-blind, counterbalanced test of strength, flexibility and balance, in the Journal of Strength & Conditioning Research. They compared Power Balance bracelets to the same bracelets with the “energy flow distributing Mylar hologram” removed, and to nothing at all. And, believe it or not, they found no differences. For example:

And for all those who still swear that, when the salesman put the bracelet on their wrist, they really did do better on the balance test, it’s worth noting the University of Wisconsin pilot study (cited in the Texas paper) that found that in balance and flexibility tests like the ones used by Power Balance salespeople, you always do better the second time you try it, due to learning effects. So if you try the test first with the bracelet on, then with the bracelet off, you’ll “prove” that the energy flow actually harms your balance. (Or maybe that just means you had the bracelet on backwards…)

The activitystat hypothesis: do we have an exercise set point?

October 27th, 2011

If you do a vigorous workout in the morning, will you be correspondingly less active for the rest of the day, so that your total physical activity ends up being the same as if you hadn’t worked out at all? That’s the basic gist of the “activitystat” hypothesis, which Gretchen Reynolds described in a New York Times article last week (thanks to Ed for the heads-up!). It’s also the topic of a pro vs. con [EDIT: had the links backward before -AH] debate in the current issue of the International Journal of Obesity (full text freely available).

Reynolds describes several interesting studies that line up in favour of or against the theory, including one (from the same issue of IJO) that compared three British elementary schools with very different amounts of in-school physical activity. Here’s what that study found:

You can see that, for both “total physical activity” and “moderate and vigorous physical activity,” one group had much higher levels in school than the other two, but compensated by doing less outside school. On the surface, it seems like a pretty compelling argument in favour of the activitystat hypothesis.

My take: somewhere in the middle, as usual. It would be ludicrous to claim that the body doesn’t regulate physical activity based on previous exertions to some degree. Do a one-day study of “voluntary movement” among people who have run a marathon that morning, and of course you’re going to find that they chill out more. At the opposite extreme, it would be equally silly to argue that all people everywhere in the world do exactly the same amount of physical activity. Or that any given person’s physical activity stays essentially constant over long periods of time — again, think of someone who goes from sedentary to marathon training: no amount of fidgeting or taking the stairs will add up to the exertions of 100-mile weeks. (For more examples of the role of environment in determining activity level, read the “con” commentary I linked to above. E.g. Nandi children in Kenya who grow up in the countryside are more active overall than Nandi city kids — an obvious result, but one that clashes with the activitystat idea.)

So the relevant question isn’t “Do compensatory mechanisms exist?” It’s “Do they matter, and are they insurmountable?” As lovely as the data from the British school study is, I don’t find it convincing. The school with the highest in-school physical activity was a fancy boarding school in the countryside, while the other two schools were urban. If the boarding-school kids play an hour of cricket in phys ed every day, the fact that they don’t choose to go play an hour after school doesn’t necessarily mean that the activitystat is limiting them. Maybe they just want to (or have to, depending on the other extracurricular requirements of the school) do something else.

One final point: it would be interesting to stratify those results based on the activity levels of the kids. Does the apparent activitystat mechanism apply equally to the most active and least active kids? Because if there are some kids who, left to their own devices, only get a total of 50 minutes of moderate/vigorous activity per week, then giving them 100 minutes a week in school is going to benefit them — and there’s nothing any activitystat can do to stop it!

Higher carb intake = faster Ironman finish

October 26th, 2011

Here’s a graph, from a recent paper on nutrition during long (marathon and longer) endurance competitions, that’s worth a close look:

What do you see? A bunch of dots scattered randomly? Look a bit more closely. The data shows total carb intake (in grams per hour) by racers in Ironman Hawaii (top) and Ironman Germany (bottom), plotted against finishing time. It comes from a Medicine & Science in Sports & Exercise paper by Asker Jeukendrup’s group (with several collaborators, including Canadian Sport Centre physiologist Trent Stellingwerff) that looked at “in the field” nutritional intake and gastrointestinal problems in marathons, Ironman and half-Ironman triathlons, and long cycling races. The basic conclusion:

High CHO [carbohydrate] intake during exercise was related to increased scores for nausea and flatulence, but also to better performance during IM races.

So basically, taking lots of carbs may upset your stomach, but helps you perform better. It’s important to remember that gastrointestinal tolerance is trainable, so it’s worth putting up with some discomfort to gradually raise the threshold of what you’re able to tolerate.

Anyway, back to that graph: while it may look pretty random, statistical analysis shows a crystal-clear link between higher carb intake rates and faster race times, albeit with significant individual variation. Obviously there are some important caveats — it may be, for example, that faster athletes tend to be more knowledgeable about the benefits of carbs, and thus take more. Still, it’s real world data that tells us the people at the front of the race tend to have a higher carb intake rate.

One other point worth noting. The traditional thinking was that humans generally couldn’t process more than 60 grams of carb per hour. Over the last few years, thanks to multiple-carb blends, that threshold has been pushed up to 90 grams of carb per hour. In this data set, about 50% of the triathletes were taking 90 g/hr or more.

[UPDATE 10/26: Given all the comments below about the variability in the data, I think it's worth emphasizing what should be a fairly obvious point. The only way this data would come out as a nice straight line is if Ironman finishing time depended ONLY on carb intake, and was totally independent of training, experience, talent, gender, body size, and innumerable other factors. This is obviously not the case, so we should expect the data to be very broadly scattered. What the statistical analysis shows is that, with p<0.001, faster finishers tended to have consumed carbs at a higher rate. There are many ways to interpret this data; one possibility is that, if your carb consumption is below average, you might wish to try a higher rate of consumption (e.g. 90 g/hr) to see if it helps.]
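To see how a scattered-looking cloud of points can still yield p&lt;0.001, here’s a toy simulation (all numbers made up, not the study’s data): finish times that depend only weakly on carb intake, buried in large individual variation, still produce a clear statistical trend when the sample is big enough.

```python
import math
import random

random.seed(42)

# Hypothetical racers: carb intake (g/hr) and finish time (min).
# A small true effect (-1 min per g/hr) is buried in large noise.
n = 400
carbs = [random.uniform(20, 110) for _ in range(n)]
times = [700 - 1.0 * c + random.gauss(0, 80) for c in carbs]

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

r = pearson_r(carbs, times)
# t-statistic for H0 "no correlation"; |t| far above 2 means p well below 0.05
t = r * math.sqrt((n - 2) / (1 - r * r))
print(f"r = {r:.2f}, t = {t:.1f}")
```

Plotted, this synthetic data would look almost as scattered as the real graph, yet the correlation is unambiguous — which is exactly the situation the update describes.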


Can you trust your own judgment about health/fitness?

October 25th, 2011

Just wanted to highlight a book excerpt that ran in the New York Times Magazine over the weekend, from Nobel Prize-winning psychologist Daniel Kahneman’s forthcoming book “Thinking, Fast and Slow.” It’s about our general tendency to place great faith in our own explanations for things, regardless of whether the facts bear them out:

The confidence we experience as we make a judgment is not a reasoned evaluation of the probability that it is right. Confidence is a feeling, one determined mostly by the coherence of the story and by the ease with which it comes to mind, even when the evidence for the story is sparse and unreliable. The bias toward coherence favors overconfidence. An individual who expresses high confidence probably has a good story, which may or may not be true.

[...] When a compelling impression of a particular event clashes with general knowledge, the impression commonly prevails.

The main example he discusses in the excerpt is the world of finance — many, many people (including just about everyone I know, seemingly) are convinced that they or their financial advisors are capable of outperforming the market, despite ample evidence that this is nearly impossible to do on a consistent basis. But the good stock picks they’ve made over the years make such a vivid impression that they remain convinced of their abilities.

The reason I’m blogging about this here is that I think this phenomenon is also nearly universal when it comes to health and fitness. Of course, there are many people who either don’t believe in or don’t understand the scientific method. They trust their instincts in figuring out which potions and pills are helping them in vague and unquantifiable ways. This is not surprising at all. What is surprising to me is the number of people who understand and profess belief in the scientific method, who murmur all the right catchphrases about “correlation is not causation” and “of course n=1 anecdotes don’t mean anything,” and yet are still absolutely convinced of their ability to determine which stretch has enhanced their power or saved them from injury, or which pill makes them feel more energetic, or which type of training has enhanced their lactate clearance.

There is some good news at the end of Kahneman’s excerpt: it is possible to have real intuitive expertise. (“You are probably an expert in guessing your spouse’s mood from one word on the telephone,” he notes. Chess players and medical diagnosticians are other examples.) But there’s a necessary condition:

Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers.

Maybe I’m just a particularly complicated human, or unusually incapable of reading my body’s signals. But given the huge number of factors, both intrinsic and extrinsic, that affect the day-to-day variation in my mood, energy and physical performance, I don’t consider my own body “sufficiently regular” to be able to make accurate judgements about the efficacy of any particular single intervention.

Aging: does the average decline as much as the extremes?

October 24th, 2011

My Jockology column in today’s Globe and Mail takes another look at aging and physical decline:

It’s the chicken-and-egg question of aging: Do we become less active as we get older because our bodies start to break down, or do our bodies start to break down because we allow ourselves to become less active?

For years, it was widely accepted that humans would start getting slower, weaker and more fragile starting in their 30s. But new studies on topics ranging from the cellular mechanisms of aging to the time-defying performances of masters athletes are forcing researchers to question this orthodoxy. It seems increasingly likely that the first signs of decline are more a function of lifestyle than DNA: If you keep using it, you’ll be well into middle age before you start losing it. [READ THE WHOLE ARTICLE...]

One of the studies discussed in the article is this analysis of the finishing times of 900,000 German marathoners and half-marathoners, published last year. The researchers argue that the rate of decline of mid-packers is a better way of judging “natural” aging processes than looking at the outliers who set age-group world records. For fun, I plotted the average finishing times of the runners in the German study, and superimposed the curve that you’d get if they declined at the same rate as age-group records. It’s pretty clear that this group of midpackers does decline at a slower rate:



Stress fractures: is it weak bones or muscles?

October 23rd, 2011

A new study from researchers at the University of Calgary, published in the November issue of Medicine & Science in Sports & Exercise, looks at bone quality and leg muscle strength in a group of 19 women who have suffered stress fractures in their legs, and compares them to a group of matched controls. The basic results:

  • the women who got stress fractures had thinner bones;
  • at certain key locations, the quality of the bone was lower in the stress fracture group;
  • the stress fracture group also had weaker leg muscles, particularly for knee extension (lower by 18.3%, statistically significant) and plantarflexion (lower by 17.3%, though not statistically significant).

Now, this sounds very similar to the results of a University of Minnesota study published a couple of years ago. Here’s how I summed up the conclusions reached by those researchers:

What’s interesting, though, is that the bone differences were exactly in proportion to the size of the muscles in the same area, and there was no difference in bone mineral density. What this suggests is that the best way to avoid stress fractures is to make sure you have enough muscle on your legs — presumably by doing weights and (it goes without saying) eating enough.

What I don’t understand is why the new Calgary study, despite citing the Minnesota study repeatedly in its discussion, never engages with this idea that lower muscle strength dictates the reduced bone size and thus the stress fracture risk — even though that was the primary conclusion of the Minnesota study. Instead, the authors say “the role of muscle weakness in [stress fractures] is unclear from previous studies,” and suggest that weaker knee extension might change running form to produce a “stiffer” running stride, or somehow alter the direction of forces on the bone during running — both of which seem unnecessarily complex and speculative compared to the straightforward link between muscle strength and bone strength.

It’s entirely possible that I’m missing something here, because the paper is quite complex. But what I take away from it is, once again, that strengthening your legs is likely (though not yet proven in a prospective trial) to reduce your stress fracture risk.


Good diet trumps genetic risk of heart disease

October 20th, 2011

I posted last week about “epigenetics” — the idea that, while the genes you’re born with are unchangeable, environmental influences can dictate which of your genes are turned “on” or “off.” A few days later, I saw a mention of this PLoS Medicine study in Amby Burfoot’s Twitter feed. It’s not an epigenetic study, but it again reinforces the idea that the “destiny” imprinted in your genes is highly modifiable by how you live your life.

The study mines the data from two very large heart disease studies, analyzing 8,114 people in the INTERHEART study and 19,129 people in the FINRISK prospective trial. They looked at a particular set of DNA variations that increase your risk of heart attack by around 20%. Then they divided up the subjects based on their diet, using a measure that essentially looked at either their raw vegetable consumption or their fresh vegetable, fruit and berry consumption. Here’s what the key INTERHEART data looked like:

Breaking it down:

  • The squares on the right represent the “odds ratio,” where the farther you are to the right (i.e. greater than one), the more likely you are to have a heart attack.
  • The top three squares represent the people who ate the least vegetables, and the bottom three squares are those who ate the most vegetables.
  • Within each group of three, GG are the people with the “worst” gene variants for heart attack risk, AG are in the middle, and AA are the people with the least risk.

So if we look at the top group first, we see exactly what we’d expect: the people with the bad genes are about twice as likely to suffer a heart attack as the people with the good genes. But if you look at the middle group (i.e. eat more vegetables), the elevated risk from bad genes is down to about 30%. And in the group eating the most vegetables, there’s essentially no difference between the good and bad genes.
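If the “odds ratio” terminology is unfamiliar, here’s the arithmetic behind it, with made-up counts (not the study’s data): you compare the odds of a heart attack in one group to the odds in another.

```python
# Toy 2x2 table (made-up counts, NOT the study's data):
# heart attack (mi) vs. no heart attack, by genotype.
gg = {"mi": 60, "no_mi": 140}   # carriers of the "bad" variant
aa = {"mi": 35, "no_mi": 165}   # carriers of the "good" variant

odds_gg = gg["mi"] / gg["no_mi"]    # odds of a heart attack for GG
odds_aa = aa["mi"] / aa["no_mi"]    # odds of a heart attack for AA
odds_ratio = odds_gg / odds_aa
print(f"odds ratio = {odds_ratio:.2f}")  # ~2: GG has about twice the odds of AA
```

An odds ratio of 1 means no difference between the groups — which is roughly where the bad-gene squares end up in the highest-vegetable-consumption group.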

How does this work? The researchers don’t know — partly because no one’s even sure exactly how the bad gene variants cause higher risk. (There are some theories, e.g. that it affects the structure of your veins and arteries.) But the practical message is pretty clear: if you eat your veggies, you don’t have to worry about this particular aspect of your genetic “destiny.”


How quickly is water absorbed after you drink it?

October 19th, 2011

I’ve always been curious about this. Sometimes, after drinking a big glass of water, it seems like I pee it all out literally just a few minutes later. Is this just in my head, or is ingested fluid really processed that quickly? A new study by researchers at the University of Montreal, published online in the European Journal of Applied Physiology, takes a very detailed look at the kinetics of water absorption and offers some answers.

The study gave 36 volunteers 300 mL of ordinary bottled water, “labelled” with deuterium (an isotope of hydrogen that contains a proton and a neutron instead of just a proton) to allow the researchers to track how much of that specific gulp of water was found at different places in the body. They found that the water started showing up in the bloodstream within five minutes; half of the water was absorbed in 11-13 minutes; and it was completely absorbed in 75-120 minutes.
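Those numbers are roughly consistent with simple first-order kinetics. As a back-of-the-envelope sketch (my own simplifying model, not the paper’s analysis), assume each minute a fixed fraction of the water remaining in the gut is absorbed, with the ~12-minute half-time reported above:

```python
def fraction_absorbed(t_min, half_life_min=12.0):
    """Fraction of an ingested gulp absorbed into the blood after t_min
    minutes, assuming simple first-order kinetics with a ~12-minute
    absorption half-time (a toy model, not the study's fitted curves)."""
    return 1.0 - 0.5 ** (t_min / half_life_min)

# Half the water is absorbed at ~12 min, and nearly all of it by ~75 min:
print(f"{fraction_absorbed(12):.0%}")   # prints "50%"
print(f"{fraction_absorbed(75):.0%}")   # prints "99%"
```

So “half absorbed at 11-13 minutes” and “essentially complete by 75-120 minutes” are exactly what an exponential approach to full absorption would predict.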

Here’s what the data looks like:

On the left, it shows how quickly the water was absorbed in the first hour, measured in the blood. On the right, it shows the gradual decay of deuterium levels over the subsequent 10 days, measured from urine samples. This, of course, shows that when I pee after drinking a glass of water, I’m not peeing out the same glass of water! Within ~10 minutes, fluid levels in my blood will have risen sufficiently to trigger processes that tell me to pee — but, according to this data, it takes about 50 days for complete turnover of all the water in your body.

The other wrinkle in this data is that the subjects showed two distinct absorption patterns (shown on the bottom and top), with about half in each group. In the top group, the water is very rapidly absorbed into the blood (possibly because these people get water out of the stomach and into the small intestine very quickly) before running into a slight bottleneck as the water is then distributed throughout the body to all the extremities. The second group, on the other hand, doesn’t hit this bottleneck: the flow of water out of the stomach and into the small intestine is slow enough that extra water doesn’t have a chance to build up in the blood before being distributed throughout the body.

So what does this all mean? I don’t have any particular practical applications in mind — I just thought it was kind of cool.