THANK YOU FOR VISITING SWEATSCIENCE.COM!
As of September 2017, new Sweat Science columns are being published at www.outsideonline.com/sweatscience. Check out my bestselling new book on the science of endurance, ENDURE: Mind, Body, and the Curiously Elastic Limits of Human Performance, published in February 2018 with a foreword by Malcolm Gladwell.
- Alex Hutchinson (@sweatscience)
***
I posted a couple of days ago about some famous research showing that the “hot hand” in basketball is (apparently) an illusion. As one of the researchers, Amos Tversky, said:
I’ve been in a thousand arguments over this topic, won them all, but convinced no one.
The reason that story interested me was because I’d been thinking about similar issues in relation to the recent news that a U.S. medical panel has recommended against routine screening for prostate cancer for healthy men. A couple of recent articles in the New York Times have explored the rationale behind this recommendation, and why our minds are ill-equipped to weigh the pros and cons in these cases. As Daniel Levitin wrote in a review of Jerome Groopman and Pamela Hartzband’s new book “Your Medical Mind”:
Yet studies by cognitive psychologists have shown that our brains are not configured to think statistically, whether the question is how to find the best price on paper towels or whether to have back surgery. In one famous study, Amos Tversky [yes, the guy who did the “hot hand” study -AH] and Daniel Kahneman found that even doctors and statisticians made an astonishing number of inference errors in mock cases; if those cases had been real, many people would have died needlessly. The problem is that our brains overestimate the generalizability of anecdotes… The power of modern scientific method comes from random assignment of treatment conditions; some proportion of people will get better by doing nothing, and without a controlled experiment it is impossible to tell whether that homeopathic thistle tea that helped Aunt Marge is really doing anything.
I actually got into a bit of a debate at (Canadian!) Thanksgiving dinner a few nights ago about prostate screening. Two of my elder relatives have had prostate cancer and undergone the whole shebang — surgery, radiation, etc. One of them, in particular, was highly critical of the recommendation not to be screened. It had saved his life, he said. How did he know?, I asked. He just knew — and he knew dozens of other men in his survivors’ support group who had also been saved.
I feel bad about having argued over what is clearly a very emotional topic. Nonetheless, I’m ashamed to admit, I did try to explain the concept of “number needed to treat.” Surely everyone would agree that, say, taking a million men and amputating their legs wouldn’t be worthwhile if it saved one man from dying of foot cancer, right? And if you accept that, then you realize that it’s not a debate about the absolute merit of saving lives — it’s a debate about weighing the relative impact and likelihood of different outcomes. That’s captured very nicely in another NYT piece by a professor of medicine at Dartmouth, Gilbert Welch, who compared breast cancer screening (recommended) with prostate cancer screening (not recommended):
Overall, in breast cancer screening, for every big winner whose life is saved, there are about 5 to 15 losers who are overdiagnosed [i.e. undergo treatments such as surgery, radiation, etc.]. In prostate cancer screening, for every big winner there are about 30 to 100 losers.
So what’s the message here? Is it worth putting 100 men through surgery to extend one man’s life? How about 30 men? (The average extension of life after prostate surgery is six weeks.) Of course, there’s no “right” answer. As Welch writes, reasonable people can reach different conclusions based on the same input data. In the end, you have to assess the odds and the stakes, and choose how you want to gamble. But before you do, you should at least understand what those odds are — and that means taking a good look at number needed to treat and other ways of assessing treatments.
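For readers who like to see the arithmetic, here’s a minimal sketch of how “number needed to treat” works. The risk figures below are invented for illustration — they are not the actual numbers from the prostate or breast screening trials:

```python
# Illustrative sketch of "number needed to treat" (NNT) arithmetic.
# All risk figures here are hypothetical, chosen only to show the math.

def number_needed_to_treat(risk_untreated, risk_treated):
    """NNT = 1 / absolute risk reduction."""
    arr = risk_untreated - risk_treated  # absolute risk reduction
    if arr <= 0:
        raise ValueError("treatment shows no absolute risk reduction")
    return 1.0 / arr

# Hypothetical example: suppose screening lowers 10-year mortality
# from 3.0% to 2.5% -- an absolute risk reduction of 0.5 points.
nnt = number_needed_to_treat(0.030, 0.025)
print(round(nnt))  # -> 200: screen 200 men to prevent one death

# If, hypothetically, 15% of those screened end up overdiagnosed and
# treated, the "losers per winner" ratio Welch describes would be:
losers_per_winner = 0.15 * nnt
print(round(losers_per_winner))  # -> 30 overdiagnosed per life saved
```

The point of the exercise isn’t the specific numbers — it’s that a small absolute risk reduction can translate into a very large number of people treated (and harmed) for each life saved.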