Gary Schwitzer of Health News Review – Any contribution I might make to a discussion of health literacy comes from my daily analysis of health news stories and the impact they may have on the American public. With that said, I have noticed three recurring problems in many news stories.
Absolute versus relative risk/benefit data
One of our key observations after reviewing more than 1,600 stories over the past 5+ years is that stories tend to exaggerate benefits of interventions and tend to minimize or ignore harms. The problem, in this case, could be filed under both the health literacy and numeracy categories. Many stories use relative risk reduction or benefit estimates without providing the absolute data.
So, in other words, a drug is said to reduce the risk of hip fracture by 50% (relative risk reduction), without ever explaining that it’s a reduction from 2 fractures in 100 untreated women down to 1 fracture in 100 treated women. Yes, that’s 50%, but in order to understand the true scope of the potential benefit, people need to know that it’s only a 1% absolute risk reduction (and that all the other 99 who didn’t benefit still had to pay and still ran the risk of side effects).
Steve Woloshin and Lisa Schwartz of Dartmouth and the VA Outcomes Research Group in Vermont teach that it’s like having a 50% off coupon for selected items at a department store. But you don’t know what items the coupon can be used for. A diamond necklace? Or only a pack of chewing gum? That’s what the absolute risk/benefit data tells you. Consumers aren’t fully informed with only the relative data – yet that’s often all they get in news stories – much less in drug ads.
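The arithmetic behind this distinction is simple enough to sketch. The following is a minimal illustration using the hypothetical hip-fracture numbers from the example above (2 fractures per 100 untreated women versus 1 per 100 treated); the function name and structure are my own, not from any particular statistics library.

```python
def risk_summary(events_control, n_control, events_treated, n_treated):
    """Return absolute risk reduction, relative risk reduction, and
    number needed to treat for a simple two-arm comparison."""
    risk_control = events_control / n_control
    risk_treated = events_treated / n_treated
    arr = risk_control - risk_treated   # absolute risk reduction
    rrr = arr / risk_control            # relative risk reduction
    nnt = 1 / arr                       # number needed to treat
    return arr, rrr, nnt

# 2 fractures per 100 untreated women vs. 1 per 100 treated women:
arr, rrr, nnt = risk_summary(2, 100, 1, 100)
print(f"Relative risk reduction: {rrr:.0%}")  # 50%
print(f"Absolute risk reduction: {arr:.0%}")  # 1%
print(f"Number needed to treat:  {nnt:.0f}")  # 100
```

The same data yield a headline-friendly "50%" and a far more sobering "1 in 100" — which is exactly why reporting only the relative figure misleads.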
Association does not equal causation
A second key observation is that journalists often fail to explain the inherent limitations of observational studies – especially that they cannot establish cause and effect. Such studies can point to a strong statistical association, but they can’t prove that A causes B, or that if you do A you’ll be protected from B. Yet over and over we see news stories suggesting causal links, using active verbs that inaccurately suggest established benefits. Examples:
- “Eating chocolate may decrease heart disease by as much as 37 percent,” reported NBC News.
- The Los Angeles Times reports, “Military suicides linked to low Omega-3 levels.” The story says the finding suggests “powerful psychiatric benefits.”
- A story on the MSNBC website is headlined, “Coffee habit may protect against breast cancer.”
If you think news consumers aren’t savvy and don’t pick up on these errors, look at some of the comments online users left in response to a CNN.com story headlined, “Coffee may cut risk for some cancers.” (All comments are unedited; this is how they appeared online.)
* “i love how an article starts with something positive and then slowly becomes a little gloomy. so is it good or not? i’m still where i was with coffee, it’s all in moderation, it ain’t gonna solve your health woes.”
* “The statistics book in a class I’m taking right know uses coffee as an example of statistics run amok. It seems coffee has caused all the cancers and cures them at the same time.”
* “Could it be that instead of having mysterious compounds, coffee drinkers just drink more coffee than they drink alcohol or smoke?”
* “I am so [expletive] sick of these studies, or more precisely how these “risk factors” are interpreted as “facts” by newspaper headlines. If you can’t explain why something happens other than surmising, stop wasting our time.”
* “…correlation IS NOT causation!!!! So people that drink 4 or more cups of coffee have a lower incidence of two certain types of head and neck cancers, and this is supposed to mean that coffee is actually “warding off” these cancers???”
The on-again, off-again, “coffee is good for you…coffee is bad for you” kind of story – often based on observational studies that aren’t explained adequately – gives readers reason to question scientists when they actually should be questioning the journalists or communicators who botch the message.
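The mechanism behind a spurious association can be shown with a toy simulation. Below is a minimal sketch with entirely invented numbers (no real coffee or cancer data): a hidden confounder drives both the "exposure" and the "outcome", so the two correlate even though the exposure has no effect whatsoever.

```python
import random

random.seed(42)

n = 100_000
exposed = unexposed = 0
exposed_with_outcome = unexposed_with_outcome = 0

for _ in range(n):
    # An unmeasured lifestyle trait, present in half the population...
    confounder = random.random() < 0.5
    # ...makes the exposure more likely...
    exposure = random.random() < (0.8 if confounder else 0.2)
    # ...and independently makes the outcome less likely.
    # Note: the exposure itself never touches the outcome.
    outcome = random.random() < (0.05 if confounder else 0.15)
    if exposure:
        exposed += 1
        exposed_with_outcome += outcome
    else:
        unexposed += 1
        unexposed_with_outcome += outcome

# Despite zero causal effect, the observed rates differ sharply
# (roughly 0.07 vs. 0.13 with these made-up probabilities):
print(f"outcome rate, exposed:   {exposed_with_outcome / exposed:.3f}")
print(f"outcome rate, unexposed: {unexposed_with_outcome / unexposed:.3f}")
```

An observational study of this population would report that the exposure "cuts risk by nearly half" — a perfectly real association, and a perfectly wrong causal headline.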
How we discuss screening tests
The third recurring problem I see in health news stories involves screening tests. Actually, this issue extends far beyond news stories to how ads, health education efforts, patient advocacy group campaigns, and even health care professionals sometimes misuse the term “screening.” Some of the current consumer confusion over screening test recommendations for breast and prostate cancer may be due to the fact that we’re not all talking about the same thing.
“Screening,” I believe, should only be used to refer to looking for problems in people who don’t have signs or symptoms or a family history. So it’s like going into Yankee Stadium filled with 50,000 people about whom you know very little and looking for disease in all of them.
Screening is not a term, in my opinion, that should be used to describe:
- Testing to find out why someone is having problems; that’s a diagnostic test, not a screening test.
- Testing of anyone at increased risk because of past problems, past treatment or family history.
I have heard women with breast cancer argue, for example, that mammograms saved their lives because they were found to have cancer just as their mothers did. I think that using “screening” in this context distorts the discussion because such a woman was obviously at higher risk because of her family history. She’s not just one of the 50,000 in the general population in the stadium. There were special reasons to look more closely in her. There may not be reasons to look more closely in the 49,999 others.
Why is this important?
Because all screening tests cause harm; some may also do good. That’s not the way we discuss screening, though, and certainly not in news stories where we have seen a consistent pro-screening bias that is an impediment to truly informed decision-making.
And when screening is portrayed as an imperative, not as a decision, the heavy-handed message is not balanced. Screening applied outside the boundaries of our best evidence may result in countless cases of unnecessary anxiety from false positives, unnecessary follow-up testing which may be invasive with its own additional complications and costs, and possibly even unnecessary treatment for a condition that never would have caused harm. (Dr. Barron Lerner has an excellent New York Times blog column, “The Shortfalls of Early Cancer Detection,” that gives a historical perspective to this problem.)
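The false-positive problem can be made concrete with Bayes’ rule. The following is a minimal sketch with invented numbers (not any real screening test): even a fairly accurate test produces mostly false alarms when the condition is rare in the screened population.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive screening result reflects real disease."""
    true_pos = prevalence * sensitivity            # truly sick, flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # healthy, wrongly flagged
    return true_pos / (true_pos + false_pos)

# Hypothetical: 1 in 1,000 people screened have the condition; the test
# catches 90% of real cases and wrongly flags 5% of healthy people.
ppv = positive_predictive_value(prevalence=0.001,
                                sensitivity=0.90,
                                specificity=0.95)
print(f"Chance a positive result is a true case: {ppv:.1%}")  # 1.8%
```

With these made-up numbers, more than 98 percent of positive results are false alarms — which is the arithmetic reality behind the anxiety, follow-up testing, and overtreatment described above. Screening a high-risk subgroup (higher prevalence) changes that arithmetic dramatically, which is precisely why the distinction between screening and diagnostic testing matters.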
If we don’t improve our communication on screening tests, we don’t stand much chance of improving our communication on downstream treatment issues. And if we don’t achieve that, we don’t stand much of a chance of achieving meaningful health care reform.
The words matter. Accuracy, balance and completeness in news stories matter. Journalists – and other communicators – may not intend to cause harm by their framing of these messages, but harm may, indeed, be done.
About Gary Schwitzer
Gary Schwitzer has specialized in health care journalism for nearly 40 years. He is Publisher of HealthNewsReview.org – an award-winning site that grades health news. For 9 years he taught health journalism and media ethics at the University of Minnesota. Gary worked in television news for 15 years – in Milwaukee and Dallas and at CNN. He was founding editor-in-chief of MayoClinic.com. The Kaiser Family Foundation published his 2009 report, “The State of U.S. Health Journalism.” He wrote the Association of Health Care Journalists’ guide for reporting on studies and the group’s Statement of Principles. One competition named his blog “Best Medical Blog” of 2009.