Do doctors understand test results?
Are doctors confused by statistics? A new book by one prominent statistician says they are - and that this makes it hard for patients to make informed decisions about treatment.
In 1992, shortly after Gerd Gigerenzer moved to Chicago, he took his six-year-old daughter to the dentist. She didn't have toothache, but he thought it was about time she got acquainted with the routine of sitting in the big reclining chair and being prodded with pointy objects.
The clinic had other ideas. "The dentist wanted to X-ray her," Gigerenzer recalls. "I told first the nurse, and then him, that she had no pains and I wanted him to do a clinical examination, not an X-ray."
These words went down as well as a gulp of dental mouthwash. The dentist argued that he might miss something if he didn't perform an X-ray, and Gigerenzer would be responsible.
But the advice of the US Food and Drug Administration is not to use X-rays to screen for problems before a regular examination. Gigerenzer asked him: "Could you please tell me what's known about the potential harms of dental X-rays for children? For instance, thyroid and brain cancer? Or give me a reference so I can check the evidence?"
The dentist stared at him blankly.
Gigerenzer, director of the Harding Center for Risk Literacy in Berlin, is an expert in uncertainty and decision-making. His new book, Risk Savvy, takes aim at health professionals for not giving patients the information they need to make choices about healthcare.
But it's not just that doctors and dentists can't reel off the relevant stats for every treatment option. Even when the information is placed in front of them, Gigerenzer says, they often can't make sense of it.
In 2006 and 2007 Gigerenzer gave a series of statistics workshops to more than 1,000 practising gynaecologists, and kicked off every session with the same question:
A 50-year-old woman, no symptoms, participates in routine mammography screening. She tests positive, is alarmed, and wants to know from you whether she has breast cancer for certain or what the chances are. Apart from the screening results, you know nothing else about this woman. How many women who test positive actually have breast cancer? What is the best answer?
- nine in 10
- eight in 10
- one in 10
- one in 100
Gigerenzer then supplied the assembled doctors with some data about women of this age to help them answer his question. (His figures were based on US studies from the 1990s, rounded up or down for simplicity - current stats from Britain's National Health Service are slightly different.)
- The probability that a woman has breast cancer is 1% ("prevalence")
- If a woman has breast cancer, the probability that she tests positive is 90% ("sensitivity")
- If a woman does not have breast cancer, the probability that she nevertheless tests positive is 9% ("false alarm rate")
In one session, almost half the group of 160 gynaecologists responded that the woman's chance of having cancer was nine in 10. Only 21% said that the figure was one in 10 - which is the correct answer. That's a worse result than if the doctors had been answering at random: with four options, guessing would score about 25%.
The fact that 90% of women with breast cancer get a positive result from a mammogram doesn't mean that 90% of women with positive results have breast cancer. The high false alarm rate, combined with the disease's prevalence of 1%, means that roughly nine out of 10 women with a worrying mammogram don't actually have breast cancer.
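The reasoning above is an application of Bayes' theorem. A minimal sketch in Python, using the rounded figures from the workshop question, confirms the one-in-10 answer:

```python
# Rounded figures from Gigerenzer's workshop question (US studies, 1990s)
prevalence = 0.01        # P(cancer): 1% of 50-year-old women
sensitivity = 0.90       # P(positive | cancer): 90%
false_alarm = 0.09       # P(positive | no cancer): 9%

# Bayes' theorem: P(cancer | positive test)
true_positives = prevalence * sensitivity          # women with cancer who test positive
false_positives = (1 - prevalence) * false_alarm   # women without cancer who test positive
ppv = true_positives / (true_positives + false_positives)

print(f"P(cancer | positive) = {ppv:.3f}")  # ≈ 0.092, i.e. roughly one in 10
```

In natural frequencies, which is how Gigerenzer recommends presenting it: of 1,000 women screened, about 10 have cancer and 9 of them test positive, while about 89 of the 990 healthy women also test positive - so only 9 of the roughly 98 positives are real.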
It's a maths puzzle many of us would struggle with. That's because, Gigerenzer says, setting probabilities out as percentages, although standard practice, is confusing. He campaigns for risks to be expressed using numbers of people instead, and if possible diagrams.
Even so, Gigerenzer says, it's surprising how few specialists understand the risk a woman with a positive mammogram result is facing - and worrying too. "We can only imagine how much anxiety those innumerate doctors instil in women," he says. Research suggests that months after a mammogram false alarm, up to a quarter of women are still affected by the process on a daily basis.
Survival rates are another source of confusion for doctors, not to mention journalists, politicians and patients. These are not, as you might assume, simply the opposite of mortality rates - the proportion of the general population who die from a disease. They describe the health outcomes of people who have been diagnosed with a disease, over a period of time - often five years from the point of diagnosis. They don't tell us about whether patients die from the disease afterwards.
Take prostate cancer. In the US, many men choose to be screened for prostate-specific antigen (PSA), which can be an indicator of the disease. In the UK, it's more common for men to get checked only after they start experiencing problems. Consequently, they are diagnosed with prostate cancer later, and are less likely to survive for five years after diagnosis - but this doesn't mean that more men die.
Moreover, many men have "non-progressive" prostate cancer that will never kill them. While screened American men in this situation are marked as having "survived" cancer, unscreened British men aren't. These two facts explain why five-year survival rates of prostate cancer are much higher in the US than in the UK (99% rather than 81%), while the numbers of deaths every year per 100,000 men are almost the same (23 in the US, 24 in the UK).
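The arithmetic behind this is worth making explicit. The sketch below uses invented diagnosis counts (chosen only to reproduce the 81% and 99% figures quoted above) to show how adding overdiagnosed, non-progressive cases to the denominator inflates the survival rate while the number of deaths stays fixed:

```python
# Illustrative sketch: the diagnosis counts below are invented for the example;
# only the deaths-per-100,000 and the resulting survival percentages come from
# the figures quoted in the article.
deaths = 23                      # deaths per 100,000 men - the same either way

# Without screening: only symptomatic (progressive) cases get diagnosed
diagnosed_unscreened = 120       # hypothetical count
survival_unscreened = 1 - deaths / diagnosed_unscreened

# With screening: non-progressive cases are also diagnosed, and all "survive"
overdiagnosed = 2180             # hypothetical count of non-progressive cases
diagnosed_screened = diagnosed_unscreened + overdiagnosed
survival_screened = 1 - deaths / diagnosed_screened

print(f"5-year survival without screening: {survival_unscreened:.0%}")  # 81%
print(f"5-year survival with screening:    {survival_screened:.0%}")    # 99%
# Same 23 deaths per 100,000 in both cases - only the denominator changed.
```

The survival rate went from 81% to 99% without a single life being saved, which is exactly the trap Gigerenzer describes.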
One of the Harding Center's diagrams shows that the risk of death is the same whether men are screened for prostate cancer or not.
So when former New York mayor Rudy Giuliani declared in 2007 that someone's chance of surviving prostate cancer in the US was twice that of someone using the "socialised medicine" of Britain's National Health Service, he was wrong. And when, in 1999, there was a furore about Britain's survival rate for colon cancer (at the time 35%) being half that of the US (60%), experts again ignored the fact that the mortality rate was about the same.
Gigerenzer's research shows just how confused doctors often are about survival and mortality rates. In a survey of 412 doctors in the US he found three-quarters mistakenly believed that higher survival rates meant more lives were saved. He also found more doctors would recommend a test to a patient on the basis of a higher survival rate, than they would on the basis of a lower mortality rate.
Unsurprisingly, patients' misconceptions about health risks are even further off the mark than doctors'. Gigerenzer and his colleagues asked over 10,000 men and women across Europe about the benefits of PSA screening and breast cancer screening respectively. Most overestimated the benefits, with respondents in the UK doing particularly badly - 99% of British men and 96% of British women overestimated the benefit of the tests. (Russians did the best, though Gigerenzer speculates that this is not because they get more good information, but because they get less misleading information.)
A quarter of British women went so far as to guess that 200 women out of every 1,000 screened have their lives saved by mammograms. But Gigerenzer says the real figure is about one woman in 1,000 - four out of every 1,000 screened women die from the disease, as opposed to five out of every 1,000 unscreened women. He says that this benefit has been represented as a "20% mortality reduction", which might explain why many women in the UK seem to think that 20% of women are saved by undergoing the procedure.
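The gap between the "20% mortality reduction" headline and the one-in-1,000 reality is just the difference between relative and absolute risk. A short sketch using the figures quoted above:

```python
# Mammography benefit, per 1,000 women, using the figures Gigerenzer quotes
deaths_unscreened = 5    # breast-cancer deaths per 1,000 unscreened women
deaths_screened = 4      # breast-cancer deaths per 1,000 screened women

# Relative risk reduction: the figure behind the "20% mortality reduction" claim
relative_reduction = (deaths_unscreened - deaths_screened) / deaths_unscreened

# Absolute risk reduction: lives actually saved per 1,000 women screened
absolute_reduction = deaths_unscreened - deaths_screened

print(f"Relative mortality reduction: {relative_reduction:.0%}")      # 20%
print(f"Absolute benefit: {absolute_reduction} life saved per 1,000")  # 1
```

The same underlying data - 5 deaths versus 4 - can honestly be reported as "20%" or as "1 in 1,000", and only the second tells a woman what the screening actually does for her.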
Again, the culprit is the use of percentages rather than actual numbers to represent risk and benefit. Simple factboxes - an idea from Dr Lisa Schwartz and Dr Steven Woloshin at Dartmouth Medical School - would help.
Perhaps the most notorious example of patients being misled about risk occurred in October 1995, when the UK's Committee on Safety of Medicines warned doctors that a new, third-generation oral contraceptive pill doubled the risk of thrombosis. Thousands of women came off the pill, even though the risk had merely increased from a one-in-7,000 chance of getting the disease to a two-in-7,000 chance. The following year saw an additional 13,000 abortions in the UK.
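The pill scare is the same relative-versus-absolute confusion in the other direction: a frightening "doubling" that amounts to one extra case per 7,000 women. A quick sketch with the numbers above:

```python
# Third-generation pill scare, 1995: "doubled risk" in relative terms
baseline_risk = 1 / 7000   # thrombosis risk on the older pill
new_risk = 2 / 7000        # thrombosis risk on the third-generation pill

relative_increase = (new_risk - baseline_risk) / baseline_risk  # "100% increase"
absolute_increase = new_risk - baseline_risk                    # 1 extra case per 7,000

print(f"Relative increase: {relative_increase:.0%}")
print(f"Absolute increase: about 1 extra case per 7,000 women "
      f"({absolute_increase:.5f})")
```

"Doubled risk" and "one extra case in 7,000" describe the same change; only the framing differs, and the framing drove thousands of women off the pill.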
So far as doctors' understanding of statistics is concerned, Gigerenzer believes the problem is easily fixed. "It's not a problem of the medical mind. It's a problem of training at the universities, in the medical departments where young doctors are trained in everything except statistical thinking."
But there are other pressures that get in the way of clinicians and patients working together to make decisions about treatment. One trend, evident in some health systems more than others, is for doctors to practise medicine "defensively", recommending treatments that are least likely to leave them open to being sued. Gigerenzer's altercation with his daughter's dentist ended with him signing a waiver form to show that he understood the risks of her not being X-rayed. (That incident may also serve to highlight how financial incentives can be a factor, since the clinic would have earned more money if it had done the X-ray.)
A 2005 survey of 824 US hospital doctors working in fields particularly at risk of litigation, such as obstetrics and neurosurgery, revealed that 93% said they practised defensive medicine. A 2009 study of 250 Swiss doctors found that although about half thought the harms of PSA screening outweighed the benefits, 75% nevertheless recommended the procedure.
"In order to get the doctor out of the defensive medicine mood - where they protect themselves against you as a patient - it's often good to ask doctors not 'What would you recommend?' but, 'If it were your mother or your brother, what would you do?'" says Gigerenzer.
The answer may be very different. A 1993 study found that while the rate of hysterectomy among Swiss women was 16%, among female doctors and doctors' wives it was 10%.
There are three other questions Gigerenzer advises patients to ask doctors to ensure they get all the facts:
- "What are the alternatives?"
- "What's the benefit and what's the harm?"
- "Please tell me this in terms of absolute numbers. If 100 take this medication and 100 people don't, what happens after five years?"
Once they get the answers it is up to them to make up their own mind about treatment, he says.
Dr Glynn Elwyn at the Dartmouth Center in the US shares this zeal for shared decision-making, but has found that even educated patients feel uncomfortable asking their doctors too many questions or questioning their recommendations, for fear of being labelled "difficult".
He encourages patients to ask questions in a way that doesn't antagonise doctors or put them on the spot. "Framing it in such a way as, for example: 'I happen to have been doing some research. I know there is a controversy here. You may not know this immediately but could you guide me towards some reading?'"
But if a clinician dodges a question or gets angry, he says, patients should switch doctors.
As for the doctors, Elwyn recommends they come clean when they don't know something, and make use of tools like option grids - which clearly lay out treatment options and their consequences - to work through difficult decisions with patients.
"It's surprising that in the 21st Century, many still think of doctors as Gods and you don't ask God," says Gigerenzer.
"A physician is someone who can help you but also someone you need to challenge in order to get the best treatment."
Original post found here: http://www.bbc.com/news/magazine-28166019