A field guide to pointless medical research


A physician examining a flask of urine brought by a young woman. Oil painting attributed to a German follower of Gerrit Dou.

(Cross-posted from my blog at Oregon Business.)

A few years ago I wrote about a five-year-old Portland girl named Katie who was born with Smith-Lemli-Opitz syndrome, a rare disorder that causes severe developmental delays and a spectrum of birth defects.  I keep thinking about what her mother Kathy told me about her situation as a parent:

“What’s been most difficult,” she said, “is having no proven, real treatment out there for her, and having it all be in studies. But it’s also not knowing what to expect for her future, not having the answers you would have for a more common disease — whether she could be independent, what she may or may not be able to do as she grows into an adult.”

Although the syndrome’s root cause has been known since 1993 – and several major medical centers including Oregon Health & Science University formed a consortium in 2010 to expedite studies – researchers still have not published a randomized, controlled clinical trial of any treatment.  Only six small, nonrandomized studies appear in the National Library of Medicine’s PubMed database, and they don’t provide much in the way of answers.

The “race for a cure” is in many ways a fantasy. If only. Medical research is at best a precarious, plodding mountain climb, with many contestants taking dead-end routes. Some of the most acclaimed drug discoveries of the past decades took a median of 24 years from conception to become established as clinically useful treatments, according to an analysis reported a few years ago in the journal Science. The authors zeroed in on 101 studies published between 1979 and 1983 that were highly regarded and made strong claims about their potential to cure illness. A decade later, only five had led to a practical treatment, and only one was in widespread use. And 40% of the discoveries celebrated as “breakthroughs” later proved partially or completely ineffective. Among them: hormone replacement for preventing stroke and heart disease, and vitamin E for preventing heart disease.

At its worst, medical research is an enterprise that wastes tens of billions of dollars a year on studies that are repetitive, irreparably biased, kept hidden by industry sponsors, or designed with no regard to the desires of people who have to live with serious illness. I’m disgusted but no longer surprised by The Lancet’s 2009 estimate that 85% of the money poured into medical research is wasted. The esteemed British journal has now published a major series of papers on the problem of waste in medical research, and I hope it proves a watershed.

The series is freely accessible online (with registration) and worth reading in its entirety. Here are some of the highlights:

Flawed decisions about what research to pursue

From basic science to clinical trials, scientists frequently fail to consider the needs of patients and medical caregivers, and ignore much of what is already known from earlier studies.

• A review of existing knowledge could have spared six healthy volunteers from life-threatening complications in a 2006 safety study on the monoclonal antibody TGN1412. And a healthy volunteer recruited for a 2001 study on the effects of inhaled hexamethonium would not have died from the toxic effects of the drug on her lungs. At least 16 published papers had warned of lung complications from the drug.

• If researchers had assessed animal studies showing no protective effects, they would not have pointlessly tested the drug nimodipine on more than 7,000 stroke patients in the 1990s.

• Timely review of the evidence on risk factors for sudden infant death syndrome could have identified the danger of babies sleeping on their front at least a decade earlier than it was recognized, and tens of thousands of infant deaths could have been avoided, a 2005 look-back analysis concluded.

Flawed research designs, methods, and analysis

Scientists routinely plan and conduct studies without seriously considering the value or usefulness of the information that they will produce.

•  When researchers at Oregon Health & Science University – much to their credit – looked back at their record of cancer clinical trials, they found that a third “resulted in little scientific benefit” because they failed to enroll enough subjects.

• Single-mindedly aiming for adequate statistical power may help a scientist bring in grant money and get published in big-time journals. But it tends to pervert the science by tempting researchers to use treatment outcome measures that don’t matter to patients. Clinical trials of potential Alzheimer’s disease treatments, for example, have used cognitive function scales that allow detection of small “improvements” that are essentially meaningless for patients.

• Current incentives reward scientists for getting newsworthy results published in prestigious journals more directly than they reward them for producing well-designed studies that actually help people who are sick. My favorite example is a wonderfully candid self-assessment by researchers in the Netherlands who conducted a multicenter study of the role of the stress hormone cortisol in mental illness.  The scientists involved produced a stream of publications in highly cited journals, presentations at conferences, and successful PhD dissertations. But the conclusions of many of the investigators contradicted each other, producing an “incoherent” overall result and no scientific progress.

• Scientists face few negative consequences for making exaggerated claims or producing flawed or incorrect results, and even the most egregious offenders may go undetected for years. I’m reminded of Marc Hauser, the Harvard professor who got away with fabricating data for ten years before investigators exposed the scientific misconduct, and the prominent Dutch psychologist who made up or manipulated data in dozens of papers before being caught.

• Even when incorrect results are overturned by more definitive research, they can continue to steer scientists in the wrong direction.  Hundreds of scientific papers, for instance,  repeated the discredited claim that vitamin E protects against heart disease after clinical trials ruled out that benefit more than ten years ago. Likewise, researchers continued to quote discredited claims for beta-carotene for cancer and estrogen for Alzheimer’s disease.

Inaccessible research information

About half of all health-related studies never go public. Even when researchers publish clinical trial results, they or their industry sponsors very often keep the patient-level data secret from other scientists who otherwise could use it to speed further discoveries.

• The flu drug oseltamivir (Tamiflu) gained FDA approval in 1999, but 60% of the patient data—including the largest known trial—was kept secret by Roche, the drug’s manufacturer, for more than a decade. Some independent researchers say they still can’t be sure how safe and effective the drug is.

• The studies that get published paint an overly rosy picture of new drugs. Dr. Erick Turner, a former drug reviewer for the U.S. Food and Drug Administration now at the Portland Veterans Affairs Medical Center, has carefully documented the problem. In 2008, he and colleagues found that nearly a third of the clinical trials of antidepressants by drug companies produced questionable or negative results that were not published. Trials with positive outcomes were about five times more likely to be published than those without, researchers at the University of California, San Francisco found in 2008 when they examined all the new drugs the FDA approved in a two-year period.

• Deprived of study results, patients end up using ineffective or harmful treatments. When researchers dug up unreported trial data on the antidepressant reboxetine, their analysis showed it was more harmful and no more effective than placebo for treating major depression—in contrast to the positive findings of published clinical trials. Pfizer, which sells the drug in Europe, had not published 74% of the patient data.

• • •

I’ll close with the words of Alessandro Liberati, an Italian doctor and medical professor, who was diagnosed with multiple myeloma and knew well the anguish of not having sound research to make treatment decisions. Liberati died in 2012 at the age of 57. Here he is in a 2010 interview:

We need to move forward on several fronts. We need to increase awareness of the misalignment between the research that is done and what needs to be done. Few people understand how much waste there is in research – research on questions that have already been answered, research on irrelevant questions, and so on. Those who use research – health practitioners and patients – need to be involved in setting priorities and designing research.

When I had to decide whether to have a second bone-marrow transplant, I found there were four trials that might have answered my questions, but I was forced to make my decision without knowing the results because, although the trials had been completed some time before, they had not been properly published!

This should not happen. I believe that research results must be seen as a public good that belongs to the community – especially patients. Several practical changes are needed: more public funding and so more public control of research, more integration of research into clinical practice, and routine use of all sound research results in everyday practice. Every clinical encounter should be an occasion for contributing, in some way, to new knowledge.

Joe Rojas-Burke blogs on health care and science for Oregon Business.
