
A guide to making sense of coronavirus studies

News coverage of scientific studies can be misleading. Here's how to tell the good from the bad.

When weighing the evidence for different drugs, the type of study matters. CYNTHIA GREER / STAFF

Here at The Inquirer’s Health & Science desk, we often are asked how we decide what to write about. My answer always includes this statement:

A huge part of the job is deciding what not to write about.

That is true even for the coronavirus, a topic for which people seem to have an insatiable appetite, no matter how small the development. We sift through all kinds of studies, analyses, and announcements of new drugs or products — and in many cases, we take a pass.

That doesn’t necessarily mean the information lacks value. It just means we don’t think it is worth presenting to a general audience. Science is often incremental — one step forward, two steps backward, and maybe three or four in an entirely new direction. The findings might be preliminary. Or they might not be relevant to treating the disease. Maybe there are unanswered questions, conflicts of interest, or other red flags.

As you read through news coverage of scientific studies — or, if you feel comfortable with jargon and math, the studies themselves — keep in mind these potential pitfalls:

Science by press release. In May, Moderna Inc. announced preliminary results from testing its coronavirus vaccine. In April, Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases at the National Institutes of Health, heralded early findings from a trial of the antiviral drug remdesivir. In each case, the announcements were not accompanied by a formal write-up, leading critics to describe them as “science by press release.”

That in itself is not bad. If a press release is issued by a publicly traded company, it cannot, by law, contain fabrications. But the information might be incomplete, or some of the details might be couched in a way that only an insider could understand. Maybe there is a passing reference to “adverse events” in some volunteers who took a drug.

In the case of the Moderna vaccine, the company said three participants who received the highest dose experienced “grade 3 systemic symptoms,” with no additional description. That term generally means the symptoms were not life-threatening, but nevertheless no picnic. In fact, Ian Haydon, one of the three participants, later said he had spiked a fever above 103 degrees and fainted.

Haydon — a scientist by training who wrote for The Inquirer in the summer of 2018 through a fellowship from the nonprofit American Association for the Advancement of Science — is fine. What’s more, the dose he received was far higher than what eventually would be administered to the public. But when reading news coverage of studies, look to see whether someone pressed the company for those kinds of details.

Likewise, on April 29, Fauci announced that remdesivir, a drug originally designed to treat Ebola, could “block” the coronavirus, shortening a patient’s recovery time by four days. Breathless headlines ensued. The complete study came out in May, and while the results were essentially as Fauci described, a few details were less promising. The drug did not appear to hasten recovery for patients on ventilators, for example.

The significance trap. Scientists tend to publish findings that are deemed to be “statistically significant.” That does not necessarily mean they are significant in the everyday sense of being meaningful. Loosely speaking, all it means is that the researchers believe their results are “real” — that is, not due to chance.

Technically, the term has a precise meaning. In the medical community, researchers generally call a finding significant when a statistical measure called the p-value is below 0.05, which roughly means there is less than a 1-in-20 chance of seeing a result that extreme if the treatment truly had no effect. But that threshold is arbitrary, and findings with a p-value below 0.05 can still be due to chance. Multiple studies are needed for confirmation.

Even if a result seems to be real, it is not necessarily cause for action. If researchers came up with a pill that lowered cholesterol levels by, say, an average of three points, and the statistical analysis suggested it was not a fluke, good for them. But we would not all head off to the pharmacy.
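
To make the two points above concrete, that a p-value below 0.05 can still be a fluke and that a statistically “real” effect can be too small to matter, here is a rough simulation sketch in Python. The cholesterol numbers, sample sizes, and use of the scipy library are illustrative assumptions, not figures from any study mentioned in this article.

```python
# Illustrative sketch only: simulated numbers, not data from any real trial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 1) A drug with NO true effect still looks "significant" about 5% of the time.
n_experiments = 1000
false_positives = 0
for _ in range(n_experiments):
    placebo = rng.normal(loc=200, scale=30, size=50)  # cholesterol, mg/dL
    drug = rng.normal(loc=200, scale=30, size=50)     # same distribution: no effect
    if stats.ttest_ind(drug, placebo).pvalue < 0.05:
        false_positives += 1
print(f"'Significant' findings by chance alone: {false_positives / n_experiments:.1%}")

# 2) A real but tiny effect (a 3-point average drop) becomes statistically
#    significant in a big enough trial, even though few patients would notice it.
placebo = rng.normal(loc=200, scale=30, size=5000)
drug = rng.normal(loc=197, scale=30, size=5000)
result = stats.ttest_ind(drug, placebo)
print(f"p-value: {result.pvalue:.2g}; average drop: {placebo.mean() - drug.mean():.1f} points")
```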

Likewise, look for the raw numbers behind the percentages. If a treatment appears to reduce the risk of disease by 25%, that sounds great. But if the disease in question is rare, say, striking just four out of 1,000 people, that means we’d be whacking that number down to three in 1,000. Not so earth-shattering, especially if the drug is costly.
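
As a quick back-of-the-envelope check, the hypothetical numbers above can be expressed both ways. This short Python sketch uses the article’s made-up four-in-1,000 example, not real data.

```python
# Illustrative arithmetic using the hypothetical 4-in-1,000 disease from the text.
baseline_risk = 4 / 1000   # risk without the treatment
treated_risk = 3 / 1000    # risk with the treatment

relative_reduction = (baseline_risk - treated_risk) / baseline_risk
absolute_reduction = baseline_risk - treated_risk

print(f"Relative risk reduction: {relative_reduction:.0%}")   # 25% -- the headline number
print(f"Absolute risk reduction: {absolute_reduction:.2%}")   # 0.10%, or 1 case per 1,000 people
```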

Beware of breakthroughs. Quick, picture a scientist. A person in a white lab coat, shouting “Eureka!” maybe?

Reality check: Usually, they don’t wear lab coats, except when posing for photos. And except in very rare cases, they do not experience “eureka” moments — Archimedes is said to have exclaimed that upon realizing how much water he’d displaced in a bathtub — though a casual observer can be forgiven for getting that impression.

That’s because universities and companies sometimes get carried away, using words such as “breakthrough” and “potential cure” in their news releases. And some media outlets, unfortunately, will repeat those magic words without calling an independent expert for a reality check.

Nature, the British journal, includes this disclaimer each week when it sends out short descriptions of upcoming studies to members of the media: “We take great care not to hype the papers mentioned on our press releases. If you ever consider that a story has been hyped, please do not hesitate to contact us.”

Study type matters. A common reason that findings are hyped is a failure to appreciate the type of study. Not all are created equal.

If the research was done on lab animals, that is a valuable starting point, but caution is warranted. Scientists have cured cancer many times in mice, for example, only to find that the treatment did not work in people. One reason may be that the disease does not occur naturally in mice. Researchers can cope by genetically “engineering” the animals to have a version of the illness, but it is not the same thing.

Studies performed in humans can have limitations, too. Take observational studies, which are what they sound like: Give a drug to a group of people and observe the result. That has happened a lot with COVID-19, as physicians felt pressure to treat the sick with medicines that were designed for other purposes. It can be a reasonable approach if the drug has minimal side effects and there is a biologically plausible reason why it might work against a disease for which it has not yet been tested.

But beware. If all patients get the same drug and some of them improve, does the drug deserve the credit, as some physicians (and President Donald Trump) claimed after early studies of hydroxychloroquine? Or would the patients have gotten better anyway? Maybe even more patients would have gotten better had they not gotten the drug. Impossible to say. (In May, the World Health Organization paused a trial of that drug amid safety concerns, then on Wednesday said it would resume.)

The answer comes from a different kind of study: the randomized controlled trial, often called the “gold standard” for determining a drug’s efficacy. Typically, patients are randomly assigned to the treatment “arm” of the study, meaning they receive the actual drug, or to a control group, meaning they get a placebo.
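
A toy simulation can show why that control group matters. In the Python sketch below, every number is invented for illustration: the drug does nothing at all, and 80% of patients recover on their own. An uncontrolled series of treated patients still looks like an 80% success story, while a randomized placebo arm makes clear there is nothing to credit the drug for.

```python
# Illustrative sketch: a useless drug in a disease where most patients recover anyway.
import random

random.seed(1)

def recovers() -> bool:
    """Each patient recovers with 80% probability, drug or no drug."""
    return random.random() < 0.80

# Uncontrolled, observational-style series: everyone gets the drug.
treated_only = [recovers() for _ in range(200)]
print(f"Uncontrolled series: {sum(treated_only) / len(treated_only):.0%} recovered")

# Randomized controlled trial: half get the drug, half get a placebo.
drug_arm = [recovers() for _ in range(100)]
placebo_arm = [recovers() for _ in range(100)]
print(f"Drug arm:    {sum(drug_arm) / len(drug_arm):.0%} recovered")
print(f"Placebo arm: {sum(placebo_arm) / len(placebo_arm):.0%} recovered")
# The two arms come out about the same, so the drug gets no credit.
```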

The lure of preprints. Before research is published in a journal, it generally undergoes a process called “peer review.” An editor typically asks the authors to provide more information, such as explaining why some people dropped out of the trial. A key question is whether there was anything different about people who dropped out, compared with those who remained enrolled.

But increasingly, researchers post their findings online as “preprints” before getting them published, in the interest of staking their claim to a discovery or helping other scientists at work on similar topics. During the coronavirus pandemic, this practice has exploded. On medRxiv.org and bioRxiv.org, a pair of preprint sites maintained by Cold Spring Harbor Laboratory, more than 4,000 such studies have been posted so far on COVID-19.

Sharing these preliminary findings is fine, so long as they are viewed through the right lens, said physician Ivan Oransky, a vice president at Medscape, a media outlet aimed at health-care professionals. Many such studies have merit. On the other hand, some published studies have flaws, as Oransky often notes in his second job, co-running the blog Retraction Watch.

“This notion that if it’s peer-reviewed, it somehow has the Good Housekeeping seal of approval, and that if it’s a preprint, it’s automatically wrong — the world is a far more gray place than that,” he said.