Students often ask why they need to take statistics for a degree in journalism. This article is Patient Zero for that answer:
The article comes from Jordan Davidson of The Federalist, an online conservative publication that covers a variety of topics, including politics, art and culture. Its previous COVID-19 coverage has been labeled “pseudo-science” and criticized because its scientific content came from “writers not known for their epidemiological expertise.”
Charles Bethea’s review of The Federalist’s coronavirus coverage, linked above, focuses more on the publication’s commentary. In Davidson’s article about masks, however, the content looks legit at first glance:
A Centers for Disease Control report released in September shows that masks and face coverings are not effective in preventing the spread of COVID-19, even for those people who consistently wear them.
Davidson cites this data from the study to support that information:
A study conducted in the United States in July found that when they compared 154 “case-patients,” who tested positive for COVID-19, to a control group of 160 participants from the same health care facility who were symptomatic but tested negative, over 70 percent of the case-patients were contaminated with the virus and fell ill despite “always” wearing a mask.
“In the 14 days before illness onset, 71% of case-patients and 74% of control participants reported always using cloth face coverings or other mask types when in public,” the report stated.
This isn’t exactly what the headline or the lead says, but it does look impressive, and the quote from the study is accurately written. No chops, no rewrites.
The evidence against masks looks even more damning here:
Despite over 70 percent of the case-patient participants’ efforts to follow CDC recommendations by committing to always wearing face coverings at “gatherings with ≤10 or >10 persons in a home; shopping; dining at a restaurant; going to an office setting, salon, gym, bar/coffee shop, or church/religious gathering; or using public transportation,” they still contracted the virus.
So why is the CDC still yammering on about masks, despite this research?
Because this is a case of relying on an accurate study, reading only the parts that suit the writer and reporting erroneously based on them. And this is why stats (and the ability to read them within research) matter to you as a journalist.
First, let’s look at the headline again, compared to what the study says. The headline says “overwhelming majority of people,” implying that everyone out there wearing a mask throughout the country (or maybe even around the world) is getting COVID-19 regardless of their masking. The study says that it examined 154 people who had tested positive at specific health care facilities and 160 people who had symptoms of COVID-19 but tested negative at those same facilities.
In other words, we’re looking at a sample of 314 people. That’s a far cry from what the headline indicates. (More on this later.)
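For a sense of scale, here’s a rough back-of-envelope sketch (mine, not the study’s) of the uncertainty baked into a sample this size. It uses the standard 95 percent margin-of-error formula and generously pretends this was a simple random sample, which it was not:

```python
import math

# Back-of-envelope sketch (not from the study): the standard 95 percent
# margin of error for a single proportion, generously assuming a simple
# random sample -- which this was not, since participants came from
# specific health care facilities.
n = 154        # case-patients who tested positive
p_hat = 0.71   # share reporting "always" wearing a mask
moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"71% +/- {moe * 100:.0f} percentage points")
```

Even under that too-generous assumption, the 71 percent figure carries roughly a seven-point margin of error, and the real problem (participants drawn from a handful of specific facilities) makes any “overwhelming majority of people” claim shakier still.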
Second, Davidson cherry-picks the data, and does it in a way that I don’t think she fully understands. The authors of the study reported that 71 percent of the people who tested positive and 74 percent of the people who did not told researchers that they “always” wore a mask.
The first thing I noted in the chart Davidson posted was that the significance value meant to show differences between the groups was almost comically non-significant (p = 0.86), meaning the control group and the infected group were not statistically different in how often they wore masks. (For the stat geeks out there, researchers usually only get excited about p values of .05 or lower.)
Davidson takes this to mean that the study finds wearing a mask or not wearing a mask makes no difference in COVID-19 illness rates. What the authors of the study see here is that something else must differ in how these two groups behave, something that leaves one group positive for the virus and the other group not.
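If you want to see what a test like that is doing under the hood, here’s a hypothetical sketch in plain Python of a two-proportion z-test on assumed counts (71 percent of 154 and 74 percent of 160). The study’s actual test may have differed, for instance a chi-square across all response categories, so the exact number won’t match the paper’s 0.86; the point is only that a large p-value means “no detectable difference”:

```python
import math

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided p-value for H0: the two underlying proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Convert the z statistic to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Assumed counts: ~71% of 154 case-patients vs. ~74% of 160 controls
# reported "always" wearing a mask.
p = two_proportion_p_value(round(0.71 * 154), 154, round(0.74 * 160), 160)
print(f"p = {p:.2f}")
print("statistically significant" if p < 0.05 else "not statistically significant")
```

A difference of 71 percent versus 74 percent in groups this small is well within what random chance produces, which is exactly why the study’s authors looked elsewhere for what separated the two groups.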
The analysis of the data in the study finds this:
In this investigation, participants with and without COVID-19 reported generally similar community exposures, with the exception of going to locations with on-site eating and drinking options. Adults with confirmed COVID-19 (case-patients) were approximately twice as likely as were control-participants to have reported dining at a restaurant in the 14 days before becoming ill. In addition to dining at a restaurant, case-patients were more likely to report going to a bar/coffee shop, but only when the analysis was restricted to participants without close contact with persons with known COVID-19 before illness onset.
In short, the key thing that the COVID people did that the non-COVID people did not do was go to restaurants and bars where they had to remove their mask to eat or drink, where they couldn’t properly socially distance and where they dealt with the presence of other people who had to deal with the same mask/distance problems.
Davidson argues that masks don’t matter because “the report suggests that ‘direction, ventilation, and intensity of airflow might affect virus transmission, even if social distancing measures and mask use are implemented according to current guidance'” even though that’s not what the report actually says. This quote is talking about restaurants and bars, where people without masks are potentially spewing the coronavirus into the HVAC system between maskless bouts of beers and wings.
At the end of the day, I don’t expect Davidson or The Federalist to care much about this. In fact, if they notice it at all, I’m quite certain the headline on their next story will be, “Liberal Commie Pinko Professor Uses Blog to Bully, Indoctrinate Students Against Free Thought.” However, for those of you who need to use research studies in your journalism writings, and you want to get things right, here are three quick tips:
Read the summary first: The first clue that Davidson might not really be all that interested in the findings, beyond the fact that they allowed her to say what she wanted anyway, was that the authors actually included a simple findings box:
This is pretty clear in regard to what it is trying to say about masks, restaurants and COVID-19 transmission. Not every journal article will have this clear of a summary, but if the article has one, read it. If it doesn’t, most articles will have an abstract, which attempts to do the same kind of summary. You’ll need to read it a couple of times carefully to get the gist of what it is trying to say, as most abstracts are written for other people in the field, as opposed to journalists who are trying to make sense of the article’s minutiae.
The goal of reading the summary or the abstract is not to take the place of reading the study, but rather to help you understand what it is the study is doing, what it found and why it matters. That information can serve as a guide as you go forward.
Find sources and rely on them: Like most topics you will cover, you are probably not an expert on whatever that study is attempting to tell you. This is why we rely on sources for our stories instead of just telling people whatever we think is going on. A good place to start here? Probably one of the 20-plus authors who worked on the paper.
In most cases, it’s easy to find the authors, as they list their full names, academic affiliations, workplaces and even their contact emails with their papers. Some academics have no problem explaining their work in a simple way for journalists. Even those who have difficulty are at least a starting point, in that they can either help you understand a little bit or point you to the author on the study who is better at this.
If you’re concerned that the authors of the study are going to BS you about what they found, find another expert in the field who would be willing to review the study for you. In most cases, if you work at a major university or in an area with a major university, you can find experts on campus in the area from which the study originated.
And, to be fair, it wouldn’t hurt to talk to people from both of those groups to get a better-rounded view of the study itself.
Understand how research works before reporting on it: I’m not going to pick on the author of this Federalist piece, but let me just say it took me about nine years of higher education, four years of statistics and half an academic lifetime to be able to look at research like this and make heads or tails of it. A poli sci major with a journalism minor, who graduated in May 2020 and has about three months of experience in pro journalism, probably isn’t qualified to break down a major study with the level of certainty Davidson put into her article.
Giving most starting journalists a scientific study to write a quick-hit news story is like giving a bag of meth and an automatic weapon to a toddler: It’s not a great idea and even if they only figure out a little bit about what they have in their hands, it’s probably going to end poorly. And that’s if the writer is really trying to understand things, as opposed to just finding a great way to create a hyped headline.
Here are things you need to understand about research studies if you want to write on them:
- They are based on small samples, which might or might not be capable of extrapolation to a larger population. Authors are very careful in their studies to explain their sampling method and its broader implications for the public. Journalists who aren’t paying attention to that have a problem.
- They often involve complex statistical measures that are interrelated, so you can’t just grab one number out of them and make it mean something.
- They usually have a very narrow purpose. It might be to find out if using a particular additive in a diet soft drink will cause increased risk of blood clots among women age 65+ who have a preexisting kidney condition. It might be to measure the short-term effects of viewing violent television programs on preschool boys when provided with replication opportunities subsequent to viewing. Whatever the study, it’s never going to be something that can be immediately generalized to all people in all places doing all manner of different things.
- They almost always come with caveats. In journal-speak, we call these “limitations,” and they show up in the conclusion or discussion part of the paper, where we tell other scholars, “Look, this isn’t perfect for the following reasons.” It’s up to the rest of the community of readers (and usually reviewers, prior to publication) to decide how serious these limitations are. In the study under discussion here, the authors list five key limitations, including the sample size; unmeasured confounding behavior (which basically translates to, “The sick people all might have been doing something else we didn’t ask about, and that got them sick.”); the sample selection (only 11 participating facilities were involved, and they “might not be representative of the United States population”); the patients being aware of their illness status; and the possibility that the PCR test used to classify them as having COVID-19 was inaccurate. In short, this is about 813 miles away from a definitive headline.
If you don’t understand all of this before you write your story, you are really going to have a heck of a time being right with what you tell people.
So, if you don’t understand research at all, don’t report on it. If you have to report on it anyway or you’ll be fired, read the summary carefully and see if you can figure things out. If you can’t, talk to smart people who can help you figure it out. If that still isn’t helping, get someone else who works with you to help you out and share a byline.
You’ll do more harm than good if you put this stuff out when you don’t know what you’re talking about.