COVID-19 research: are we moving too fast?


Peter Howley is Professor of Behavioural Economics at Leeds University Business School. His research focuses on the economics of happiness, behavioural science, environmental and natural resource economics, agricultural economics, labour economics, and health and well-being.


This blog post originally appeared on the LSE British Politics and Policy Blog

There has been an enormous volume of important COVID-19 research coming out into the public domain. This includes studies aimed at estimating case fatality rates, the effectiveness of new treatments, risk profiles, and the impact of mitigation strategies. One can understand why — there is an insatiable appetite and need for information about the novel coronavirus, and a promise of not only much publicity for any research findings on the topic but also the hope that such research can make an immediate difference in people's lives by helping to determine the best response to this pandemic. That being said, a degree of caution is needed when it comes to the dissemination of new findings.

Existing problems with publishing

It is not uncommon for scientists to spend months, if not years, carefully developing an idea into a paper, but we are seeing an increasing number of instances where the whole process takes a matter of days. The bias towards publishing research with 'sexy' findings, often facilitated by problems in research design such as small samples (and the associated winner's curse), multiple comparisons, and selective reporting of results, has been the source of much discussion. With a small number of exceptions, this is generally the result of misinformation coupled with cognitive biases to which we are all susceptible, such as confirmation bias (e.g. we tend to see only the evidence we want to see), rather than any malfeasance. There are also signs that such problems are beginning to be taken more seriously by scientists across all disciplines.

The pandemic has intensified the above issues, however, as not only are researchers rushing to write papers, but journals are also rushing to publish them through an expedited peer review process. Of course, it is important to get good science on an important topic out into the public domain as quickly as possible, but this makes an already unpredictable peer review process even noisier than usual. While good science has been key to shaping our response to the pandemic, research undertaken and published with great haste has the potential to cause harm.

Potential for harm

As an example, a study published in The Lancet, and subsequently retracted, purported to show that hydroxychloroquine, far from being a successful treatment for COVID-19 as initially hoped, was actually associated with increased risk. This led the World Health Organization to suspend relevant clinical trials. The lack of transparency by the firm that supplied the data used in this study raised many questions about the credibility of the conclusions – questions that can be difficult, if not impossible, for editors to pick up prior to publication. Still, it seems reasonable to suggest that, given the many evident methodological concerns in the paper, it would ordinarily have been rejected by an established journal under a 'normal' peer review process. That it was nevertheless published has been used by some as a means to undermine trust in science.

A further example was a study released to the media as a preprint (which opens up another set of problematic issues). Through Facebook ads, the authors recruited a sample of 3,330 residents of Santa Clara County, California, and tested them for COVID-19 antibodies. It was an ambitious and timely study, and its analysis suggested that the background rate of infection was much higher than previously thought. This, in turn, indicated that the fatality rate might be substantially lower than previous best estimates. These findings were seized on by conservative activists in the US as evidence against lockdowns and other mitigation efforts. This damage was done despite a good deal of criticism from other scientists pointing out that the findings were most likely the result of various sources of statistical error. A further methodological hurdle is selection bias. While many people would likely want an antibody test, which group is likely to go to greater effort (e.g. travelling to a testing site) to obtain one? Likely those who had good reason to suspect that they, or someone close to them, had the virus. This means that instead of testing a 'representative' sample of the population in Santa Clara for antibodies, you end up testing a sample of the population who, all things considered, are more likely to have been infected in the first place (a simple simulation of this effect is sketched below). It is worth noting that since its release on 11 April, the paper has already been cited 135 times, which gives a sense of the scale and rapid nature of research in this area.
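To make the selection-bias point concrete, here is a minimal simulation in Python. Every number in it (the true prevalence, the population size, and the assumption that infected people are five times as likely to volunteer) is invented for illustration; none comes from the Santa Clara study itself.

```python
# A minimal sketch (illustrative assumptions only) of how self-selection
# can inflate a seroprevalence estimate when volunteers who suspect
# infection are more likely to turn up for testing.
import numpy as np

rng = np.random.default_rng(0)

population = 1_000_000
true_prevalence = 0.01                      # assumed: 1% truly infected
infected = rng.random(population) < true_prevalence

# Assumed behaviour: infected people (who often had symptoms or a known
# exposure) are five times as likely to respond to the ad and travel
# to a test site as uninfected people.
volunteer_prob = np.where(infected, 0.10, 0.02)
volunteers = rng.random(population) < volunteer_prob

# Draw a study-sized sample of 3,330 from the pool of volunteers.
sample = rng.choice(np.flatnonzero(volunteers), size=3330, replace=False)
estimated = infected[sample].mean()

print(f"true prevalence:      {true_prevalence:.2%}")
print(f"estimated prevalence: {estimated:.2%}")   # roughly five times too high
```

Under these made-up assumptions the raw estimate comes out several times higher than the true prevalence, even with thousands of tests: the error comes from who shows up, not from how many are tested.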

My aim is not to be overly critical of any particular paper: methodological and sampling issues are common, there is always uncertainty in observational data, and there is no such thing as a perfect study. Rather, my aim is to highlight that while researchers are understandably keen to make a positive contribution, there can be a cost to rushing out new research findings without taking the necessary time to scrutinise the work.

Overreach

Another, less serious example is an article discussed in The Telegraph which suggested that baldness could predict the severity of the disease. The evidence for this assertion is a study of 122 patients in hospitals in Madrid, which found that male patients had a somewhat higher background rate of baldness than would be expected among men of a similar age in the population at large.

While I cannot say that baldness will not turn out to be a predictor of disease risk, what I can say is that a small-scale observational study of 122 patients provides no evidence one way or the other. To see why, one must consider just how variable observational studies are. Any group of 122 patients will likely differ from the population at large on a myriad of factors unrelated to the disease. They could, for example, sleep more or less, eat more processed meat, drink more red wine, or perhaps drink less white wine. If I did uncover a statistically significant pattern, then (with the exception of the first example) it would not be too difficult to come up with a 'causal' explanation for its relevance to coronavirus (e.g. the medicinal properties of red wine), and therein lies the crux of the problem. The short simulation below shows how readily such spurious patterns appear.
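The following toy simulation (not the Madrid data; the twenty 'lifestyle factors' are pure noise I generate myself) illustrates the multiple-comparisons point: screen a group of 122 patients against the population on enough unrelated variables and some will look 'significant' by chance alone.

```python
# A toy sketch of multiple comparisons: 20 factors with no true
# relationship to anything, tested in a group of 122 patients.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_patients, n_factors = 122, 20
# Each factor is standard normal in patients and population alike,
# i.e. there is no real difference on any of them.
patients = rng.standard_normal((n_patients, n_factors))

# One-sample t-test of each factor against the known population mean of 0.
t, p = stats.ttest_1samp(patients, popmean=0.0, axis=0)
print(f"'significant' factors at p < 0.05: {(p < 0.05).sum()} of {n_factors}")
# Expect roughly one spurious hit per 20 tests purely by chance.
```

With a 5% significance threshold and twenty tests, about one spurious 'finding' per study is the expected background rate; a plausible-sounding causal story can then be attached to whichever factor happens to pop out.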

A related example is a number of studies (e.g. here and here) which suggest that Vitamin D supplementation might play an important role in managing disease risk. These studies essentially compare mean Vitamin D levels with COVID-19 mortality across countries in the EU. Somewhat counter-intuitively, European countries at higher latitudes have higher Vitamin D levels despite less UVB sunlight exposure, as fortification of foods and supplementation are more common there. At the time the studies were published (things have since changed), lower-latitude countries such as Spain and Italy had higher mortality rates from COVID-19 and also lower mean Vitamin D levels. In contrast to the baldness example, I would not be surprised if clinical trials in future show that Vitamin D is important, at least to some degree, but again what I can say is that studies of this nature provide no real evidence one way or the other. These countries may differ in Vitamin D intake, but they differ in everything else too, from population density, mitigation strategies, and demographics to consumption of ice cream. A toy model of this kind of confounding follows.
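Here is a sketch of that ecological confounding with wholly invented numbers: a single country-level variable (call it 'latitude') drives both mean Vitamin D and mortality in the model, so the two correlate strongly even though, by construction, neither causes the other.

```python
# A sketch of ecological confounding across countries. All coefficients
# and noise levels are made up; the point is structural, not empirical.
import numpy as np

rng = np.random.default_rng(2)

n_countries = 27
latitude = rng.uniform(35, 65, n_countries)          # the confounder

# Assumed structure, mirroring the pattern described above: higher-latitude
# countries fortify food more (higher Vitamin D) and, at the time those
# studies appeared, also had lower COVID-19 mortality.
vitamin_d = 20 + 0.8 * latitude + rng.normal(0, 5, n_countries)
mortality = 120 - 1.5 * latitude + rng.normal(0, 10, n_countries)

r = np.corrcoef(vitamin_d, mortality)[0, 1]
print(f"Vitamin D vs mortality correlation: r = {r:.2f}")  # strongly negative
# The correlation is real, but in this model it is produced entirely by
# latitude; raising Vitamin D would not change mortality here.
```

The cross-country correlation comes out strongly negative, yet in this model intervening on Vitamin D would do nothing, which is exactly why country-level comparisons of this kind cannot, on their own, tell us whether supplementation helps.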

One might reasonably counter that, notwithstanding any methodological concerns, these studies provide supportive, suggestive, or preliminary evidence of the importance of Vitamin D, baldness, or any of the variety of other factors examined in such a crude fashion. They don't, in the same way that a significant correlation between ice cream consumption and mortality rates from COVID-19 does not provide any evidence that reducing ice cream consumption can mitigate mortality risk.

One might also reasonably ask what the harm is here, as Vitamin D is good for you. Well, it is up to a point, but there are health risks associated with consuming too much, and if we can imagine some people ingesting poison in order to stave off the coronavirus, it is not hard to imagine some people taking more Vitamin D supplements than is good for them.

More generally, irrespective of any potential for population harm, when research of this nature is rewarded with publication and media attention, it becomes more likely that other researchers will assume such methods are an acceptable way to answer important research questions and follow suit.

To conclude, scientists of all disciplinary backgrounds have really come to the fore in this pandemic. Indeed, the pandemic highlights the importance of what scientists do. It has been inspiring to see the incredible skill and creativity on display from many knowledgeable researchers in providing timely and reliable answers to very complex questions. Yet it is incumbent on everyone to ensure that in the eagerness to provide timely and important advice, we don’t trade off too much by way of accuracy and reliability.

Of course, in the midst of a pandemic, the benefits of getting new information out into the public domain quickly, particularly when it is aimed at curing or preventing the spread of a lethal virus, may outweigh the costs of providing less reliable information. Having said that, not all research will be critical to managing the risks of the disease, and scientists need to ensure as best they can that they get this balance right.

Contact us

If you would like to get in touch regarding any of these blog entries, or are interested in contributing to the blog, please contact:

Email: research.lubs@leeds.ac.uk
Phone: +44 (0)113 343 8754


The views expressed in this article are those of the author and may not reflect the views of Leeds University Business School or the University of Leeds.