Risky numbers: the national reporting of Covid-19

During the pandemic, daily government bulletins provide important numbers on how Covid-19 is affecting the British public and how many people have lost their lives. But numbers are sometimes difficult to interpret, argues Billy Palmer, and there may be ways to ensure the reporting leaves people better informed of the overall picture.

Blog post

Published: 14/04/2020

Daily, almost relentlessly, a tweet from the Department of Health and Social Care gives the number of tests carried out for Covid-19 in the UK and, of those found positive, the number who have sadly died. Information is published by the government for many reasons – and justifying policy decisions and motivating socially responsible behaviours are among these. Time will tell whether the data being reported has succeeded in this.

But this information is also about communicating risk. Doing so effectively is important to build trust with, empower and appropriately reassure the public. Given the understandable clamour for informative data on the pressures the pandemic is placing on health care and the effectiveness of the response in different countries, there is a risk that if authorities fail to provide sufficient information this will create a vacuum to be filled by sensationalist, inaccurate or wholly fake news.

Given the shortcomings of the current public-facing data and the effect this may have on people’s understanding, there exist opportunities for further openness, greater public involvement, more information, and better presentation in the reporting of Covid-19.

Testing results

The World Health Organization’s message to all countries to “test, test, test” has made clear the importance to the public of numbers around testing. Yet the published data is of limited value. By 12 April 2020, across the UK there were – according to the official daily tweet – 84,279 confirmed positive tests. Yet analysis by Imperial College London suggests that, by that date, some 3.0 million people (credible interval 2.1–4.3 million) were infected. If this is true, the testing regime had picked up somewhere in the region of one in every 35 infections.[i] Equally fundamentally, even the testing numbers are incomplete: for instance, daily figures published on 7 April were missing data from Manchester, Leeds and Northern Ireland.
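The arithmetic behind that "one in every 35" figure can be checked directly from the numbers quoted above (an illustrative calculation only, subject to the caveats in the notes):

```python
# Figures quoted in the text: 84,279 confirmed positive tests by
# 12 April 2020, against an Imperial College London estimate of
# 3.0 million infections (credible interval 2.1-4.3 million).
confirmed = 84_279
estimated_central = 3_000_000
estimated_low, estimated_high = 2_100_000, 4_300_000

# Implied number of infections per confirmed positive test.
central_ratio = estimated_central / confirmed
low_ratio = estimated_low / confirmed
high_ratio = estimated_high / confirmed

print(f"Central estimate: one confirmed test per ~{central_ratio:.1f} infections")
print(f"Credible range: ~{low_ratio:.1f} to ~{high_ratio:.1f}")
```

The central estimate comes out at roughly 35–36 infections per confirmed test, with the credible interval spanning roughly one in 25 to one in 51.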

Typically, when data are under-recorded, you might look at changes (or rates of change) as a reasonable relative guide to what’s going on. Comparing daily updates, it’s possible to see that the proportion of positive tests has typically increased, for example from 12% on 22 March to 48% on 5 April. But because we don’t know how the testing strategy may have changed, it’s not possible to assume from this increase alone that the prevalence of the virus increased. In fact, the proportion of positive tests has plateaued recently, but it is unclear whether this is partly due to the increased number of tests being used on NHS staff (as opposed to symptomatic patients) or to a possible stabilising in prevalence.

Daily death toll

Much has been said about limitations in the UK’s daily coronavirus death count, such as delays in the recording of deaths and exclusion of those occurring in care homes and outside hospitals more generally (these are instead included in Office for National Statistics data). For instance, the figure reported for in-hospital deaths by 5pm on 3 April, as reported the following day, was 3,939. But a week later this had been restated at a sizeably higher 5,186 in-hospital deaths – and higher again when including deaths outside hospital (see chart).
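The size of that restatement can be checked from the two figures quoted above (a quick illustrative calculation, not official analysis):

```python
# In-hospital deaths by 5pm on 3 April 2020, as quoted in the text.
first_reported = 3_939   # figure published the following day
restated = 5_186         # figure restated a week later

increase = restated - first_reported
percent_increase = 100 * increase / first_reported

print(f"Restated upwards by {increase} deaths ({percent_increase:.0f}%)")
```

An upward revision of around a third within a week illustrates why the daily count is, at best, a provisional guide.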

Attribution is an equally important problem. While we may presume that the vast majority of deaths for people who have tested positive were caused by Covid-19 itself, if prevalence and testing both increase, identifying deaths that are not attributable to coronavirus may become important. And given some worrying recent reports on unintended consequences – including people with serious conditions avoiding treatments and potential mistakes by the health service as it seeks to free up capacity – it is also important to know how many deaths may be a result of the very measures designed to combat Covid-19.

Public understanding and trust

These flaws in the data leave the public with, at best, a very limited real-time understanding of testing levels, prevalence and the numbers dying. And even if these recording issues can be addressed, it is unclear how the public should interpret these numbers. Even the most sophisticated of armchair epidemiologists would struggle to infer much from the mortality counts about their current risk or the success of social interventions, given that deaths on average occur 24 days after infection. And the barrage of interpretations on social media following each official update – while admittedly a crude barometer – suggests public understanding is, at best, mixed.

The full extent of the shortcomings in public understanding around the pandemic is difficult to assess at this stage. But the level of trust the public has in official information is informative for two key reasons: firstly, as a reflection of the effectiveness of government communications, and secondly, as a marker of the contribution this trust may play in determining whether the public responds to social policy interventions. As such, it is concerning that a recent survey by researchers at the University of Cambridge suggests the British public is more likely to trust coronavirus information from their workplace than from government and official sources. The survey also suggests people in the UK are less trusting of government and official information on the pandemic than in Germany, for example.

Opportunities for better communication

National bodies have made considerable efforts to report data on Covid-19 at a time of obvious pressure, and many non-governmental organisations have acted to fill data voids. But official communications could go further in the following ways.

First, given that the absolute numbers of deaths from Covid-19 are, sadly, large, denominators and comparators are important for putting figures in perspective. These could include the typical number of deaths in a day, the chance of different outcomes relative to the number of infections, or cases as a proportion of the population. But testing the approach with the public first will help to inform the best presentation.

Second, displaying forecasts – we know that some exist – may also help the public understand whether the latest data is as expected or showing a new trend. This can be guided by lessons from history. Following a flawed attempt by government to articulate the risk of swine flu a decade or so ago, there is now an established principle that government should continually communicate “most probable scenarios” to the public, while also being open about the worst-case scenario.

Third, providing a single, consistent and accessible source of information on a variety of health service data can give a more accurate picture. This could include available capacity (e.g. ventilators); activity (e.g. hospital admissions – data currently provided on this is poorly explained); intermediate outcomes (e.g. admissions to intensive care units); and outcomes (e.g. deaths and recoveries). A suite of indicators could be supported with evidence on the relationship between the measures, such as the proportion of intensive care patients who die or recover. To be of use, geographical breakdowns of such information should reflect varying population sizes and regional health care capacities.

Finally – and specifically on the possible deficit in public trust – there are again lessons to be learnt from the past. An inquiry into bovine spongiform encephalopathy (BSE) in 1996 in the UK highlighted that trust can only be generated by openness, and that openness requires recognition of uncertainty, where it exists. The lack of specifics around the current testing strategies is a particular area where greater openness would help.

Given the huge impact of Covid-19 on people’s health and the economy, and the risks of misunderstanding and fake news, it is clear the current approach to public communication is falling short. More openness – and a willingness to confront inevitable uncertainties – during this pandemic may at times seem unpalatable, but the public and media deserve more.

Notes

[i] This comparison needs to be treated with some caution given the uncertainty in the Imperial analysis and also that, for instance, the testing figures can include the same person two or more times (if they have been tested repeatedly).

Suggested citation

Palmer W (2020) "Risky numbers: the national reporting of COVID-19", Nuffield Trust comment.
