Making the most of clinical data

Following our data conference, Jenny Neuburger unpacks some of the opportunities and pitfalls of using routine clinical data to monitor quality of care.

Blog post

Published: 15/11/2016

NHS hospitals and clinical teams collect vast amounts of data about care delivered in different settings. But using these data effectively to monitor and improve services presents several challenges.

Last week, the Nuffield Trust and the Health Foundation held a one-day conference to discuss how to make the most of data. We heard from analysts, clinicians, policy-makers and regulators who explained how they use routinely collected data to inform their work. Three insights emerged from the discussions.

1. Combining data with other knowledge

Unlike data collected for clinical trials in controlled environments, routine data are collected by people delivering care and often vary widely in accuracy and completeness. For example, data collected on pressure ulcers could be used to monitor the quality of nursing care on a ward. Good-quality care, with patients being routinely helped to get out of bed and the use of pressure-relieving mattresses, should reduce the numbers suffering from ulcers. But very low numbers, or high rates of missing information, might indicate infrequent reviews. Likewise, better reviewing may result in apparent increases. Changes in use of grading systems for judging severity may also lead to spurious trends due to changes in coding.

Despite the likely presence of errors and biases in routine data, there are many examples of how careful analysis can provide clues about the quality of care. The Care Quality Commission (CQC) uses a wide set of indicators. Using an old but nonetheless useful analogy, they treat these as 'tin-openers' to prompt further investigation, rather than 'dials' to formulate judgements. A recent report by the Nuffield Trust described effective ways of using routine data to identify local examples of good care, focusing on unusual trends and following up with qualitative investigation to identify changes in local care.

Local knowledge can also enable the correct interpretation and appropriate response to audit data. For example, during the event we talked about a case where hospital data were showing a rise in the number of in-hospital fractures, followed by a reduction. The data make more sense when combined with other information; for instance, the fact that there had been a recent move to a new hospital site with a high proportion of private rooms, which was immediately followed by an improvement initiative to reduce inpatient falls.

2. Involving clinicians and other users of data

It is clinicians and other decision makers who need to use data to understand, improve, choose and regulate care. The value of clinical involvement in the design of data collections, visual displays and methods was emphasised by several speakers at the event. For example, national clinical audits and other professionally led improvement projects have been shown to provide trusted 'bottom-up' sources of data. The credibility of these data means that they are more likely to be used to underpin local improvement projects than mandated 'top-down' data collections.

Clinical involvement and leadership is fundamental to a national project to monitor and improve survival after children's heart surgery. Local clinical teams were involved in the design of data displays and descriptions of results. As a result, charts have been altered to include additional information on surgical and catheter reintervention as well as survival. This helps teams to interpret and act on information showing whether survival among children treated in their unit is better or worse than predicted.

Clinical involvement also influences the methods used to analyse health outcomes like survival. To monitor quality of care using these data, outcomes are compared to predicted outcomes from a statistical model. Predictions take into account individual characteristics that influence outcomes such as age, and clinical knowledge should inform this choice of characteristics. As several participants described, there may be a trade-off here between an 'analyst’s tool' that makes more accurate predictions and a credible tool that makes clinical sense.
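As a minimal sketch of the comparison described above, the snippet below contrasts a unit's observed outcomes with the outcomes predicted by a risk-adjustment model. All names and numbers are illustrative assumptions, not taken from any of the audits mentioned; in practice the predicted probabilities would come from a statistical model fitted on national data, with clinically informed choices of patient characteristics.

```python
def observed_vs_expected(outcomes, predicted_probs):
    """Return observed count, expected count and their ratio for one unit.

    outcomes        -- list of 0/1 outcomes (e.g. 1 = death) for each patient
    predicted_probs -- model-predicted probability of the outcome for each
                       patient, reflecting case mix (age, severity, etc.)
    """
    observed = sum(outcomes)
    expected = sum(predicted_probs)
    return observed, expected, observed / expected

# Illustrative data: five patients with varying predicted risk.
outcomes = [0, 1, 0, 0, 1]
predicted = [0.10, 0.40, 0.05, 0.20, 0.25]

obs, exp, ratio = observed_vs_expected(outcomes, predicted)
# A ratio above 1 suggests worse outcomes than the case mix predicts;
# below 1 suggests better. Natural variation still needs to be ruled out.
print(obs, exp, round(ratio, 2))
```

The ratio is only a starting point for discussion, in the 'tin-opener' spirit described earlier, rather than a verdict on a unit's care.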

We also heard from national clinical audit programmes that are working to make data more effective by developing outputs tailored to the needs and workflows of different users. For example, the Stroke Audit provides ready-made slides for download by clinical teams to review their local data in team meetings. The National Hip Fracture Audit produces clinical notes along with the CQC dashboards that go in inspection packs. Both the hip fracture and stroke audits have consulted patients and carers on the design of easy-access information and advice on questions that people should ask about their care.

3. Using statistical methods


Day-to-day factors influence the care that is delivered at a given point in time. For example, whether or not someone receives timely surgery after arrival at A&E with a hip fracture depends on several factors, including the day and time of day, how medically unwell they are upon arrival, how many elective procedures are already planned that day, and so on. The proportion receiving timely surgery each week will fluctuate naturally, even if hospitals are delivering a consistent standard of care. Conversely, when care has improved or deteriorated, this change may be masked by these fluctuations.

There were different views at the event about how useful statistical methods are for helping to avoid misinterpretation of data. Limits can be drawn on charts to show the extent of natural variation, helping to identify changes that fall outside it. On the other hand, statistical methods can limit the accessibility of charts for users of the data. Furthermore, when methods are used to determine whether a change is 'significant', they may be misinterpreted as providing a definitive answer. And inappropriate use of methods may lead to false conclusions.
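To make the idea of limits concrete, here is a minimal sketch of control limits for weekly proportions, in the style of a simple p-chart. The data and the choice of 3-sigma limits are illustrative assumptions, not drawn from any audit discussed at the event.

```python
import math

def p_chart_limits(counts, totals, z=3.0):
    """Control limits for weekly proportions (a simple p-chart sketch).

    counts -- e.g. number of patients receiving timely surgery each week
    totals -- number of eligible patients each week
    Returns the centre line and per-week (lower, upper) limits.
    """
    p_bar = sum(counts) / sum(totals)            # overall proportion
    limits = []
    for n in totals:
        se = math.sqrt(p_bar * (1 - p_bar) / n)  # binomial standard error
        limits.append((max(0.0, p_bar - z * se), min(1.0, p_bar + z * se)))
    return p_bar, limits

# Illustrative weekly data for one hospital.
counts = [38, 41, 35, 44, 29]
totals = [50, 52, 48, 55, 50]

p_bar, limits = p_chart_limits(counts, totals)
for c, n, (lo, hi) in zip(counts, totals, limits):
    p = c / n
    flag = " <-- outside limits" if not (lo <= p <= hi) else ""
    print(f"week proportion {p:.2f}, limits ({lo:.2f}, {hi:.2f}){flag}")
```

Points inside the limits are consistent with natural week-to-week fluctuation; only points outside them would prompt further investigation. This is exactly the 'tin-opener' use of data: a flagged week is a cue to ask questions, not a judgement in itself.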

It is widely acknowledged that effective use of statistical methods requires analytical capacity within the NHS and that this could be resource intensive. The NHS England Improvement Analytics Unit is currently trying to increase skills in local evaluation. There is a challenge in increasing clinical involvement when staff are under increasing pressure to reduce their time on non-clinical activities. However, many participants described how they used web-based technology to help clinicians and other decision makers to access and use data more easily and quickly. People were excited about the opportunities for using data better to improve care.

Presentations from all speakers at the data conference can be found on the event webpage.

Suggested citation

Neuburger J (2016) 'Making the most of clinical data' Nuffield Trust comment, 15 November 2016. https://www.nuffieldtrust.org.uk/news-item/making-the-most-of-clinical-data
