Five years ago, the Nuffield Trust and the Health Foundation set up QualityWatch to monitor how the quality of health and social care was changing over time. Since then, QualityWatch has been tracking and reporting on over 300 care quality indicators drawn from routinely published statistics.
This has provided an independent view of the changes in care quality during a period of significant financial challenge for the NHS and local authorities. NHS funding growth slowed to an average of just 1.26% per year between 2009/10 and 2016/17 compared to a long-run average of around 4% over the lifetime of the NHS up to 2010. In light of this, our research programme asked at its outset whether the inevitable focus by providers of care on making financial ends meet would come at the expense of quality.
As the first phase of the QualityWatch programme draws to a close, we take the opportunity to look back at how some key quality performance measures have changed over an unprecedented period of financial squeeze. In particular, we look not just at trends in quality and performance but identify significant changes or turning points in these trends.
1. Data: measures of quality and performance
There are many possible measures of quality and performance that would be interesting and informative to track over time, but data limitations (the frequency of reporting, time periods that are too short and changes in definition over time, for example) reduce the choice of metric. For this analysis we selected ten quality measures for which published data extended back to 2009 (although for two of the health care-associated infection measures this was 2011) to include a period before the 2010 Spending Review. We also sought to define a period for which there were no major definitional changes and which captured key aspects of patients’ care – from waiting for admission and the use of accident and emergency services, to patients’ experience of care, including delays in being discharged from hospital.
Most of the measures pertain to aspects of access, in particular waiting times, and all relate mainly to secondary care (this is largely a reflection of data availability). However, changes occurring across these measures reflect not just the immediate service they cover but often the wider health and social care system. A number of the measures are also key national targets and are prominent in the NHS Outcomes Framework and the NHS Constitution.
The specific measures we analysed were:
- Percentage of patients waiting two months or less from the time of having an urgent GP referral for suspected cancer to first definitive treatment for all cancers
- Percentage of all people attending A&E who wait four hours or less from arrival to admission, transfer or discharge, by month
- Total waiting list for consultant-led elective care
- Proportion of people waiting less than 18 weeks from GP referral for consultant-led elective care
- Proportion of patients waiting six weeks or more for one of 15 key diagnostic tests after referral
- Rates of health care-associated infection:
  - Methicillin-resistant Staphylococcus aureus (MRSA)
  - C. difficile
  - Methicillin-susceptible Staphylococcus aureus (MSSA)
  - E. coli
- Total number of delayed transfer of care days
The aim of the analysis was to identify statistically significant changes in trends in these performance measures among the general ‘noise’ of fluctuating monthly data. It is important to bear in mind that while a change of direction in trends may be statistically significant, it is not necessarily important in a real world sense. The statistical method used to identify turning points in the data is driven by the data alone and not by possible explanatory factors; the analysis does not provide detailed answers as to why changes in trends occurred.
Box 1 summarises the statistical method used to identify significant changes in the trend data.
There are a number of ways to detect turning points in trend data. Sometimes they can be obvious and easily spotted by ‘eyeballing’ a chart. But often data are ‘noisy’ – fluctuating over time, sometimes in a seasonal way. Trends can also be deceiving – appearing important when they are not, or going unnoticed when in fact they are significant.
Such ‘noisy’ data can be smoothed statistically to bring out more important changes – to identify the signal from the noise. Other methods include fitting regression lines to parts of the data and testing whether the slopes of these lines are significantly different. Still other approaches include various rules of thumb based on some knowledge of the data such that, for example, a certain size of change over a certain number of periods would be considered important, i.e. a signal rather than noise.
We used a proprietary statistical package (Change Point Analysis) which uses cumulative sum control charts (CUSUM) to detect when a process, or a trend, has deviated significantly from a given benchmark. This software is not based on a sophisticated model of the real world – rather, it relies purely on the time trend data and a statistical view of what’s important with respect to changes in trends. The output from this analysis identifies the date of turning points in the trends as well as confidence intervals expressed as periods of time around these points.
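The proprietary package aside, the underlying idea can be illustrated with a short sketch. The code below is a minimal, hypothetical implementation of CUSUM-based change detection with bootstrap significance testing – not the actual software used in our analysis – applied to a synthetic monthly ‘performance’ series.

```python
import random

def cusum(series):
    """Cumulative sums of each point's deviation from the series mean.
    A pronounced peak or trough suggests a shift in the underlying level."""
    mean = sum(series) / len(series)
    sums, running = [0.0], 0.0
    for value in series:
        running += value - mean
        sums.append(running)
    return sums

def detect_change(series, n_bootstrap=1000, confidence=0.95, seed=1):
    """Locate the most likely change point and test its significance.

    The candidate change point is the index where the CUSUM excursion is
    largest. Significance is judged by bootstrapping: the series is
    reshuffled many times (destroying any real shift), and if the observed
    CUSUM range beats the reshuffled range in at least `confidence` of the
    trials, the change is unlikely to be noise.
    """
    rng = random.Random(seed)
    sums = cusum(series)
    observed_range = max(sums) - min(sums)
    change_at = max(range(len(sums)), key=lambda i: abs(sums[i]))
    beats = 0
    for _ in range(n_bootstrap):
        shuffled = list(series)
        rng.shuffle(shuffled)
        resampled = cusum(shuffled)
        if observed_range > max(resampled) - min(resampled):
            beats += 1
    return change_at, beats / n_bootstrap >= confidence

# Synthetic monthly 'performance' series: stable around 95%, then a decline.
series = [95.3, 94.8, 95.1, 95.4, 94.7, 95.2,
          92.0, 91.5, 91.0, 90.2, 89.8, 89.0]
point, significant = detect_change(series)
```

Here the detected index falls at the boundary where the level shifts, and the shuffled comparison confirms the shift is a signal rather than noise. A production tool would add more – seasonal adjustment, multiple change points, confidence intervals on the date – but the logic is the same.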
Having identified these turning points in the NHS performance data, the data were split at these points, with linear regressions being fitted to more easily identify the broad change in trends before and after the turning points. We do not interpret these breaks as a specific event or policy change causing the turn in the trend, but rather as a guide to then consider the range of factors that may account for changes in trends. We discuss possible explanations in the conclusion.
2. Findings: What’s happened to headline NHS performance in England since 2009?
Drawing on published monthly data for NHS trends in England, ten quality-related performance measures were analysed to identify any significant deviations in trends over the last seven to eight years. Below we describe the findings for each measure.
Cancer referral waiting times
The time taken to start definitive treatment, if needed, from the date of an urgent referral by a GP for suspected cancer has been one of several headline target measures for the NHS since 2000. Following changes in 2008 to the way cancer waiting times were recorded, the target was expressed as a requirement that at least 85% of such referrals should wait no longer than 62 days.
As Figure 1 shows, for all but a few months this target was met in aggregate across the country between the end of 2009 and the beginning of 2014. However, at this point the trend in the proportion of referred patients waiting longer than 62 days started to rise. The national target has been met only four times since January 2014, and performance has continued to worsen.
A note on the charts within this blog
The charts used throughout this blog show individual data series divided into two or more time periods, marked by changes of colour. Turning points are shown as vertical blue regions. Trend lines are shown in grey. Where appropriate, performance targets are shown as horizontal red lines.
Accident and emergency waiting times
The percentage of people attending A&E who are admitted, transferred or discharged within four hours is also a headline and targeted performance measure for the NHS. Since mid-2010, the national target has been that no more than 5% of patients should wait more than four hours in A&E from arrival to admission, transfer or discharge. For this reason the period from mid-2010 has been selected for analysis in this case.
As can be seen from Figure 2, while the target had, by and large, been met at a national level between 2010 and mid-2014, the trend in the proportion of patients waiting longer than four hours rose over this period, and around the beginning of 2015 this trend started to rise at a faster rate, with the target missed every month up to July 2017.
The waiting list and elective waiting times
The total waiting list of ‘incomplete referral-to-treatment pathways’ records all patients still waiting for treatment. This includes people still waiting for an outpatient appointment and those who, while they may have had an initial clinic visit, are yet to be admitted or to start treatment. The size of the waiting list is of less importance to patients than the time they spend waiting on a list, no matter how many others are also waiting. While the relationship between the size of the list and the time patients spend waiting is not straightforward, we have looked at trends in list size alongside those in waiting times.
As Figure 3 shows, seasonal fluctuations aside, from April 2009 to the summer of 2013 the total list remained fairly stable, at around 2.5 million people. However, from around this point the number of people waiting began to grow, and by the summer of 2017 it had reached around 3.8 million – an increase of over 50%.
Elective waiting times
Since 2012/13, the target for elective waiting times has been that no more than 8% of patients on the waiting list should wait longer than 18 weeks for admission to hospital at the end of each month. This originally sat alongside targets that no more than 10% of the ‘admitted’ patients and 5% of the ‘non-admitted’ patients treated each month should have waited longer than 18 weeks before starting treatment.
These latter targets were dropped in 2015, based on arguments that the admitted and non-admitted targets created perverse incentives not to treat patients who had already waited longer than 18 weeks, since this would pull down reported performance. This just left the 8% ‘still waiting’ target in place.
Looking at the monthly trend in the current 18-week referral-to-treatment measure between April 2009 and July 2017, two turning points can be identified. Performance was on a slightly improving trend from April 2009 to around the beginning of 2012, before the introduction of the 8% target. From this first turning point, the new target was consistently met up to the summer of 2014, when a second turning point marks the start of a deterioration in performance, even though the ‘still waiting’ target continued to be met for a time. This decline continued so that, by July 2017, around 10% of patients on the waiting list had been waiting over 18 weeks (Figure 4).
Diagnostic test waiting times
The target that no more than 1% of patients should wait longer than six weeks for any of 15 key diagnostic tests was first introduced as a ‘milestone’ in March 2008 as part of the goal of achieving the overarching 18-week referral-to-treatment targets by December 2008. Since 2013/14 this target has formed part of the NHS Constitution, as part of the legal right to treatment within 18 weeks.
Analysis of turning points in the trends of the proportion of patients waiting longer than six weeks for a diagnostic test shows a somewhat complicated picture, with three separate turning points in trends (Figure 5).
From April 2009 to the beginning of 2011, the trend was upward – though erratic. From the beginning of 2011 to the end of that year, performance was worse, but also very erratic from month to month. Across the two years 2012 to 2013 – a period at the beginning of the more formal target for waiting times for diagnostic tests – performance improved, with the target generally being met. However, by the end of 2013 and into 2014, performance deteriorated sharply. Then, from around the beginning of 2014 to the most recent data, despite still missing the 1% target, performance (fluctuations aside) seems to be improving slightly.
Health care-associated infection
Reductions in health care-associated infections such as MRSA and C. difficile have been a notable success for the NHS. In 1993, for example, there were 51 death certificates that mentioned MRSA across England and Wales. By 2006 this had increased to 1,652. But since then, numbers have fallen, and by 2012 the number was 292. A greater political, managerial and clinical focus on what had become an increasing problem yielded success.
Part of the approach to address the problem was, as with waiting times, the setting of targets. In 2004, the Government introduced a national target to halve the number of MRSA infections by March 2008 from a 2003/4 baseline. This target was exceeded, with a reduction from 7,700 cases in 2003/4 to 2,935 in 2008/9. In 2007, a target was set to reduce C. difficile infections by 30% by March 2011 from the 2007/8 baseline.
The latest goal is to reduce Gram-negative bloodstream infections by half by 2020. E. coli infections make up 65% of these infections and are estimated to cost the NHS £2.8 billion by 2018.
In addition to examining trends in MRSA and C. difficile, we also analyse trends in MSSA and E. coli to identify any changes in trends in NHS trust reports of these infections. The time period for the latter two infections covers 2011 onwards, when data were first collected for trusts.
Methicillin-resistant Staphylococcus aureus (MRSA)
Figure 6 shows monthly counts of MRSA infections as apportioned or assigned to NHS trusts. As with C. difficile, assignment occurs in a post-infection case review, with the aim of identifying which organisation is best placed to learn any lessons from the case and to take appropriate action. A number of turning points can be identified in the trend from 2009 to 2017. However, these reflect a broader trend of (relatively) large reductions in cases of MRSA from 2009 to the late part of 2011 and then a flattening off of this downward trend. A turning point in 2014 hints at a slight change in trend to increasing numbers of cases.
NB: From the beginning of 2014 a different methodology was applied to assigning MRSA cases to either the responsibility of the trust, the local CCG or ‘third parties’.
Clostridium difficile (C. difficile)
In a similar way to the scenario for MRSA, Figure 7 shows relatively large reductions in C. difficile infections in the early part of the period between 2009 and 2017, with these reductions flattening off in more recent years at around 400 cases per month (compared to around 1,000 a month in 2009).
Methicillin-susceptible Staphylococcus aureus (MSSA)
MSSA is a class of bacterial infection which differs from MRSA in the degree of antibiotic resistance. Monthly data on cases of MSSA assigned to NHS trusts have been collected and published since the beginning of 2011. As Figure 8 indicates, two changes in the trend were identified: in mid-2012, a slight downward trend turned into a slightly increasing trend through to the end of 2015, at which point the trend flattened off.
Escherichia coli (E. coli)
Trends in the number of cases of E. coli – summer peaks aside – remained flat from the middle of 2011 to the summer of 2013. Cases started to increase from that point to mid-2015, at which time the trend changed again with a further increase in the rate of cases. By the summer of 2017 the number of cases being reported had increased by around 30% since 2011.
Delayed transfers of care
The 2017/18 NHS Mandate outlining the Government’s objectives and goals for the NHS specified a target of reducing the number of hospital bed days occupied by patients affected by delayed transfers of care from 5.6% to 3.5% by September 2017. This equates to a reduction from around 6,428 delays per day to 4,080 delays per day – around 70,000 fewer delays per month.
As Figure 10 shows, while delayed transfers of care (as measured by the total number of delayed days – similar in pattern to delays measured in terms of beds) remained flat between 2009 and mid-2014 at around 115,000 per month, from the summer of 2014 the trend changed and delays increased to a peak of around 200,000 per month towards the end of 2016.
3. Worsening performance – but from when, and why?
The broad picture of NHS performance over the last seven years or so does not look encouraging. Of the trends across ten measures of service quality, seven have deteriorated and are currently on trend for even poorer performance; two (MRSA and C. difficile) have plateaued, following improvement in previous years; and one (diagnostic waiting times), while now improving slightly, remains above target and at levels similar to those of 2011.
While there is no specific point at which trends across all these measures turned, the 12 months from mid-2013 to mid-2014 appears to be the period that captures many of the changes (in particular, deteriorations) in trends.
A number of factors will have driven these changes – some common across the quality measures examined here, and some specific to individual measures. But there does appear to be an emerging answer to the question QualityWatch posed at its outset concerning the possible trade-off between performance and funding growth. Performance across a majority of the quality metrics analysed began to deteriorate three or four years into the period of much slower funding growth for the NHS.
This perhaps suggests that while productivity improvements by the NHS aimed at closing the funding needs gap and maintaining standards were for a time successful, the continued squeeze in funding beyond 2014/15 started to overwhelm the ability of the NHS to meet its headline quality standards.
With current Government spending plans for the next five years to 2022/23 set to continue the relatively low growth of the last seven years, it remains to be seen whether trends in many of the key quality measures examined here will continue on the deteriorating paths they have taken over the last few years, or whether the NHS can find new ways to reverse these trends.
Following the 2017 Autumn Budget settlement for the NHS over the next few years, the view of the NHS England board is that “even with some increased volume, and even assuming this year's [2017/18] unprecedented elective demand management success continues, our current forecast is that – without offsetting reductions in other areas of care – NHS constitution waiting times standards, in the round, will not be fully funded and met next year.”
In other words, the opportunity cost of stabilising current overspending by trusts and prioritising emergency and primary care services may well be further deterioration in elective waiting lists and waiting times.
1. We have excluded estimates for a number of trusts that did not make returns. These would add around 180,000 to recent totals. Their exclusion makes no material difference to the analysis of turning points.