Data should drive action

Why is there geographical variation in the rates of hospital admission for ambulatory care sensitive conditions?

QualityWatch

Blog post

Published: 16/10/2013

Discussion of the crisis in Accident and Emergency has focused on inappropriate attendance, staffing and the availability of services. However, we now have new insights from QualityWatch – the collaboration between the Nuffield Trust and the Health Foundation dedicated to providing independent indicators of health care quality.

The QualityWatch team has produced a 'Focus on' report on trends in a different area – conditions that should be treatable in primary care and should not result in a hospital inpatient admission.

Ambulatory care sensitive (ACS) conditions, as they are somewhat opaquely named, are increasingly used worldwide as markers of care quality and are now in the NHS Outcomes Framework. We wanted to know what had happened to them in England over the last dozen years.

The QualityWatch team examined over 165 million consultant episodes from 2001 to 2013, looking for ACS conditions, and the results were stark: ACS conditions accounted for one in five emergency admissions. What's more, the number of ACS admissions increased by 48% over the 12 years of the study – more than the growth in other emergency admissions (34%).

Why is this? The growth has been particularly pronounced in conditions that affect the elderly and the young, with half of all ACS admissions accounted for by five conditions: urinary tract infection (UTI)/pyelonephritis; pneumonia; chronic obstructive pulmonary disease (COPD); convulsions/epilepsy; and ear, nose and throat (ENT) infections in the young.

But the story is not all bad. Admissions for some conditions, such as angina and bleeding ulcers, have fallen over time, and while most local areas showed increased rates of ACS admissions, a small number achieved significant reductions.

There is a two-fold geographical variation in admission rates, even when the data are standardised for age, sex and deprivation.

The indirectly standardised admission ratio – observed ACS admissions divided by the number expected given national rates – varies across local authorities from 0.65 to around 1.34. A ratio of one is, by definition, the national average, and values greater than one indicate more ACS admissions than expected.
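
For readers who want to see the mechanics, the sketch below illustrates the general idea of an observed-to-expected ratio in Python. It is not the report's method or data – the age bands, rates and counts are invented purely for illustration.

```python
# A minimal sketch of indirect standardisation, not the QualityWatch code.
# All figures here are invented for illustration.

# Hypothetical national ACS admission rates per person, by age band
national_rates = {"0-17": 0.010, "18-64": 0.008, "65+": 0.045}

# Hypothetical local authority population and observed ACS admissions
local_population = {"0-17": 40_000, "18-64": 120_000, "65+": 30_000}
observed_admissions = 3_200

# Expected admissions: apply the national rates to the local age structure
expected_admissions = sum(
    national_rates[band] * local_population[band] for band in national_rates
)

ratio = observed_admissions / expected_admissions
print(f"Standardised admission ratio: {ratio:.2f}")  # ~1.18, i.e. more than expected
```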

The authorities with the worst ratios cluster most noticeably in the North West. When the ratios are compared against a normal distribution, 14 local authorities lie more than two standard deviations above the mean – for any single authority, a ratio that far from the mean would be expected by chance only around one time in 20.
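
To make the two-standard-deviation point concrete, here is a rough illustration of that kind of check, again with invented numbers: it simply flags any ratio lying more than two standard deviations above the mean of the set.

```python
# A rough sketch of the outlier check described above; the ratios are invented.
from statistics import mean, stdev

ratios = [0.95, 1.00, 1.02, 0.98, 1.05, 0.97, 1.01, 0.99, 1.03, 1.60]

mu, sigma = mean(ratios), stdev(ratios)
threshold = mu + 2 * sigma
outliers = [r for r in ratios if r > threshold]
print(f"Threshold: {threshold:.2f}, outliers: {outliers}")  # flags only the 1.60 ratio
```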

So what causes this variation? Chance, underlying health (there is some weak correlation), some oddity of the measure? Or is it quality of care?

We need further research to clarify this; for now, however, the assumption must be that it is quality of care – it seems morally wrong not to at least question local practice when it is shown to be an outlier.

This raises an issue that will recur with QualityWatch, and with any other series of indicators or metrics: how should they be interpreted?

Like any measure, they can only be judged fit for purpose by being used and by attending to feedback. Measures need to be honed as much as any other tool, and you can help us with this. We will be producing more 'Focus on' reports in the future, and we hope you will give us your feedback on them.

You can start now: have a look at the report, in particular page 14. Why do you think there is variation? Are you in an authority with a high ratio? Let us know why you think it is so high. Together, we can improve the system.
