Not made to be measured: why evaluating integrated care initiatives is so difficult

Knowing which of the growing number of initiatives to join up care in the health service actually work for patients is crucial. Based on her experience with an earlier wave of integration initiatives – the “Pioneers” – Eilís Keeble looks at what needs to be done to make this possible.

Blog post

Published: 13/06/2019

Initiatives to join up care and keep people well outside hospital have proliferated at an astonishing pace across the NHS in recent years, culminating in the planned rollout of Integrated Care Providers (ICPs) and Systems (ICSs) across England under NHS England's new Long Term Plan.

Knowing which of these, if any, actually work for patients will be crucial to success, and to the health service’s ability to show it is using staff and money wisely. But our experience with an earlier wave of integration initiatives – the “Pioneers” – shows a lot needs to be done to make this possible.

The Integrated Care and Support Pioneer Programme covered 25 sites across England, selected in two waves in 2013 and 2015 for having the most “ambitious and visionary” plans for health and social care integration. The Nuffield Trust is part of a five-year evaluation of the programme and has been monitoring health and social care indicators for any changes in the extent of care coordination and its consequences in the Pioneer areas compared to others. You can see our dashboard tracking these indicators across the Pioneer sites on the website of our colleagues at PIRU.

We started out with a set of over 40 measures that had previously been identified as potentially useful, but ended up with a relatively short list that we could actually use in the dashboard. The reasons for this lie in five key pitfalls that we ran into when trying to measure the success of the sites. Many of these will remain a problem for evaluating the ICSs and ICPs – but for each pitfall, I identify actions that could make it easier next time.

Integrated care crosses organisational boundaries but the measures don’t

Health and social care integration involves many different organisations. Some Pioneers were based around clinical commissioning groups (CCGs); others around lower tier local authorities; others both. This means that the population covered by an indicator often did not exactly match the population the Pioneer was targeting. Sometimes indicators for entire areas of care were not available at the level of the Pioneers. For example, the adult social care outcomes framework data are only available at upper tier local authority level – areas that can encompass multiple Pioneers.

Sustainability and transformation partnerships (STPs) and ICSs cover much wider areas than the Pioneers, which should go some way towards resolving these issues. But the CCGs and lower tier local authorities of which they are made up still do not always line up with one another – as shown below in a map of CCGs and local authorities in Greater Manchester. This makes it unclear how data available across different administrative boundaries will be able to track the same population. Aligning health and local authority bodies so they cover the same areas would be the simplest way to resolve this from an analyst's point of view, but it would require a big reorganisation. The alternative is to aggregate data up from smaller areas – something that is currently possible for hospital data but not for other services.
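
To make that alternative concrete, here is a minimal sketch in Python (with made-up counts and hypothetical area codes) of how small-area activity can be totalled onto either geography, given a lookup table mapping each small area to both a CCG and a local authority:

```python
import pandas as pd

# Hypothetical small-area (e.g. LSOA-level) activity counts - made-up figures.
activity = pd.DataFrame({
    "lsoa": ["E01000001", "E01000002", "E01000003"],
    "emergency_admissions": [120, 85, 97],
})

# Hypothetical lookup mapping each small area to both geographies.
lookup = pd.DataFrame({
    "lsoa": ["E01000001", "E01000002", "E01000003"],
    "ccg": ["CCG A", "CCG A", "CCG B"],
    "local_authority": ["LA X", "LA Y", "LA Y"],
})

merged = activity.merge(lookup, on="lsoa")

# The same underlying counts can now be totalled on either geography,
# so one dataset can describe both the NHS and the local authority footprint.
print(merged.groupby("ccg")["emergency_admissions"].sum())
print(merged.groupby("local_authority")["emergency_admissions"].sum())
```

The catch, as noted above, is that data at this small-area level currently exists for hospital activity but not for most other services.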

Hospital data is great, but is only part of the picture

Reducing the need for emergency hospital care was a stated aim of many Pioneers, so hospital data was an important area to cover in the dashboard. But it misses what is happening across huge swathes of the services that people regularly come into contact with. Neither general practice nor social care has a national dataset tracking activity in any level of detail, for example.

The Clinical Practice Research Datalink (CPRD) provides a sample of general practice data that is representative at a national level, but not necessarily for the populations covered by the Pioneers. The recent publication of GP appointments data by NHS Digital provides some much needed information on activity in primary care, but it is several years away from being a usable time series. Unifying the various GP data systems and promoting accurate recording of data such as patients' conditions would provide valuable information on an area where the majority of health problems are treated.

The expansion of community services data to include adults as well as children is promising, but the statistics are still experimental and there are issues over coverage. Meanwhile, publicly available social care data is largely dependent on annual surveys produced at the level of social care delivery – but not necessarily the level of Pioneer delivery. Collecting comparable information on social care use at the individual level would give us a much better insight into the effect of new initiatives on these important but often overlooked services and the people who use them.

Methodology changes complicate time trends in areas already lacking data

To know whether things are getting meaningfully better or worse ideally requires at least a couple of years of data before and after the intervention. But frequent changes to methodology and definitions mean that this is often difficult to find. The interrupted graph below on the proportion of adults who are physically active is an example – how could we analyse what worked in this time period?

During the period of interest to the Pioneer evaluation, the methodology for the entire GP patient survey changed significantly twice and the questions on out-of-hours care, an area relevant to integration, also changed once.

Of course, we want statisticians to innovate as new data becomes available and better methods emerge. But there is a delicate balance between that and losing information from a very useful time series. This can be an especially big problem when there is a paucity of other data on the topic – as is the case for general practice.
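
Where the date of a methodology change is known, one common workaround is to model it explicitly rather than discard the series. The sketch below (Python, with made-up figures rather than any real survey data) adds a step term for the change, so the artificial level shift is not mistaken for part of the trend:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Made-up annual series with a known methodology change from 2016 onwards:
# read naively, the re-based years look like a genuine jump in performance.
df = pd.DataFrame({
    "year": list(range(2011, 2019)),
    "rate": [61.0, 61.4, 61.9, 62.1, 62.4, 65.8, 66.1, 66.5],
})
df["post_change"] = (df["year"] >= 2016).astype(int)

# A step term for the methodology change lets the model separate the
# artificial level shift ('post_change') from the underlying trend ('year').
model = smf.ols("rate ~ year + post_change", data=df).fit()
print(model.params)
```

This only works when the change is documented and its timing is known – which is one more reason for statistics producers to record such changes clearly.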

As researchers, we need to take the time to respond to consultations on changes to datasets, and the people responsible for those datasets need to think carefully about the implications of changes for measuring the success of expensive policy initiatives.

Big changes in indicators generally aren’t what they seem

An early and slightly disappointing lesson you learn as a health care researcher is that a lot of indicators just don’t change that much from year to year. Shortly afterwards, you learn that when a big improvement does appear it is generally best to question it.

When hospitals change the way admissions are coded, it can have a dramatic effect on apparent performance. A lot of quality assurance is done on the datasets that record these admissions, but it doesn't always catch every error.

In the left-hand graph below, a large portion of emergency admissions were switched to being coded as transfers in 2008/09 (which the methodology excluded). In the right-hand graph, a third of emergency admissions were mistakenly treated as sensitive records and anonymised in 2016/17. This meant they had no address information and therefore couldn't be linked to a local authority. Both changes looked like big improvements, but further investigation showed them to be mere illusions.
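
A simple first line of defence – sketched below with made-up counts, not the data behind those graphs – is to screen each series for year-on-year changes too large to be plausible, and investigate them before reading anything into the numbers:

```python
import pandas as pd

# Made-up annual emergency admission counts for one area, including the kind
# of sudden drop that a coding or anonymisation change can produce.
admissions = pd.Series(
    [10400, 10650, 10710, 7200, 10580],
    index=[2014, 2015, 2016, 2017, 2018],
)

# Flag any year-on-year change beyond +/-15% as needing investigation before
# it is read as a genuine improvement or deterioration.
changes = admissions.pct_change()
print(changes[changes.abs() > 0.15])  # flags both the 2017 drop and the 2018 rebound
```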

Finding a true counterfactual isn’t always possible

To establish whether the Pioneers were having any impact on the populations they covered, we needed to compare them to areas that weren't classified as Pioneers, or to regional and national figures. However, over time it became increasingly difficult to identify any improvements and attribute them to the Pioneers themselves. Integrated care initiatives became increasingly popular over the period the Pioneers were active: soon we had the national new care models programme (the Vanguards), the Better Care Fund and many local initiatives. As a result, many areas classified as non-Pioneer probably had something similar in progress – while many of the Pioneers became Vanguards or covered Vanguard areas themselves.

Without solid controls, cause and effect may be hard to establish and important impacts may not be measurable. These difficulties mean we need to manage expectations about what can be shown.
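
Where genuinely untreated comparison areas do exist, a standard approach is a difference-in-differences comparison. The sketch below (Python, with made-up area-level rates) shows the idea – and also why contaminated controls are so damaging: the estimate only isolates the programme effect if the comparison areas really were untouched.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Made-up area-level panel: admission rates before and after the programme,
# for Pioneer areas and (supposedly untreated) comparison areas.
df = pd.DataFrame({
    "pioneer": [1, 1, 1, 1, 0, 0, 0, 0],
    "post":    [0, 0, 1, 1, 0, 0, 1, 1],
    "rate":    [102.0, 98.5, 93.0, 91.2, 101.0, 99.8, 100.5, 99.0],
})

# The interaction term is the difference-in-differences estimate of the
# programme effect; it is only meaningful if the comparison areas were
# genuinely free of similar initiatives.
model = smf.ols("rate ~ pioneer * post", data=df).fit()
print(model.params["pioneer:post"])
```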

A partial picture

As enthusiasm for integration continues, we need to focus on ensuring we have the data and systems in place to be able to prove that the investment was worthwhile. Right now, there is a lot of work to do. But for each of the five problems above, I have highlighted possible solutions.

We need to be realistic that not all of these issues can, or will, be addressed by the time the current crop of initiatives is assessed. That means being cautious about any claims of success – or failure.

Working out whether an initiative has succeeded needs to be taken as seriously as making it work in the first place. Otherwise, the scope for error or disillusion is all too clear.

Suggested citation

Keeble E (2019) “Not made to be measured: why evaluating integrated care initiatives is so difficult”, Nuffield Trust comment.
