Community-based interventions: how do we know what works?

Blog post

Published: 26/06/2013

Over the past four years the Nuffield Trust have been asked to look at a range of service innovations and assess whether they lead to a change in service use – most typically a reduction in inpatient hospital activity, which seems to have become the holy grail of health service planning.

Our new report summarises observations from these studies that might help those planning and evaluating new services in the future. In particular, the report should provide useful learning for the new health and social care integration ‘pioneer’ sites that will be appointed by the Department of Health by September 2013 (the deadline for applications is Friday 28 June).

One of the problems in evaluating care for people with complex health problems is that simply looking at change in health care use over time can be deceptive. The difficulty is that people are often selected to receive a new service because they have a high level of hospital admissions – and we know that in such cases there will be a natural tendency for their use of inpatient care to fall anyway, whatever we do to them (a classic case of regression to the mean).
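A small, purely illustrative simulation makes the point – the numbers below are invented rather than drawn from any of our studies. If we pick out the patients with the most admissions in one year, their average admissions fall the following year even though nothing has been done to them.

```python
# Illustrative only: invented numbers showing regression to the mean.
import numpy as np

rng = np.random.default_rng(0)
n_patients = 100_000

# Each patient has a stable underlying admission rate...
underlying_rate = rng.gamma(shape=2.0, scale=0.5, size=n_patients)
# ...but the admissions observed in any single year are noisy around it.
year1 = rng.poisson(underlying_rate)
year2 = rng.poisson(underlying_rate)

# Select a "high-risk" group on year-1 admissions, as many services do.
selected = year1 >= 3

print(f"Year 1 mean admissions (selected group): {year1[selected].mean():.2f}")
print(f"Year 2 mean admissions (selected group): {year2[selected].mean():.2f}")
# Year 2 is lower purely because we selected on a noisy measure --
# no intervention was applied anywhere in this simulation.
```

A before-and-after comparison of the selected group alone would claim that fall as a success of the service.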

This means that to assess the impact of something you need some form of comparator group. As randomised trials may be difficult in these settings, that usually means some clever stats and data linkage.
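As a very simplified sketch of that comparison – the figures and group labels below are invented, and a real evaluation would rest on careful matching and risk adjustment using linked, person-level data – the basic logic is a difference-in-differences: measure the change in admissions for people receiving the service, measure the change for matched comparators over the same period, and take the difference between the two.

```python
# A toy difference-in-differences calculation with invented data.
import pandas as pd

df = pd.DataFrame({
    "group": ["intervention"] * 4 + ["control"] * 4,
    "admissions_before": [3, 4, 2, 5, 3, 4, 2, 5],
    "admissions_after":  [2, 3, 2, 3, 3, 3, 2, 4],
})

# Average change in admissions within each group.
change = (
    df.assign(change=df["admissions_after"] - df["admissions_before"])
      .groupby("group")["change"]
      .mean()
)

# Both groups fall (regression to the mean), so the service is only
# credited with any fall over and above the matched comparators.
impact = change["intervention"] - change["control"]
print(change)
print(f"Estimated effect of the service: {impact:+.2f} admissions per person")
```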

Unfortunately, having looked at something like 30 different service models, we are finding it hard to uncover ones that have been successful in reducing emergency admissions – something that a recent review by Sarah Purdy noted too.

The new service models we have studied have the right aims, and very often require considerable effort and energy to implement. Though they may not be achieving the magical reduction in emergency admissions, there are times when labelling them as failures is overly harsh, and we are in danger of rejecting good ideas if our evaluations are too restrictive.

Our new report draws out nine key points, but for me three of them stand out.

First, there is a tendency for us to be asked to look at services that are relatively new – there is an impatience for results that is not realistic. Major service changes take time to implement successfully, and even longer for the effects of that change to filter through to the health of patients in such a way as to avert the crises that lead to emergency admissions. Large-scale integrated care models in the USA have developed over decades, not months.

Second, though emergency admissions of a group of people receiving a new service may not have changed, other signs can be used to test for progress over shorter time periods.

There are some nice examples from improvement science that help to map out the logic of how an improvement occurs – and this type of analysis can help provide more sensitive (usually process) metrics for shorter-term monitoring. It can also help us identify a wider set of outcome measures.

Third, the methods we have used have generally been what’s called summative – that is, we turn up as external independent evaluators armed with a question like ‘does it work?’ and proceed to give a yes/no answer. Such summative analysis clearly has a place but may be too simplistic given the nature of the complex change underway.

To help, quantitative methods can potentially be used to provide regular interim reporting on how a project is developing over time – giving it the chance to develop iteratively.

Moreover, the quantitative approaches can be supplemented with qualitative analyses that help us understand the context – in particular, why things are or are not working – and what changes are necessary and possible.

Ideally, formative evaluation would be linked to quality improvement initiatives with technical assistance to monitor fidelity to the intervention and share learning between sites – as used, for example, in the USA to evaluate the Pioneer Accountable Care Organisations.

There is no escaping that a more complex evaluation model may cost more – which may mean we have to use such models more sparingly, perhaps doing a full-blown evaluation only after a realistic bedding-in period and following a formal test of whether something is worth evaluating. This might give the space needed for service innovation to develop, as well as save money.

Suggested citation

Bardsley M (2013) ‘Community-based interventions: how do we know what works?’. Nuffield Trust comment, 26 June 2013. https://www.nuffieldtrust.org.uk/news-item/community-based-interventions-how-do-we-know-what-works
