Don’t look back in anger – retrospective matching when an RCT won’t do

Cono Ariti explains how 'retrospective matching' evaluation methods can provide a valuable alternative to RCTs.

Blog post

Published: 15/06/2015

Evaluation is on everyone’s lips these days, as a plethora of new and existing initiatives, such as the Integrated Care Pilots, the Better Care Fund and, most recently, the Vanguard sites, comes under scrutiny to show that these new models are actually delivering on their promise. Like the attendees coming to our Evaluation of complex care 2015 conference, you may well be grappling with the question of whether a Randomised Controlled Trial (RCT) is a feasible way to evaluate the complex, community-based interventions in your area.

In the world of clinical research the RCT is held up as the gold standard of evidence for the efficacy of healthcare interventions. The strong advocacy of researchers such as Dr Ben Goldacre has piqued interest in RCTs beyond the clinical realm, extending to community and public health interventions. This trend has been picked up by influential bodies such as the Behavioural Insights Team, which encourages grantees to adopt RCTs in the evaluation of their programmes, many of which are complex and community based.

So with all of this momentum, why aren’t all interventions evaluated using RCTs? In an influential article, Professor Nick Black pointed out that not every intervention is appropriately evaluated using an RCT. He identified some problematic areas:

  • Ethical issues – some interventions cannot be randomised ethically, for example exposure to harmful toxins. Obtaining informed consent can also be problematic when community interventions are applied to whole groups, such as GP practices
  • Not generalisable – the patients chosen for RCTs are not usually typical of the general target population and so results from RCTs may not translate to wider populations in practice
  • Difficult to untangle what worked – in very complex interventions not every component can be given a precise protocol definition, so an RCT may not elicit information about the whole intervention
  • Cost/time implications – in many cases, but not all, RCTs can be expensive and take a considerable time to provide results

It may sometimes be possible to overcome these and other issues surrounding the use of an RCT, but in many cases it is not. So if an RCT is the gold standard, are we settling for lesser evidence by choosing to adopt another approach? How should the confused evaluator proceed?

At the Nuffield Trust we have been using methods that combine linked data from existing routine datasets, such as Hospital Episode Statistics (HES), with sophisticated matching algorithms to perform evaluations that mimic RCTs. The idea is to find control subjects in the routine data who look, as far as possible, similar to the intervention service users, and then to examine differences between the two groups that might be the result of the intervention. Typically we might match on age, gender, predictive risk score, previous hospital use and existing comorbidities. For those interested in the technical details, have a look at this article by Nuffield Trust researchers, and for those who still remember their calculus, you can read about the real gory details in this article.
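To make the idea concrete, here is a minimal sketch of one common flavour of this approach: 1:1 nearest-neighbour matching on an estimated propensity score. The data, column names (age, risk score, prior admissions and so on) and modelling choices are all illustrative assumptions, not the Nuffield Trust's actual matching algorithm, which is described in the articles linked above.

```python
# Illustrative sketch of 1:1 nearest-neighbour propensity score matching.
# Column names and the synthetic data are hypothetical, for demonstration only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(50, 95, n),
    "gender": rng.integers(0, 2, n),
    "risk_score": rng.uniform(0, 1, n),
    "prior_admissions": rng.poisson(1.5, n),
    "comorbidities": rng.poisson(2.0, n),
})
# Synthetic intervention assignment that depends on the covariates,
# so intervention users differ systematically from everyone else.
logit = -2 + 2 * df["risk_score"] + 0.3 * df["prior_admissions"]
df["treated"] = rng.random(n) < 1 / (1 + np.exp(-logit))

covariates = ["age", "gender", "risk_score", "prior_admissions", "comorbidities"]

# 1. Estimate each person's probability of receiving the intervention
#    (the propensity score) from the matching variables.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. For each intervention user, find the potential control with the
#    closest propensity score (matching here is with replacement).
treated = df[df["treated"]]
controls = df[~df["treated"]]
nn = NearestNeighbors(n_neighbors=1).fit(controls[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_controls = controls.iloc[idx.ravel()]

# 3. Differences in subsequent outcomes (e.g. emergency admissions) between
#    `treated` and `matched_controls` can then more plausibly be attributed
#    to the intervention. Here we simply compare covariate means.
print(treated[covariates].mean())
print(matched_controls[covariates].mean())
```

A real study would typically add refinements on top of this sketch, such as a caliper on the acceptable score distance, matching without replacement, exact matching on key variables, and checks that matched controls were not themselves exposed to the intervention.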

The Nuffield Trust has successfully employed these methods in studies across the health and social care spectrum, including the effect of the British Red Cross ‘Support at Home’ service on hospital utilisation, the impact of the Marie Curie Nursing Service on place of death and hospital use at the end of life, and an evaluation of the impact of community-based interventions on hospital use. We are currently planning to use these methods in the Cabinet Office-funded evaluation.

Matching methods are a very promising development, but they are not a panacea and will not rescue a bad study design or a poorly thought-out evaluation. Particular areas that an evaluator needs to be aware of when considering a retrospective matched study are:

  • Data relevance – matching methods can only be used where the key outcome measure is routinely collected at person level
  • Important attributes missing – the matching process can create very similar groups in terms of the data used to match (see the balance check sketched after this list), but there might be other, hidden factors that explain differences between the intervention and control groups
  • User consent – evaluators need access to the data and permission to link records over time, and issues of consent have become more prominent in recent months
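On the second point, it is good practice to check how well the matching has actually balanced the observed variables. The helper below is an illustrative sketch rather than a prescribed method: it computes absolute standardised mean differences between the matched groups, and values below roughly 0.1 are commonly treated as acceptable balance, although this says nothing about unmeasured factors.

```python
# Illustrative helper for checking covariate balance after matching.
import pandas as pd

def standardised_mean_differences(treated: pd.DataFrame,
                                  controls: pd.DataFrame) -> pd.Series:
    """Absolute standardised mean difference for each shared column."""
    cols = treated.columns.intersection(controls.columns)
    diff = treated[cols].mean() - controls[cols].mean()
    pooled_sd = ((treated[cols].var() + controls[cols].var()) / 2) ** 0.5
    return (diff / pooled_sd).abs()

# Example, reusing the `treated` and `matched_controls` frames from the
# earlier sketch:
# print(standardised_mean_differences(treated[covariates],
#                                     matched_controls[covariates]))
```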

We are producing a “How to” guide to conducting matching studies, with our very own eleven-step programme, so evaluators can navigate the process and identify those areas where they need to call in more focussed expertise. Look out for this at the end of June.

We are supportive of the use of RCTs where possible, even for complex interventions, and are currently working with one group to implement a pragmatic RCT of telephone coaching for patients with long-term conditions (shameless plug: they will be speaking at our event). However, we do accept the argument that RCTs have limitations, and that there are many other approaches to evaluation that can be used to generate valid findings.

Suggested citation

Ariti C (2015) ‘Don’t look back in anger – retrospective matching when an RCT won’t do’. Nuffield Trust comment, 15 June 2015. https://www.nuffieldtrust.org.uk/news-item/don-t-look-back-in-anger-retrospective-matching-when-an-rct-won-t-do
