Evaluating, fast and slow: reflections from the Rapid Evaluation Conference

Rapid evaluation should not be mistaken for ‘quick and dirty’ research, so what does it mean?

Blog post

Published: 18/02/2019

On 29 January the Nuffield Trust hosted a conference on rapid evaluation in collaboration with the three nationally-funded rapid evaluation teams – ‘RSET’, ‘BRACE’, and the Improvement Analytics Unit. Calls for rapid evaluation are increasingly common in health care, but the approach brings many challenges. This conference aimed to bring together evaluators and those who commission evaluations in health care to discuss and share ideas for ensuring their success and usefulness.

We want to share what we think were some of the key issues that came from the day.

What does “rapid evaluation” mean in practice?

For some people “rapid evaluation” means a six- to 12-month study that needs to start quickly, rather than a conventional evaluation that takes two years or more. It can also refer to an evaluation approach involving regular reporting, formative feedback and close working with those who have commissioned the work. This means that decision-makers are not left waiting three years for a final report that confirms success or failure.

In this way, a rapid evaluation can support the ongoing process of implementation but also organisational learning so that knowledge is captured about processes of adoption, and adaptation, as the intervention proceeds. A rapid evaluation team can also make suggestions about the types of outcomes data and processes that could allow people on the ground to keep monitoring how an intervention is going long after the evaluators have departed.

Why the timescale matters in evaluation

At the outset of evaluations, very often there are high policy expectations and hopes that findings will be positive, identify areas of good practice to scale up and inform future investment decisions. The reality is, however, far messier because:

  • Evaluation findings can be reported too late to influence policy and investment decisions.
  • An innovation being evaluated may run alongside other service interventions, which makes it difficult to ascertain exactly which changes are having the greatest impact over time.
  • There is not always sufficient time or resources to conduct high-quality service evaluations, especially within busy health care organisations.

What is the place of rapid evaluation in the NHS?

The challenges for rapid evaluation to produce high-quality findings that inform decisions in a timely manner are not to be underestimated. To aid learning, we need to be sensitive to context, because the context in which an innovation is implemented and its impact are closely intertwined. You cannot understand one without the other.

At the same time, we also heard a real appetite to build greater evaluation capacity at the front line of the NHS. This might in turn generate a more critical approach to fads and fashions in health care which lack clinical or management evidence, are based on low quality analysis, or demonstrate a “pro-innovation bias”.

The discussion left us in no doubt that rapid evaluations, and the questioning mindset and skills they promote, have an important role to play in identifying what innovations do and do not work in the health service and, most importantly, why.

What people and skills does rapid evaluation require?

Identifying “who” is far easier than “how”. Few will deny that patients and the public need to be more actively engaged in evaluations. The big challenge is obtaining a broad range of views that reflect the population, not only small groups of highly engaged volunteers. We need to engage patients throughout the entire evaluation cycle, not leave this until the end. For example, the BRACE and RSET teams are consulting patient groups to identify priorities for services to be evaluated, asking them to provide early, critical feedback on project topics.

Interpersonal and communication skills must not be overlooked. Sometimes we will have to convey difficult messages and broker difficult conversations, chiefly where unexpected or negative findings are uncovered – for example, where an intervention turns out to be more costly than anticipated, or the benefits to patients fall short of existing practice. Evaluators always need to retain critical distance, even when they have established trusted relationships within health care settings, to maintain a study’s quality and independence.

Researchers and evaluators have the tricky role of capturing ‘reality in flight’ – monitoring an intervention, understanding the evolving responses to it on the ground, and carefully managing client expectations about what it might realistically deliver. A point was raised that an evaluator is ‘an independent interlocutor’ with a contribution to make to organisational decision-making. In shorter, rapid evaluations, it is therefore important that evaluators engage in reflective practice and provide formative feedback to organisations about emergent findings.

Do constraints on time mean a need to cut corners?

No. ‘Rapid’ does not equate to ‘quick and dirty’ research or ‘evaluation lite’. Rather it means doing analysis that is just as rigorous as a longer-term study, but working more nimbly with larger multi-disciplinary teams. Investing in diverse skills and expertise helps, and so does having the analytical tools and project management support readily in place. Tools that are transferable between evaluations are particularly useful, such as economic modelling using pre-existing datasets and protocols that map out the typical steps involved.

Everyone involved should be realistic about what can be measured, and what cannot, within the timescales given because a shorter evaluation of impact will carry more uncertainty. Where there are strong pressures within a health system – for example, commissioners needing to demonstrate a return on an investment within a 12-month budget cycle – it is critical that evaluators are clear about any ambiguous findings. Rapid evaluations will not be suitable in all instances and many complex health interventions do require longer-term studies.

Getting rapid evaluations right will not be easy. But we believe this approach can deliver useful feedback about what works (and what does not) in a timeframe that reflects the realities facing health care leaders and professionals. Watch this space to see what we discover.

This project is funded by the National Institute for Health Research Health Services and Delivery Research Programme (project number 16/138/17). The views and opinions expressed herein are those of the authors and do not necessarily reflect those of the Health Services and Delivery Research Programme, NIHR, NHS or the Department of Health and Social Care.
