Getting the most from evaluation

How can we get the most out of evaluation in health? Dr Alisha Davies explores.

Blog post

Published: 31/03/2015

There is an awful lot of interest in evaluation at the moment – no bad thing for us at the Nuffield Trust, given that we have a long track record in certain types of evaluation.

Most recently, NHS England have said that the new Vanguard sites – the new models of care described in the Five Year Forward View – will need to show “a commitment to co-design local and national metrics and to demonstrate progress against them, including real-time monitoring and evaluation of health and care quality outcomes, the costs of change, and the benefits that accrue”.

The Integration Pioneers are expected to collaborate with national partners to evaluate their initiatives and add to the evidence base; many are also carrying out their own more specific evaluations tailored to local needs.

The Prime Minister’s Challenge Fund, which aims to improve access to general practice, is also accompanied by national and, in some areas, local evaluation. The same is true of the Better Care Fund, with clinical commissioning groups (CCGs) and local authorities keen to demonstrate a measurable impact from integrated health and social care initiatives.

Translating evaluation on paper into practice is hard

With all these new models of care, when it comes to evaluation, the questions to answer are simple:

  • Does it work?
  • Has patient experience improved?
  • Has the service saved money – and for whom?

Unfortunately, such simple questions are easy to ask but difficult to answer, requiring strong analytical skills and resources. They may also require time for an impact to show, and a willingness to find a common language across different disciplines.

A broad family of practices sits under the heading of evaluation. At one end of the spectrum are approaches that focus on local learning and improvement, and on identifying and testing good practice. At the other end might be a complex mixed-methods approach combining qualitative, quantitative and cost-effectiveness analyses – typically led by external academic partners.

There is no single approach: each evaluation is specific to the service or initiative, its local context and the available capacity.

Losing sight of the goal

There is a risk that the demands on local innovators – to evaluate their own service developments while also supporting nationally led evaluations – become overwhelming. As service developments grow more and more complex, reaching agreement on what success looks like, let alone how to measure it, becomes increasingly difficult. Amid the red/amber/green (RAG) rated dashboards, we must not lose sight of the purpose of evaluation: to understand whether a service is making a difference to patients, to carers and across populations.

Where is the local support?

Despite national and local willingness to evaluate and learn, and considerable experience within the system, my impression is that local capacity (including the availability of public health input) and levels of external support vary hugely.

To strengthen local capacity, shared learning networks such as the Integrated Care and Support Exchange (ICASE), and collaboration with academic partners such as the Collaborations for Leadership in Applied Health Research and Care (CLAHRC) teams, are good starting points. The power of sharing simple tools – such as the wording of data sharing agreements, or the development of commissioning support unit (CSU) capacity for evaluation within core contracts – should not be underestimated in helping local progress.

Despite this, for many people charged with organising an evaluation of a shiny new service, the task can seem daunting. How does one sort out the basic elements of the evaluation? These include:

  • clarifying the intervention and its aims before designing any evaluation
  • considering the size of the cohort needed to demonstrate change (see the sketch after this list)
  • finding a suitable comparison group
  • selecting relevant short-term process measures and long-term outcome measures
  • ensuring data governance requirements are met
  • adding the value of patient and staff voice through qualitative data.
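To make the cohort-size point concrete, here is a minimal back-of-the-envelope sketch in Python, using the standard normal approximation for a two-arm comparison of means. The effect sizes and the 5% significance / 80% power conventions are illustrative assumptions for this example, not figures from any particular evaluation.

```python
from scipy.stats import norm

def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate patients needed per arm for a two-arm comparison.

    effect_size: the standardised difference (Cohen's d) the evaluation
    hopes to detect. Uses the normal-approximation formula:
        n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
    """
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # desired statistical power
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

# Illustrative only: smaller effects need far larger cohorts.
for d in (0.2, 0.3, 0.5):
    print(f"d = {d}: ~{sample_size_per_group(d):.0f} patients per group")
# d = 0.2: ~393 patients per group
# d = 0.3: ~175 patients per group
# d = 0.5: ~63 patients per group
```

The point of the sketch is the shape of the relationship: halving the expected effect size roughly quadruples the cohort needed, which is why agreeing a realistic effect size early matters so much for new services.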

We believe that around the NHS there are many examples of people working through these issues, and there is real benefit in sharing that experience. So in June we are hosting a free one-day national conference to address these points and many more. Our aim is to bring together commissioners and providers, senior policy-makers, and experts in evaluation to explore how more robust evaluative methods can be applied to complex care settings in a way that remains relevant to local contexts.

In practice, that means we’ll be discussing the direction national policy is taking on evaluation, exploring key methods and tools, and walking through a number of evaluation case studies. We hope it will result in a useful resource for those conducting their own health service evaluations. For more information and to register your interest in attending, visit the ‘Evaluation of complex care 2015’ webpage.

Suggested citation

Davies A (2015) ‘Getting the most from evaluation’. Nuffield Trust comment, 31 March 2015. https://www.nuffieldtrust.org.uk/news-item/getting-the-most-from-evaluation
