Refining the case finding model

Blog post

Published: 27/08/2013

The Nuffield Trust has just published a paper on a new predictive model. We hope that the paper and the accompanying details can help both commissioners and providers of care refine the ways that they use risk stratification and case finding tools.

The market for predictive modelling tools has grown a lot in the last few years. Alongside the old familiars, such as the Patients At Risk of Re-hospitalisation (PARR) model and the Combined Predictive Model, sits a range of tools – some developed in the UK, some imported – as discussed at our previous predictive risk conference.

Of the various models used in England, the PARR tool has had the most widespread distribution, probably because the data needed to run it are easily accessible and the software to run it was free.

Yet the PARR tool is restricted to predicting the probability of readmission, whereas other models, such as the Combined Predictive Model (CPM), predict a more useful measure – the probability of any admission – but rely on GP data, which are much harder to access and use.

The Nuffield Trust is not a commercial vendor of software or analytical tools – so why have we developed another model?

Well, firstly it grew out of our work on the evaluation of telehealth and telecare, where we had used the Combined Model for risk adjustment. As part of that work we had always planned to test whether the conclusions from that analysis were sensitive to the type of risk model used.

Secondly, we thought it was time to revisit the Combined Model. The original Combined Model was developed some time ago and it is likely that patterns of admission and the thresholds for admission have changed.

This may well affect the weight given to different variables. We also realised that we had a larger population to study than was available for the original model, and we identified a wider range of potential variables – particularly around primary care.

Finally, we also had lots of questions about how these models work best (answers for those with a short attention span follow each question):

1. What difference does adding or excluding GP data make to the model? There is slightly improved sensitivity for lower-risk cases.

2. Does local calibration consistently improve model performance? To our surprise, not really.

3. Do additional outpatient and A&E variables improve the model? Yes.

4. What do we do about the lag before data become available? We built the model assuming a 60-day data lag.

5. Can we build more powerful models than the CPM? As far as we can tell, yes – a bit.

Though it’s nice to say that the positive predictive value (PPV) and sensitivity are slightly better – as far as we can tell – to be honest the differences are not huge. In our experience these models all perform in roughly the same way, if you use the same test.
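To make concrete what "the same test" means here, the sketch below (in Python, using made-up risk scores and outcomes rather than anything from the paper) compares two hypothetical models by computing PPV and sensitivity at a common risk-score threshold.

# A minimal sketch (hypothetical data, not the models from the paper): comparing
# two risk models "on the same test" by computing PPV and sensitivity at a
# common risk-score threshold.

def ppv_and_sensitivity(scores, admitted, threshold):
    """Flag patients with predicted risk >= threshold and compare against outcomes."""
    flagged = [s >= threshold for s in scores]
    true_pos = sum(1 for f, a in zip(flagged, admitted) if f and a)
    ppv = true_pos / sum(flagged) if any(flagged) else 0.0            # of those flagged, the share admitted
    sensitivity = true_pos / sum(admitted) if any(admitted) else 0.0  # of those admitted, the share flagged
    return ppv, sensitivity

# Made-up example: observed admissions (1 = admitted) and predicted risks from two models.
admitted     = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
model_a_risk = [0.82, 0.10, 0.48, 0.30, 0.05, 0.70, 0.45, 0.20, 0.60, 0.15]
model_b_risk = [0.75, 0.20, 0.40, 0.35, 0.10, 0.65, 0.50, 0.25, 0.55, 0.30]

for name, scores in [("Model A", model_a_risk), ("Model B", model_b_risk)]:
    ppv, sens = ppv_and_sensitivity(scores, admitted, threshold=0.5)
    print(f"{name}: PPV = {ppv:.2f}, sensitivity = {sens:.2f}")

At a different threshold the ranking of two models can easily change, which is part of why small differences in headline PPV or sensitivity figures should not drive the choice of model on their own.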

As we have noted before, the choice of which predictive model to use involves more than scratching out a few more percentage points on sensitivity and specificity.

What we think is important in this work is that we have explored some of the factors you need to think about when choosing and implementing a model. As we said, we are not a commercial vendor, but we make our findings available to all in the hope that they can help us refine the way we use case finding in the NHS.

Suggested citation

Bardsley M (2013) ‘Refining the case finding model’. Nuffield Trust comment, 27 August 2013. https://www.nuffieldtrust.org.uk/news-item/refining-the-case-finding-model
