Earlier this year, the Journal of General Internal Medicine published new research showing that neither doctors, nurses nor case managers were able to predict which patients were at highest risk of readmission to hospital.
This finding is important because if we are to tackle the health problems that manifest as unplanned hospital admissions then we need to be able to predict and prevent these events at the individual level. Unplanned admissions cost the NHS an estimated £11 billion a year, so reducing them could lead to improvements in individual patients’ health status and, at the same time, generate net savings for the health service as a whole.
An alternative approach to clinicians predicting hospital admissions is to use statistical models instead. In 2006, the Department of Health (DH) invested in two predictive models (or ‘risk stratification tools’) for the NHS in England: PARR and the Combined Predictive Model. The DH funded these partly in order to make free models available for primary care trusts (PCTs) to choose.
NHS Scotland and NHS Wales subsequently developed analogous models called SPARRA and PRISM. These four models form part of a wider family of predictive tools used around the world, offered mainly by commercial and academic providers.
It is worth bearing in mind that a predictive tool such as PARR actually consists of two components.
Predictive Tool = Predictive Model + Software
First there is the underlying predictive model (a statistical algorithm, typically a regression formula or a neural network). Secondly, there is the software on which this model is run in practice; for the PARR tool, this software was called PARR++. Often such software provides additional functions. For example, it might present or manipulate the underlying data in a variety of useful ways, so as well as presenting the output of the predictive model, the software may display a range of other analyses too.
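To make the distinction concrete, a regression-based risk model of the kind described above can be sketched as follows. This is an illustrative example only: the coefficients and the three input variables are hypothetical, not the actual PARR formula, which uses many more variables derived from hospital episode data.

```python
import math

# ILLUSTRATIVE ONLY: hypothetical coefficients, not the real PARR algorithm.
# A logistic regression model combines patient characteristics in a weighted
# sum, then maps that sum to a probability via the logistic function.
COEFFICIENTS = {
    "intercept": -3.0,
    "age_over_75": 0.8,        # 1 if the patient is over 75, else 0
    "prior_admissions": 0.6,   # per emergency admission in the last year
    "chronic_conditions": 0.5, # per long-term condition recorded
}

def admission_risk(age_over_75, prior_admissions, chronic_conditions):
    """Return a probability of unplanned admission in the next 12 months."""
    z = (COEFFICIENTS["intercept"]
         + COEFFICIENTS["age_over_75"] * age_over_75
         + COEFFICIENTS["prior_admissions"] * prior_admissions
         + COEFFICIENTS["chronic_conditions"] * chronic_conditions)
    return 1.0 / (1.0 + math.exp(-z))  # logistic function: maps z to (0, 1)
```

The software component is everything wrapped around a formula like this: extracting and linking the input data, running the model for every registered patient, and presenting the resulting risk scores to clinicians.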
Predictive models need to be updated from time to time to reflect changes in clinical practice, and in epidemiology, demographics and clinical coding. Last week, however, the Department of Health announced that it would not be commissioning a ‘national upgrade’ of PARR, nor of the Combined Predictive Model.
So what should NHS organisations do if they are currently using one of these tools to identify high-risk patients?
It is clear that current Government policy is to promote ‘plurality’ in the information market. This stance comes with certain advantages: a competitive market might lead to innovation, for example. But there may be disadvantages too, from the loss of economies of scale and from losing the comparative benchmark of a national model.
I suggest that when choosing a predictive model, commissioners should base their choice on at least five factors:
- Outcome to be predicted (e.g. unplanned admissions in the next 12 months).
- Predictive accuracy of the model (see our overview for a discussion of different metrics, including model sensitivity and positive predictive value).
- User-friendliness of the software on which the model is run.
- Availability of the data on which the model is run.
- Cost (if any) of the model, of the software, and of obtaining the data on which the model is run.
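On the second factor, the two metrics mentioned above can be computed directly from a tool's performance at a chosen risk-score threshold. The sketch below assumes a simple confusion-matrix view: sensitivity is the share of patients who were actually admitted that the tool flagged, while positive predictive value (PPV) is the share of flagged patients who were actually admitted.

```python
def sensitivity_and_ppv(true_positives, false_positives, false_negatives):
    """Two common accuracy metrics for a risk stratification tool,
    evaluated at a given risk-score cut-off.

    true_positives:  patients flagged high-risk who were admitted
    false_positives: patients flagged high-risk who were not admitted
    false_negatives: admitted patients the tool failed to flag
    """
    sensitivity = true_positives / (true_positives + false_negatives)
    ppv = true_positives / (true_positives + false_positives)
    return sensitivity, ppv

# Hypothetical figures: of 100 patients actually admitted, the tool flags 60
# (so 40 are missed); it also flags 90 patients who were not admitted.
sens, ppv = sensitivity_and_ppv(true_positives=60,
                                false_positives=90,
                                false_negatives=40)
# sensitivity = 60 / 100 = 0.6; PPV = 60 / 150 = 0.4
```

The trade-off is inherent: lowering the risk threshold flags more patients, which raises sensitivity but tends to lower PPV, so commissioners need to consider both figures rather than either one alone.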
In this new world, one can envisage a range of different algorithms being offered on the same software. This is already happening in some parts of the country, where the Combined Model and a model to predict stroke can be run side by side.
Earlier this year, the Nuffield Trust published a predictive model for social care. We have made this model available on an open-source basis for use by software developers. Currently, we are developing a new predictive algorithm (called ‘PARR-30’) that will predict readmissions to hospital within 30 days of discharge.
The market for predictive tools is clearly evolving, and we are currently exploring what sort of low-cost or free options should be available to the NHS in future.
In the United States, where there is a proliferation of predictive algorithms, the Society of Actuaries produces authoritative independent advice on the relative merits of different algorithms, based on their comparative performance on a set of test data. Perhaps the time has come for similar comparative analyses in the UK.
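A comparative analysis of the kind the Society of Actuaries performs rests on a simple idea: score the same set of test patients with each algorithm and compare a threshold-free accuracy measure. One standard choice (an assumption here, not a method the source specifies) is the area under the ROC curve, sketched below from first principles: the probability that a randomly chosen admitted patient receives a higher risk score than a randomly chosen non-admitted patient.

```python
def auc(scores, labels):
    """Area under the ROC curve for one model on a shared test set.

    scores: risk scores produced by the model, one per patient
    labels: 1 if the patient was actually admitted, 0 otherwise

    Computed as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative one (ties count half).
    0.5 is no better than chance; 1.0 is perfect ranking.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical comparison: two models scoring the same four test patients,
# of whom the first two were actually admitted.
labels = [1, 1, 0, 0]
model_a = [0.9, 0.8, 0.2, 0.1]  # ranks both admitted patients on top
model_b = [0.9, 0.1, 0.8, 0.2]  # mixes admitted and non-admitted patients
```

Because every model is scored against the same patients and the same outcome, the resulting AUC figures are directly comparable, which is precisely what an independent UK benchmarking exercise would provide.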