This was the question set by the Secretary of State.

We’ve been there before, and the added value of previous ratings relative to the costs is not clear either way. Nor indeed is the potential for ratings to have an impact in the future if there were improvements in their design and use.

So what might ratings add today? There are two obvious gaps.

First, there is currently no independent comprehensive assessment of quality across all providers and across the full spectrum of performance. Second, there is nothing from a single trusted source that is simple for the public to use.

Should these gaps be filled?

The answer depends in part on what the main purpose of a rating is. There could be at least five: to increase accountability to the public, users, commissioners of care, and (for publicly funded care) to Parliament; to aid choice; to help improve the performance of providers; to identify and prevent failures in the quality of care; and to provide public reassurance as to the quality of care.

Our analysis suggests that ratings could improve accountability provided they were simple, valid and reported publicly. Ratings could also aid choice among users and commissioners.

There is a big gap here: trying to choose a care home, domiciliary care provider or general practice, in particular, is not helped by the confusing array of information from different sources or, more often, by the lack of any information at all. The public is left in the dark. This is the space Ofsted fills for schools.

Ratings are associated with better provider performance, but there is also the risk that the measured becomes what is managed. The more sanctions that result from a rating, the greater the perverse effects: the overall impact depends less on the rating per se than on the wider system in which it is embedded.

For hospitals, a ‘whole institution’ rating is more of a managerial concept than a clinical one; in future, an aggregate rating should include service-level information. That is what patients need.

A rating by itself is unlikely to be useful in spotting lapses in quality, particularly in hospitals. Here the analogy with Ofsted’s ratings of schools breaks down: hospitals are large and complex, seeing large numbers of different people 24/7, people who are sick and can die.

Put another way, the risks managed by hospitals vastly outweigh those managed in schools. For social care providers, the risks may be lower, but many are still dealing with frail, ill and otherwise vulnerable individuals. There should therefore be a clear ‘health warning’ on the rating.

On reassurance, while the public may be forgiving of a rating system’s inability to spot every lapse in quality, reassurance is more likely if the public could be confident that there was a rapid and effective system for investigating and dealing with failure. This is where the proposed new ‘inspector’ of hospitals could have a role.

The rating should not just be an aggregate statement, but a set of ‘dials’ covering the three ‘Darzi’ domains of quality: experience, effectiveness and safety, and possibly the quality of governance.

The rating should be based on routine data and inspections. The information should be refreshed at least quarterly. Bringing financial performance into a rating for quality risks a provider making inappropriate trade-offs between financial issues and the quality of care.

Any rating should be developed over time, its design involving key stakeholders including groups representing users and the public, and drawing on existing work. We suggest a 'road map' approach over the next five to ten years.

The most obvious organisation to do the rating would be the Care Quality Commission (CQC). But the CQC would need political support, support from the main national stakeholders, resources and time to develop the ratings, as well as stability over a sustained period.

Any new system should be fully evaluated to weigh its benefits against its drawbacks. Consideration should be given to road testing any new system to avoid unintended consequences or perverse effects.

If the Government does press ahead with ratings, it may be easier to start with ratings for social care and for general practices.

Ratings for hospitals might work, but the potential benefits would only be realised if some key conditions were fulfilled: no extra burdens are placed on providers, given all the current monitoring requirements; support and time are given to develop the rating system; the design and presentation of the rating are sector-led, with groups representing the public and users of care meaningfully involved; market research is done on how the ratings might be presented and used by the public; the rating system links closely with systems designed to spot lapses in quality; and the costs and benefits are evaluated from the very beginning.

This blog was first published on the Health Service Journal website. 

Comments (2)

I am surprised and saddened that, having listed several shortcomings of a composite rating, you come down broadly in favour of them. If the measured becomes the managed, the rating is unlikely to spot lapses in quality and safety, and a single rating is not meaningful, then I find it implausible that "choice" and "accountability" alone would justify such an expensive and potentially distorting exercise. I appreciate the emphasis on road testing and refining, and also the advice against using rankings, but much of the uncertainty in rankings arises from the source data (see for example http://www.york.ac.uk/media/che/documents/papers/researchpapers/rp16_pub...) and is a particular problem when using data already collected for another reason, which is a convenient and cheap approach but ultimately introduces more bias, uncertainty and potential for gaming. We must not forget the case of Addenbrooke's, which lost one of its "stars" because 4 junior doctors had worked the wrong number of hours that year (http://onlinelibrary.wiley.com/doi/10.1111/j.1740-9713.2005.00126.x/pdf).

22 March 2013

A well-judged report that sensibly highlights issues of opportunity costs and priorities. For the last 20 years, the time required and cost of developing new performance measures of use to, and used by, the public and patients have been consistently underestimated. In the hospital sector, priority should surely be given to expanding the coverage, and increasing the transparency, presentation and use, of departmental-level indicators, with a particular focus on high-risk areas. If there is a political imperative for composite ratings, the report is right to point to care homes (and other forms of social care) as the priority target: there is least current information here and most public support for greater public information.

Clive Smee
22 March 2013
