Should there be 'Ofsted-style' ratings for health and social care providers?

Blog post

Published: 22/03/2013

This was the question set by the Secretary of State.

We’ve been there before, and the added value of previous ratings relative to their costs is not clear either way. Nor is it clear what impact ratings could have in the future if their design and use were improved.

So what might ratings add today? There are two obvious gaps.

First, there is currently no independent comprehensive assessment of quality across all providers and across the full spectrum of performance. Second, there is nothing from a single trusted source that is simple for the public to use.

Should these gaps be filled?

The answer depends in part on what the main purpose of a rating is. There could be at least five: to increase accountability to the public, users, commissioners of care, and (for publicly funded care) to Parliament; to aid choice; to help improve the performance of providers; to identify and prevent failures in the quality of care; and to provide public reassurance as to the quality of care.

Our analysis suggests that ratings could improve accountability provided they were simple, valid and publicly reported. Ratings could also aid choice among users and commissioners.

There is a big gap here: trying to choose a care home, domiciliary care provider or general practice in particular is not helped by the confusing array of information from different sources, or, more often, by the lack of it. The public is left in the dark. This is the space Ofsted fills for schools.

Ratings are associated with better provider performance, but there is also the risk that what is measured becomes all that is managed. The more sanctions that flow from a rating, the greater the scope for perverse effects: the overall impact depends less on the rating per se than on the wider system in which it is embedded.

For hospitals, a ‘whole institution’ rating is more of a managerial concept than a clinical one – an aggregate rating should include service-level information in the future. That is what patients need.

A rating by itself is unlikely to be useful in spotting lapses in quality, particularly in hospitals. Here the analogy with Ofsted’s ratings of schools breaks down: hospitals are large and complex, seeing large numbers of different people 24/7, people who are sick and can die.

Put another way, the risks managed by hospitals vastly outweigh those managed in schools. For social care providers, the risks may be lower, but many are still dealing with frail, ill and otherwise vulnerable individuals. There should therefore be a clear ‘health warning’ on the rating.

On reassurance, while the public may forgive a rating system for failing to spot some lapses in quality, reassurance is more likely if the public can be confident that there is a rapid and effective system for investigating and dealing with failure. This is where the proposed new 'inspector' of hospitals could have a role.

The rating should not just be an aggregate statement, but a set of ‘dials’ covering the three ‘Darzi’ domains of quality: experience, effectiveness and safety, and possibly the quality of governance.

The rating should be based on routine data and inspections, and the information should be refreshed at least quarterly. Bringing financial performance into a quality rating risks providers making inappropriate trade-offs between financial issues and the quality of care.

Any rating should be developed over time, its design involving key stakeholders including groups representing users and the public, and drawing on existing work. We suggest a 'road map' approach over the next five to ten years.

The most obvious organisation to do the rating would be the Care Quality Commission (CQC). But the CQC would need political support, support from the main national stakeholders, resources, time to develop the system, and stability over a sustained period.

Any new system should be fully evaluated to assess its benefits against its drawbacks, and consideration should be given to road testing it to avoid unintended consequences or perverse effects.

If the Government does press ahead with ratings, it may be easier to start with ratings for social care and for general practices.

Ratings for hospitals might work, but the potential benefits would only be realised if some key conditions were fulfilled:

- no extra burdens are placed on providers, given all the current monitoring requirements;
- support and time are given to develop the rating system;
- the design and presentation of the rating are sector-led, with groups representing the public and users of care meaningfully involved;
- market research is done on how the ratings might be presented to and used by the public;
- the rating system links closely with systems designed to spot lapses in quality; and
- the costs and benefits are evaluated from the very beginning.

This blog was first published on the Health Service Journal website. 

Suggested citation

Dixon J (2013) ‘Should there be 'Ofsted-style' ratings for health and social care providers?’. Nuffield Trust comment, 22 March 2013. https://www.nuffieldtrust.org.uk/news-item/should-there-be-ofsted-style-ratings-for-health-and-social-care-providers
