Patients should not be left in the dark over care quality

Commissioned by the Secretary of State for Health, the Nuffield Trust publishes a report examining whether rating providers for quality is a policy worth implementing.

Press release

Published: 22/03/2013

A major independent review into whether the Government should introduce 'Ofsted-style' performance ratings for hospitals, general practices, care homes and other social care providers is published today by the Nuffield Trust.

Commissioned by the Secretary of State for Health, the Rt. Hon Jeremy Hunt MP, the Nuffield Trust’s review notes that there is no independent and comprehensive system assessing the quality of care across all hospitals, GP practices, care homes and other providers.

This, the review concludes, has left a clear gap in the provision of comprehensive and trusted information on the quality of care of providers: information that could inform the public and improve providers' accountability to them.

Informed by a wide-ranging consultation exercise with the public, professionals, regulators and policy-makers, the review suggests that ratings for care homes and other providers of adult social care services, and potentially for GP practices, could be useful for the public.


The benefits of rating hospital performance are less clear-cut. This is because hospitals are large, complex institutions delivering a wide range of often high-risk services.

A single summary score of a hospital’s performance risks masking examples of good and poor care across different departments and wards.

Therefore, the review recommends that any attempts to rate quality in hospitals should focus most closely on individual departments and clinical services, such as oncology or orthopaedic departments, rather than indicators based on management performance, such as finances.

A rating system is unlikely, on its own, to be useful in spotting lapses in the quality of care. This is particularly true for complex providers such as hospitals, which manage vastly larger risks than those handled in schools, the public sector analogy most frequently drawn.

Indeed, the report notes that the biggest risk to a rating system, however good, would be that it becomes discredited by a lapse in care at a provider rated as ‘good’ or ‘excellent’.

To fulfil such a function, ratings would have to be combined with other approaches, such as robust surveillance, inspection and special investigations.

Commenting on the publication of the report, Nuffield Trust Chief Executive Dr Jennifer Dixon CBE, who led the review, said:

“There is a major gap in the information available to the public on the quality of care of their local hospitals, GP practices, care homes and other providers; people are left in the dark.

“The information that does exist is spread across a number of sources, which may reduce its impact and use by the public. One aggregate, comprehensive rating of providers may provide more clarity and simplicity for the public, especially if it came from one 'official' trusted source.

“However, this is not a simple task, and it’s clear from the responses we received to our consultation that there is more appetite for introducing ratings in social care, and possibly general practice, than in hospitals, which tend to be more complex in the range of services they provide.

“It is important to be clear about the purpose of a ratings system. For example, a rating per se is unlikely on its own to be useful in spotting lapses in the quality of care, particularly for services within complex providers like hospitals.

“It is here where the analogy with Ofsted’s ratings of schools particularly breaks down: hospitals are large, with many departments and different activities, seeing large numbers of different people every day, carrying out complex activities, many of them 24/7, and in which people are sick and can die.

“Put another way, the risks managed by hospitals vastly outweigh those managed in schools.

“Constructing a summary rating for hospitals is possible but would be a difficult and complex task. Ultimately, the goal should be to introduce ratings that drill down to the level of individual departments and clinical services so that patients can have a much truer understanding of the quality of care provided in those departments.

“There should not be undue reliance upon any one indicator – a rating should be made up of a range.

“A summary rating may be easier to construct for social care, because the range of possible assessments is more limited and the types of services tend to resemble each other across the sector.

“Either way it will take time to develop information at this level – a stable roadmap is needed for the next five to ten years rather than the chop and change that has disrupted development in the past.”

The review recommends a number of factors that should be considered by the Government if it decides to implement a new national ratings system:

  • any extra burden that a rating might impose on providers (or commissioners of care) which might detract from patient care is assessed explicitly and minimised as a priority. To help, inspections by the rating organisation would need to be developed effectively to target providers by risk;
  • the organisation doing the rating (we recommend the Care Quality Commission) is given the resources and time to manage and develop a new strategic direction, political support and support from other stakeholders, as well as stability from disruption over a period of time;
  • the design and presentation of the rating is sector-led, with groups representing the public and users of care meaningfully involved. This way the rating might better reflect what really matters to patients, and win the hearts and minds of staff attempting to improve care. There would need to be alignment with existing frameworks for assessing quality, and a consensual process agreed for further development of the rating in future;
  • further market research is undertaken to better understand how to communicate ratings to the public, particularly those in areas with limited choice of provider;
  • there is clarity as to how the rating fits with wider activities to help support providers to improve, for example commissioning, and the work of other regulators;
  • the rating system links closely with systems designed to spot, investigate and manage lapses in quality and the rating signals appropriately and early where there are concerns being investigated;
  • an evaluation of the costs and benefits occurs from the very beginning; and
  • there is support for the development of ratings over the medium term (subject to an evaluation of the results) by political and other key stakeholders and a roadmap for indicator development is established over the next five to ten years. The emphasis here should be to develop assessment of individual clinical services (particularly within hospitals) and for groups of patients most at risk.

On the question of which organisation should carry out the rating, the report singles out the Care Quality Commission, which already carries out many tasks that would support a rating, such as inspection and data analysis.

However, it admits that this would shift the organisation’s focus beyond its current role of compliance regulation, and would require additional resources, significant support over a period of time, and a forgiving timetable given the complexities and pitfalls inherent in the enterprise.

The review notes the instability in the organisations that have overseen performance ratings in the NHS in the past. This should be avoided in future, the report warns, as such instability has reduced the time available for regulators to develop a system of ratings and evaluate its impact.
