Last week saw the release of three surveys looking at patient satisfaction with the NHS, prompting a flurry of comment and opinion.
The British Social Attitudes (BSA) survey, sponsored by The King’s Fund, saw a drop in overall satisfaction with the NHS from 70 per cent to 58 per cent. The Public Perceptions of the NHS research, conducted for the NHS by Ipsos MORI, saw satisfaction holding relatively steady at 70 per cent.
Meanwhile, the GP patient survey, also conducted by Ipsos MORI, showed that 88 per cent rated their overall experience with their GP practice as good.
At first glance, the obvious question is why the results are so different. Indeed, some of the commentary over the past week has focussed on playing the surveys off against each other, dismissing one and endorsing another, often depending on which agrees with existing positions.
But are we missing something if we do this? A more interesting discussion would centre around what each of these respective surveys actually measures, and then ask what they tell us collectively.
In some ways the GP patient survey is the most straightforward to interpret in that it measures a definable experience and satisfaction is high. When it comes to the more general surveys the picture is undoubtedly more complicated.
One of the criticisms of these surveys is that they do not measure patient experience and are therefore of limited use: how can we give a true rating of something we have not used?
Firstly, we should remember that nine in ten people responding will have used the NHS in the previous year. So, we know that experience will be a factor when people answer this question.
What we don’t know is the extent to which actual patient experience is driving overall satisfaction relative to other variables including staff advocacy, media coverage, political affiliation and satisfaction with Government.
We do not just experience the NHS as patients – we are taxpayers, we have friends and family who use it and we read about it. Its political, cultural and social importance is reflected in the fact that it is the second most important issue to us when we are deciding how to vote.
We know all of the above variables will have a bearing on satisfaction; what is difficult to determine is the extent to which each is having an impact at any one time. This is where interpretation becomes as much art as science.
When were the surveys conducted? What else was happening? What was the background noise? What do other data sources tell us?
There is a relatively large difference in the satisfaction ratings between the BSA survey and the Ipsos MORI Public Perceptions data (58 per cent and 70 per cent respectively).
If we accept, as most commentators have, that any fall is not the result of a drop in the quality of NHS services, we need to look for alternative reasons for the change. One explanation may be timing: the fieldwork periods were different, and background noise will have an impact, but given the proximity of the surveys any effect will be limited.*
We also need to look at what the overall survey is asking. The BSA survey measures attitudes towards a range of issues, from the state of the economy to views on education. It is important to understand this framing when reviewing the findings. We know that how we feel about the NHS is associated with how we view the state of the nation and vice versa.
The Public Perceptions data only asks questions about the NHS and health. This does not make it any more or less accurate than the BSA survey, but it does tell us that it is measuring something different.
In this respect it is interesting to look at one of the explanations John Appleby posits in his analysis of the results. He asks whether the satisfaction rating in the BSA survey data can be seen as a “surrogate” vote.
Our data on overall satisfaction with Government shows that there has been a marked drop since the fieldwork period for the 2010 survey, suggesting that some of the fall in the NHS rating shown in the BSA survey will be a comment on how the Government is performing more generally.
Another explanation given for the results of the BSA survey is that they are a comment on the reforms and how they were communicated to the public. Without dismissing this, it is easy for the chattering health classes to forget that not everyone has been following the reforms with quite the same zeal they have.
Our work from December 2011 shows that only 29 per cent say they knew a great deal or a fair amount about the changes the Government is making to the NHS. This is not to say that the noise around the reforms has not filtered through to the public at a general level, and we know any proposed changes will cause concern, but to say that the drop in the BSA survey rating is solely a comment on the reforms is overly reductionist.
So, we know that patient experience is holding up well and, despite a few signs that it is starting to drop, so is overall satisfaction when asked in a specific NHS context. Those delivering services may well feel relatively reassured by these findings, particularly given the current challenges they face.
However, the BSA survey does suggest that when asked in a broader social, political and economic context, satisfaction has dropped considerably and it is this that may concern policy-makers and in particular politicians.
The challenge when looking at these surveys is to try and understand what is driving the results. It is rarely, if ever, only one factor although the temptation to attribute causality in this way is perhaps understandable.
Far more useful for those working in the NHS is to look across the data sources, look back at what we know from previous work and then try to determine what is happening. Only by adopting a considered approach can you begin to understand where the public is on the NHS.
Our relationship with the NHS is not merely a transactional one. It means different things to different people at different times. It is important to remember this.
* Fieldwork for the BSA survey was carried out between 4 July and 10 November 2011, whilst fieldwork for the Public Perceptions of the NHS research was carried out between 14 November and 9 December 2011.
Dan Wellings is a Research Director at Ipsos MORI. Please note that the views expressed in guest blogs on the Nuffield Trust website are the authors' own.
Wellings D (2012) ‘Beyond the headlines: what is happening with NHS satisfaction rates?’. Nuffield Trust comment, 21 June 2012. https://www.nuffieldtrust.org.uk/news-item/beyond-the-headlines-what-is-happening-with-nhs-satisfaction-rates