Chronology
1988
Background
Peak of housing boom
NHS events
Measles/mumps/rubella (MMR) vaccine introduced
Mrs Thatcher announces NHS review on ‘Panorama’
Cows with BSE slaughtered
Community care, an agenda for action
Public Health in England
Nurse regrading
Project 2000 agreed
1989
Background
Fall of Berlin Wall
Soviet Union pulls out of Afghanistan
Tiananmen Square – China
First Direct – phone banking
Water privatisation
Interest rates hit 15 per cent
NHS events
Working for patients (NHS reforms)
Hepatitis C virus discovered
General management in family practitioner committees (FPCs)
Caring for people
1990
Background
Poll tax riots
Electricity privatisation
Hubble telescope in space
NHS events
NHS and Community Care Act
GPs’ new contract
1991
Background
Gulf war
John Major – Prime Minister
NHS events
Implementation of NHS reforms
Patient’s Charter
Clinical Standards Advisory Group
Beverley Allitt case
British beef “safe”
1992
Background
Maastricht EU Treaty
Conservative election victory (4th term)
Charles and Diana separate
Sterling leaves ERM; FTSE 100 reaches 3000
42 polytechnics become universities
NHS events
Inquiry into London’s health services (Tomlinson Report)
Select Committee report on maternity services
The health of the nation
1993
Background
Internet becomes popular
NHS events
Review of regional function
Calman Report on hospital staffing
Establishment of National Blood Authority
1994
Background
Mandela is President of South Africa
National Lottery
Channel Tunnel opens
NHS events
14 regions reduced to 8
1995
Background
NHS events
GP out-of-hours dispute
Reorganisation of cancer services
First human Creutzfeldt-Jakob disease (CJD) death
1996
Background
Railtrack and British Energy privatisation
Dunblane massacre of children
FTSE100 reaches 4000
NHS events
NHS electronic network starts
European Commission (EC) bans British beef exports
Digital imaging at the Hammersmith
Unification of districts and family health services authorities (FHSAs) as Health Authorities
Regions become outposts of NHS Executive
Three Conservative White Papers: Choice and opportunity; Primary care: delivering the future; and The NHS: a service with ambitions
Office for National Statistics replaces Office of Population Censuses and Surveys (OPCS)
Academy of Medical Royal Colleges formed
1997
Background
General election: Labour landslide; Tony Blair Prime Minister
Spice Girls
Hong Kong reverts to China
US tobacco industry financial settlement with patients
Food Safety Agency
Death of Diana, Princess of Wales
Scots vote for devolution
Text messaging
NHS events
Centenary of King’s Fund
Dolly the sheep – first mammalian clone
NHS (Primary Care) Act
Substantial E. coli outbreak
New influences
Each generation has expectations that cannot be fulfilled. Job security was no longer taken for granted, the concept of the family was less rigid, support in the form of Social Security was under threat, dreams of ever-increasing prosperity faded, and negative equity emerged with the decline of the housing market. Many things that people believed they were entitled to were no longer guaranteed. Young adults born between 1961 and 1981, Generation X as some called it, had a different and sceptical view of society.1 The NHS might not be there from cradle to grave. Their elders, in turn, discovered that young doctors and nurses sometimes lacked the vocational attitudes they expected. Although clinical medicine continued to advance inexorably, the health service was, as ever, in financial disarray. In its first issue of 1988, the British Medical Journal (BMJ) called for a new Health Commission.
Let us be charitable. Let us assume that Mrs Thatcher and her health ministers really do believe that the NHS is bigger and better funded than ever before, and that the concern voiced by the health professions is whingeing in response to tough, effective management. Then how do we convince the government that the NHS is moving towards terminal decline, and that innovatory thinking is needed to solve the crisis? … The message is that, after years of squeezing, the NHS has finally no more juice to give … Britain is not alone in facing a health crisis; in every Western country, each year brings new and better treatments for populations that are living longer than ever. This is the insatiable demand that politicians have been citing to excuse their refusal to find more money. But in fact there are many ways of skinning the cat.2
Bevan had said that the service must always be growing, changing and improving; Sir Patrick Nairne, a former Permanent Secretary at the Department of Health, doubted whether changes should include alteration in the basic organisation and financial structure.3 He saw three developments as desirable. First, the NHS was a most important public service, but no public service thought less about the public. The NHS should treat people as responsible individuals and take them into its confidence. Second, better links with private medicine and local authorities were desirable. Third, the distrust between every level from central government to the hospital should end. Clinicians, administrators, district teams and regional teams criticised each other, and ministers. The NHS was the largest glasshouse in the world, and risked its own survival if it could not resist throwing stones. Not surprisingly, health was a media favourite. Major ethical issues were raised by the tabloids, to the surprise of doctors who were sometimes naive in their comments.4 By 1996 the BBC was considering filming a natural death for a scientific programme. ‘Dr Kildare’ and ‘Emergency – Ward 10’ had glamorised medicine. Newer soaps, for example, ‘Casualty’ and ‘ER’, did not.
New forces were at work in health care internationally:5
- The power of big buyers – governments, private payers and patients were demanding cost-effectiveness
- The rise of sophisticated consumers – patients were more knowledgeable, changing the doctor-patient relationship
- New technology – including molecular biology
- Shifts in the boundaries of health and medicine – with the recognition of the complex relationship between the environment and medicine
- The ethics of controlling human biology – death and dying, and the legitimacy of rationing. From transplant surgery to fertility drugs, technology strained the ability of traditional morality to provide authoritative guides to behaviour.6
In 1988, the Department of Trade and Industry published a Green Paper on anti-competitive practices. Subsequently the Monopolies and Mergers Commission investigated whether the professionally imposed restrictions had an adverse effect on the public interest. The ethical code of the medical profession precluded advertising to the public. The Conservatives encouraged the provision of information to the public so that it could decide in a medical marketplace. The Commission supported an embargo on advertising by consultants, but considered that the restrictions on GP advertising operated against the public interest.7 There followed a series of organisational initiatives, which included a new GPs’ contract, the NHS reforms, The health of the nation, the Patient’s Charter and Community Care.8 Previously, major organisational changes had taken place on a single, appointed day. Now change became continuous, varying from place to place. Central to this was a move towards a market, made possible by a hierarchical system of accountability from local management through regions to the Secretary of State.
Medical progress
Health promotion and The health of the nation
Health promotion and illness prevention were increasingly seen as part of routine medical care and incorporated into the practice of many GPs. An emphasis on more targeted screening for problems and disease in its early stages replaced the earlier enthusiasm for a more general approach. Attention was paid to smoking, raised blood pressure, misuse of alcohol, diet, and cancer of the breast and cervix.9 The effectiveness of screening procedures and the problems of ensuring that they were actually carried out were examined. Much remained contentious in the young science of health promotion and it seemed that, no sooner were proposals implemented, than a study would appear casting doubt on their merit or cost-effectiveness.
A population-based approach aimed to reduce risk factors by influencing the price of alcohol and tobacco, reducing salt in processed food or attempting to reduce social inequality. Disasters could also alter attitudes: the fire at King’s Cross Underground station in 1987 was followed by a ban on smoking throughout the Underground. Finally came a focus on ‘green’ issues, the belief that lifestyle, environment and ecology should be linked. We should look after the things that look after us, and design agricultural, industrial and social systems to prevent environmental hazards. Population and resources needed to be in balance.10 Public health physicians believed that health promotion spread wider than medicine into environmental issues and politics. The evidence that variations in health were correlated with income, both within nations and between them, was strong. Some people saw health promotion primarily in terms of social policies that redistributed income and believed that health care systems should be based on primary care, the participation of citizens and the principles of the World Health Organization’s (WHO’s) ‘Health for all’.11

An increasing number of countries, including New Zealand and the USA, were publishing health strategies based on WHO Health for all 2000 targets. The British government was seen as slow in following suit. In 1988 an independent expert committee, assembled by the King’s Fund, produced The nation’s health, a strategy for the 1990s.12 In 1991, the Faculty of Public Health Medicine produced a report on UK levels of health,13 centring its approach on risks, patterns of behaviour and how to alter them. In October 1990, Kenneth Clarke announced his intention to devise health targets and measure performance. The Chief Medical Officer (CMO), Donald Acheson, saw an opening for a project after his own heart. Clarke’s successor, William Waldegrave, published The health of the nation in June 1991 as a consultative paper. It was timely: WHO had provided a framework; the public were ready to hear the message, not least because of the AIDS epidemic; the need for health care assessment was widely recognised with the publication of a report on the future of Public health in England;14 there was a political consensus that more needed to be done; and it was a good diversionary tactic at a time when the government was under much pressure on the NHS. After consultation a White Paper, The health of the nation, was issued in July 1992.15 Unlike the report of the faculty, the government rejected an approach based on risks and patterns of behaviour, opting for a disease-based structure. Five key areas were selected in which it was known that intervention could significantly reduce mortality or morbidity. National targets were set for the year 2000 and the contribution the NHS might make was examined.16 The health of the nation received a cautious welcome, for the government had shown some commitment, although critics believed that its approach was limited and that it overemphasised individuals’ ability to control their own health.
Key areas: Health of the nation
- Coronary heart disease and stroke
- Cancers
- Mental illness
- HIV/AIDS and sexual health
- Accidents.
Some saw it as a rejection of the wider WHO ‘Health for all’ strategy and the objective of redressing social inequalities and encouraging community participation.17 Although originating in the Department of Health, the strategy involved many government departments because significant improvement involved society as a whole. As time passed, there were doubts about the achievements. Many targets had been set in line with trends that were already apparent. Mortality rates for stroke and heart disease continued to fall, but sometimes changes were in the wrong direction; for example, obesity was rising, as were teenage smoking, drinking by women, and suicide.18 Although somewhat tardily, government now accepted that variations in health existed between different areas, ethnic and income groups, and that greater understanding was needed if effective action were to be taken. A working group looked at these variations, but did not stress the effect of poverty, which was, after all, not primarily the responsibility of the Department of Health.19
Changing clinical practice
With advancing technology and shortening length of stay, patients in hospital now were likely to be very sick indeed or to be admitted briefly for investigation or minimally invasive surgery. New forms of treatment demanded mental and physical stamina from patients who were far better informed about what was happening. A 48-year-old man, after his third heart transplant, said “I am just trying to enjoy life. It is not all a bed of roses.”20 Patients with cancer were subjected to the most intensive protocols of chemotherapy, and emotional support might be lacking. Those with distressing or terminal illness were in need of comfort and continuity of care, difficult with continuously changing teams of doctors and nurses.21
At a time when they were ill and vulnerable, people might not like to be in mixed-sex wards. Hospital surveys of patient satisfaction invariably showed high ratings, but systematic interviews in a large random sample of hospitals showed major problems in communication. Patients often did not receive information about the hospital, their condition or its treatment. Many were in pain and often they were not offered pain relief. Discharge planning and follow-up were poor.22 Since 1948, medical educators had urged the inclusion of social, ethical and non-technical issues in the student curriculum, hoping that this would produce more humane and self-motivated physicians. Although fewer Sir Lancelot Spratts roamed the wards, empathy was not always to be encountered.23 The General Medical Council (GMC) issued new and clearer guidance to doctors, including advice on ‘fly-on-the-wall’ TV programmes showing daily life in hospital or general practice. These were not always made with respect for the patients concerned. The GMC stressed the importance of informed consent by patients, and that doctors should be particularly vigilant where children, vulnerable people and the mentally ill or disabled were concerned.
Interest in complementary medicine grew; more people went to non-orthodox practitioners, spending substantial sums, but they did not turn their backs on conventional health care.25 In the hierarchy of evidence, from the anecdotal to the randomised controlled trial and the meta-analysis, complementary medicine ranked low, but there was increasing pressure to give patients what they asked for. The medical profession relaxed its attitude and, increasingly, complementary medicine became part of the NHS. It was estimated that 60 per cent of health authorities and 45 per cent of GPs were either commissioning or providing it. Because it might be cheap to the NHS, there was a temptation to offer it in the absence of any evidence of effectiveness, especially in areas of care where conventional medicine was unsuccessful, for example, in the management of chronic low back pain.26 Acupuncture and aromatherapy might be provided as part of mainstream care – as in cancer, where patients facing rigorous types of treatment might find at least psychological benefit. Political parties supported its development as an issue of choice for patients, and bodies were established in 1996 to regulate and register chiropractors and osteopaths.27 Because of the lack of evidence of its clinical effectiveness, the Nuffield Institute for Health in Leeds set in hand a literature review while, in the USA, the Agency for Health Care Policy and Research awarded a contract to Beth Israel Hospital to measure its effectiveness.
The quality and effectiveness of health care
Interest increasingly centred on clinical guidelines. In 1990, an academic consortium of 12 US centres teamed up to develop guidelines on topics in which there was evidence of marked variability from place to place, and high costs. Cataract, aortic aneurysm resection and carotid resection were among those selected. John Wennberg, at Dartmouth, published an Atlas of health care in the USA, showing that operation rates and hospital beds were related more to the number of specialists than to any measure of clinical need. There was little evidence that populations receiving aggressive care lived longer. Supply appeared to drive demand, defying most people’s basic economic beliefs.28 Calls for a similar approach in Britain were often ignored. Health technology assessment threatened clinical freedom and, although doctors did not want freedom to use ineffective forms of care, they wished to maintain the right to decide what was effective and not be delayed by procedures that slowed down innovation or might be overly concerned with cost containment.29 The appointment of Michael Peckham as the first NHS Director of Research and Development in 1990 increased the momentum in the UK.30 Peckham’s position made it possible to establish a regional research strategy and network, and to obtain earmarked resources for research when new financial arrangements were under consideration.31
The influence of the enquiry into maternal deaths, and of the subsequent report by Lunn and Mushin on anaesthetic deaths, was enhanced by the commitment of senior members of the specialties. A further report on 19,000 perioperative deaths in 1992/93 by the National Confidential Enquiry into Perioperative Deaths (NCEPOD) showed a lack of high-dependency units in many of the hospitals in which deaths had occurred, and that patients were sometimes returned to ordinary ward areas too soon. Faults in care were revealed that could be remedied.32 Patients who were ‘outliers’, on a ward not normally dealing with their problem, had poorer outcomes. During 1992, three further studies began: into stillbirths and deaths in infancy (the Confidential Enquiry into Stillbirths and Deaths in Infancy, CESDI), into counselling for genetic disorders, and into homicides and suicides by the mentally ill. However, the Royal Colleges only haltingly went ahead with the audit, and did not always work with the other professions whose contribution was essential to a good outcome. It also became apparent that studies needed to consider long-term effects as well as the immediate results.
Evidence-based medicine
Archibald Cochrane had argued for randomised controlled trials in the belief that it was not known whether most clinical interventions did any good. Increasingly, clinicians and those purchasing health care became interested in ‘evidence-based medicine’, the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.33 Evidence-based medicine became a central health service policy, a new gospel for government ministers and clinicians. Previously it had been thought adequate to understand the process of a disease and use treatment known to interrupt or modify that process. However, if the outcome rather than the process was examined, some forms of care did not produce the expected improvement. Trials, now numbered in hundreds of thousands, revealed that some procedures, such as dilatation and curettage in women under 40 years of age, were either of doubtful value or harmful. How little of medical practice had a firm basis in evidence? How much of what was firmly based was applied in the front lines of patient care? New editions of textbooks were often out of date and doctors’ knowledge, even of the basics of disorders such as high blood pressure, declined as they grew older. Evidence-based medicine was closely linked to continuing medical education.
There was little wrong with the proposition that the best available scientific evidence should be used in patient care, but there was an implication that the only medicine that should be practised was that based on controlled clinical trials. Yet, despite years of study and huge financial investment, the research to answer many questions, for example, the best way to treat neck pain, was not available.34 New technology, for example minimal access surgery, became established without such an assessment. Even if most people clearly do better with one form of treatment, there is no guarantee that every individual will react in the same way; patients have the right to make a choice between different forms of treatment. In primary health care, many conditions are simple and self-limiting, no clear diagnosis may ever be reached, and controlled trials are not always practicable.
Guidance on effectiveness, drawn up by groups rooted in economics or public health, was greeted by managers with enthusiasm.35 It was not always accepted by clinicians as representing ultimate wisdom, particularly if authorities refused to fund new forms of treatment not yet shown to be good value. It was one thing for managers to challenge clinical decision-making; now management sometimes dictated it. Although evidence from trials was increasingly incorporated into guidelines, clinicians did not automatically behave in accordance with them. Experience showed that, where they were developed locally – for example, practice prescribing policies – they were more likely to be followed than if they were developed centrally.36 Robert Brook found that the great motivators in the USA were one-to-one contact with respected colleagues, or money. The effort required to develop guidelines based on research findings was considerable. Centralisation of effort was worthwhile to prevent the local application of dubious patterns of care.
The Department of Health and the NHS Executive made improving clinical effectiveness a key priority and invested heavily in fostering evidence-based health care.37 In 1991, a research and development strategy was launched in the hope that clinical, managerial and policy decisions would be based on sound and pertinent information. The UK Cochrane Collaboration was established in 1992 as part of an international network to prepare, maintain and disseminate systematic reviews of research on the effects of health care.38 Its Director, Iain Chalmers, giving evidence to a House of Lords committee enquiring into medical research, told peers that his medical training was so out of date that, in the first four years of his career, he did more harm than good. Against some opposition from the Joint Consultants Committee (JCC), the Department of Health established a multi-professional Clinical Outcomes Group and a subsidiary National Centre for Clinical Audit. Directorates of research and development were created in the regions. A standing committee on health technology was established to assess the methods.39 Several forms of treatment, for example, screening for colo-rectal cancer, were listed as priorities for assessment. Evidence-based medicine was turning into an industry with an NHS Centre for Reviews and Dissemination, the UK Clearing House on Health Outcomes, Effective Health Care Bulletins from the Universities of York and Leeds, and a CD-ROM providing a summary of systematic reviews. The impact at local level was patchy.
The NHS attempted to absorb and synthesise differing philosophies of quality improvement, effectiveness and audit. David Taylor, working at the Audit Commission, listed 25. Many followed quality philosophies from other sectors of public service and industry, particularly those pioneered by Deming and Juran, and used by Japanese industry in its search for reliability and market dominance.40 These saw quality as organisation-wide and a responsibility of management, challenging traditional assumptions that it was largely a matter for the professionals. Quality was seen as a continuous process of evolution in which ‘every defect was a treasure’ enabling matters to be improved. Don Berwick, responsible for the quality programme at the Harvard Community Health Plan, contrasted traditional systems of inspection, discipline and penalties with the alternative, participation and incentives.41 Another approach, business process re-engineering, redesigned the way care was provided to improve matters for both patients and staff.
Hospital accreditation was also introduced. Britain had the Health Advisory Service, and accreditation had long been required in North America, where independent assessment was a precondition for payment. The Department of Health was wary of introducing accreditation into the UK. Would major hospitals always reach an appropriate standard? In 1989, the King’s Fund launched its own pilot study that examined hospital organisation, assessed the extent to which standards were being met, and asked whether action was taken when they were not. The pilot evolved into a national accreditation scheme and, by 1995, a third of the country’s hospitals had submitted themselves voluntarily to the procedure.42 To achieve accredited status, a hospital had to demonstrate compliance with organisational standards that fulfilled legal obligations and respected the rights of patients. The standards were process-orientated, but covered every aspect of the hospital’s systems and organisational procedures. An independent team of surveyors – for example, a trust chief executive, a director of nursing, a consultant or clinical director and an operational manager – then visited the hospital for two to five days, provided a verbal debriefing to its staff, and submitted their findings to the accreditation committee.43 The standards were high and few hospital trusts met them in their entirety. The King’s Fund extended the programme to general practice and primary health care, and to community and mental health services; other groups also entered the accreditation field.
In 1971, McKeown had suggested that health services had only a small effect on health or longevity.44 While this might have been true in the nineteenth century, advances in treatment for some conditions had undoubtedly led to improvement in outcome. It was, however, difficult to disentangle the effects of health care and environmental improvement, for, in most conditions, improvements in diet and nutrition were also having an effect. Bunker challenged the McKeown hypothesis, attributing a gain of about 1.5 years to clinical preventive services, in particular, diphtheria immunisation. The contribution of the curative services seemed twice as great. Cancer treatment had not had much effect, but there had been major improvements in survival from heart disease and renal failure, a reduction in strokes (probably from the treatment of high blood pressure), and far better results in diabetes, tuberculosis and maternity services. For the population as a whole, Bunker considered that this meant that medical science could claim responsibility for an average gain of three to four years, out of about seven years’ total increase in life expectancy since 1950 in Britain and the USA. Bunker pointed out that the public demand was for improvement in the quality of life, not just survival. Wellbeing was a major goal of health care, for example, the treatment of depression, osteoarthritis and cataract surgery.45 There was pressure to develop measures reflecting the effect of medical intervention on morbidity.46
The GMC had been created in the nineteenth century to identify professionals and protect the public from quackery. In 1997 it obtained new powers to deal with serious deficiency in clinical competence.
The drug treatment of disease
Self-medication with medicines bought over the counter (OTC) had long been a feature of people’s lives. A report from the Nuffield Foundation in 1986 argued that pharmacists were an under-used resource.47 They could make a greater contribution to primary health care, especially as the public increasingly looked to them for advice on the increasing range of OTC preparations. Sales were equivalent to a third of the NHS drugs bill, and governments worldwide saw self-medication as a way to shift some of the cost onto patients.48 From the late 1980s, it became easier to reclassify medicines from prescription-only status to allow counter sales when they were safe in use, had only minor side effects, and had well-defined indications. Among the medicines reclassified were ibuprofen for pain, acyclovir for cold sores, corticosteroid preparations for surface use, and H2-antagonists such as cimetidine for indigestion. In 1992, the Medicines Control Agency, Britain’s drug-licensing body, streamlined its procedures for deregulating drugs. Major changes were under way in the pharmaceutical industry.49 Two major mergers were those of SmithKline and Beecham in 1989 and Glaxo-Wellcome in 1995. Attempts to reduce expenditure on drugs in the USA and cuts in drug prices in Europe placed the industry under pressure. Drug prices could no longer rise at 10 per cent a year, as unbranded generic drugs increased their share of the market. The pharmaceutical industry commonly spent 10–15 per cent of turnover on research and development, a proportion far higher than most other industries. The cost of development, testing and gaining approval for new drugs, many of which would never be introduced to the market or be profitable if they were, raised the stakes. Seldom was a drug now introduced for a previously fatal condition, as in the early days of the NHS. New ones were often potential replacements for previous ones of considerable potency. The comparative advantages of new forms of therapy were smaller, so larger trials were required, and new statistical techniques were needed.
Shortlisted drugs for the Prix Galien 1995
- Losartan potassium – selective angiotensin II receptor antagonist for the management of hypertension; first major development since the angiotensin-converting enzyme (ACE) inhibitors
- Lamotrigine – anti-epileptic
- Tacrolimus – immunosuppressant for liver and kidney transplants
- Dornase alfa – recombinant DNA enzyme reducing the viscosity of mucus in cystic fibrosis
- Risperidone – anti-psychotic for schizophrenia (winner)
- Interferon alpha-2b – long-term treatment of hepatitis C.
Since the introduction of streptomycin and drugs for major psychiatric disorders, improved medicines had meant that fewer patients needed to be in hospital.50 This process was a continuing one; for example, cancer patients could often be treated as outpatients, and drugs that relieved nausea and vomiting associated with cancer chemotherapy meant that the length of hospital admission could be shorter. Patients undergoing surgery recovered more rapidly following an anaesthetic using new agents. The pharmaceutical industry was at pains to demonstrate that the savings achieved in hospital overheads were not outweighed by the cost of drugs. The industry tried to enter the wholesale distribution chain, to influence those providing health care or to become providers of health care themselves. ‘Disease management’ was pushed by the pharmaceutical industry, the proposal being that the care of patients with long-term conditions such as diabetes and asthma should be contracted out to the manufacturer supplying the product on which patients depended.51 An idea from the USA, its limitations on the choice of treatment patients might receive caused concern on both sides of the Atlantic. The need for blockbuster drugs to maintain profits was urgent. Firms overhauled their research programmes. It was predicted that, by the turn of the century, every new drug would be touched in its development by biotechnology and genetic manipulation.52
Greater knowledge of the functions of individual genes and their amino-acid sequences opened new therapeutic possibilities, and the possibility of designing new drugs. SmithKline Beecham spent $125 million on a stake in the Human Genome Sciences company in return for rights to develop products from its huge gene database. ‘Combinatorial chemistry’ made it possible to produce new chemical entities at a remarkable speed, and high-speed screening systems were developed to assess them. Alliances with university departments and biotechnology companies gave the large companies an expanded horizon.53
Genetic engineering was now being used to produce large amounts of well-known proteins, including insulin, growth hormone, hepatitis vaccine, interferons and monoclonal antibodies, and drugs reducing the frequency and severity of relapse in multiple sclerosis. The production of erythropoietin in substantial quantities by recombinant DNA technology made it possible to treat the anaemia that commonly accompanied renal failure. Clinical trials showed substantial improvement in wellbeing but it was also used illicitly by athletes to improve performance.54 In 1991 a monoclonal antibody, centoxin, was launched for the treatment of Gram-negative septicaemia. It bound and neutralised bacterial endotoxins and, though costly, could possibly save the lives of patients who would otherwise die of this infection after burns, trauma or gastrointestinal surgery.55 The pharmaceutical industry developed derivatives of erythromycin, one of the earliest antibiotics, that were more stable, more active and had a more prolonged action. Originally held in reserve for penicillin-resistant infections, they were found useful, particularly in respiratory diseases.56 A new type of antimicrobial treatment emerged, the antiviral drugs, hitting herpes, shingles and AIDS, although there was a risk that viral resistance might occur.57
One of the earliest of the synthetic drugs, aspirin, obtained a new lease of life. It was known to inhibit platelet function. Several reports suggested that it significantly reduced cardiovascular mortality and morbidity after heart attacks, and it also appeared to have a beneficial effect in cerebrovascular disease and strokes. Even a small daily dose seemed effective, and doctors regularly gave aspirin to any patient at risk of the two conditions.58 Drugs that reduced serum lipid concentrations proved to be effective in reducing major coronary events in people with ischaemic heart disease.59
Effective treatment was now available for acid-related disorders such as duodenal ulcer and oesophageal acid reflux. The H2-receptor antagonists did not suppress acid secretion completely, and were challenged by omeprazole, a ‘proton-pump inhibitor’ that blocked the transport of hydrogen ions into the stomach, and healed most duodenal ulcers within two to four weeks.60 This became the treatment of choice for resistant ulcers. The possibility that Helicobacter pylori might cause ulcers was open to a simple therapeutic test. Did eradication of the organism help? In 1988, Marshall and his co-workers announced that a combined antibiotic-bismuth regimen healed ulcers quicker and better than H2-antagonists. Some improvement occurred when a single antibiotic was given, more when two were combined, and very substantial improvement with triple antibiotic therapy. Despite nearly universal initial scepticism, within a few years, research workers had developed screening tests for the infection. There was progressive acceptance that there had been a major therapeutic advance that reduced the need for hospitalisation and for longer or more traumatic forms of treatment. When there was evidence of infection, eradication of Helicobacter became the accepted therapy in gastric and duodenal ulceration. The evidence of a link with stomach cancer also strengthened. The search was on for the simplest, shortest, most effective and best tolerated treatment.61
The first of a new generation of antidepressants, fluoxetine (Prozac), was introduced in 1987. These selective serotonin reuptake inhibitors worked by increasing levels of serotonin, a neurotransmitter. Unlike previous antidepressants, they appeared to have fewer side effects and, by 1995, some 500,000 people in Britain were taking them, including children. Risperidone, a new antipsychotic, was found to be effective in schizophrenia. A new drug with some, albeit modest, effect on Alzheimer’s disease (Aricept) was introduced.62 Sumatriptan helped migraine. The management of night-time asthma was improved by the introduction of salmeterol, a long-acting inhaled beta-2 agonist that produced effective relief for 12 hours.63
In 1998, sildenafil (Viagra) achieved instant cult status as an effective treatment for male impotence. Since the introduction of oral contraception in 1961, attempts had been made to reduce the hazards from thrombo-embolic complications. The hormone content of modern pills was about a sixth of the early preparations, reduction having occurred in stages as new health risks emerged. Progestogen-only pills were also available for women in whom oestrogen was undesirable, as were injectable preparations. In 1995, evidence suggested that two of the newer ‘third-generation progestogens’ were associated with an increased risk of venous thrombo-embolism and the Committee on Safety of Medicines issued a warning. The increased risk, though small, and less than the risk of thrombosis in pregnancy itself, received intense coverage in the media, alarming about half of the 3 million women using oral contraception.64 Slimming drugs given in combination also produced adverse reactions, including heart disease.
With a public increasingly well informed, the greater popularity of alternative medicine fuelled melatonin mania in 1995. A secretion of the pineal gland, melatonin seemed to reset the body clock, and help sleep and jet-lag. In spite of little scientific evidence, it suddenly became wildly popular, particularly in the USA, where articles appeared suggesting that it also prevented ageing.65
The British National Formulary, which gave doctors and dentists updated information about medicines, went electronic in 1995. Tools for computer-based prescribing multiplied, and provided support for clinical decisions. They could check for potential interactions, calculate the appropriate dosage and suggest suitable preparations. Their use could also make for economy, increasing the number of prescriptions issued in generic form.66
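The kind of support described here can be illustrated with a short sketch. The Python below is a hypothetical toy: the drug names, interaction table and dose rule are invented examples for illustration, not clinical data or the design of any actual prescribing package. It simply shows the three checks mentioned above: flagging a known interaction, calculating a weight-based dose, and suggesting a generic preparation.

```python
# A minimal, hypothetical sketch of the checks described in the text.
# The interaction table, names and dose rule are illustrative only.
from __future__ import annotations

# Toy lookup tables standing in for a real formulary database.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased risk of bleeding",
}
GENERIC_NAMES = {
    "Prozac": "fluoxetine",
    "Zantac": "ranitidine",
}


def check_interactions(current_drugs: list[str], new_drug: str) -> list[str]:
    """Return a warning for each known interaction between the new drug
    and anything the patient is already taking."""
    warnings = []
    for existing in current_drugs:
        note = INTERACTIONS.get(frozenset({existing, new_drug}))
        if note is not None:
            warnings.append(f"{existing} + {new_drug}: {note}")
    return warnings


def weight_based_dose(dose_mg_per_kg: float, weight_kg: float) -> float:
    """Calculate a simple weight-based daily dose in milligrams."""
    return dose_mg_per_kg * weight_kg


def prefer_generic(prescribed_name: str) -> str:
    """Suggest the generic preparation where one is listed."""
    return GENERIC_NAMES.get(prescribed_name, prescribed_name)


if __name__ == "__main__":
    print(check_interactions(["warfarin"], "aspirin"))
    print(weight_based_dose(dose_mg_per_kg=10, weight_kg=70), "mg/day")
    print(prefer_generic("Prozac"))
```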
Radiology and diagnostic imaging
Improvements in scanner technology meant that both magnetic resonance imaging (MRI) and computed tomography (CT) scanning could be carried out more rapidly. The introduction of ultra-fast CT, with imaging times of 0.1 second, made it possible to show calcification in the coronary artery walls, which seemed to occur much earlier in the development of coronary artery disease than had been thought, opening the possibility of an early diagnostic test. Special scanners were developed, for example, for sick newborn babies. Spiral CT scans combined with dye injections could show the rate of blood flow in the spleen and kidney. All general hospitals in the 1980s had wanted CT scanning; in the 1990s, all wanted MRI. However, doctors sometimes had to be content with a visiting mobile unit operated by the private sector, which made it difficult for those undertaking the imaging to discuss patients with the clinicians.
Improved isotope techniques made it possible to image body functions as well as structure. White blood cells could be labelled with an isotope, and gamma camera pictures could show the areas of inflammation where cells were concentrated, which lit up like a neon sign. Monoclonal antibodies, similarly labelled, would be concentrated in the tissues for which they had been prepared, and then imaged. Sugar compounds, which were concentrated in metabolically active areas such as tumours, could be demonstrated by positron emission tomography (PET), making it possible to identify secondary cancer rapidly without further distress to the patient. Bone mineral density measurement was improved by the introduction of dual-energy X-ray absorptiometry in 1987. This provided an accurate and repeatable way of assessing osteoporosis. The technique used two different X-ray energies that could separate bone and soft tissue because of differential absorption. Measurement of total body composition (lean and fat body mass) was also possible, and because the radiation dose was low, serial readings could be obtained to demonstrate the need for, or effectiveness of, treatment for bone loss.
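A rough sketch of the arithmetic behind the dual-energy technique may help (standard attenuation theory, with symbols introduced here purely for illustration). Measuring the transmitted X-ray intensity at two energies gives two Beer–Lambert equations in two unknowns,

$$
\ln\frac{I_0(E_1)}{I(E_1)} = \mu_b(E_1)\,\sigma_b + \mu_s(E_1)\,\sigma_s, \qquad
\ln\frac{I_0(E_2)}{I(E_2)} = \mu_b(E_2)\,\sigma_b + \mu_s(E_2)\,\sigma_s,
$$

where $\mu_b$ and $\mu_s$ are the mass attenuation coefficients of bone and soft tissue at each energy and $\sigma_b$ and $\sigma_s$ are their areal densities. Because the coefficients differ between the two energies, this pair of linear equations can be solved for the bone component separately from the soft tissue, which is what allows bone mineral density to be reported on its own.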
New and complex imaging systems – CT, radionuclide scanning, digital subtraction angiography and MRI – now accounted for up to a third of the examinations in modern radiology departments. They all yielded digital data and it was possible to transform other examinations into digital form. In 1985, the Hammersmith Hospital expressed a wish to develop a filmless radiography department, and over the next ten years, the first such system in the UK was created, partly from central funding but mainly from charitable donations. All forms of imaging equipment were interfaced to the computer system. Straight X-ray images were recorded on special screens and read digitally by laser. The data created were vast, as each chest X-ray required more storage space than the Bible. Images were fed to immense computing facilities, for distribution by fibreoptic cable to workstations throughout the hospital. The high definition of the images and the ability to magnify areas of interest and change their density and contrast were found by clinicians to be a substantial advance. More complex images, such as those produced by scanning, could be displayed, rotated and examined in three-dimensional form. Digital radiography opened immense possibilities for change and improvement in the NHS.67
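A rough check of that comparison, using assumed figures for illustration only: a digitised chest radiograph of around $2000 \times 2500$ pixels, stored at two bytes per pixel, occupies roughly

$$
2000 \times 2500 \times 2 \ \text{bytes} \approx 10 \ \text{MB},
$$

whereas the complete text of the Bible, at roughly four to five million characters, needs only about 4–5 MB as plain text.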
Infectious disease and immunisation
Although immunisation had made a substantial impact on infectious disease, the ultimate goal of the programmes was disease eradication. The global eradication of smallpox had been achieved, and in 1988, WHO announced the goal of the global eradication of poliomyelitis by the year 2000. Individuals could be protected by immunisation, but when most of a population was protected, transmission of a disease from one person to another became uncommon, reducing the risk even to those who were not immunised (herd immunity). This had been achieved for poliomyelitis in the UK.
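The arithmetic behind herd immunity can be sketched with a standard epidemiological approximation (the figures below are illustrative, not drawn from the text). If each case would, in a wholly susceptible population, infect $R_0$ others on average, sustained transmission fails once the immunised proportion $p$ satisfies

$$
p > 1 - \frac{1}{R_0}.
$$

For a highly infectious disease such as measles, with $R_0$ of the order of 12–18, the threshold is roughly 92–94 per cent, consistent with the 90–95 per cent coverage targets described below.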
To ensure that more babies were immunised while attending baby clinics, and before their mothers returned to work, the basic immunisation schedule was rescheduled in 1990, starting earlier, at two months. Uptake of immunisation was used as an index of the performance of local health services. In 1988, the CMO’s annual report published maps of the achievements of district health authorities (DHAs).68 Districts with the lowest rates of immunisation tended to have a dense and mobile population, but all were improving as targets were set and GPs were offered financial incentives. Each year GPs achieved higher rates of immunisation. More districts achieved first 90 per cent and then 95 per cent cover. The incidence of many childhood diseases was at the lowest ever level. There had been 46,000 cases of diphtheria and 2,480 deaths in 1940, but from 1986 to 1995, there were only 28 cases with one death. Cases of poliomyelitis fell from nearly 4,000 cases in 1955 to 28 between 1985 and 1995, 19 of which were vaccine related. With increased use of pertussis (whooping cough) vaccine and a coverage of 94 per cent, notifications of cases fell to 1,873 in 1995.69 Measles vaccine had been introduced in 1968 and had proved successful. Rubella vaccine had long been advised for schoolgirls aged 11–14 years to reduce the risk of multiple fetal defects from infection in early pregnancy. A combined vaccine for measles, mumps and rubella (MMR) had been used in the USA since 1975, and was introduced in Britain in 1988 with the aim of eliminating these illnesses.
Date of general introduction of vaccines
Vaccine | Year introduced
Diphtheria | 1940
Bacillus Calmette–Guérin (BCG) for TB | 1953
Whooping cough | Mid-1950s
Tetanus | Mid-1950s
Poliomyelitis | Salk 1956; Sabin 1962
Measles | 1968
Measles/mumps/rubella | 1988
Meningitis (Hib) | 1992
Coverage of more than 90 per cent was achieved, with a dramatic effect on all three; clusters of cases were often importations. Immunisation against a common cause of meningitis, Haemophilus influenzae type b (Hib), was introduced into the routine programme in 1992 and, by 1996, a national coverage of 95 per cent had been achieved. The decline in notifications was dramatic and there was only one death in 1995.
Infectious disease
Worldwide, new agents responsible for infectious disease were continuously being identified; 22 between 1973 and 1994.70 Some, such as Lyme disease, originally identified in other countries, were subsequently found in the UK. The European Commission became increasingly involved in public health issues, and established surveillance and information networks for individual diseases. As a result the guidance on the reporting and management of outbreaks was more consistent.
Tuberculosis had seemed on the path to elimination. Cases declined tenfold between 1948 and 1987, although high levels of immigration from the Indian sub-continent in the late 1960s slowed the fall. In the UK, there were about 5,000 new cases a year but the numbers started to rise again. Internationally a major cause of the rise was HIV infection, which when combined with resistant strains of Mycobacterium led to rapid death. Migration of people, poverty, deprivation and homelessness were also responsible for the increased number of patients. Drug resistance and poor clinical results were often caused by patients failing to take drugs as prescribed. An approach called ‘directly observed therapy’, in which health care workers made sure that patients took the proper medicines for six to eight months, emerged as a major breakthrough. Cure rates as high as 95 per cent were possible, even in poor countries.
Reported episodes of food poisoning were also on the increase, some international in origin as food increasingly crossed national borders.71 Salmonella enteritidis infections had been rising for 25 years; by 1988, there were more than 20,000 cases annually. The problem appeared to be infected eggs.72 In December 1988 Edwina Currie, a junior health minister, warned people that most of the egg production was infected with Salmonella. Almost all chickens and eggs in Europe derived from two genetic strains, bred for food conversion efficiency and egg production rather than disease resistance. Although not clear at the time, the problem was worldwide, not just British. Mrs Currie’s comments led to a food scare of then unparalleled intensity: a crisis in the industry, the slaughter of flocks, and her own resignation. She was, in fact, largely correct. Her vivid presentation of important health education issues had been useful and her departure was regrettable. Subsequently, under the aegis of the European Union, a European surveillance system was established – Salm-Net, based at the Communicable Disease Surveillance Centre, Colindale, part of the Public Health Laboratory Service (PHLS). Other processed foods were also implicated. Pâté was found to be a significant cause of food-borne listeriosis. The largest recorded outbreak of food-borne botulism in the UK occurred in 1989: 27 people were affected and one died. They had eaten one brand of hazelnut yoghurt that contained hazelnut conserve sweetened with aspartame rather than sugar. A combination of inadequate sterilisation and a changed composition had allowed Clostridium spores to survive.73
In 1996, an outbreak of E. coli O157 infection from contaminated meat products killed 17 people and affected a further 400.74 A subsequent report called for the establishment of an independent Food Safety Agency, a proposal accepted by the Labour government. In 1986 there had been an outbreak of a brain disease in cattle in southwest Britain.75 The condition, bovine spongiform encephalopathy (BSE), was thought to be related to Creutzfeldt-Jakob disease (CJD), scrapie and kuru, all forms of degenerative brain disease transmissible by food; it appeared that the sick animals had been fed meat products. The Southwood working party (1988) was entirely satisfied that BSE arose from a change in rendering methods for dealing with sheep carcases, with the result that the system used was inadequate to destroy the scrapie agent, which entered bone meal and was subsequently fed to cattle. It recommended that all lymphoid tissue and all central nervous tissue should be removed from the human food chain, that sick animals be destroyed and that BSE be made a notifiable disease. The government banned feed products made from ground cattle and sheep remains and ordered the slaughter of infected livestock. How effectively these measures were carried out was open to question. The existence of the disease led many countries to restrict meat imports from Britain. A CJD Surveillance Unit was established in Edinburgh in 1990; the PHLS was not involved, there being no initial evidence of a human–animal relationship. A ten-year debate began, marked by contradiction, warnings from meat producers against hysteria and unwavering reassurance from the government.
Year | Event
1986 | BSE identified
1988 | Food chain concern
1990 | “British beef safe”
1992 | BSE cases in cattle peak
1995 | First human CJD death
1996 | BSE–CJD link
1997 | Beef on the bone ban
1998 | Enquiry (announced in 1997) begins
Concern mounted in 1995 about transmission to humans when three farmers died of the disease. The media sounded alarm bells because it was a newsworthy story; the government initially rejected a connection between BSE and what became known as variant CJD (vCJD), taking a low-key approach to prevent unwarranted panic; and medical scientists were divided in their evaluation of the possibility. In 1996 ten cases of a highly stereotyped variant of CJD in people below the age of 42 years pointed to a link between BSE and CJD, and strengthened the possibility of transmission from animals to humans. The credibility of government advice on public health issues had been undermined. The government, which for years had maintained that there was no evidence of transmission from animals or meat products to humans, had to revise its stance. There was public alarm, greater slaughter of cattle, government anxiety at the prospect of compensation for farmers and for victims’ families who might argue that past reassurances were fallacious, and repercussions with the European Community after a worldwide ban on British beef exports was instituted.76
Overshadowing the antibiotic treatment of infection was the escalating problem of antibiotic (and antiviral drug) resistance, thought by some to herald the dawn of the ‘post-antibiotic’ era. About one patient in ten in acute hospitals had an infection acquired after admission. Elderly patients, neonates and those on immunosuppressive treatment were particularly vulnerable, and multi-drug-resistant organisms such as Staphylococcus aureus, Gram-negative bacilli and enterococci were present in many hospitals. The development of new antibiotics had largely ceased and only one, vancomycin, remained of real value in the management of resistant infections.
Blood transfusion had long been known to lead, on occasions, to illness – for example, to liver disease. In 1989, the virus for hepatitis C was isolated. The first test available gave many false-positive results and the blood transfusion service delayed screening donors until 1991 when a better one was introduced. During the interval, a substantial number of recipients contracted the disease and it was necessary to look back to identify patients who, unknowingly, might have been infected.77
Since surveillance of Legionnaires’ disease had begun in 1979, there had been 100–200 cases every year, about half in travellers returning from overseas and the rest acquired in Britain. Travel-related disease remained a problem. Malaria was a regular import, with over 1,000 cases annually. In the former Soviet Union, where immunisation rates were low, an epidemic of diphtheria accompanied the break-up. Ebola fever recurred in Zaire, and people were discouraged from going there.
Sexually transmitted disease and AIDS
Sexually transmitted diseases were now second only to respiratory tract infections as a cause of reported morbidity from communicable disease in Europe.79 Travel and migration raised concerns about the international emergence and spread of resistant strains, possibly as a result of non-specific use of broad-spectrum antibiotics. Trichomoniasis decreased; syphilis remained constant. Genital wart infections, chlamydia and herpes simplex virus infections were increasing. Gonorrhoea had increased steadily during the first three decades of the NHS, but the ten years from 1977 had seen a decline in the number of cases. Then the number of cases levelled out for a decade, but began to fall again from 1990. Radical innovations in treatment could not take credit, for effective drugs had long been available and drug resistance did not produce substantial difficulties. Early treatment and contact tracing were probably responsible. Surveys demonstrated that, in most countries, there were small, definable, sexually active groups that maintained the endemicity of disease through intragroup sexual contact. Young and poor people, those of non-white ethnicity, inner city residents, prostitutes and their male clients were, in different countries, the most visible members of these groups.80
The 1988 prediction had been for rapid growth of the AIDS epidemic; it was taking only 11 months for the number of cases reported in the UK to double. The projections were revised down over the next few years as evidence suggested that transmission among homosexual and bisexual men declined markedly between 1983 and 1987, with the adoption of safer sexual practices.81 In England, the annual number of reports of HIV infection levelled off in the 1990s, as did overt disease. However, the experience of poor countries was increasingly important because of international business and tourist travel. Edwina Currie said that the best protection for a businessman travelling overseas was to take his wife with him. In Thailand, with its lethal mix of cheap sex and heroin addiction, numbers rocketed in 1989 and cases were no longer largely among homosexuals. A ‘second epidemic’ began in heterosexual men and in prostitutes with the lowest costs and the greatest number of contacts, followed by further cases in children.82 In the UK, the great majority of heterosexual patients were infected abroad; most were refugees who had fled from conflict in Africa. The annual incidence of AIDS in injecting drug abusers rose steadily to about 100 per year; some came to London from Europe to benefit from more liberal social security and health care policies. By the end of 1994, there were 20,400 people in England who were HIV-positive, and the cumulative total of people with AIDS was 9,510, of whom 6,434 had died. Major health education programmes continued; a poster aimed at drug addicts read ‘Shooting up once can screw you up. Forever.’ Initially, AIDS policy was based less on a purely public health approach and more on the doctrine of individual rights. Debate on the ethics of unlinked anonymous testing ended in 1989/90 with the introduction of surveys on accessible populations. There had been objections on the grounds of confidentiality, the difficulty of treating people once identified, and the personal consequences of infection. There was no way of telling individuals that they had tested positive and might spread the disease. Reports of HIV-infected surgeons working in hospital stirred up anxieties about the transmission of HIV from patients to doctors and from doctors to patients.83
Surgeons, who might puncture their gloves during an operation, became concerned about the risk to themselves. British and American authorities published guidelines on clinical practice. Some surgeons argued for compulsory testing of patients. Testing for anti-HIV-1 and 2 improved screening of blood donations in 1990.
A residential and support centre for people with AIDS, Lighthouse, opened in London in November 1988.84 It harnessed a vast amount of unusual and lucrative support, particularly from the performing arts, and provided for skilled counselling, long-term support in the community, and provision for terminal care and respite admissions.
In the early 1990s, attitudes began to change. AIDS had remained, in general, a disease of large cities. Ministers took a firmer line. When drugs that seemed to influence the disease began to appear, there were stronger scientific grounds for testing and contact tracing was given greater prominence.85 Trials for the first major breakthrough in treatment, zidovudine (AZT), were hard to interpret. They suggested little benefit from use early in the disease, although there was some effect on fully established cases. The virus needed an enzyme, reverse transcriptase, to reproduce, and zidovudine blocked it; further drugs followed, some, such as 3TC, also acting on reverse transcriptase, and others, the protease inhibitors, attacking the virus at a different point. The Delta controlled trial showed that drug combinations prolonged life and delayed disease progression.86 Some patients, believing that they were slowly dying, had to readjust to the possibility of continued life. Almost overnight clinical practice changed, although multiple drug regimens cost 50–75 per cent more than single drug regimens and perhaps over £10,000 per year per patient. It was another pressure health authorities found difficult to resist. For the third world, preventive measures remained the only viable approach. Slowly HIV infection came to be treated less as an exceptional condition to which different rules applied. Young people became used to intrusive questions from insurance companies. In 1996 the first over-the-counter HIV test was approved for sale in the USA, a doubtful advance as people might find themselves to be negative, despite taking risks. Government-sponsored health promotion advertising had not specifically targeted the gay community – although the voluntary organisations did so. The gay community now argued that it had been neglected and, for the first time, government campaigns were directed towards it.
AIDS diagnoses and AIDS deaths in England and Wales. Source: CDR Review, January 199687
Year | Diagnoses | Deaths
1983 | 46 | 20
1984 | 100 | 45
1985 | 225 | 112
1986 | 438 | 253
1987 | 614 | 325
1988 | 820 | 376
1989 | 947 | 615
1990 | 1,101 | 717
1991 | 1,220 | 870
1992 | 1,386 | 992
1993 | 1,397 | 1,135
1994 | 1,103 | 975
1995 | 1,468 | 1,723
1996 | 1,468 | 1,481
1997 | 1,103 | 746
Genetic medicine
Genetic medicine developed rapidly.88 The most important, and the largest, research programme of the decade was the human genome project, aimed at creating sequence maps of all the nucleotides in humans, in the hope that basic knowledge of structure would cast new light on the causes of disease. Three agencies in the USA and workers in Japan and Russia were working on the project, and Europe, with smaller resources, was doing what it could. More than 2,000 disorders were said to be caused by inheriting a single faulty gene. In 1995 alone, over 60 disease genes were isolated. By 1996, the genes responsible for most common single-gene disorders had been isolated and characterised. There was increasing interest in conditions caused by the interaction of multiple genes – for example, insulin-dependent diabetes, high blood pressure and atherosclerosis – although, in the short term, few such patients were likely to benefit from basic research. Several cancer susceptibility genes were identified from studies of families showing ‘autosomal dominant inheritance’, in particular breast cancer, in which 10 per cent of patients had a family history of the disease in a close relative. In the USA, genetic tests went on open sale. DNA analysis became a standard investigation for ever more disorders. The genetic state of family members and pregnancies at risk could be determined for an increasing number of conditions, including haemoglobinopathies, Duchenne muscular dystrophy, cystic fibrosis, Huntington’s chorea and phenylketonuria. The main clinical impact was on the detection of ‘carriers’, presymptomatic and prenatal diagnosis. Information and counselling for women at risk were provided, often in an ad hoc way, in cancer genetics clinics that were often funded from research moneys.89 These provided access to preventive services, screening, diagnosis and treatment, so those at increased risk might be identified and encouraged to act accordingly. Few doctors were knowledgeable in this new field and primary care teams required education and specialist support. Patients and relatives needed help and counselling from doctors, social services and voluntary organisations.90 Because there was a possibility that, in future, genetic techniques would enable the identification of people at risk of developing a wide range of diseases, such as asthma, cancer, diabetes and heart disease, both before and after birth, genetic experts believed that advances in technology would have a major impact on health care, its costs and its ethics. There was a risk that genetic screening, because it identified those at high risk, might lead to their exclusion from insurance schemes, health benefits and even employment. Geneticists asked for a strict regulatory framework; the government instead set up the Human Genetics Advisory Commission.91
In the previous 40 years there had been great improvement in the survival rate from cystic fibrosis, an inherited disease in which about one person in 20 is a carrier, so that roughly one couple in 400 is at risk of having an affected child. Cystic fibrosis produces progressive lung damage in childhood and adolescence, reduced by physiotherapy, antibiotics, better nutrition and support from the development of centres with a special interest in the disease.92 It became one of the most intensively researched of the simple genetic diseases, and the discovery of the gene responsible gave an insight into what went wrong.93 Several approaches to treatment were developed; none worked particularly well. Increased understanding of the basic science did not necessarily lead to cure, but it provided new starting points for the quest for practical treatment.
Surgery
The excitement associated with surgical advance tends to distract attention from the more common procedures undertaken by the NHS. The number of operations performed was rising by about 5 per cent per year. In 1992, the Royal College of Surgeons revised its guidance on day surgery, the conditions to which it might be applied, and the information that should be given to patients.94 Day surgery could be used more widely as a result of advances in anaesthetic techniques. Short-acting anaesthetics, such as propofol, introduced into clinical practice in 1986, and pain-relieving drugs provided good operating conditions and quick recovery, making it easier for patients to go home rapidly. The costs of day surgery were substantially less than those of inpatient admission, and purpose-built units increasingly provided an efficient environment for the staff and greater convenience for day patients. In 1994, the NHS Executive suggested that 60 per cent of all elective surgery could be conducted as day cases. Managers tried to persuade clinicians to deal with as many patients as they could in this way.
Minimal access surgery
Laparoscopy was initially used largely for diagnostic purposes, but the development of high-quality imaging systems, ever-smaller electronic chip cameras, versatile instruments and linear stapling devices opened new fields in gynaecological, urological and general surgery. Rapid adoption of minimal access surgical techniques followed pioneering work on laparoscopic removal of the gall-bladder (1989). The laparoscope was fitted with a high-resolution colour television camera; the average length of stay seemed shorter, and two weeks later it was difficult to see that an operation had been done at all.95 Soon most cholecystectomies were carried out in this way. Although heralded as a major advance in surgical therapy that reduced hospitalisation and quickened post-operative recovery, its escalating use was accompanied by increasing concern that it caused more deaths and debilitating post-operative morbidity than conventional surgery. The speed of the introduction of the techniques meant that clinical reports inevitably related to small series with only a short period of follow-up, the comparative studies available were often flawed, and the evidence that it was better than standard operations was uncertain.96 However, district general hospitals (DGHs) rapidly took up the new approach. Endoscopically assisted hysterectomy, appendicectomy and hernia repair were also introduced, joining day-case arthroscopy and transurethral prostatic resection to reduce inpatient stays. Minimally invasive surgery was used to treat prolapsed intervertebral disks (‘slipped disks’) causing back pain. In spite of the emphasis on evidence-based medicine, minimal access surgery was adopted largely without formal assessment of its efficacy. The new procedures often took longer to perform and few surgeons had experience of the special methods. Even in the best hands there might be clinical disasters, and not all surgeons prepared themselves adequately for the new techniques. An entire generation of surgeons required training in the new methods. A minimal access surgery training unit was established at the Royal College of Surgeons of England, and other Royal Colleges established guidelines for training in the new methods.97
Orthopaedics and trauma
The pressure on orthopaedic units was increased by the steadily rising numbers of fractures of the femur; the main hope for reducing the load lay in prevention, perhaps by hormone replacement therapy. Greater numbers of total hip and knee replacements also increased the load on orthopaedic departments, and the results of knee surgery were improving steadily.98 Although 60 different replacement hip joints were on the market, only the Charnley pattern, widely regarded as the gold standard, had 20 years of follow-up results published.99 Hip replacement was increasing in younger patients and, given enough time, most replacement joints would begin to wear. There was a case for some orthopaedic surgeons to specialise in revision operations, which were difficult and required familiarity with bone grafting techniques and the use of custom-made components.100 Because of the risk of deep venous thrombosis complicating joint replacement, anticoagulant prophylaxis was increasingly used. Education in the management of fractures and the development of new techniques continued in Switzerland, and a new centre was established near Davos for experimental surgery and to provide training. Because of greater public participation in sport, sports injuries were becoming a large part of the work of orthopaedic departments, essentially self-inflicted injuries that proved expensive to the NHS. The management of backache, a major and long-standing problem, remained unsatisfactory. Neither orthodox nor alternative medicine offered any reliable form of treatment; normal activity might even be best.101 Imaging procedures, in particular MRI, were increasingly adopted and began to replace arthroscopy in, for example, the diagnosis of sports injuries of the knee.102
Accident and emergency (A&E) departments were under increasing pressure, and the beds required to support them were in short supply. On occasion they, and intensive care services, also had to cope with terrorist incidents, and disasters such as the air crash on the M1 motorway in January 1989, and the crush injuries and asphyxiation at Hillsborough football ground in April 1989 when 95 people died. The services were, however, increasingly well prepared. In 1988, when faulty signalling caused an express to plough into a packed commuter train in Clapham, 35 were killed and 500 injured but, in general, those who were alive when the rescue services arrived survived. A policy of rapid transfer to hospital had been balanced by stabilisation before removal. Paramedics could maintain the airway and start intravenous fluids, and were helped by doctors trained in emergency medicine. It was believed that the management of patients with serious injuries would be improved if it were centralised on large trauma centres serving a population of 2 million. Seeing over 50,000 patients a year, the North Staffordshire Royal Infirmary, Stoke-on-Trent, was selected as a pilot centre in 1990. Six years later there was little evidence that it had improved survival. Doctors had argued for major trauma centres for 30 years,103 but closure of local accident departments aroused vociferous protests. Now, however, the general standard of accident departments had improved substantially and the additional gain from larger units seemed not so great. Helicopter evacuation was also explored. Helicopters had been used to evacuate battlefield casualties since the Korean war, and the first civilian service was started in Cornwall in 1987. After that, ten further services began, including one based at the Royal London Hospital. From the outset, there were doubts about an expensive method of transport that was cramped and noisy, and made treatment on board difficult; weather conditions could also prevent the operation of the service.104 A review suggested that the London service did not improve survival over the short distances involved. What mattered was the speed of getting trained people to the incident to control haemorrhage and maintain the airway, rather than the method of transport used.105
Cardiology and cardiac surgery
Cardiology and cardiac surgery became ever more specialised; cardiologists might now specialise in cardiac imaging, interventional cardiology, or pacing and electrophysiology. Paediatric and fetal cardiology emerged as new specialties; children as well as adults could undergo cardiac catheterisation and operation through a catheter. Stenosis of the pulmonary valve could be treated in this way, and minimal access surgery also permitted the closure of heart septum defects in children. Cardiological services were increasingly decentralised, improving access for the population. Much was being learned about the electrophysiology of the heart. Physiological pacing for disturbances of heart rhythm improved in sophistication. Dual chamber pacemakers, though more expensive, could sense heart activity, increase cardiac output and improve patient wellbeing.106 The treatment of high blood pressure with diuretics and beta-blockers had been shown, beyond dispute, to reduce the risk of stroke. The impact on coronary heart disease seemed disappointingly small and there was inadequate evidence to say whether newer drugs, ACE inhibitors or calcium antagonists, had an effect on either condition.107 After two studies of intravenous streptokinase, thrombolytic drugs (to break down clots) were widely adopted in 1988 for myocardial infarction, and aspirin was found to add to the benefit. The sooner blood flow could be restored, the more effective treatment was. Speed in diagnosis and transfer to hospital, and fast-tracking patients on arrival to minimise the delay in giving thrombolytics, were required.108 Health promotion campaigns, such as ‘Look after your heart!’ launched by the Health Education Authority in 1989, were given added impetus. Giving up smoking, lowering lipids, taking moderate exercise and controlling high blood pressure were their mainstays.
Surgeons operated on about 1,000 patients with congenital heart disease and about 5,000 with heart valve problems, but these numbers were now dwarfed by procedures for ischaemic heart disease. Angina pectoris and coronary thrombosis were amenable to the new surgical techniques, no longer the sole preserves of university hospitals. Simpler problems and younger cases tended to be managed by percutaneous transluminal angioplasty, which rose 15 per cent each year (13,000 in Britain in 1995), and was even more commonly used in continental Europe and the USA. New techniques included the use of intracoronary stents, metal mesh devices introduced with a balloon catheter to keep blood vessels open and reduce the chance of restenosis. Thrombosis around the stent was a common complication until the design was improved, and drugs were given to reduce the likelihood. Angioplasty during the early hours of a heart attack was an alternative to thrombolytic drugs. Where should it be undertaken – in subregional centres or in district hospitals where it would be more readily available? Coronary artery bypass grafting (CABG) was also increasing at 10 per cent a year, and showing no signs of reaching a plateau. The cardiovascular community had been studying the effectiveness of CABG for 25 years, and angioplasty for 15; large trials had been undertaken and it appeared that bypass surgery resulted in a reduction of mortality over four or five years compared with medical treatment, at least in those at moderate or high risk of death.109 The pressure on cardiac surgical units was almost overwhelming, as patients on the waiting list were delayed because more urgent patients in hospital demanded attention. The stay in intensive care was shortened to the limit, and people were fast-tracked through progressive care and recovery facilities. Staff worked harder, longer and nearer the brink of safety.110 New surgical techniques continued to be introduced, including heart transplantation and implantable mechanical pumps for heart failure.111
Organ transplantation
Several new immunosuppressive drugs were introduced that had different effects on the immune system, including tacrolimus and mycophenolate; combination therapy improved results. In January 1989, Papworth celebrated ten years of heart transplantation; surgeons there were now operating on 120 people a year and following up 260 survivors. The number of heart transplants carried out annually grew, as new centres were opened and the number of organs available for transplantation increased, assisted by publicity, including a special edition of BBC TV’s ‘That’s Life’. However, a shortage of organs for transplantation meant that many who might benefit did not survive long enough to do so. Pigs were bred with genetic modifications to make them more suitable as a source of organs for transplantation, although some saw the risk that such organs might transmit retrovirus infection to humans.112 Liver transplantation became more common, and techniques were developed that enabled the liver to be divided to transplant into two recipients. Small bowel transplantation had been attempted in the 1960s and 1970s without much success. The problems of long-term intravenous feeding, and better immunosuppressive drugs, led to further attempts. The success rate improved enough for it to become an option for the treatment of end-stage intestinal failure, mainly patients with anatomical abnormalities or intractable functional disorders. The survival rate was poorer than for many other transplant procedures – roughly two-thirds a year after operation, and a third after three years.113
Renal replacement therapy
The number of patients receiving renal replacement therapy rose steadily to about 65 per million, but a report prepared for the Department of Health in 1996 said that all patients with renal failure up to the age of 80 years should automatically be offered treatment, and those over 80 should be carefully considered for it. The target was now 75–80 new patients annually per million and Britain continued to lag behind the rest of Europe in the numbers under care.114 Antony Wing argued for more units and better accessibility, for easy geographic access affected whether patients received the care that they needed. Nurse-practitioners played a key role in home supervision of the patients, who were increasingly elderly. The introduction of the Tenckhoff catheter made continuous ambulatory peritoneal dialysis (CAPD) easier because, with careful technique, it allowed CAPD to be continued for several years. About a quarter of patients receiving renal dialysis continued to feel unwell because of anaemia caused by deficiency of erythropoietin, a hormone controlling red blood cell formation. A successful transplant solved the problem because the new kidney would produce this hormone, but not all patients could receive a transplant. Recombinant human erythropoietin became available but a year’s treatment cost £5,000 – yet another call on health service funds that might be better spent on basic treatment, more transplants and patients undergoing dialysis.115
Neurology and neurosurgery
Neurology seemed to have more than its share of genetically determined disease, for example, Friedreich’s ataxia and Huntington’s chorea. Advances in genetics stimulated a better understanding of the mechanisms of neurological disease. In the rare diseases where there was (muscular) weakness because of the failure of transmission of nerve impulses to muscles, many of which were related to the immune system, the exact site of the problem could often be determined and the biochemical mechanism understood. Gene mapping made it possible to identify missing genes in muscular dystrophy, opening up the possibility of antenatal diagnosis and selective abortion of those males affected, and the assessment of embryos before implantation. Such expertise was available in only a few places – large units with good laboratory facilities.
New imaging techniques improved the understanding of metabolic processes in health and sickness, in brain tumours, stroke and Alzheimer’s disease. Positron emission tomography, in which glucose labelled with radioactive tracers was injected, made it possible to study biochemical activity in the brain. The introduction of virtual reality techniques improved the accuracy of brain surgery. New anticonvulsant drugs, such as lamotrigine, were a major advance in the management of epilepsy. Interferon-beta was used in an attempt to modify the immune process in multiple sclerosis, and to reduce the frequency of relapse. The management of chronic pain steadily improved, and the intensive care of patients with head injury was better. Imaging techniques and interventional radiology radically altered the treatment of cerebral aneurysms. New forms of drug therapy helped patients with incontinence and sexual dysfunction. Improved rehabilitation techniques underlined the fact that, although many neurological diseases were incurable, none was untreatable. Specialist teams of nurses, physiotherapists and social workers improved the care of people disabled by neurological disease, in hospital and in the community. Patient support groups and charities concerned with diseases such as motor neurone disease and multiple sclerosis provided physical and emotional support to patients, raised money for research and put pressure on government to improve public accessibility for people with physical disabilities.
Ear, nose and throat (ENT) surgery
The treatment of sensory deafness continued to improve as better multi-channel cochlear implants, commercially produced on a worldwide basis, became available. The earliest implants in the UK were funded by charities but, in March 1989, Graham Fraser, of University College Hospital (UCH), took one of his patients to meet MPs at the House of Commons. So impressed were they that David Mellor, then Minister for Health, obtained £3 million to establish six cochlear implant centres. By 1996, 800 adults and 600 children had been implanted in a continuing programme.
Sophisticated imaging and fibreoptic instruments, initially introduced in Austria and Germany, transformed the diagnosis of nasal and sinus disease that could now be assessed, and sometimes treated, in the outpatient department. Like other forms of minimal access surgery, the technique suddenly became popular, but the potential for damage to surrounding structures, the optic nerve and the brain, was high.
Ophthalmology
The use of lasers of different types was increasing. Treatment with them was quick, relatively painless, and some techniques could only be performed by laser. Photocoagulation was used for treating proliferative retinopathy (in which there was abnormal development of retinal blood vessels), diabetic retinal disease, macular degeneration (where there was damage to the area of the retina providing the most detailed images), retinal holes and chronic open-angle glaucoma.116 A new method of treating short sight was introduced into the UK in 1989, modifying the refractive power of the cornea by reshaping the central area with excimer laser keratectomy.
Cataract surgery was ideally suited to day care, 90 per cent of patients being treated in this way. Improved techniques and equipment continued to reduce the period of visual rehabilitation after cataract surgery to about a week. The new methods involved the use of costly equipment, such as the phacoemulsifier, used to break up the opaque lens before its removal and replacement. The use of self-sealing, sutureless wounds and foldable intraocular lens implants that could be introduced through a smaller incision made for an easier post-operative period.117
Cancer
The ageing population was inevitably leading to an increase in the number of people developing cancer. A major problem was undetected spread at the time of first treatment, such that the aim was often cancer control rather than cancer cure.118 The hope was that molecular genetics would help to identify those at risk, assist early diagnosis and provide new forms of treatment. Cancer of the lung remained the commonest form, with 40,000 new cases in the UK each year. Whereas survival rates for several forms of cancer had improved over the previous 20 years, the advances had been modest because the great improvements had occurred in rarer cancers, accounting for less than 10 per cent of the total. In surgery, the trend was towards operations that were less extensive and destructive. Radiotherapy remained a major form of treatment, used for about half the 200,000 people developing cancer each year.119 It could be delivered ever more accurately using the new scanning techniques, having a potentially curative role for about two-thirds of patients and a palliative role in the remainder. Chemotherapy was increasingly used, opening the possibility that new drugs might, in time, become first-line treatment in common cancers. The appropriate balance of chemotherapy, surgery and radiotherapy was the centre of clinical trials. Newer drug regimens were expensive; intravenous immunoglobulin for chronic lymphatic leukaemia was perhaps the most costly in terms of life saved. Several new anti-cancer agents were promising, including taxol (shown to extend life by an average of a year in patients with ovarian cancer), taxotere and gemcitabine. Bone marrow toxicity from curative chemotherapy remained a problem, but could be reduced by using growth factors for mature blood cells, both red and white. Granulocyte, macrophage and red blood cell stimulating factors became available, as new technology based on genetic engineering moved from the laboratory to the bedside.120
Breast cancer had traditionally been regarded as a surgical disease, chemotherapy being reserved for treating locally advanced primary disease and secondary spread. Recognition that spread often occurred early led to more conservative surgery.121 Radiotherapy followed by limited surgery gave survival comparable to that of radical mastectomy. Several years of tamoxifen significantly prolonged survival and combinations of chemotherapeutic agents gave good results, although it remained uncertain whether chemotherapy could completely replace surgery. In 1989, a study of chemotherapy as the initial treatment for smaller and operable tumours began, to reduce their size and make surgery easier.
The national breast cancer screening programme for women aged 50–64 years, recommended in the Forrest Report,122 started in 1988. It replaced the earlier recall system and aimed to screen that age group at least every five years. A major investment in equipment was required and the scheme used the lists of patients registered with GPs (the family health services register) to identify those to be called. Roughly a million women a year were screened. To begin with, there were probably too many biopsies, creating unnecessary anxiety in many women. The detection rate was about 6 per 1,000 and the apparent incidence of breast cancer rose 25 per cent in the age group being screened. There was, however, a reduction in the total mortality among women aged 55–69, but this might have been due either to screening or to the widespread use of tamoxifen for proven cases over the same period.123 Some questioned the programme, as it appeared to bring about only a relatively small reduction in mortality, and at a substantial cost. Although the early trials had reported a 30 per cent relative reduction in mortality in women over 50 years of age, subsequent ones showed less benefit. Critics of the programme suggested that £27 million was being spent on a programme that might be saving few lives and engendering needless anxiety among many women.124 Assessment of the programme for cancer of the cervix was affected by an apparent reduction in the incidence of the disease. The benefits of screening for cancer of the colon by a single fibreoptic examination at the age of 60 were examined.
In the late 1980s and early 1990s, a number of hospitals discovered that there had been repetitive errors in planning radiotherapy. In Exeter (1989) it was found that, over five years, 260 patients had been given too high a radiation dose; in Stoke (1990) 1,000 patients had been given a 25 per cent under-dose; in Cambridge (1995) 25 patients had been treated incorrectly. There were also diagnostic errors; over several years, the bone tumour service in Birmingham had treated some patients unnecessarily and wrongly reassured others that tumours were benign. More than 1,000 women were recalled for cervical smears when it was found that a nurse had used the wrong technique to take them.
It was increasingly recognised that patients with cancer had better outcomes if cared for in hospitals treating many patients, or if they were part of a clinical trial. The establishment of the UK children’s cancer study group in 1977 led to greater use of paediatric oncology centres, where those with leukaemia, retinoblastoma and Wilms’ tumour of the kidney had better outcomes. In some of the rarer adult cancers there was also evidence of better survival rates where specialised care was available and the caseload of a unit was high.125 In October 1993, Kenneth Calman, the CMO, appointed an Expert Advisory Group on Cancer. The group recommended that everyone should have access to a uniformly high quality of care to ensure the maximum cure rates and the best quality of life. It proposed that the service should be structured at three levels. Primary care teams were the focus of care. Designated cancer units would be created in many – but not all – district hospitals. They would have the expertise to treat the more common cancers such as breast, bowel and lung. Designated cancer centres would treat the less common cancers that required particular skills and specialist equipment. They would be highly specialised and serve a population of at least a million. They would deal with children and adolescents, undertake complex procedures, such as bone marrow transplantation, and have sophisticated diagnostic facilities. With a death rate of 150,000 a year from cancer, improvement of survival by 5–10 per cent was considered realistic. The problem was to implement the system in a service that was increasingly fragmented, competitive and dispirited. Clinicians and managers mapped out the role of individual hospitals within region-wide schemes, designating cancer centres and units in 1997.126 Smaller DGHs would be subordinate to the larger centres, where sub-specialisation was possible. So much of cancer therapy was now available on a day basis that GPs had no problem working directly with cancer centres.
Obstetrics and gynaecology
The eleventh report of confidential enquiries into maternal deaths was published in 1988. It showed that, over the previous 40 years, the number of deaths had roughly halved every ten years, from 1,480 in 1955–1957 to 163 in 1982–1984. Pulmonary embolism, diseases of high blood pressure in pregnancy, anaesthetic deaths and amniotic fluid embolism were the four most frequent causes.127 In 1992, the proportion of women in their early 30s bearing children exceeded that of women in their early 20s for the first time, as more women delayed childbearing. Emergency contraception, commonly by using oral contraceptive tablets within 72 hours of intercourse, became available. Deferring pregnancy was a gamble; women in their 30s and 40s took longer to become pregnant, and were at higher risk of fetal abnormality and permanent childlessness.128 Screening for fetal abnormality by amniocentesis and karyotyping of fetal cells became common practice. The association of Down syndrome with changes in placental hormone levels in maternal serum led to calls for screening for all who wanted it. Pre-test counselling was required, so that women understood better the options and the risks. Many abnormalities of the central nervous system, the heart, the kidneys and the gut could be picked up by ultrasound, some leading to termination of pregnancy and others to delivery in a special centre where immediate neonatal surgical care was available. Doppler studies to assess fetal blood flow were found useful in assessing fetuses that were small for the length of pregnancy and were therefore at risk.129
Year | Live births | Home births | % at home
1988 | 654,363 | 6,084 | 0.9
1989 | 649,357 | 6,560 | 1.0
1990 | 666,920 | 6,929 | 1.0
1991 | 660,806 | 7,398 | 1.1
1992 | 651,784 | 8,704 | 1.3
1993 | 636,473 | 9,900 | 1.6
1994 | 628,956 | 11,168 | 1.8
1995 | 613,257 | 11,732 | 1.9
Source: National Statistical Office 1996.
Childbirth had become increasingly professionalised, and women felt they had decreasing influence over the situation. To mitigate the problem, some hospitals developed ‘birth rooms’ that were more domestic in nature. Midwives began to reassert their traditional role as an alternative to the high-tech environment which, though lifesaving on occasion, had less to offer the average woman.
In 1991/92 the Health Select Committee produced a landmark report, reversing 20 years of medical advice that all births should take place in hospital, concluding that this policy could not be justified on the grounds of safety. It was well crafted, comprehensive and controversial. It was ‘pro-midwife’, stating that women had a strong desire for continuity of care throughout pregnancy, and midwives were seen as best placed for this.131 The Royal College of Obstetricians and Gynaecologists opposed the report, and there was no immediate change in government policy, but a small expert group was appointed. Its report appeared in 1993 as Changing childbirth.132 Half of the expert group were consumers rather than professionals, and it was chaired by a Minister, Baroness Julia Cumberlege. This was unusual, for it made it less easy for government to walk away from the findings if they proved inconvenient. It was concluded that women should be given greater choice of maternity care, for example, about the place of birth and the type of carer. The aim should be to create ‘women-centred’ services, asking women what they wanted, testing satisfaction and monitoring clinical results. The midwifery profession welcomed the report and its principles were incorporated into later extensions to the Patients’ Charter. The medical profession was less enthusiastic, but government accepted that women should have a greater say in maternity care. Midwives took greater responsibility for the management of normal labour, and consultants, who needed to spend much time with high-risk cases and on patient counselling, might welcome less involvement when pregnancy was proceeding smoothly. There was a slight increase in home delivery, but even though observational studies supported the view that home could be safe if women were well selected, midwives found it costly to obtain cover against litigation.133 There had never been good evidence of the effectiveness of frequent and regular antenatal visits. An assessment of a shorter six or seven visit schedule, in place of the traditional 13 check-ups, showed the outcome to be much the same. Those attending specialist clinics received more scans and day admissions, without demonstrable clinical benefit.134
Infertility treatment
In 1990 the Human Fertilisation and Embryology Act was passed, in line with recommendations of the Committee of Inquiry chaired by Dame Mary Warnock in 1984.135 The Human Fertilisation and Embryology Authority (HFEA) was established to inspect and license the hundred or more centres carrying out any infertility treatment involving the use of donated eggs or sperm, treatment that involved the creation or use of embryos outside the body, and research on embryos. The Authority maintained a register of all such treatments and the success rates of individual clinics. It established a code of practice and guidelines, for example, on surrogacy, sex selection, research on fetal tissue and the age limits of those donating gametes.136 The number of procedures undertaken rose steadily from 1985 onwards and the success rate reached roughly 20 per cent per treatment cycle; increasingly, the HFEA provided information to the public about donor insemination and in-vitro fertilisation. Another technique, the stimulation of ovulation by drugs such as clomiphene, became more common, for it was easier and cheaper. It was, however, more prone to result in multiple births. Taken together, by 1996 over 100,000 women a year were seeking help. Properly supervised treatment, following protocols and carrying out the necessary range of tests beforehand, took up an increasing amount of clinic time. Most multiple births were now associated with fertility treatment; the number of triplets trebled. Inevitably many were premature, throwing strain and substantial costs on neonatal units.
Abortion
Of roughly 800,000 conceptions a year in English women, some 150,000 were terminated under the Abortion Act 1967, mostly at less than 13 weeks’ gestation. One pregnancy in three ended in termination in some inner-city districts. Abortion services were sometimes contracted out to the private sector, relieving the strain on NHS gynaecological units. In 1990, the limit for medical termination of pregnancy was reduced from 28 to 24 weeks in line with prevailing clinical opinion. A new ground for abortion – the risk of grave permanent injury – was established and, for this, there was no time limit. The law was clarified to permit selective reduction of multiple pregnancies on the same grounds as an abortion. Initially described in 1978, the procedure was mainly used to terminate an abnormal fetus. However, it was increasingly used when there were many babies following the use of fertility drugs, to reduce the risk of premature delivery and increase the chance of survival of the remaining fetuses.137 Drugs established a place in the management of induced abortion. Mifepristone, a steroid compound that blocked the action of progesterone required for the maintenance of pregnancy in its early stages, followed by vaginal administration of a prostaglandin, was shown to be effective in producing abortion in early pregnancy but did not achieve the acceptance initially predicted.
Hormone replacement therapy (HRT)
Long-term HRT was increasingly promoted as effective in reducing the risk of osteoporosis and cardiovascular disease in post-menopausal women, and was available in a variety of forms – patches, creams and implants.138 The benefits were thought to be related to the duration of use and to outweigh risks such as an increased incidence of breast cancer and a higher risk of deep vein thrombosis or pulmonary embolism.139 By the early 1990s, about one in ten women in the appropriate age group was using HRT, and among women doctors the proportion was nearer a half.140 The rate of use steadily increased, and roughly half of those starting HRT remained on the therapy for many years. Because of uncertainty about the benefits and disadvantages of long-term use, the Medical Research Council (MRC) began an international controlled trial in 1996.141
Paediatrics
Neonatal paediatrics was ever more effective in keeping small premature infants alive, thereby creating ethical and financial problems. Respiratory difficulties were now treated more successfully using surfactant and, in cases of acute heart or respiratory failure, by oxygenation using a heart-lung machine (extracorporeal membrane oxygenation). The work of Ann Greenough in Cambridge led to the careful application of physiological principles to the ventilation of small babies, and reduced the risk of pneumothorax. It was known that the smaller the child, the higher the risk of long-term disabilities, although in the better units the quality of survival was often good. Just how small and premature should a baby be before it was deemed inappropriate to use high technology to preserve life? At 28 weeks the outcome was often excellent; at 23 weeks survival was rare and severe abnormalities common. While better neonatal care had increased the survival of extremely immature babies, a third of those weighing less than 1,700 g at birth had retinopathy of prematurity, the cause of which was not fully understood.142 Current practice was to attempt to save premature infants of 25–26 weeks’ maturity but not those of 22 weeks.143
Child health surveillance was increasingly undertaken by GPs, who received additional money if they were accredited in this field. Immunisation was one of the most cost-effective strategies for reducing mortality and morbidity in children. Minimally invasive investigation revolutionised paediatric diagnosis. Ultrasound – cheap, safe and widely available – had many applications, from congenital dislocation of the hip to heart disease. Endoscopic techniques and MRI also helped. About 80 per cent of children in hospital had diseases with genetic or familial implications. ‘Molecular’ diagnosis was introduced, and gene therapy was increasingly used, as for example in immunodeficiency diseases. Transplants of heart, lung, liver and intestine were now options in specialised units.144 Intrauterine surgery became technically possible in the late 1980s, but remained semi-experimental because of the problems after opening the mother’s uterus and operating on such fragile patients.
Bone marrow transplantation for thalassaemia could now be offered to many patients with severe disease. Oral chelating agents that assisted the excretion of the surplus iron resulting from repeated blood transfusion were also under trial, in the hope that they would render subcutaneous injection obsolete. As in-vitro fertilisation improved, in 1995/96 it became possible to diagnose both thalassaemia and sickle-cell disease before implantation.
One of the most horrific episodes of the decade was the case of Beverley Allitt, a nurse at Grantham and Kesteven Hospital, who was sentenced for murdering four children, attempting to murder another three and causing grievous bodily harm to six others on a paediatric ward in 1991.
Geriatrics
In the early 1960s, it had been shown that many elderly patients in the community were suffering from disorders not reported to or identified by their doctors.145 Screening programmes found problems that might be remediable, for example, with sight, hearing, mobility, social situation and mental state. Several later studies, not all of which concentrated on these simple but important faults, were unable to replicate these findings. The 1990 GP contract required practices to contact those more than 75 years of age to identify their health and social needs. Some GPs found that the health visitors and nurses in the practice could do this effectively, but others disputed whether problems were discovered that were not already recognised.
Seventy per cent of people admitted to an acute medical service were now likely to be over 65 years of age. The guiding principle of acute emergency medical care was now the absence of distinctions based on age, a principle followed to a greater extent in some units than others. Integration of geriatrics into general medical provision was economic and made it easier to develop rotas for junior doctors. Grimley Evans, Professor of Clinical Geratology at Oxford, pointed out that the result of modern aggressive medicine, for example, thrombolytic therapy for heart attacks, might be at least as good in old people as in middle life. There was no rationale for separating old people from the rest of the human race. It was important that they should have equal access to specialist medical care, for example, to cardiology and gastroenterology, as well as to the skills required in the management of disease in old age.146
At the outset the NHS had accepted responsibility for the long-term care of the chronic sick, although the standard of care was often unacceptably low. Over the years the NHS service improved, with an accent on rehabilitation. Categorising people into those needing health provision (which was free) and those requiring social support (which was chargeable) was difficult. Although frail elderly people often required both social support and health care, such responsibilities increasingly passed to the social services and the private sector. New nursing homes opened, sometimes managed by private companies, in which the quality of nursing care could be high. Specialised homes developed to care for people who were mentally infirm. Hospice care for terminal illness was more widely available.
Grimley Evans thought there were three issues to address.147 First, care must be as efficient as possible, for there were intriguing variations in policy and cost; the cost of support in the community might be far higher than residential care, and little was known about the comparative merits. Second, efforts should be made to reduce the need for care in later life. Third, there needed to be agreement about how care was to be paid for. Was it to be by insurance, taxation or the liquidation of personal assets? The public money spent on residential accommodation was rising rapidly because of the ready availability of social security payments. After an Audit Commission report that criticised the spending of money on community care by the NHS, local authorities and the social security services, Norman Fowler commissioned Sir Roy Griffiths to review the way public funds were spent. Griffiths reported in March 1988, and argued that local authorities were well placed to manage care in the community, and should act as arrangers and purchasers but not monopoly suppliers of community care services. Purchasing and provision should be separated. Responsibility for care should be placed as near to the individual as possible and people should have a greater say and a wider choice in what was being done to help them. They should be helped to stay in their homes as long as possible. He recommended a bigger role for GPs in assessment and the notification of needs to local authorities.148 The Conservative government was not enthusiastic about local authorities and feared that spendthrift councils would invest millions in over-staffed unionised institutions. It was 17 months before a decision was reached, the government then accepting most of Griffiths’ recommendations and promising a White Paper and legislation.149 The NHS and Community Care Act (1990) led to a fundamental change in the system of funding residential and nursing homes, transferring the responsibility from the Department of Social Security (DSS) to local authority social services departments. An open-ended and rapidly expanding budget was replaced by a limited one based on individual assessments of need. On the whole, the transfer went smoothly, local authorities developing care managers who could organise support using either public funds or personal contributions. The funds for care were not earmarked and, arguably, they were less than were required. Long-stay care was increasingly phased out of the NHS. By the mid-1990s, only 10 per cent of elderly people receiving long-term residential care did so in NHS facilities. The remaining 90 per cent had to pass a means test to qualify for funding from local authorities, now the principal budget holders for state-financed long-term care. The Department of Health issued guidance about the type of continuing care that the NHS should provide. Largely inpatient-orientated, it identified people whose clinical condition was unstable, who needed complex medical, nursing or other clinical care, frequent and not easily predictable intervention, or regular supervision by health service personnel.150 The boundary was not easy to define and the Commons Select Committee criticised the government for failing to make clear the minimal level of provision people could expect, and from whom.151
Mental illness
In the 1960s the enthusiasm for early discharge and rehabilitation had come from psychiatrists. Then other forces became involved. Some sociologists saw the disabilities of people in hospital as partly a result of incarceration, and civil liberties lobbies sought to restrict or abolish compulsory detention. Health ministers, dismayed by embarrassing hospital scandals, saw abolition of the mental hospitals as a solution, but consistently reminded the NHS that community support must be available. This policy, however, required more money rather than less, because community care was staff-intensive and it was desirable to develop a local service before the asylums were closed. Government did not provide extra funds for double running-costs, and regions could seldom meet the bills. Nevertheless, in the 40 years from 1954 to 1993, the number of hospital beds fell by almost two-thirds to 50,278. Of 130 large mental hospitals open in 1960, 38 had closed by 1993, mostly in the late 1980s and early 1990s. A further 21 had agreed closure dates. Only 14 did not intend to close this century. In those remaining, the number of beds had fallen continuously to an average of 223 per hospital in 1993, a fraction of their former size. The numbers of patients per nurse and per consultant had fallen, at least in part due to the reduction in patient numbers. The average length of stay had fallen from 162 days in 1986 to 76 days in 1993.152
Many features of the services for the mentally ill had improved. Shifting the emphasis from hospital had produced a more humane pattern of care, less disabling for patients and greatly preferred by them. Patients had a greater voice in their care and might be consulted about service plans. Voluntary organisations played a more substantial role, and multi-disciplinary teams were increasingly to be found. However, as money was needed for the new community health teams, beds were sometimes closed before alternative facilities were available. There could be delays for emergency admission, and people with less urgent problems might not be admitted at all. Patients sometimes remained in hospital long after they were well enough for discharge, because there were insufficient residential places in the community. Particularly in the cities, the full range of community services was seldom available, certainly not 24 hours a day.
National policy for mental health was restated in the context of The health of the nation, and there were subtle changes.153 Placing acute local units on DGH sites, which ‘might have poor access and be over-large’, was losing its lustre. The methods of care were changing. Better drugs were available and new methods of psycho-social intervention were being used. Most people could be treated on an outpatient basis, but those who were admitted now seemed to be more disturbed, aggressive and sometimes violent. Mental illness services were increasingly managed from a community base. Attempts were made to develop services locally, providing acute beds, beds for patients likely to need sanctuary or long-term rehabilitation, and specialised facilities for adolescents and for people needing to be housed under secure conditions. In the place of the old asylum or the more recent DGH wing, small residential units with 24-hour nursing, or intensive home care, were encouraged as effective and cost-effective. The aim was a care programme for each patient, with assessment, a care plan and a key worker. Social services departments co-operated in the provision of facilities and key workers, supported by a range of housing and hostel accommodation.
Theory and practice did not always coincide. More dispersed services carried the risk that some individuals with mental health problems could be lost to the system, become homeless, or end up in prison. The disabilities of chronic schizophrenics did not melt away when the hospital gates closed behind them. Living a chaotic life and rejecting care, they required close supervision but did not always receive it. Sometimes because of shortage of accommodation they would be caught in a game of ‘pass the parcel’ from one agency to another.154 Consultants who disagreed over the diagnosis and placement might dispute clinical responsibility. Many areas had no system of monitoring long-term care. Sometimes the private sector was used for secure accommodation, or for the management of addiction or behavioural problems.155 Increasingly public attention was focused on the plight of former psychiatric inpatients adrift in an uncaring, uncomprehending society, and the burdens imposed on families.156 In 1993, Ben Silcock, a diagnosed schizophrenic, climbed into the lion enclosure at London Zoo and was badly mauled. Georgina Robinson, an occupational therapist, was killed by a resident in a mental health centre. An inquiry led by Sir Louis Blom-Cooper demanded fundamental changes in the 1983 Mental Health Act to end ‘the chaos of community care’, so authorities would have to give a precise prescription of where patients would live and the treatment they would receive, by compulsion if necessary. “Everyone knows what has happened,” said The Independent. “Hundreds of ex-patients have lost touch with the agencies involved in their care and now have miserable lives. At worst patients have become a danger to themselves and others.”157 Professionals were expected to make a sophisticated judgement about the potential risk patients presented to themselves and others; how were the rights of patients to maximum liberty and greatest chance of improvement to be combined with the protection of the public from any possibility of harm?158 The Mental Health (Patients in the Community) Act 1995 established the concept of supervised treatment in the community, and gave the supervisor authority to take and convey patients to hospital if it seemed desirable.
Some psychiatrists had always been convinced that patients with schizophrenia were often discharged prematurely and excessive throughput of cases was not conducive to good care.159 The Clinical Standards Advisory Group (CSAG), a source of advice to ministers, reported that the quality of care for schizophrenics was unsatisfactory in over half the districts, because of low morale, poor communication between health and social services, and lack of strong local leadership, particularly from psychiatrists. It warned that the movement of community psychiatric nurses away from consultant-led teams into primary care teams might divert nursing care away from the most ill and vulnerable.160 The Royal College of Psychiatrists, asked to report on 39 killings and 240 suicides by mentally ill people over the previous three years, concluded that patients’ refusal to comply with treatment was often responsible, but inadequate use of care plans by mental health staff was widespread. The recommendations of previous inquiries had seldom been acted on. Confusion over professional responsibilities, communication failures, lack of face-to-face contact between clinicians and patients, and insufficient use of legal powers were all to be found.161 Following the NHS reforms, a comparatively simple system in which central government and the NHS had been responsible for funding and operating local services – mainly hospital based, with a limited amount of community support from local authorities – was replaced by a system that might be more community orientated, but which carried a danger of fragmenting responsibility and funding.162 Government, recognising the problems, published a Green Paper on mental health, with an emphasis on delivering a more reliable and focused spectrum of care at a local level, including emergency access facilities.
Counselling
The shift in the focal point of psychiatry from hospital to the community was accompanied by a growth in counselling. As organised religion declined, GPs had increasingly been the source of solace. Increasingly, some displaced this function onto the community psychiatric nurses and employed counsellors who worked with their practices. Crisis services with out-of-hours help lines developed to provide immediate professional support. Adults who had been victims of sexual abuse asked for treatment. Debriefing after traumatic and stressful experiences became commonplace; a psychoanalyst and biographer of Jung, whose car was robbed, was offered counselling by the police for his traumatic experience. Major incidents, deaths after road accidents or football stadium disasters, would lead to the influx of a team of counsellors to care for the near-victims, relief workers and even those who were mere spectators. The effectiveness of counselling was challenged, for though it met real and symbolic needs, it was costly, and it was uncertain what was achieved.163
General practice and primary health care
A ‘primary care-led’ NHS
Before 1948, GPs were central to health services, a position they lost with the growth of specialist medicine. The fifth decade saw a return of their power that few would have predicted. GPs were given financial incentives to improve standards. Primary health care teams grew larger and were increasingly well housed. The number of GPs in England rose by 10 per cent between 1985 and 1995, and 31 per cent were female. Average list sizes fell. Nearly half of GPs were in larger partnerships of four to six doctors. The distribution of general practitioners, evened out by the Medical Practices Committee, was remarkably uniform, but doctors who had qualified outside the UK were to be found predominantly in the old industrial areas, where single-handed practice remained more common. Management was given powerful levers to encourage better staffing, premises and cost-effective local services. Computerisation of many FHSA administrative activities was complete, and the register of patients was used for clinical activities in health promotion and cervical cytology.
Primary care had four main tasks: the care of acute illness, the management of chronic disease, health promotion and, increasingly, organisational matters. Patients’ problems changed only slowly. Every ten years the morbidity survey was repeated, to coincide with the census. For the first time, between September 1991 and August 1992, statistics were collected from spotter practices using practice computer systems.164 Seventy-eight per cent of people were found to have consulted their GP at least once. As ever, the commonest reasons were respiratory diseases (31 per cent), nervous system disorders including ear problems (17 per cent) and musculo-skeletal problems such as arthritis (17 per cent). Patients expected longer consultations. More consulted for preventive health care, immunisation, contraception, screening and advice than for any other single disease grouping (33 per cent). Traditionally it was believed that GPs treated people without bothering too much about scientific medicine. One practice analysed its work and showed that the treatment of four patients out of five reflected the findings of good clinical trials.165 The GPs’ contract was reviewed, negotiated, modified and renegotiated repeatedly throughout the decade. A White Paper in 1987 laid out ministers’ goals.166 Government was becoming convinced that firm negotiation would be necessary if they were to be achieved. Meetings with the profession began in 1988. Quality was to be raised through competition and financial incentives; for example, target levels would be set for immunisation and cervical cytology. Patients should be given better information about services. GPs should receive more of their pay from capitation, health promotion would become an explicit part of the contract, and practice in deprived areas would be assisted by extra payments based on Brian Jarman’s criteria.
Whereas the changes of 1965 had altered the structure of practice, those of 1989/90 were more concerned with the process of care. Few were based on firm scientific evidence of benefit to patients, but there was nothing new in this. The ideas came from many sources, including the Royal College of General Practitioners (RCGP), the work of Brian Jarman, and Julian Tudor Hart’s views about the anticipation of the problems of patients and the long-term management of chronic disease.
The changes affecting general practice
- 1990 contract
- Financial incentives for health promotion
- Greater role for nurses
- Computerisation
- Stronger management
- Fundholding and commissioning
- Medical audit
- Patients’ charter
- Shortage of doctors
- Two White Papers (1996).
Negotiation of the 1990 GP contract
It was legal for the Secretary of State to decide the terms on which family doctors worked, but, in law, there had to be a clear attempt to negotiate alteration. The profession liked the status quo while the government wished for change. There being little spare money, its distribution would alter. Some payments seemed outdated. Why should GPs be paid for group practice or vocational training when this was near-universal, or for seniority as opposed to quality? The GPs had rejected the good practice allowance. The Department of Health sought the same end in a different way, using incentives to encourage activities that were a proxy for quality. Because of the need to define the activities, the proposals seemed mechanistic. There was no new money to ease the introduction of performance-related pay. The package on offer meant more work for the same money; better organised practices would gain and others would lose. Deprived area payments would help those in the inner cities but there would be less for GPs elsewhere. Seniority awards, which went to the older and most powerful members of the profession, were at risk. The deal was hard for the profession’s leaders to accept, and it was unusual for government to lay down so precisely what doctors should do in clinical terms, as in the assessment of those over 75 years of age.
After hours of negotiation, the key differences were identified and the General Medical Services Committee (GMSC) met the departmental team for a weekend at Selsdon Park. In 1952 and 1965, Ministers, not officials, had led the Department’s team, and agreement was helped by substantial new money. Only at the end of the negotiations in May 1989 did the Secretary of State, Kenneth Clarke, meet the profession’s negotiators. During a ten-hour meeting, both sides made concessions, seniority awards were reprieved, there was agreement, and both sides celebrated. The secrecy that had made negotiation easy made acceptance by the rank-and-file difficult. GPs angrily rejected the package, one saying that the remaining problems could be solved by “hanging the profession’s negotiators”. Kenneth Clarke implemented the package. GPs had not believed that their contract could be altered without agreement and had been proved wrong. In 1966 the profession’s negotiators held a strong hand. General practice had been deteriorating and was widely regarded as second rate, and morale was low. GPs were leaving practice, Kenneth Robinson (the Minister) was the son of a GP, and Labour, just re-elected, saw the NHS as its political baby. In 1990, GPs were well motivated, worked in premises provided largely at public expense, had no difficulty in recruiting colleagues, and public attitudes to organised industrial action had changed. GPs faced a strong government determined on consumer-orientated reform, and their negotiators held but modest hands.167
The aftermath
Those with a clearer view of the future felt that the concurrent NHS reforms would make more fundamental contractual alterations necessary. In spite of all the anger, the 1990 contract did not represent a fundamental break with the past. Its outcome was not always as envisaged. Health promotion tended to be segregated in special clinics, rather than being incorporated into normal consultations. It was hard to convert the rhetoric of health promotion into contractual language or guidance to GPs about what they should do. Neither the GMSC nor the Department of Health had understood how effective financial incentives would prove. GPs appointed many more nurses, increased their minor surgery, organised their practices better, installed more computers, and achieved higher rates of immunisation and cervical cancer screening.168 Julian Tudor Hart himself conceded that the new contract had reached parts of the profession other contracts had not reached. First the front runners and then other GPs improved their staffing, organisation, equipment, and clinical and educational activities.169 The Medical Practitioners’ Union continued to argue that the GP’s future lay as a salaried public servant meeting the individual and collective needs of a geographically defined population.170 Younger doctors often agreed. GPs became willing to talk about contractual modifications where improvement was possible, for example, health promotion. Half of new GPs were now women and many wished to alter their commitments from time to time. The 1990 contract encouraged part-time work and job-sharing; the previous system was better suited to a male profession working full time. Some doctors nearing retirement also wanted to reduce their work. The number of part-time GPs rose, and the number still working over the age of 60 fell.171
Morale in general practice was poor, fewer doctors sought vocational training, and GPs increasingly rebelled against their 24-hour responsibility and night calls. They felt that the public, increasingly accustomed to consumerism and 24-hour services from many organisations, wanted primary health care to be available on the same basis. Ian Bogle replaced Michael Wilson as Chairman of the GMSC and tried to establish better relationships between government and the profession. Contractual changes were made that gave GPs wider discretion on whether and where an out-of-hours consultation should occur. As a result, there was increasing emphasis on telephone advice and the development of out-of-hours ‘co-operatives’. Co-operatives often established primary care centres, to which patients could be invited as an alternative to a home visit. Some GPs were prepared to consider radical revision of their contract, a salaried service or the replacement of a national contract by a patchwork of national and local deals giving GPs more choice over their working arrangements and workload. Increasingly, GPs favoured a contract for ‘core services’ to define what should be done for the money they received. However, if the boundary was drawn too wide, they might be asked to do too much; and if too narrow, it opened the path to commercial contracting of some activities to other professional groups. Government was also attracted by further contractual changes and, in 1994, began to foster the idea of an NHS led by primary care.
Following a further review, a more flexible framework was proposed, with a shift towards primary care and a movement of resources to accompany this.172 A White Paper, Choice and opportunity, was published in October 1996.173 At its core was a proposal to deregulate general practice by allowing schemes that would pilot different forms of contract. If these were successful, they could be made permanent. New options included: salaried doctors within partnerships; practice-based contracts that could embrace non-medical professionals, including nurses, therapists and managers; a single budget for General Medical Services, hospital and community health services and prescribing (that would open the path for cash-limiting primary health care); and NHS contracts for primary care with bodies such as NHS community trusts. They opened a pathway to flexibility in the methods used to provide primary care, to the anxiety of the GPs.174 The NHS (Primary Care) Bill was introduced into Parliament. In December 1996, a further White Paper, Primary care: delivering the future, set out a raft of 70 initiatives of a practical nature to complement the legislation. The proposals were broadly welcomed by the GMSC and the Royal College of Nursing (RCN), and covered education, research, clinical audit, resource distribution, the contractual options and problems of recruitment.175 The 1997 pay review, however, gave GPs less than they wished. Some of them argued that GPs should no longer negotiate for money alongside consultants, through the British Medical Association (BMA), but should go it alone. They sought a review of their contracts and the system of setting their pay.176
Practice premises
Stretched to bursting point by new staff and services, and packed to the rafters with paperwork and computers, the GPs’ premises had been operating at full capacity. Spending on premises had increased only modestly; the cost-rent scheme had done wonders after the 1965 GPs’ charter but was not appropriate to the changes now in progress. GPs who became fundholders after the NHS reforms often applied savings to the improvement of their premises. The movement of services from secondary to primary care led to associations of GPs, FHSAs and community health trusts. Schemes for new-style ‘primary care resource centres’, combining a range of services, were rapidly put together, and new government proposals sought to encourage this. In a way, the 1948 vision of the health centres was being reinvented.177
Nurse-practitioners and practice nurses
Practice nurses, to their surprise, were beneficiaries. GPs found additional and congenial assistance with a rising workload and had no difficulty in recruiting nurses keen to develop their careers and popular with patients.178 They were recruited in their thousands at great cost to a rather surprised Treasury that had to pick up the bill. From fewer than 2,000 whole-time equivalents in 1984, the number of practice nurses rose to more than 9,000 in 1994. As their numbers grew, the range of their work increased. Nurses took over health promotion; although health visitors had always worked in this field, there were not enough of them, their employers restricted the work they did, and mothers and children came first. Practice nurses were subsequently involved in traditionally medical areas such as the care of chronic disease, the management of diabetes, asthma and high blood pressure.179 Assessment of new patients, the follow-up of old ones, history taking and prescriptions were all possible within protocols agreed by the team. Projects assessed the role of practice-based nurses in the treatment of depression and the management of epilepsy. Nurses undertook triage and, with misplaced enthusiasm, a former trust chairman suggested that as GP competence varied, one answer might be to scrap GPs and bring in nurse-practitioners as gatekeepers to hospital services, relocating the GPs to the hospital A&E departments.180
The concept of the ‘nurse-practitioner’ became a semantic battleground.181 There were two options for nurses. One was to continue to do what they were already doing, only more of it and at a higher level of skill and training. Alternatively they might become a first point of contact, assessing patients (making a diagnosis), determining treatment including the drugs to be prescribed, and becoming a member of a medical team with a defined role and accountability within that team.182 Management saw that, as primary health care became central to the NHS and activities were transferred to the community, more staff would be needed. A study by the South Thames Region showed nurse-practitioners to be safe and effective, and in primary care they could work alongside GPs seeing medically unscreened patients. There was tension between the nursing profession’s vision of nurse-practitioners and the natural role they established within primary care, moving out of a nurse-led hierarchy into a multi-disciplinary team.183 British practice nurses and nurse-practitioners came from a variety of backgrounds and new courses were developed. The RCN developed diploma and degree courses for nurse-practitioners but, unlike courses in the USA that concentrated primarily on clinical and medical issues, they were more concerned with the philosophy of nursing in the community than with crisp professional issues of diagnosis, examination and treatment.
The boundaries between the hospital and primary care
Problems in organisations often occur at structural boundaries, and boundaries existed between general practice and the hospital service, and between health and local authority services. There might be delays for outpatient appointments and admissions and a long wait for a hospital report to arrive in the surgery. When patients were discharged there might be difficulty in obtaining continuity of medical and nursing care, and delay before patients from hospital could be transferred to residential care. It became policy to shift the boundary, where possible, towards primary health care.184 General practitioners and community nurses were encouraged to develop new skills and practice-based facilities. Shared care schemes were developed for chronic disease management, paediatrics, mental health and maternity care. Some GPs and specialists experimented with direct booking to surgical waiting lists, avoiding the need for specialist outpatient consultations. Often, however, schemes such as the encouragement of minor surgery in general practice had little effect on the demand for hospital care, because they encouraged patients to come for treatment who otherwise might not have done so. Though never shown to be efficient, hospital-at-home schemes achieved prominence, and sought to avoid hospital admission and facilitate early discharge. Increasingly rapid discharge of patients from acute hospital care for reasons of efficiency meant that, although specialised medical treatment might no longer be needed, nursing care often was. The best way of providing this was a matter of dispute. There might be referral to domiciliary nurses attached to practices (favoured by GPs who knew and trusted their colleagues), or hospital ‘outreach’ nursing (favoured by many consultants who trusted the nurses with whom they regularly worked). Outreach nursing services might care for patients recently discharged from orthopaedic, gynaecological or cancer wards, and provide a 24-hour help line and home visits. The administration of anti-cancer drugs in the home required a level of knowledge possessed by nurses who worked in an oncology unit, but not by district nurses. Sometimes, however, outreach services were established by hospitals to guarantee care. If the hospital wished to discharge patients on a Friday, domiciliary services might not be able to handle a new case alongside their existing work.
The nature of general practice
The RCGP was concerned about the future direction of general practice but was not clear about what it should be.185 GPs differed in their philosophies – some having a corporate and biomedical approach embracing other disciplines, for example, health economics and management science, while others wished to concentrate on personal and continuing care of individuals and families.186 Corporatists, such as Donald Irvine, saw organisation as inevitable and welcome. Doctors should define what they were trying to do with their resources, report their activity and evaluate what they had done. The practice might appoint directors, each with specific responsibilities, and this pattern of practice was early into fundholding. A second group, often with a left-wing outlook, had no objection to corporate organisation if it were community orientated. Practices should be accountable, reporting to their communities, give longer consultation times, look for cases of chronic disease and supervise those they found in special clinics.187 The practice should be a democratic team in which the doctor was not always the key worker. Doctors should be salaried and practices zoned so that each team was responsible for a particular ‘patch’. Epidemiology and the public health should play a greater part.188 Then there were individualists, for example, Iona Heath, a Camden GP, who followed a psycho-social model and thought the movement towards corporate general practice a threat to the nature of the discipline, traditionally committed to the needs of the individual. For them, a dynamic role as an agent of distributive justice held little appeal. GPs who were involved in the wider arena of primary health care, public health and management faced the ethical conflicts and professional tensions of the real world. They did not see that a retreat into an isolated, mystical priestly role was practicable.189
The nature of the consultation had been a major preoccupation of GPs in the 1950s, the Balint years. Small studies subsequently showed that the formality of dress influenced the respect accorded to a doctor, and a directive approach to the consultation, in comparison with a ‘sharing’ style, led to greater patient satisfaction and feeling of having been helped.190 Nicholas Bosanquet studied variations in general practice in urban and rural, affluent and deprived communities.191 He identified ‘innovative’ and ‘traditional’ practices, using as markers features such as the employment of a practice nurse, the use of the cost-rent scheme to improve practice premises, and taking part in vocational training. Innovators took anything on offer. It was they who rebuilt their premises using practice loans, available on excellent terms. Because their premises were good, they developed primary health care teams. Innovators tended to cluster in particular parts of the country, the more rural areas and those undergoing rapid economic development such as the Thames Valley. The GPs in the older industrial areas, often traditionalists, did comparatively little to develop their practices. New methods of working often passed them by, though their work was tough and patients’ needs were pressing.
Bosanquet repeated his study in 1992 and 1993, and found that practices in all areas had responded to the new contractual incentives, and invested heavily in equipment and services. This was particularly so in the more deprived practices, and was reducing inconsistency in standards. He found better equipment and premises, a wider range of services and a substantial increase in nursing and other staff. The pace of improvement was greater between 1985 and 1992 than in the previous 20 years. In 1993, 94 per cent of group practices had a computer. ‘Traditional’ practices had all but disappeared, though the urban and city practices still lagged behind those in rural and suburban areas. Although doctors still opposed the principles of the 1990 contract, many saw it as having improved the quality of the service. General practice was moving towards greater uniformity, a greater workload, less variation in incomes and increased stress.192 Typical of the innovators was Marsh, a family doctor in Stockton on Tees. His practice area spanned both deprived and more salubrious areas. The group in which he worked expanded steadily over the years and his book, Efficient care in general practice, or How to look after even more patients,193 was a 1990s version of Stephen Taylor’s Good general practice. His group deployed the health visitors almost entirely in the depressed streets, for that was where the greatest health gain was to be had, and where immunisation and child health needed most attention. He explored the feasibility of an experienced practice nurse caring for patients with minor illnesses, and found that trained nurses could diagnose and treat a large proportion of patients consulting GPs, provided there was immediate access to a doctor.194
There was increasing recognition, both in Britain and internationally, that part of the move towards accountability was acceptance of reaccreditation.195 Many GPs supported the idea. There was consensus that the process should be professionally led rather than imposed by government, and the GMSC produced draft proposals. These involved peer review and participation in postgraduate education, and mirrored the well-established methods used to select trainers in the vocational training scheme.196
Computers in general practice
Enthusiastic GPs had used computers since the 1970s, but the costs were too high to be borne unaided. Remote batch systems made retrospective analysis of data possible but did not help in day-to-day care, and there were anxieties about confidentiality. The development of personal computers in the late 1980s was crucial, for they became affordable and could be based in the practice. Systems were installed in increasing numbers and many software companies entered the market. It is no coincidence that the success was achieved mainly by small and responsive suppliers often run by former general practitioners. Although often incompatible with each other, all supported a patient register, and most the printing of repeat prescriptions coupled with drug incompatibility tables, call and recall schemes, and clinical information including consultation summaries. As systems became more powerful, full clinical records were increasingly kept on computer. In the hope of gaining commercially useful information, drug companies subsidised the acquisition of practice computers. By 1990, 80 per cent of GPs had a system and the Department of Health began to offer financial support. The installation rate accelerated, stimulated by the new contract that increased the importance of information systems. By 1995, 90 per cent of practices were computerised and systems were regularly used for prescribing, call and recall, and medical audit. Newer systems now provided intercommunication with the health authorities and the local hospital. The national Healthlink scheme grew rapidly, involving the majority of practices that were computerised, enabling doctors to inform authorities of patient registration, to invoice them, and in many practices to receive pathology and radiology results, and waiting list, breast screening and cytology information, and patient discharge letters.
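The call-and-recall schemes mentioned above amounted, in essence, to simple list processing over the practice register. The sketch below is a minimal modern illustration of that idea only; the patient records, field names, age range and recall interval are invented for the example and do not reproduce any actual 1990s GP system or national screening rule.

```python
from datetime import date, timedelta

# Hypothetical register entries; field names and values are illustrative only.
patients = [
    {"id": 1, "sex": "F", "dob": date(1955, 3, 2), "last_smear": date(1988, 6, 1)},
    {"id": 2, "sex": "F", "dob": date(1970, 9, 14), "last_smear": None},
    {"id": 3, "sex": "M", "dob": date(1948, 1, 20), "last_smear": None},
]

def age(dob, today):
    # Whole years completed by 'today'.
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def cytology_recall(register, today, min_age=20, max_age=64, interval_years=5):
    """Return register entries due a cervical cytology recall letter."""
    due = []
    for p in register:
        if p["sex"] != "F" or not (min_age <= age(p["dob"], today) <= max_age):
            continue
        last = p["last_smear"]
        if last is None or (today - last) > timedelta(days=365 * interval_years):
            due.append(p)
    return due

if __name__ == "__main__":
    for p in cytology_recall(patients, date(1992, 1, 1)):
        print("Recall patient", p["id"])
```

The same register-driven pattern, applied to immunisation targets or repeat prescriptions, is what made the 1990 contract's information requirements workable once practices were computerised.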
‘Near-patient’ tests
For many years some laboratory tests had been performed near the patient. Tests had become simpler, and boiling urine in test tubes had been replaced by more convenient paper dip-stick tests for sugar, protein, ketones and blood. Such tests potentially could improve the speed and accuracy of clinical decisions and the reliability of monitoring chronic diseases. Some tests could be used by patients, and tests for pregnancy, blood sugar and cholesterol levels were available over the counter. Technological advances made possible smaller and cheaper desk-top analysers, affordable in general practice, for a wider range of estimations. An alternative was an electronic link with a hospital laboratory to reduce the time before results were available. The arguments for centralisation in a hospital laboratory included better quality control. The argument for peripheralisation was the speed with which information was available that might alter the initial diagnosis and avoid inappropriate prescribing.
Prescribing
The cost of prescribing continued to increase because of the interest in the care of long-term illness, for example, high blood pressure and asthma, and the accent on ‘case-finding’. The rising numbers of elderly people, many of whom were on several different forms of treatment, added to the increase. Because prescribing patterns and costs varied widely, GPs had long been sent information about their prescribing. In 1988 a better system was introduced, Prescribing Analyses and Cost (PACT). It compared each doctor’s costs, broke down prescriptions into six major therapeutic groups (such as the cardiovascular drugs), and showed the percentage of items prescribed generically rather than by a brand name. Later, indicative drug budgets were produced, showing how much each practice would be likely to spend. Practices were encouraged to develop their own formularies of cost-effective drugs, and to prescribe generics. The Audit Commission calculated that about a fifth of the total GP prescribing budget was being wasted in over-prescribing drugs of limited value or expensive ones where cheaper ones were equally effective.197 With an increasing number of treatments available, which was the most cost-effective? Should pharmaceutical companies be required to submit economic analyses in support of requests for the listing of new products so that both comparative effectiveness and costs were considered?
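A PACT report was, at heart, an aggregation over prescription records. The following minimal sketch shows the kind of per-doctor summary described above: total cost, cost by therapeutic group and the percentage of items prescribed generically. The records, prices and grouping are invented for illustration and are not the real PACT categories or data.

```python
from collections import defaultdict

# Illustrative prescription records; names, groups and costs are invented.
prescriptions = [
    {"gp": "Dr A", "group": "cardiovascular", "generic": True,  "cost": 4.20},
    {"gp": "Dr A", "group": "respiratory",    "generic": False, "cost": 11.75},
    {"gp": "Dr B", "group": "cardiovascular", "generic": False, "cost": 9.10},
    {"gp": "Dr B", "group": "cardiovascular", "generic": True,  "cost": 3.85},
]

def pact_style_summary(items):
    """Summarise each GP's prescribing: total cost, cost per therapeutic
    group, and percentage of items prescribed generically."""
    summary = {}
    for rx in items:
        gp = summary.setdefault(rx["gp"], {"total": 0.0, "groups": defaultdict(float),
                                           "items": 0, "generic": 0})
        gp["total"] += rx["cost"]
        gp["groups"][rx["group"]] += rx["cost"]
        gp["items"] += 1
        gp["generic"] += rx["generic"]
    for gp in summary.values():
        gp["generic_pct"] = 100.0 * gp["generic"] / gp["items"]
    return summary

if __name__ == "__main__":
    for name, s in pact_style_summary(prescriptions).items():
        print(name, round(s["total"], 2), dict(s["groups"]),
              f'{s["generic_pct"]:.0f}% generic')
```

Comparing such summaries across doctors and against an indicative budget is what allowed prescribing advisers to identify outliers and encourage generic prescribing.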
The NHS review
Although NHS spending had increased in real terms by a third in the previous ten years, the fifth decade began in financial crisis. Technically the NHS was bankrupt. Regular reports of clinical disasters, bed closures and nursing crises were attributed to shortage of money. The existing system had several advantages, including easy movement of patients between hospitals and good cost control. However, there were few incentives for efficiency and some seemed perverse; extra work did not affect a hospital’s allocation for three years.198 There was widespread debate about the NHS and a Niagara of reports from think tanks. Conservative ones generally suggested that the days of a fully funded health service were, or should be, numbered.199 In November 1987, Mrs Thatcher asked David Willetts and the Centre for Policy Studies to review the NHS and produce proposals. In January 1988, Nigel Lawson, the Chancellor of the Exchequer, told the Prime Minister that, in spite of the pressures on the NHS, he would be reluctant to allow a further significant increase in NHS funding unless the service was reformed, and one could be sure that it would be well spent. Within a few days Mrs Thatcher, interviewed on ‘Panorama’ by David Dimbleby, said that the government would examine the health service. When ready, there would be proposals for consultation, and the review would be far quicker than any Royal Commission.200 Both government supporters and critics were caught by surprise. Whether she was referring to a government review or one by the Centre for Policy Studies was not clear, but within days public reaction made it clear that it would have to be governmental. The Whitehall machine was set in motion and the new contract for GPs was promptly relegated to second place. A Cabinet team led by the Prime Minister herself, operating behind closed doors, considered many aspects of the service. The medical profession, for the first time in the history of the NHS, was excluded from the process. So was virtually everybody else. Public consultation would not have produced a consensus for radical change. An editorial in the BMJ offered advice: a radical solution was required and more money was crucial.201 The best way to get that would be an insurance-based scheme. One civil servant, asked his private advice, said “either leave the NHS alone, because it is too hot to handle, or put in train some small changes that would be progressive in effect. The result might not be visible at the outset, but they would free-up and destabilise the system and allow it to evolve, in time producing changes that would be hard to predict but probably in line with what was required.”
Mrs Thatcher accepted Powell’s analysis that there was a potentially limitless demand for health care if it was provided free at the point of delivery, and that the NHS lacked the right economic signals to respond to the pressures. Had one been starting from scratch, she believed, one would have allowed for a bigger private sector in primary and secondary health care and given closer attention to additional sources of finance. But one was not, and the NHS inspired at least as much affection as exasperation.202 For the first five months, the review was all over the shop. No coherent plan emerged, although there was pressure for greater reliance on private health insurance. The review originally excluded primary care because family practitioner committees (FPCs) had been given independence as recently as 1985. The House of Commons Social Services Committee, reporting on The future of the NHS, dismissed many options that were being canvassed – tax concessions for private care, health maintenance organisations (HMOs) and budgets for GPs. Ian McColl, Professor of Surgery at Guy’s, showed David Willetts the commemorative plaque in the boardroom at Guy’s, recording the last meeting of the board of governors in 1974. Willetts learned to his surprise that the teaching hospitals had lost their boards of governors under the Conservatives in 1974, not Labour in 1948. McColl argued that hospitals should go back to the more self-managed style of years past.
In July 1988, Kenneth Clarke replaced John Moore as Secretary of State, a crucial change. Earlier, he had been involved in the 1974 Reorganisation and he viewed Keith Joseph’s adoption of consensus management as a disaster. Its abolition was his starting point. He rapidly persuaded the Treasury to allocate an additional £1.8 billion to the NHS in England, as well as more money for nurses’ training under Project 2000. He focused the review on the delivery of care, rather than the source of funding. From then on, the key concepts were an internal market and self-governing hospitals, to free hospitals – like schools – from centralised control. Money would follow patients and incentives should encourage efficiency. By October, The Times was reporting that the idea of allowing GPs to hold the purse strings for primary and secondary health care had been revived.203 The family doctor would be an informed purchaser on behalf of the patients.
The White Paper
Key changes as a result of the 1989 reforms
- Regions and districts received funds according to the size of their resident populations, weighted for age and morbidity and for the differences in the cost of providing services. RAWP had almost established equity so it was easier to move from historic allocations to a weighted capitation system.
- The hierarchy of the NHS management was replaced by a ‘local dynamic’, devolving decisions to those closest to the people, and introducing greater local diversity, competition and choice.
- Purchasing and provision were separated. Districts became purchasers, losing their hospital management responsibilities to concentrate on the assessment of needs and commissioning the necessary services.
- Hospitals and community services could apply for self-governing status as NHS trusts (providers).
- GP practices with 11,000 or more patients could apply for their own NHS budgets to cover their staff costs, prescribing, outpatient care, and a defined range of hospital services, largely elective surgery.
- Systems of medical audit were introduced to ensure quality of service.
- Regional, district and family health services authorities were reduced in size and reformed on business lines, with executive and non-executive directors.
Working for patients was published in January 1989.204 It was a challenge to the status quo, the rigidity of organisation to which Enthoven had pointed, and the assumption that the employment of highly trained health professionals would ensure that users got what they wanted.205 Yet the Review accepted many basic principles of the NHS, to the surprise of the left that had predicted a move towards health insurance to provide additional money. The NHS would continue to be funded centrally from taxation, the simplest and cheapest way of raising money. It would remain largely free at the point of usage. There was no suggestion of major organisational change at the top of the management hierarchy. The idea that a major injection of funds was all that was needed was rejected. Instead, productivity would be improved by reforming incentives and management and the introduction of a ‘market’. The purchasing function would be separated from the provision of services. Health authorities would concentrate on the assessment of needs and contract for services; the services would be provided by hospitals and community units. Good performance would be rewarded, for money would follow the patients. It was clear, although not stated, that once contracts were in place, any limitation of services for financial reasons would be laid at the door of the purchaser, and no longer at that of the hospital. It was a model well suited to elective surgery, but less appropriate for elderly people and for psychiatric services. Markets have winners and losers; would the poor, deprived and handicapped be at risk? Just as the USA was considering health care systems more like those in Europe, the UK was moving in the opposite direction. No master plan was provided to guide implementation, even when working papers were published.206 There had not been time to work out the finer details. Many were alarmed by the concentration on ‘how to get there’ without a clear statement of ‘where we want to be’, and the lack of consultation and experiment.
The medical profession attempted, in public and in private, to influence events. They failed, concluding that the ethos of the NHS would be changed irrevocably and the reforms would lead to privatisation and Americanisation of health care. The idea of an increased reliance on market forces, albeit forces that were managed and constrained, was so radical that few, even in the Department of Health, understood or agreed with what was happening.
The structure of the NHS 1991–1996
Implementation and opposition
There was immediate concern in the medical profession and, after four months, the BMA rejected the proposals.207 Its diagnosis was that the system was simply underfunded and the reforms were born amid anger and bitterness.208 Sir Roy Griffiths believed that it would be impossible to implement change in the face of the united opposition of staff. There were calls for evaluation, perhaps along the lines of the Rand studies of pre-paid health care. Ministers steadfastly refused to consider this. It was the opponents of the reforms who were most vociferously demanding evaluation or pilot trials, and pilots would delay the reforms beyond the next election that might well be won by Labour. The NHS itself was a matter of faith and political will, and the USA, where evaluation of change was best developed, did not have a health care system envied by Britain. The King’s Fund planned an assessment, but the fiercely partisan positions suggested that evidence would fuel argument rather than resolve it. In August 1989, the BMA began a forceful poster campaign to keep pressure on MPs during the summer recess. One read ‘What do you call a man who does not take medical advice?’ The answer was apparently ‘Kenneth Clarke’; Clarke said that the correct reply was ‘healthy’. The profession failed to modify the plans, either during the consultation period or during their passage through Parliament.209 Some NHS staff sought a judicial review, which was rejected. Alain Enthoven found much to his liking in the separation of the demand and the supply side of the NHS, making money follow patients, and in greater local delegation. However, he commented on the lack of detail in the proposals and the absence of pilot projects. He was unclear how the proposals would work out in practice, and thought the timetable was amazingly fast, as did the Secretary of State himself.210 Even within the government there was uncertainty about whether the new and untried system would work.211 Much of the opposition to the NHS reforms was a matter of attitude. Those with a collectivist philosophy and a belief in altruistic co-operation could not accept a market-based approach with competitive elements.212
Kenneth Clarke made the running, changing little or nothing as a result of the doctors’ opposition, and demanding a rapid timetable from his officials. Politically he was the right person to argue, explain and defend the policy day after day.213 The electoral timetable meant that the reforms had to be implemented faster than NHS management believed possible. Implementation depended on regional and district managers; loyalty was demanded and dissent was discouraged.214 Legislation passed in 1990. In November of that year, William Waldegrave replaced Kenneth Clarke; less combative, he maintained the momentum in a quieter way. An East Anglian simulation of an internal market, the ‘Rubber Windmill’, ended in chaos and supported the belief that some management of the market was essential. The reforms were implemented in April 1991. Accounts had to balance because over-spending could not be carried forward, leading to a further financial crisis. The new system of contracts had to be in place and there was anxiety that purchasers would make radical changes. Ministers thought that the whole point of the reforms was to increase efficiency, but to minimise crises it was agreed to go slow in the first year, with a ‘smooth takeoff’ and the maintenance of existing patient flows. The Chancellor of the Exchequer allocated an additional 4.5 per cent in real terms to help. The election was fought in April 1992, health being a central issue. The Conservatives won, and the new Secretary of State was Virginia Bottomley.
NHS trusts
Acute hospital trusts, the ‘self-governing hospitals’, aimed to allow managerially élite hospitals substantial freedom. Only if they failed to meet financial targets or service standards, or if corporate governance broke down, would this be curtailed. Few were expected to take the trust route although there had to be enough to demonstrate the advantages; some were turned down because of an inadequate business plan.215 It was hoped that hospital staff would support applications for trust status. In most places they rejected it, and local support ceased to be a precondition. At Guy’s, which had a reputation for resource management that was not entirely justified, the staff were deeply divided. Seen by government as a potential flag-ship of the reforms, Guy’s believed that trust status would ensure that a major building scheme would go ahead, guaranteeing hospital survival. Sadly its finances deteriorated to the hospital’s and Ministers’ embarrassment. Somewhat to people’s surprise, a number of community units saw the possibility of greater independence from hospitals, and applied for trust status. Fifty-seven trusts became operational in the first wave, a further 99 from April 1992 and, after the surprise Conservative victory that month, it became apparent that all units would do so. Between 1991 and 1995, NHS hospitals were progressively transformed into publicly owned self-governing bodies. Management was often strengthened and some problems, for example, the King’s College Hospital A&E department, became easier to resolve.
In place of the traditional authorities, an industrial model of governance was substituted. This was technocratic and, while it might promote efficiency and responsiveness, it also increased insecurity, the authority of the centre and short-term decision-making. Each trust had a board of directors including non-executive ones who brought skills from the business community. Interviews before appointment often had a political flavour. Key interest groups were excluded – the population, the local authorities and the clinicians. There had previously been members who represented the electorate on management bodies; they might have been viewed as a nuisance, but they were in a minority and they added an authority to the decisions that were taken. New chief executives from outside the NHS did not always share the ethos of public service. They often improved the use of resources and challenged established practices, not always correctly. The turnover of chief executives was substantial and expensive; sometimes they and consultants battled for power. On fixed-term contracts, sometimes as short as a year, they had their eyes on immediate problems, rather than the development of long-term collaborative arrangements with others whose assistance was essential to a good service.216
Trusts were able to employ staff, negotiate terms and conditions of service, own and dispose of their assets, retain surpluses, and borrow money from the government and the private sector. They generated their revenue by making contracts with districts, commissioning agencies and GP fundholders. In the early days, trusts were highly visible and much public interest was concentrated on them. The need to balance the books was paramount. Some of the early ‘flagship’ trusts, such as Guy’s and Bradford, faced multi-million pound deficits. To the embarrassment of the government, they made headlines by ward closures and redundancy notices. Their managers, appearing before the Select Committee on Health, made an impressive case that the delivery of patient care was in no way proportional to the number of staff employed.217 It was porters and domestics who had most to fear as jobs-for-life became subject to market testing and private contracting. Anti-waste campaigns had a new ferocity about them. As trusts were unable to guarantee the long-term future of their units, even professional staff, for example, midwives, might be employed on a short-term basis. Were people afraid to speak out because of their jobs?218
Trusts needed good financial information for their business plans and they needed it rapidly, but much of the information required to compare relative costs did not exist; the necessary systems were not in place, even at the resource management sites. Many hospitals had no price list. Block contracts, notional costs and wild price variations were commonplace. It took much work and a long time to sort things out.219 Extra-contractual referrals maintained the GP’s right to send the patient to the most appropriate unit, but generated substantial administrative costs.220 Relationships between the ‘purchaser’ district health authorities and the ‘provider’ trusts were initially tense. The health authorities had to learn to work with the trusts as equals, not subordinates. Over the first few years, there was little change in the pattern of patient flows, perhaps 5–10 per cent. Where changes were made, it was usually to create a local service for patients. District hospitals might provide services previously available only at a regional centre, and purchasers wished to develop these if the price was right. In the case of city hospitals, particularly those in London, peripheral purchasers would do their utmost to restrict central flow in favour of their local hospitals, many of which were new, with young staff and spare capacity. Large hospitals in major centres of population, for example, Stoke and South Tees, increasingly provided specialised services such as cardiac surgery, radiotherapy and urological surgery, stripping resources from more distant university hospitals. Teaching hospital trusts were at a disadvantage because of their high overheads and managerial complexity. Sometimes they treated purchasers with disdain and lost market share. Their countervailing advantage was that a high proportion of their medical and surgical consultants had sub-specialty expertise. This made them the natural place for junior medical training. They argued that only with a high volume of work could optimal care be provided; but that was not necessarily true of more common procedures. Some trusts progressively expanded their work and their catchment, others floundered. Acute trusts sometimes developed outreach services; community trusts looked at hospital-type day care. The borders could blur. Purchasers at first were poor at contracting, and hospitals had the clinical expertise to run rings round them. Contracting slowly became more sophisticated and more firmly based in an assessment of local needs. In 1996 the first trust to be threatened with collapse was Anglian Harbours Trust, when two health authorities planned to withdraw contracts for spending too much on management rather than patient services.
Doctors were now employed by the trust, and not the regional health authorities (RHA), so they began to think in a more corporate and local way. Each trust could define its organisational pattern. Clinical directorates were often established under medical control, on the ‘Johns Hopkins’ model. Decisions could be taken more rapidly, new patterns of staffing could be introduced, and services could be improved without bureaucratic delays. Because their unit budgets were determined by contracts with purchasers, it was easier to persuade consultants to change their patterns of work. Nurses, when they were appointed as directors, were in a dilemma. While they wanted to see the ‘big picture’ and to contribute to strategic planning, the chief executives looked to them primarily to run an effective nursing service and ensure that quality assurance worked. Trusts frequently used their freedom to determine conditions of employment to set high salaries for the chief executives. The Institute of Health Service Management defended these saying “they are certainly not outrageous in comparison with pay in other industries.”221 Ministers set guidelines for management costs and asked managers to live with the same pressure for cost efficiency and good value that they quite properly imposed on the rest of the NHS.222
The need for hospital trusts to generate income led to visible changes. Lilac coloured carpeting and easy chairs, smiling receptionists, a florist’s stall bursting with blooms, a bistro coffee bar and a newsagents would appear. Trusts spent money on glossy pamphlets on their services, and on logos. Leicester’s mental health service trust drew the wrath of the anti-hunt brigade when it used the county symbol of a fox, particularly as the unfortunate animal appeared to have a leg missing. Caring hands, trees of life and groups of happy people were popular images.223 Acute hospital trusts established private patient units to compete with private hospitals. Between 1988 and 1992, income from private units increased by 40 per cent to £157 million and the proportion of the UK private health care market (itself expanding) in NHS hands continued to rise.224 Private hospitals in their turn treated NHS patients referred by fundholders. The boundary between the NHS and private medicine was becoming blurred and the phrase ‘internal market’ seemed increasingly inappropriate.
As community trusts developed, the nature of the services that they provided came to differ from the previous community units. Midwifery was usually hospital based, and community midwives might work for a hospital rather than a community trust. In contrast, services for elderly mentally frail people, and those with mental illness or learning difficulties, were increasingly based in the community and managed by a community trust. Some chief executives thought that community care meant ‘no hospital beds’ and closed these without providing an adequate spectrum of support in the community. Others were innovative, co-operating with voluntary bodies, setting up new projects, and extending their services by developing local teams to provide intensive nursing after hospital discharge or rehabilitation after a stroke. Because their services were delivered in the home or within a neighbourhood, community trusts attempted to develop good relationships with GPs and the local population.
Fundholding
The central idea of fundholding was that, although patients could not be given unlimited money to purchase their own health care, GPs could act as informed purchasers while keeping an eye on priorities. In this way they could be involved in shaping local services. In June 1984, the Office of Health Economics held a meeting to discuss the future of the NHS. One participant argued that salvation could be achieved only through private finance – insurance. Alan Maynard, the York economist, thought the problem was on the supply side and best approached by the introduction of practice budgets.225 If GPs controlled the flow of money, good hospitals would flourish while bad ones would dwindle away. The meeting thought that, over a decade, the concept might be developed into something practical, and it began to be discussed in academic circles.226 When the Green Paper on primary health care was being drafted in 1985, Kenneth Clarke, then Minister, was intrigued by the idea. In the USA some HMOs controlled hospital costs by using primary care physicians; Department of Health officials (including the author), visiting the USA in 1985, saw the advantages of bringing the budget of the hospital and primary care under the same financial umbrella. Centres could compare costs with each other and see which specialists or hospitals appeared to be the best buys.227 Kenneth Clarke’s return to the Department of Health turned budgets into practical politics, but the concept attracted little initial attention.228 Initial papers were written by an economist (Andrew Burchell) and myself, Geoffrey Rivett. The GP press were sceptical, particularly when they were told that, if the idea was poor, it deserved to die, but if it was good and GPs took it up, they – the press – would give it the publicity it needed. Most considered it a wild idea that would collapse and were surprised when the scheme, sometimes wrongly considered an afterthought, came to the forefront of the reforms. A GMSC negotiator said his first reaction was that GPs had three choices: they could join and control their own destiny; they could stay out and let others control them; or they could resign!
Fundholders were allocated money on the basis of their historic expenditure, although some regions ensured that the budgets were generous. They could then use it for practice staff, prescription medicines and hospital services such as laboratory tests, outpatient care and around 110 elective operations, covering the vast majority of elective surgery. Procedures could be purchased from the private sector as well as the NHS. Hospitals could increase their revenue at the margin by serving fundholders’ patients and had an incentive to provide the care GPs wanted. Officials working on the details knew the risks of bankrupting the budget and introduced stop-loss systems. Some in Cabinet argued that efficient doctors should be able to make a personal profit from their management skill. Such incentives had created problems in the USA, and Kenneth Clarke saw the political damage if this untested idea were to misfire. Because it is difficult to define the boundary between emergency and elective care, and between medical and surgical cases, it was argued that the range of services for which fundholders received money should be sufficiently wide to avoid boundary disputes. Ministers were unwilling to go ‘one bridge too far’ by including emergencies and general medicine at the outset. Indeed, even the most enthusiastic fundholders initially believed they had quite enough to do to manage the modest range of services included.
Many in the hospital service believed that GPs were incapable of handling money, even were it desirable, and that fundholding clashed with the district as the purchaser, defining needs and contracting for services for the entire population. Those whose interests would be adversely affected did not mobilise opposition effectively. Some senior department officials were out of sympathy and work was carried out for Ministers in the face of their opposition. A few regions lacked enthusiasm. That there would be GP volunteers was never in doubt. The professional press, predicting total failure, would provide free publicity to any success. First-wave fundholders included some of the cream of the profession. Their prime motivation was the improvement of patient care, they were articulate and were not going to be pushed around by the Department or their professional colleagues. Were a new Minister to be privately sceptical of the scheme, attending a fundholders’ conference would be a conversion experience. Glennerster recorded the reactions of fundholders:
As I sat down and I realised what we were about to do, I thought this is a revolution happening here. No consultant has ever talked to me about what I might think of his service, or any of the general problems we might have in twenty years of professional life.
In April 1991, there were 720 GP fundholders in 306 practices.229 Budgets averaged £1.3 million. Initially practices could join if their lists were over 11,000, a level reduced first to 7,000 and then to 5,000. The expansion was driven by GP demand; it required courage and hard work, and brought professional unpopularity. Few in prominent positions risked participation. It took consultants a year to recognise the extent to which fundholding moved power to family doctors; then they added their voice to the opposition. The services covered were expanded. In 1993, district nursing and health visiting, dietetics and chiropody were included. By 1994, 6 per cent of the total NHS budget, equivalent to £1.8 billion, was being spent by fundholders. Substantial variation existed; 80 per cent of the population was covered in places such as Derbyshire and Bury, but only 4 per cent in Camden and Islington. Inner cities, where the population was mobile and the workload high, were slow to adopt fundholding.230
Fundholders established counselling, physiotherapy and consultant outpatient clinics. They were energetic on behalf of their patients, and had more powerful levers when it came to negotiation. They did not have the ultimate responsibility of maintaining the viability of an essential district hospital, although they shifted patients to other hospitals only as a last resort. Patients were seen sooner in practice-run consultant clinics, the costs were lower because hospital overheads were not incurred, and GP-specialist contact was possible. Hospital management was afraid to lose referrals that outreach clinics could generate, and they rapidly increased in number.231 However, consultants in particular argued that outreach clinics were a poor use of scarce resources, did not provide adequate facilities for investigation, and undermined teaching and the role of the DGH.
Coverage of GP fundholding in England
1991/92 | 7%
1992/93 | 13%
1993/94 | 25%
1994/95 | 35%
1995/96 | 41%
1996/97 | 53%
1997/98 | 59%
Source: Department of Health press releases.
In 1994, Ministers decided to encourage a variety of schemes.232 Pilot trials were launched into total purchasing in which GPs purchased all hospital and community health services. There were other pilots incorporating maternity services (1995) and mental health (1996). From 1996 there was community fundholding including staff, drugs, diagnostic tests and community health services; standard fundholding, covering about 20–30 per cent of the NHS budget for each patient; and total purchasing. Total purchasing practices controlled large sums, £30 million or more. They employed high-quality managers who, like managers in trusts, tried to use money to the best effect and to influence practice thinking. Non-fundholders increasingly formed large consortia, creating an alternative form of GP influence, locality commissioning, viewed with favour by the Labour Party. GPs in commissioning groups had less influence on the spending of money, but large numbers of doctors were involved in both systems and there was increased co-operation between GPs, health authorities and trusts. New kinds of primary care organisation were emerging, fundholding practices, multi-funds, total purchasing projects, GP-led commissioning groups and out-of-hours co-operatives. They broke down the traditional isolation of GPs and created opportunities for increased co-operation. The stimulation of alternative systems, designed for local circumstances and growing in number almost daily, was a major achievement. The proponents of the alternatives agreed that this was right, and that GPs should be at the heart of decisions affecting patient care.233
Most assessments were written from an established political or philosophical position and the evidence on the success of fundholding did not point in a consistent direction.234 There were increased management costs, but studies by the National Audit Office and the OECD were generally positive. Entrepreneurial fundholders achieved much, and there was little doubt that they often obtained better services for their patients. An evaluation by Glennerster concluded that fundholding had a major influence on contracting, quality and value for money. The Audit Commission believed that some fundholders had difficulties in managing a budget, needing help to get the most out of their purchasing decisions. Universal fundholding was never the aim; there was no point in substituting one monopoly system for another. The rapid growth of fundholding left the GMSC in a quandary. An increasing number of GP fundholders believed that they could not trust the leadership to act on their behalf, and a national association of fundholders was established. As a result, the GMSC ultimately found itself arguing with Labour in favour of fundholding, with The Times in support.235 Many GPs were now fundholders, and were determined to keep their own budgets, irrespective of the political party in power. On election, Labour rapidly changed entry to fundholding to a two-yearly cycle, promising equity between fundholders and commissioning groups. But fundholding was now doomed.
Medical audit
While angry interchanges were taking place between government and the medical profession, medical audit was being developed in a calmer way. Doctors were anxious lest audit should become a management tool to coerce them into line. The almost complete absence of fuss was due to the establishment of mutual trust about audit between officials and leaders of the profession. Both believed that good clinical practice and audit could not be imposed, and chose their words carefully to ensure that coercion was never implied. Several features of the Department’s scheme were derived from, but not attributed to, American experience in HMOs such as the Harvard Community Health Plan. The Department of Health knew that medical audit was a branch of quality assurance and should be multi-disciplinary. That was a bridge too far for the medical profession in 1989.236 In general practice, pilot groups were established to test ideas based on continuous quality improvement and to avoid national mistakes. Medical audit advisory groups (MAAGs) were formed, chaired by a doctor who was accountable to management for the proper conduct of audit. Their major task was to educate practices, usually through the appointment of medical or lay advisers and facilitators who visited them.237 Feedback from pilot studies was used to handle anxieties such as confidentiality. Local audit projects were funded and a textbook was written by the profession for the profession.238 In parallel, medical audit systems were developed for hospital services, the Royal Colleges establishing audit units and supporting projects looking at clinical outcomes. There was a strong feeling that clinical audit should be undertaken, that it was an important part of practice, and that it should become part of a wider range of educational, quality assurance and development systems.239 However, as time passed there was disappointment that audit had not somehow delivered. There was not much evidence that audit was improving clinical care, and some disillusionment was beginning to creep in. Perhaps the best way of undertaking it had not yet been found. Part of the problem was that audit seemed to have at least two conflicting aims – the improvement of care by professionals as a result of an assessment of what they were doing, and the need for providers to demonstrate that they were meeting a purchaser’s contractual requirements.
The Patient’s Charter
In July 1991, government introduced the Citizen’s Charter across public and private sectors. Organisations were to state the standards to which they aspired, and measure their performance. The NHS was included in the government-wide programme and the initiative was driven by Brian Edwards, Regional General Manager of the Trent RHA. Standards were set for matters such as the time it took to be seen in an accident and emergency or outpatient department and how long patients should wait for operations. These were measures of the process of health care, rather than its clinical quality. Hospital league tables were developed showing how far individual trusts met charter standards, and hospitals sometimes manipulated their figures. Subsequently the charter was extended into primary health care. The standards imposed were built into the NHS reforms, for they provided a benchmark of performance by which hospitals were judged. Computer systems that identified the length of time patients waited for admission produced information that affected the payments hospitals received under their contracts, and the way consultants ordered their work. The standards initially stated that patients should never wait for more than two years for surgery, a period reduced to 18 months. In July 1995, regions were asked to set a voluntary 12-month limit, which was difficult given the need to spend money on handling emergency admissions and to deal with other priorities such as mental health initiatives. Some managers felt that their future careers depended on a politically motivated imperative.240
The roll-out of the reforms
Anxieties were fanned by the uncertainties of change. Managers needed to adapt. Consultants came to realise that their power base was smaller; there was a trend towards rolling contracts with less security of tenure and a prospect of performance-related pay. A move towards community-based health care reduced the claims of the acute hospital sector on the NHS budget. GP influence on purchasing replaced consultant influence on regional decisions, and consultants’ ‘clinical priorities’ were increasingly distorted by the vagaries of different purchasing authorities and whether a particular patient came from a fundholding practice.241 Posts in public health medicine were reduced as authorities merged and, although cost-effective purchasing seemed to require their expertise, this was a new field and not one where public health was paramount. Nurses found that the reforms further strengthened the position of managers. GPs, angry at their new contract, assumed that the NHS reforms would be equally distasteful. Why should Kenneth Clarke impose a contract they disliked and simultaneously trust them as fundholders with NHS money?
Initially the Labour Party opposed the reforms in their entirety; however, after electoral defeat in 1992, it slowly accepted some of the concepts. Brian Abel-Smith and Howard Glennerster urged Labour to resist gut reaction, believing that many reforms moved in the right direction. FHSAs had, at long last, ceased to be provider-dominated. Fundholders were able to get a better deal for their patients, and making hospitals compete kept them on their toes. It made sense to build on what had been achieved.242 A division between purchasers and providers was sensible, and the substantial autonomy of trusts made it easier for them to adjust to what was wanted. Trusts could easily have a change of membership to incorporate democratically elected representatives. Ecclesiastics, however, attacked the moral values of the new NHS; it needed a culture of generous service and unstinting care, in contrast to the business model which had replaced co-operation with competition. The BMA had not been won over. In 1995 its Chairman attacked the reforms as an “infernal bazaar” rather than an internal market.243 While the medical profession believed that they were detrimental to the principle of equity, private practice was expanding and doctors were clearly willing to treat an increasing number of patients who could pay, more rapidly and in hospitable surroundings.244 Virginia Bottomley, Secretary of State, redefined the NHS in terms of the provision of care on the basis of clinical need, regardless of the ability to pay, not in terms of who provided the service. She said that central strategic command could, with benefit, be replaced by a local dynamic. Clinically effective intervention, local innovation, use of new technology to reduce or eliminate the need for hospital admission and the move to community-based care would take root fastest if those taking the decisions were, like fundholders, close to the public and the patients concerned.245 Nobody was willing, however, to define what ‘clinical need’ really meant. Did it encompass renal dialysis for the 90-year-old person, in-vitro fertilisation for the single would-be mother, breast enlargement and sex-change operations?
Whereas many policies were determined centrally, others were driven locally.246 Entrepreneurial trusts might take decisions that alarmed the centre. In contrast to population-based health authority purchasing was the growing number of fundholders, purchasing a limited range of services for individual patients. How would these systems relate to each other? What balance would be struck between market competition and the management of a coherent service? What would be the number and configuration of NHS trusts? At first the talk was of contracting; everything would be resolved through provider competition. However, it was rapidly realised that a market ‘red in tooth and claw’ would be unacceptable. The term ‘contracting for health gain’ was coined, easier to talk about than to define. Planning reasserted itself with alliances, partnerships and the importance of multi-agency relationships.247 The strategic plans of yesteryear were replaced by ‘purchasing guidelines’, often with a clinical flavour. Operational plans reappeared under the guise of purchasing (commissioning) intentions. Evidence-based contracts became the new nostrum, to provide care known to be effective. ‘Accountability’ became the watchword, but accountability to whom – should purchasing be population focused or responsive to the demand of consumers, or both? Less priority was placed on competition and more on improving health; greater priority was given to primary health care, and improved standards were sought through the Patient’s Charter and the adoption of evidence-based medicine. Planning and competition began to exist side by side; groups of fundholders began to align their purchasing intentions with those of health authorities. The way money was being spent was clearer as a result of the separation of purchasing and provision. Common ground between the Conservatives and Labour was being established, and the outcome of the reforms was neither as appalling as its opponents had predicted nor the major leap forward for which its protagonists had hoped.248 Labour’s commitment to financial probity would not allow a promise of more generous funding; money would have to come from a reduction in management costs that had risen substantially as a result of the accounting procedures generated by the internal market.249 Labour came to accept that many reforms of the Thatcher era had changed Britain’s institutional landscape, and the pattern of the NHS. It dropped root-and-branch opposition to the entire package, and focused its attack on the internal market, trusts run as independent competing units and fundholding. It mellowed even on these; a quiet agreement on some issues was beginning to emerge. Trusts, in some form, were likely to persist. The distinction between fundholding and approaches that had been developed in reaction to it (such as locality commissioning, in which practices covering a geographical population collaborated) was crumbling. GP participation in commissioning seemed here to stay, although it was impossible to predict what form it would take. The return of Labour, however, foreshadowed further organisational change and an end to the internal market.
During the implementation of the NHS reforms, the medical profession had been concerned that they would have an adverse effect on clinical standards. After discussion, Ministers established a Clinical Standards Advisory Group (CSAG) in 1991.250 Its function was defined as monitoring standards within the NHS. Its first remits were neonatal intensive care, childhood leukaemia, cystic fibrosis, coronary artery bypass grafts, emergency and urgent admissions to hospital, the management of normal labour and services for people with diabetes. Each study was undertaken by an ad hoc committee with co-opted experts and research support. The initial reports, for example on coronary artery disease, demonstrated marked variations in patterns of care and a clear gap between demand and the resources provided. What the reports were unable to do was to demonstrate whether the reforms were ‘good’ or ‘bad’. There were no measurements made before the reforms with which the new results could be compared.
Mergers, amalgamations and structural changes
As trusts were established, districts ceased to have responsibility for hospital management. Combining a residual management function with purchasing was, in any case, difficult. It was argued that districts should amalgamate with matching FHSAs. At first ministers were reluctant to authorise this lest it prove the wrong path. However, the pressure for amalgamation was unstoppable. By 1995 the number of health authorities had halved. Vertical integration of primary and secondary care was prevented by the separation of purchasers and providers. In the USA patients had a choice of competing systems, each complete in itself, as hospital services extended into the community. Control of the market became an issue. To take away regional command merely to substitute regional market management was not attractive to the trusts, with their new-found freedoms. Regions might reduce the impact of the internal market. Devolution had often been preached, but few believed it would happen. Industrial concerns had been removing middle management, ‘downsizing’ and producing ‘flatter’ organisations. Only a few foresaw that regions might be treated in this way.251 Their last major task was to oversee the implementation of the NHS reforms, managing the fundholding scheme, and supporting districts in their purchasing functions. Many politicians and managers, and some consultants, wanted to abolish regions because they were controlling. Others saw professional advantages in their co-ordination of services, and political ones as they acted as a buffer between local problems and the Secretary of State. Desperate attempts were made to retain their function. However, a review of the relationship of the 14 RHAs with the centre in 1993 recommended that regions should be slimmed, and then amalgamated into eight in April 1994. Finally they should be abolished in favour of eight regional offices of the Department of Health. To the government, this proposal was ‘simpler and sharper’; to Labour it was ‘highly centralist and undemocratic’. Labour subsequently came to relish the power it gave them in government.
NHS Structure from April 1996
- NHS Executive and eight regional offices
  - Co-ordinates local services within a single NHS
- 100 health authorities (integrated DHAs/FHSAs) covering both primary and secondary care
  - Primary expertise in planning, administration and contracting for services
  - Accountable for the NHS within their districts
  - Responsible for the efficient use of resources, examining the detail of care available
  - Challenge variations in treatment rates.
Adapted from S Dorrell’s Millennium lecture, 1996.
The Conservatives had done what Kenneth Robinson had proposed in the 1968 Green Paper. After 48 years, the regions that had been central to the development and evolution of the NHS were disbanded, the Health Authorities Act 1995 providing the statutory authority. Regional outposts had less power; they had few staff and money no longer flowed through them. The Act also enabled the formation of 100 single health authorities by merging the residual 105 DHAs and 90 FHSAs, saving £150 million a year. These, established five years after the start of the reforms, inherited the statutory functions of DHAs and FHSAs. They would commission a range of services within their allocated funding, provide and secure the provision of services, work with GP purchasers, and make arrangements with GPs and other contractors. They would agree strategies, monitor purchasing and support primary health care. Being under financial pressure, they looked for economies, pressed trusts to merge to reduce their overheads, and pressed for the concentration of care in fewer hospitals.252 The publication of a further White Paper in 1996, Choice and opportunity, created a further dynamic for change.253 Subsequent legislation, the NHS (Primary Care) Act 1997, had substantial bipartisan support and provided new opportunities for the transfer of resources between primary and secondary health care, and the development of comprehensive packages of primary care by new ‘provider’ organisations such as community trusts. The principles of piloting and ‘opting-in’, initially seen with fundholding, could now be applied more generally. With the approach of the election in May 1997, Labour was poised to win. It pledged itself to appoint a Minister for Public Health, to ban tobacco advertising, to replace fundholding with locality purchasing and to end the internal market. Change would, however, be evolutionary and more money was not to be expected.254
Hospital and specialist services
Never had the acute hospitals been so efficient, and never had they been under such strain.255 The NHS reforms generated competitive pressures, particularly in the conurbations. Clinicians knew that if their hospital’s service and waiting times fell below the standards available elsewhere, the budget of their hospital might suffer. Other hospitals would be keen to offer an alternative. The drive to cut costs and treat the maximum number of patients with limited resources could have a devastating effect on clinicians, and might turn a lean and efficient department with good morale into one that was hyperefficient but exhausted, demoralised and unsafe.256 To improve efficiency, patients were discharged more and more rapidly. Readmission rates rose. As discharge was often delayed because special investigations had not been carried out, it became important to streamline such procedures. The number attending A&E departments rose steadily, as did those requiring admission. Between 70 and 95 per cent of medical admissions were emergencies. This was the result of increased capacity to help people with life-threatening conditions, an ageing population, patients’ expectations, changes in primary health care, social factors and defensive medicine. As well as shortages of staff in key departments, there was a lack of beds and patients had to wait while beds were found. Lack of operating theatres for emergency cases by day meant that emergencies might be operated on at night by relatively junior staff. In the search for efficiency, performance indicators were developed further, league tables were introduced and the variations between hospitals were examined. The Audit Commission looked at the use of medical beds in acute hospitals, finding inappropriate admissions, poor admission procedures, patients on inappropriate wards, inexplicable variation in lengths of stay, lack of consensus on clinical practice, poor discharge procedures and outdated bed allocation systems.257
Reshaping the acute hospital system
The hospital system was being reshaped by the changes in clinical methods and management. Techniques were introduced that might increase quality and reduce cost. Competitive ambitions became apparent. To remain a ‘major player’, trusts, sometimes encouraged by purchasers, attempted to develop new services or to maintain existing ones, even if the workload did not justify it. Because of duplication, units might see fewer patients than was desirable to maintain a high level of skill (as in the case of cancer treatment), equipment might be underutilised, staff levels might be too meagre for safety, and costs might be higher than necessary. Synergies were postulated between the subspecialties. Too much was happening too fast; trusts did not want heavy-handed control, but rational co-ordination of services was at risk. Hospital practice had experienced a radical change with the progressive separation of sub-specialties from the mainstream of general medicine and surgery. Purchasers looked at the number of trusts and the extent of their duplication. Providers looked at costs, inefficiencies and the case for mergers. In the 1960s there was a clear concept of the nature of a DGH; nobody now had a master plan. The Royal College of Physicians (RCP), under the presidency of Sir Leslie Turnberg, became more deeply involved in the pattern of hospital services. There was debate about the distribution of hospitals providing secondary care for patients with the more common disorders. One model involved a marked reduction in the number of hospitals, leaving a smaller number that were strategically placed and offered a full range of secondary and tertiary services, coupled with more local supporting facilities. Developments in policy on specialty provision and medical staffing transformed the criteria for judging the size a hospital had to be to carry out clinical services safely.258 The RCP, echoing the thinking of the Bonham-Carter Report (1969), believed that a single general hospital should now serve populations of 200,000–300,000. Such hospitals should have access to a tertiary service provided on a population base of around a million. Geographically isolated towns with populations of fewer than 150,000 might be unable to sustain a viable DGH.259 Already a new tier of large hospitals was emerging, alongside the university teaching hospitals, with advanced skills and substantially better equipment than smaller DGHs. Subregional centres, serving a population of a million or more, became apparent, as in Stoke and South Cleveland, often with a substantial academic base. Such hospitals could become cancer centres, dealing with the more complex tumours. They could accommodate the emerging specialties that were becoming distinct, such as diseases of the lower bowel and rectum (coloproctology) and vascular surgery, which dealt with the repair of aortic aneurysms, lower limb ischaemia and carotid artery stenosis.
The provision of emergency care was a key problem for smaller DGHs but attempts to close facilities on the grounds of safety or to enable money to be spent more sensibly could lead to public and political opposition.260 The growth of specialisation could be accommodated in a large hospital alongside a rota for emergency admissions, but not so easily in smaller DGHs. If patients were admitted to any of a number of wards, their care might be fragmented and poorer. Whereas many younger doctors continued to see the need for a generalist approach, in practice consultants might be so specialised that they lacked the broader skills necessary to provide emergency care and resuscitation. A surgeon spending the majority of the week on cancer of the breast was unlikely to be able to operate successfully on an emergency aortic aneurysm. Many specialist physicians and surgeons no longer participated in on-call rotas, which depended increasingly on those retaining a generalist approach, for example, the geriatricians. The RCP examined different models of care. Half of the hospitals surveyed had adopted an emergency admission ward, perhaps of 20 beds, with a system of assigning patients to specialist units. Alternatively, all consultants might combine interest in a particular field with more general clinical work, although this would dilute specialist skills. Or there might be a hybrid approach in which there was a combination of specialists and generalists.
Under pressure to improve the volume and quality of services without higher costs, some trusts, for example the Central Middlesex, introduced business process re-engineering. If the stages in the delivery of care were examined, was there a better way of designing the system? Given better drugs and anaesthetics allowing more speedy recovery, state-of-the-art diagnostics and imaging, minimum intervention techniques and better information systems, could any stages be omitted, or be arranged more economically to save the time and money of both patients and staff? Could protocols and clinical guidelines be written? Would the quality of care be at least as good? In A&E work, could minor injuries be separated from major trauma? Should elective care be separated from emergency work? If so, could nurse-practitioners undertake work traditionally regarded as medical, and did staff have the right mix of skills? The Central Middlesex began to develop a Mayo Clinic-style ambulatory care and diagnostic centre, designed to provide 80 per cent of basic elective treatment. Outpatients usually had to attend different departments for blood tests, electrocardiograms (ECGs) and X-rays. Leicester Royal Infirmary ‘re-engineered’ the process, and provided a suite in the middle of the clinic that could provide 80 per cent of the tests outpatients required. This meant retraining staff to carry out several tasks: taking blood samples, recording ECGs and preparing patients for X-rays.261 Increasingly, efficiency was sought, not through further organisational changes, but in the improvement of clinical practice. Three things mattered: first, a knowledge of the pattern of disease, the alternative methods of treatment, their economic costs and benefits, and clinical guidelines founded on evidence-based medicine; second, a delivery system that co-ordinated all carers, primary, secondary and social; and, third, a quality improvement system to audit performance against evolving standards.262
Rationing
Far from being at odds with the ethos of the NHS, rationing had always been the essence of the system.263 In the 1960s, Enoch Powell drew attention to the limited resources, rising expectations and improving technology. The requirements of an ageing population added to the difficulty. Rationing took many forms, for example, the reluctance of a patient to visit a busy doctor, the family doctor’s hesitation to refer when there were lengthy waiting lists or services were not available in the locality, and the decision of consultants to select for treatment those expected to do best. Politicians might allocate priority; the Priorities document (1976) had aimed to move resources to the care of elderly people and those who were mentally ill at the cost of the acute services.264
The use of quality adjusted life years (QALYs) to grade alternative treatments was examined.265 Although the promotion of a comprehensive health service was the statutory function of the Secretary of State, that had never meant the provision of everything, or indeed everything that might possibly help a particular individual. This had been tested in the courts on several occasions, as it was in 1995 when, after providing several years’ care, the Cambridge and Huntingdon Health Commission refused to fund further treatment, with only a small chance of success, for ‘Child B’, an 11-year-old girl with leukaemia.266 ‘Panorama’ ran a programme on the issues. Child B, Jaymee Bowen, emerged as one of the brightest, shrewdest and feistiest kids on the block. She told viewers “you can’t really refuse a child who needs help … if you refuse one child then other parents are going to be worried you’d refuse their child.”267 The Times said that the most pressing point to emerge was the uncertain future of experimental medicine in a system that separated purchaser from provider. A health authority with finite resources would be reluctant to spend money on unproven procedures, yet the case illustrated how quickly an unproven procedure could become accepted medical practice. The danger of orderly rationing was that no gambles would be taken, no hunches pursued. Nobody expected health managers to waste money on moonshine; but there must be space for uncertainty and risk taking.268 Sadly, although treatment was continued, Jaymee Bowen died in 1996.
Doctors, with a strong belief in their duty to do everything possible for their patients or to allow them a dignified and peaceful end, chafed under restrictions. Fifty years previously they made life-or-death decisions quietly and privately. Now they were in the open, and it was often a young nurse who, believing herself to be the patient’s advocate, would demand the continuation of life-maintaining procedures in the face of the evidence. Adult intensive care units featured in tragic confrontations, for beds could be hard to find, and expensive treatment often deferred rather than prevented death.269 The measurement of physiological variables provided a sounder basis for prediction than a professional’s gut feeling, and computer programs could detect hopeless cases with 95 per cent accuracy. Most people had, however, an emotional revulsion from ‘death by computer’, yet there were advantages in the early detection of futile intensive care.270 There were discussions about surgeons refusing coronary artery bypass operations to smokers, people denied treatment on the grounds of age, older women or those who were HIV positive being treated for infertility, the use of interferon-beta in multiple sclerosis, and cosmetic procedures such as breast enlargement. There were no experts – everyone’s opinion was as good as everybody else’s.271 A substantial number of purchasers decided, individually, not to fund particular types of treatment. The government, however, did not accept that there should be firm rules governing eligibility.
There was little conceptual or philosophical basis for discussion on how priorities might be set, save an agreement that ineffective care had few claims on resources. In no country was there a publicly accepted set of principles that could determine who got what health care and when; drawing up the principles of selection proved extremely difficult. All countries had to grapple with the problem. The British tradition was to leave it to individual clinicians to take the decision as to who should be referred and who should be placed on a waiting list, guaranteeing that care was rationed by rules that were implicit, inconsistent and varied from place to place.272 In the USA, Oregon became well known for its explicit acceptance of the conditions for which the public health system would pay, and their priority order. The aim was to extend care to those below the poverty level by restricting what was covered. Treatments were ranked according to the ratio of cost to the improvement in quality of life produced. Categories of care were listed in rank order based on cost and health gain criteria.273 Initially hailed as a success, Oregon’s attempt to limit the services provided to those that were most effective, rather than rationing the people to whom services were provided, ultimately failed. The list did not control the costs of the services ‘above the line’ that were provided; and, when money was short, further conditions would have had to be excluded, which was medically and morally impossible – for example, the treatment of cancers. Sweden explicitly rejected the idea that choices must be made to confer the greatest benefit on the greatest number. In the Netherlands and New Zealand, an attempt was made to list core services that were effective, efficient and allowed individuals to function in society. Rabbi Julia Neuberger maintained that three things were necessary to assist in the debate about priorities: admission of the extent of the uncertainty that existed about what was known about effectiveness; language easily accessible to the public to describe outcomes and effectiveness; and the engagement of a better educated public in the complex issues – something easier said than done.274
Some argued, like Wennberg, that the elimination of ineffective procedures would go far to solve the problem; others, that the drive of technology and the introduction of costly new drugs meant that, sooner or later, the problem of rationing had to be squarely faced. In 1994, Duncan Nichol, erstwhile NHS Chief Executive, established a group to stimulate debate and clarify policy. It believed that a universal tax-funded NHS was becoming unsustainable. The gap in resources would have to be met by a mixture of rationing and private provision. Stephen Dorrell, the Secretary of State, quoting HL Mencken, said Nichol’s analysis was “simple, obvious and wrong”. He questioned the prophecies of doom and thought that the NHS could continue to sustain the twin cost pressures of ageing and medical advance. The number of elderly people, though rising, had been rising for a long time, and some technological advances made treatment quicker and simpler. The time had not come when the principles of a health service based on clinical need, and without payment in time of sickness, should be abandoned, and he opposed blanket bans on particular procedures.275 Leslie Turnberg, President of the RCP, chaired a group that proposed a national body to advise on the principles of priority setting, and the way those principles should be applied. It appealed for greater state funding and the elimination of treatment that was not cost-effective.276 Not everyone believed that evidence-based medicine would save substantial sums but, in the government view, no clinically effective treatment should be excluded from the NHS as a matter of principle.
Hospital development and design
The 1960s and 1970s were the heyday of innovation in hospital design; the 1980s and 1990s were a period of consolidation. ‘Best Buy’ hospitals were not replicated; although basically a good design, they were difficult to modify, though with ingenuity it was possible. For example, at Bury St Edmunds a day hospital, a day surgery centre and an assessment centre were added at the periphery of the hospital. Internal modifications were more difficult and space had to be found for intensive care by moving the kitchen and dining rooms. ‘Nucleus’ maintained its popularity and British architects tended to use the standards that had been set, rather than maintaining the impetus of innovation. Some trusts therefore turned to North American firms who seemed to have a greater understanding of building for modern patterns of health care. The first low-energy Nucleus, at St Mary’s, Isle of Wight, was designed to save half the energy of the conventional type. It was completed in 1990. Delays in construction increased costs, and changes in fuel costs and in environmental legislation added to the difficulties, but by its second year of operation St Mary’s was approaching its energy-saving target. A second low-energy Nucleus designed to save 60 per cent was opened in 1992 at Ashington on the Northumberland coast. More innovative technology was incorporated, including a wind turbine generator to provide 10 per cent of the electricity. It was uncertain whether the hospitals were a step too far into a new world of design or a valuable trail-blazing exercise.277 Hospital building remained prone to delays and escalating costs. Occasionally, projects became the subject of enquiries by the National Audit Office and the Public Accounts Committee. Guy’s new phase III, promised as the largest and most sophisticated ambulatory care centre, was approved in 1985 at a cost of £29 million. Delays and rising costs brought the sum to over £152 million. A superb piece of NHS real estate had turned into one of the most disastrous building projects the NHS had seen.278
Traditionally all new hospital building was financed from the Treasury, because it could raise money at the best rates, and control over public sector expenditure was thought necessary. However, following the NHS reforms, private finance became first acceptable and then important. The Capital investment manual, in June 1994, set out the terms of the Private Finance Initiative (PFI), making it clear that future capital schemes had to explore private finance and be tested to see whether this was preferable to the use of public money.279 Although many projects were small, by the end of 1995, plans existed nationwide for 60 schemes costing £2 billion to be funded by the private sector and leased to the NHS, although contracts took a long time to sign. Large construction companies, with experience of putting together large projects, took the lead.280 The new system froze major capital expenditure for two to three years while projects explored the option. PFI was a way of maintaining a capital programme at a time of restriction on government spending. The NHS capital budget was cut in the expectation that the private sector would fill most of the gap, yet by the election in May 1997, not one major privately financed hospital project had started.281 The PFI sometimes led to a radical reshaping of local proposals that had previously been agreed in consultation, and schemes generated by individual trusts did not necessarily take account of interaction between trusts and other parts of the health service. A major problem was how to transfer risks to the private sector at a cost that gave sufficient return but was affordable to the purchasers. There was also a risk that a policy of ‘buy now, pay later’ would store up problems for the future, and pose a threat to the integrity of the district hospital and the esprit of the staff.282 When it returned to power in 1997, the Labour Party also backed the PFI, needing to restrict public spending. Indeed, it accelerated the speed with which schemes could be agreed.283 It was not immediately apparent that PFI not only created major debts for the future but also undermined capital planning, which had previously been undertaken at a regional and strategic level.
Health service information and computing
After the RHAs took over the experimental computer programme in the 1970s, most pursued hospital computerisation in a less radical way. Nevertheless, there were steady advances in the computerisation of FPC services and the installation of hospital information systems. Industry developed and promoted a variety of systems. The key was to link systems across organisational boundaries, for example, between general practice and the hospitals. However, the NHS reforms completely overturned the assumptions on which development had been based, requiring a massive expansion of information flow to underpin the internal market.
From 1993 to 1996, much effort was required to provide information systems that would support contracting. The market could work only if people knew what they were paying for, which was difficult because of the lack of a national patient identifier. When the NHS began, it took over the wartime national identity numbers. It was apparent by the 1960s that these were inappropriate in an IT environment because of their complexity and varying formats. In 1992 a project to replace the NHS number was announced, to generate and issue more than 50 million new numbers. The NHS central register had recently been computerised, and in December 1995, new blocks of numbers were issued to registrars of births to allocate to new babies. In March 1996, GPs were given new numbers for their existing patients. Because of anxieties that the NHS number might be seen as an infringement of civil liberties, FHSAs were discouraged from telling patients their new number.284 A second requirement was an NHS Internet-type system, allowing the type of information exchange that had been routine in banks and airlines for 30 years. It would carry contract data between purchasers and providers, information about GP registration and payments, laboratory results, and letters between hospitals and GPs. After a number of setbacks, a national system was put in place in 1996, although alternative commercial systems were already in widespread use.285
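The text does not describe the format of the new identifier, but the 10-digit NHS number that emerged from this project ends in a check digit calculated by a Modulus 11 algorithm, which allows mistyped numbers to be rejected before records are linked. The following Python sketch is an illustration of that check under those assumptions, not code taken from any NHS system:

```python
def is_valid_nhs_number(candidate: str) -> bool:
    """Illustrative Modulus 11 check for a 10-digit NHS-style number.

    Assumes nine identifying digits followed by one check digit;
    spaces and other separators are ignored.
    """
    digits = [int(ch) for ch in candidate if ch.isdigit()]
    if len(digits) != 10:
        return False
    # Weight the first nine digits by 10, 9, ..., 2 and sum the products.
    total = sum(d * w for d, w in zip(digits[:9], range(10, 1, -1)))
    check = 11 - (total % 11)
    if check == 11:
        check = 0       # a remainder of zero maps to a check digit of 0
    if check == 10:
        return False    # no valid check digit exists for this prefix
    return check == digits[9]
```

As a worked example under this scheme, the digits 401 023 213 give a weighted sum of 92, so the check digit is 11 − (92 mod 11) = 7 and the candidate 401 023 2137 would pass the check.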
The problem of London’s health services
Piecemeal change in London’s hospital services continued. In 1988, Parkside district was created, uniting St Mary’s and the Central Middlesex, and progressively leaving St Charles’ as a non-acute community hospital. The plan involved the part-rebuilding of St Mary’s and rebuilding the Central Middlesex. The new Chelsea and Westminster Hospital opened in 1993, which enabled the closure of five separate hospitals. The MRC, under financial pressure, decided to pull out of its Northwick Park Clinical Research Centre and concentrate at the Hammersmith Hospital. This freed modern accommodation and research space. A small specialist hospital concerned with coloproctology, St Mark’s, needed to move from its poor accommodation in City Road, but the space offered by St Bartholomew’s was little better. St Mark’s had the foresight to realise that it had more to gain than lose from a merger, and grasped the alternative – Northwick Park – with enthusiasm. Relocation in 1995 provided the hospital with immediate access to intensive care, theatres and state-of-the-art imaging and service departments. St Mark’s had its own front door, clinical directorate and all the advantages of association with a busy district general hospital.286
The internal market had a major effect on central London and the existing pattern of services was not sustainable. The internal market could have been allowed to refashion London’s hospital service, but this would have been unpredictable in its effects. There were two major planning exercises – one by the King’s Fund, and one initiated by government.
The King’s Fund Commission
London’s health services were reviewed by a commission appointed by the King’s Fund to develop a broad vision of services that would make sense in the early years of the next century. The Fund spent £500,000 commissioning 12 research reports on which the conclusions, published in June 1992, were based.287 However, the Commission spent less time on data analysis and its examination of educational issues than had the London Health Planning Consortium (LHPC). Substantial attacks were mounted on its findings because of a belief that it was working towards a predetermined conclusion and that some of its members had little sympathy for London or for specialists. The report accepted the case for substantial change and reduction in acute services with a complementary build-up of primary health care. It did not consider the paucity of backup beds in nursing and residential homes, which barely existed in the metropolis. It reported that at least 5,000 beds must be closed if the capital were to be guaranteed a good standard of health into the next century. “Costs in London are not just expensive, they are extremely expensive … change is inevitable … Inner London hospitals are top-heavy with doctors, and the rate of patients going through is slower.”288
Tomlinson
The government, though committed to market solutions, embarked in London on strategic planning and consultation. William Waldegrave, the Secretary of State, announced his review at the 1991 Conservative Party conference. Sir Bernard Tomlinson, Chairman of the Northern RHA, would form a strategic view of the future needs for service, education and research in London. The Times said that Mr Waldegrave was ‘wringing his hands’ over what should be done in London. However, he needed to be convinced that major decisions were intellectually based. Already expansion had been approved at Guy’s, the Chelsea and Westminster was established and St Mary’s was being developed. UCH/Middlesex, strongly supported by the scientific community because of the quality of its work, also wanted a new building.
Bernard Tomlinson reported in October 1992, and was influenced by the King’s Fund Commission.289 He emphasised the need to improve primary and community care, bringing primary care up to national standards and providing services for people with special needs, such as the homeless. This prescription was widely accepted, for there was a general belief (without much supporting evidence) that improved primary health care was fundamental for the degree of rationalisation envisaged for London’s acute services. An idea that had emerged at a Nuffield-sponsored meeting of professional leaders and health service managers was a ‘free-fire’ zone where normal health service rules could be modified to facilitate the development of primary health care. Tomlinson adopted this and the government provided £170 million over six years in a ‘London Initiatives Zone’ covering about 4 million people, where health care needs were great and an innovative approach was required. Money would be concentrated on this territory and educational and management effort would be strengthened.290 Most people underestimated the complexities of building new and better facilities for GPs and primary health care teams. Neither was it easy to turn a theoretically attractive plan for the teaching hospitals and medical schools into schemes on the ground. The money helped new projects and encouraged the study of long-standing problems of inner London practice. The pace of change was, however, slow, and the effect on acute hospital services minimal. Neither the changes to the hospitals nor those to primary health care were universally popular and it was politically hard to fight on both fronts simultaneously.291 A primary care support force worked to improve matters, and was disbanded in 1997; it was hardly possible to maintain that the issues the group was set up to address had been resolved.
The Tomlinson Report foresaw a surplus of 2,000–7,000 beds because of the withdrawal of inpatient flows from outside central London and the increasing efficiency with which beds were used. Tomlinson revived earlier proposals for rationalisation. They involved change at UCH/Middlesex, which had become a single, powerful and scientifically important organisation. There would be a single management unit for St Bartholomew’s and The Royal London; the loss of one hospital from among the south London hospitals of Guy’s, King’s, St Thomas’ and Lewisham; rationalisation at Charing Cross/Chelsea and Westminster with relocation of specialist postgraduate hospitals to the Charing Cross site; and changes to specialist postgraduate teaching hospitals to bring them into closer relationship with general hospitals. Tomlinson supported the removal of St Mark’s to Northwick Park.
In February 1993, the Department of Health’s response accepted the general thrust of the recommendations, and the need to develop primary health care.292 A London Implementation Group was formed, chaired by Tim Chessells, Chairman of South East Thames RHA. Six specialty reviews were established to examine clinical requirements; the clinicians in the specialty under consideration came from outside London and could be brutal when faced with the pretensions they sometimes encountered. The reviews proposed that the best centres should be developed, the smaller ones should be closed or merged, and new ones established where they were needed as at St George’s where there was a long-standing requirement for renal replacement therapy.293 Several initiatives now came together, making change possible. There was a research review of the London postgraduate hospitals, which pointed to the need for a wide range of skills, including biophysics and molecular biology, and association with general hospitals and university facilities. The specialty reviews were published in the middle of 1993.294 Medical school deans had to play a difficult hand; most were privately supportive of the need for change and prepared to work for it, but in public they had to take their colleagues with them as far as possible. Trust chairmen had been appointed knowing there was a job to be done. They and their chief executives were heavyweights who did not fool around, although transitional funds were available to sugar the pills of change and mergers. Ministers were far more involved than they had been in the work of the LHPC; Virginia Bottomley, always in the public eye, was continuously involved in the decisions being taken.
There were four broad responses to Tomlinson: the optimistic, who believed that primary and community care could be brought up to the standards found elsewhere; the realistic, who accepted the recipe while remaining gloomy about the money and the difficulties; the despairing, who doubted whether anything would be accomplished; and the reaction at St Bartholomew’s, which was to indulge in old-style emotional campaigning against the proposals. St Bartholomew’s had come to believe its own rhetoric and dismissed any proposal not to its liking, however well founded. Its campaign was given a voice by the Evening Standard in probably the most ferocious media war ever waged against health service managers and NHS policy, unparalleled in its unstinting aggression and partiality.295 “During the past twenty years,” wrote Lord Flowers in The Times, “with a few honourable exceptions, every attempt to reform London medicine has been defeated by vigorous rearguard action on behalf of any hospital or medical school adversely affected. The result has been that the standing of teaching and research in London’s famed medical schools has been steadily slipping. The time has come for the government to stand firm.”296 Virginia Bottomley took decisions that her predecessors had been canny enough to defer, and for which her successors would be forever in her debt. She narrowly escaped defeat in Parliament and a rebellion of some senior London Tory MPs. Her reward was the Department of National Heritage. Robert Maxwell, Secretary of the King’s Fund, said that the creation of big medical centres across London, the main tertiary centres of service, research and education for the future, had been talked about for 50 years. Now it looked set to happen and would be Mrs Bottomley’s best legacy.297
Changes in London’s hospital service once more had ripple effects on medical education. The university re-introduced proposals for merger. There would be four centres, each related to a multi-faculty college, St George’s maintaining an independent position. Teaching hospitals continued to merge and, in April 1994, St Thomas’, Guy’s and Lewisham came together, and St Bartholomew’s, the Royal London, and the London Chest Hospital formed a new Trust.
Increasingly, a five-radial sector framework became the basis for discussion, although inevitably decisions in one sector affected others. In east London, the Royal Hospitals Trust provided services on three sites – St Bartholomew’s, the London Chest Hospital and, increasingly, The Royal London – where a major capital project was planned. In north central London, services were focused on the University College London Hospitals – UCLH, the Middlesex, the Hospital for Tropical Diseases, and the Elizabeth Garrett Anderson. In northwest London, there was less clarity, with the Hammersmith, Queen Charlotte’s/Chelsea Hospital for Women, Charing Cross and, in close proximity, St Mary’s, the Chelsea and Westminster and two specialist hospitals, the Royal Marsden and the Royal Brompton. In south London, the position of St George’s was secure. In south-east London, there was protracted discussion about the future of Guy’s and St Thomas’, the latter becoming the main site for acute inpatient and emergency care. With the demise of the London Implementation Group in April 1995, the two Thames RHAs north and south of the river became responsible for co-ordinating change, at a time when they were themselves facing demise.298 With the election of the Labour government in May 1997, yet a further review of the future of London’s hospitals was promised, further delaying rationalisation.
Probity in public life
The entrepreneurial ethos of the 1980s, and the move towards a market in which care was costed and contracts were placed, may have been associated with a decline in the old-time values of the public service. Performance-related pay encouraged subordinates to tell their seniors what they wished to hear, at every level from the hospital to the Department of Health. Performance targets on which bonuses depended might be set to fit with levels already achieved. There had been fraud in hospitals back into the nineteenth century. The Audit Commission found plenty of examples in the fifth decade of the NHS, and there was a crop of questionable activities, ranging from the unwise to the out-and-out fraudulent. Wessex Region planned a massive computer system that proved costly and difficult to commission; a series of regional decisions wasted millions of pounds, and a prosecution was attempted. West Midlands region was similarly criticised for its financial dealings during the privatisation of some support services.
In 1996, the National Audit Office reported a series of irregularities and breaches of rules in the former Yorkshire RHA, subsequently confirmed by the Public Accounts Committee, which bore comparison with earlier ones. The disposal of surplus land to a developer had resulted in a substantial loss to the NHS; large sums had been spent on lavish hospitality and senior staff had received irregular relocation payments. The most senior levels of management had been involved, and the NHS Executive had difficulty in disciplining managers for wrong-doing in one post once the manager had moved to another.299 Neither was professional life free of misdemeanour. The publication of accounts of treatment that had never taken place, and a false claim to success in the re-implantation of a tubal pregnancy, led the GMC to strike a consultant gynaecologist off the medical register. The editor, a senior obstetrician who was listed as a co-author, resigned.300
An Audit Commission report showed that some consultants worked fewer than their contracted hours, and probably did more private practice than they should – a finding supported by John Yates, who had lengthy experience in the field of health service efficiency. Others were prevented from working as hard as they wished by restrictions placed upon their outpatient clinics and theatre time.301
Medical education and staffing
In the mid-1980s, the BMA had been concerned that there would be too many doctors. Luckily, because demand rose substantially, government had not reduced the number of student places. Progress in medicine, rising expenditure on the NHS, continued demographic change, public expectations, the 1990 GP contract and the NHS reforms all increased the requirements for doctors. However, low morale, more doctors retiring early and concerns about doctors’ health boded ill for the NHS. Shortages arose, both in hospital and in general practice, and Britain was no longer self-sufficient. Trusts had to have the staff to provide the services that purchasers required. It became increasingly difficult to fill some consultant vacancies, particularly in anaesthesia, ophthalmology and psychiatry, in unattractive areas, places where housing was expensive or where there was little prospect of private practice. Young psychiatrists were recruited by the private sector, which was rapidly developing profitable services providing care to NHS patients. Recruitment agencies began to be employed, and trusts, using the greater flexibility they had been given, might pay over the odds to get people to move.302 Junior doctors were in short supply and an increasing number were recruited from the EC, especially the Netherlands and Germany. In 1992 a new Medical Manpower Standing Advisory Committee predicted a continuing shortfall and recommended an increase of 240 in the annual medical school intake, to 4,470 UK places, a target met almost immediately. The target for the year 2000 was 4,970.
Medical staffing
The profession had never come to terms with the inherent conflicts between academic medicine, private practice and the requirements of the NHS. Rigid central control of staffing had not solved the problem of a career structure in the hospital service. The simpler system for general practice, where there were fewer factors involved, had been more successful in balancing supply and demand. Attempts were made to adjust the number of career registrars to the expected number of consultant opportunities. However, the rate of growth of junior staff was often larger than that of consultants; it varied between the specialties, being lowest in anaesthesia (1:1) and highest in obstetrics and gynaecology (2.5:1).303 In 1990/91 efforts were made to reduce the strain and the working hours of hard-pressed junior doctors and offer them a ‘new deal’; substantial progress was made in reducing the number on call for over 72 hours per week.304 Any changes to the patterns of rotas had, however, substantial effects on consultants and some saw the shorter hours as a threat to the adequacy of training.
The Calman Report
Under European legislation, specialists registered in one country had a right to practise elsewhere in the Community. British training was longer than in the other countries and, following a court case, had to be brought into line. It was also necessary to have a defined endpoint, marked by the award of a certificate by a body responsible for regulating training. To deal with the European dimension, the CMO, Kenneth Calman, and the profession had to look at training and its timescale and modify the traditional system; balancing trainee numbers against career opportunities was put on ice. There was an opportunity to look at other workforce issues, for example, improvements in the quality of training, the reduction of juniors’ hours (often more than 77 per week), protected time for study and continuing medical education. A working group produced recommendations more rapidly than had previous exercises, and agreement was achieved on some matters to which there had previously been opposition.305 The proposals went some way to completing the work that the Royal Commission on Medical Education (1968) had begun. There were, however, widespread anxieties, for the proposals would have an impact on the organisation and provision of hospital services, the structure and management of specialist training, the balance between career grade doctors and trainees in hospitals, and the career pattern of hospital doctors.
The Calman Report: main recommendations
- Reduce the minimum length of specialist training to seven years
- Introduce more explicit training curricula and a certificate of completion of specialist training
- Merge registrar and senior registrar grades into a single specialist registrar training grade.
Hospital medical staff
England and Wales (whole-time equivalents)
Year | Total | Consultants
1949 | 11,735 | 3,488
1959 | 16,033 | 5,322
1963 | 17,971 | 6,049
1968 | 21,232 | 7,544

England (whole-time equivalents)*
Year | Total | Consultants
1973 | 24,829 | 8,988
1978 | 31,013 | 10,382
1983 | 33,155 | 11,849
1988 | 37,600 | 13,204
1993 | 43,801 | 15,210

*The removal of Wales reduced total numbers by about 1,250.
Source: NHS Executive data 1996.
A single specialist registrar training grade was created, into which existing registrars and senior registrars were slotted. In 1996, a guide to training set out the experience required, and the number of slots providing structured training was determined for each region and specialty.306 Junior doctors were appointed to fill these slots, created to provide educational opportunities, and not for service reasons. After five or six years, the doctor might be awarded a certificate of completion of specialist training, without which appointment as a consultant would be impossible. Juniors were concerned about the difficulty of changing their chosen specialty and finding a permanent post, particularly if career posts were in short supply. Because the trainees were supernumerary, departments should in theory have been able to run in their absence. Trusts were asked to make forward estimates of the senior staff likely to be needed over the next three to five years, and agree these with purchasers. The cost of consultant expansion remained a problem, but the need for more consultants to provide the services purchasers were demanding fuelled it, and expansion rose to 4–6 per cent per year.
The changing requirements of the health service in the fields of audit, contracting and management, and the clinical demands of scientific advance and the pace of care, exacerbated the problems of maintaining cover and staffing hospitals around the clock, altering the traditional pattern of specialist work. Calman aimed to alter the delivery of acute hospital services from being consultant-led to consultant-based. In the minds of the seniors, this seemed to mean consultant-provided, not what the leaders of the hospital doctors had intended. The demands, on the one hand to change the pattern of staffing, and on the other to work more cheaply and effectively, faced trusts with a dilemma. Increasing numbers of sub-specialties, stiffer College criteria for the approval of training posts, and junior doctor cover with a sensible rota system placed smaller DGHs at a disadvantage. It was even more important to bring hospitals together onto a single site where this had not as yet been done. Hospitals unable to create attractive training opportunities faced problems. For example, the West Suffolk Hospital, with 70 consultants, calculated that to provide 24-hour on-site consultant cover would require an additional 21 consultants, and if the service were to be consultant-provided, the need would rise to a further 56. Nor was it clear how some departments would continue to run if there were local reductions in junior staff; or whether consultants would welcome direct participation in emergency care, day and night, any more than they had in the 1960s.307
The traditional concept of ‘firm’-based care had been eroded. Consultants found themselves responsible for ‘outlier patients’ who, because of bed allocation problems, were housed in wards not designed for their specialty. Time and energy were wasted as doctors moved from ward to ward. Laboratory reports and important papers might be in the wrong place. One doctor had to work with many different nurses, and vice versa – a problem aggravated by the introduction of team nursing, for no longer could calls be channelled through a single ward sister.308 Juniors worked shifts, looking after upwards of 200 patients. Professional morale, disillusion and disenchantment with medicine grew. The RCP produced proposals to try to mitigate the problems resulting from alterations in the organisation of the NHS, in training and in junior doctors’ hours, from increased emergency work, and from the loss of a geographic focus to clinical work. It saw the need for teams operating within defined conditions of time and space. Some young doctors said that they regretted their career choice. Women were especially affected by a pervasive competitive atmosphere, a process of “teaching by humiliation” and the pressure to get good jobs. There was resentment at inadequate career counselling, on-call responsibilities, and being used as workhorses. Isobel Allen, who had studied the careers of doctors, and especially women doctors, pointed to the problems facing women, who entered medical school in the same numbers as men but needed flexible training and work, with part-time possibilities. There was no evidence that medical schools were recruiting students unable to stand the pace of medicine, but the present generation of doctors was less inclined to accept things than their predecessors. They wanted to be treated as professionals but felt like ants at the bottom of the heap.309
Nursing
Nursing education and staffing
In 1948, nurses were badly paid and worked long hours, but they knew they were respected and valued, they were well looked after in the nursing homes, and they wore smart uniforms. Now they were better paid but had less job security, they were worried about litigation and they had little reason to believe that anyone cared about them. They looked to their professional organisations – the RCN, the Royal College of Midwives (RCM) and the Health Visitors’ Association (HVA) – for union support. Nursing was in crisis; there was industrial strife, supported by the National Union of Public Employees (NUPE) and the Confederation of Health Service Employees (COHSE) but not by the RCN. Government took action, not just because of the strikes, but because recruitment was poor, wastage was high, nurses were voting with their feet, and demographically the number of school-leavers was falling fast.
Three factors turned recruitment around. First, the government had been discussing a new grading structure, and this was agreed in May 1988. As part of the 1988 pay review, a nationwide regrading of posts took place, coupled with a rise of 4–30 per cent. The government hoped that this would lead to greater flexibility of pay and, in time, to geographical variation. The new scales were meant to keep nurses at the bedside by providing a sensible career progression for clinical staff, and recognition of qualifications, skills and wider responsibilities.310 The grading system stressed the importance of a single ward sister (grade G), with total responsibility for the ward at all times. The Review Body wanted the grading of all staff to be complete within six months. Sometimes flexibility was used by management to mitigate recruitment problems, generating ill-will and many appeals. The second factor was that the country moved into recession and people neither wanted nor were able to change jobs so easily. Third, in May 1988 the Secretary of State accepted Project 2000, which nurses had pinned their hopes on, with the proviso that its timing and detail did not jeopardise a proper nursing service.
Nursing leaders hailed the announcement as a significant milestone and a historic victory.311 John Moore recorded a joint understanding with the nursing professional bodies that education would retain a clinical focus and the time students spent in clinical areas would not be reduced. On that basis, government accepted that nurses in training should have student status, non-means-tested bursaries, and move towards a supernumerary position. However, demographic factors made it unlikely that a professionally qualified workforce of the size needed could be attained, so the gate of entry should be widened, and more work was needed on developing vocational training for support workers. Forty years earlier, the matrons and hospital administrators had refused to allow education to be uncoupled from service, which would have removed students from the labour force. After Griffiths, chief nursing officers and matrons were no longer there. To replace the service contribution made by the students, extra money was allocated to recruit additional trained and untrained staff. When this was clear, hospital managers had little to gain from managing a nurse training school. Project 2000 had an easy ride.
Project 2000
Project 2000, accepted and published by the UKCC in 1986, brought to an end the system of nurse education developed by Florence Nightingale and by Mrs Bedford Fenwick of St Bartholomew’s. In moving to an American academic and theory-based model, nurses distanced themselves from domestic or medical roots. University affiliation and academic status were seen as strengthening the claim to be ‘a profession’. Academic nurses disparaged the earlier pattern of apprenticeship in hospital-based nursing schools, where the accent had been on clinical knowledge of disease and basic skills, hygiene and sterile technique, and in which the safety and comfort of the patient were paramount.312 Students had once been selected by a matron who would try to assess whether the candidate really wanted to nurse; now this was an academic responsibility. People who might be natural nurses could be put off or considered ineligible, for example, the caring and capable young person who wanted to do something active but did not wish to be judged by exam results, to go back to school or to become a health care assistant.313 The government wanted to increase the number of people in higher education without too large an increase in costs. The new universities, polytechnics that had obtained university status around 1992, were geared to student numbers in tens of thousands, high-volume teaching and slim academic staffs. They had the capacity and the flexibility, and lower overheads than the older universities. They had teachers in psychology, biochemistry and social sciences, and the large budgets of the new nursing colleges were attractive. In their financial bids they were often successful in obtaining a linked nursing college. However, nursing academics were seldom considered part of the university mainstream. Existing nurse tutors might be poorly prepared in academic terms and were often replaced. Course objectives might be expressed in psycho-social jargon. One handbook said: “During the course students will undertake modules that are both theme based and praxis based. Don’t worry if it seems confusing at first, that’s all part of the learning process!”314
There was now a single point of entry for students with a wide range of abilities. Of the 15,000 entering registered nurse training in 1987/88, 30 per cent had fewer than five ‘O’ levels. The new three-year course had a common 18-month foundation programme, after which students could choose general nursing, children’s nursing, nursing of the mentally ill or of people with learning difficulties. Theoretical work was supplemented by clinical experience under supervision one to three days a week at district hospitals, and the loyalty of the student was now to a college, not a hospital. Those successfully completing their course were awarded a diploma in higher education and a professional nursing qualification. Midwifery courses also became university-based. Health visiting remained a post-diploma qualification and, because basic nurse education was the main priority of the UKCC (UK Central Council for Nursing, Midwifery and Health Visiting), midwives and health visitors were worried that the loss of their training bodies would affect clinical practice in their spheres.
Demonstration schemes were established in 1989, and Project 2000 was rapidly implemented. The gap between nursing theory and practice was not overcome. As students were seldom part of a ward team, they felt unskilled in comparison with their predecessors. Because students could not be relied on to provide a predictable contribution to ward work, busy ward staff could not be counted on to provide educational support. Occasionally there were placements in private hospitals, where the students were well supervised by experienced nurses. Many, entering nursing to nurse, found courses concentrated so much on health, psychology and research studies that they barely admitted the existence of disease. They compensated for the lack of a practical focus in their courses by taking agency work as care assistants.315 There was much still to learn on qualification. The course had swung too far towards theory at the expense of practice and did not prepare students for their future responsibilities.316 Private hospitals preferred to recruit those with a year’s clinical knowledge and experience. It had been argued that Project 2000 would reduce levels of wastage because of its association with higher education. However, because of economic recession, wastage fell to around 5 per cent well before Project 2000 was implemented.
Nurse education placed an accent on the processes of daily living, the role of the nurse, the psychology of the patient and the interrelationships. Firmly based in the social sciences and educational theory, nurse education was distancing itself from experienced clinical nurses as well as medicine. In the 1950s, the Nuffield work study had identified clearly who did what; such studies were no longer carried out. Nursing research was less interested in what nurses did, more in philosophies of nursing, sometimes from a feminist perspective. Doctors were perpetrators of past nursing oppression, nurses were a victimised group, and nursing “a woman’s occupation in a man’s world, marginalised by medicine and government”.317 ‘Reflective nursing’ became the profession’s new enthusiasm.318 Reflection was derived from educational theory, not clinical care. It involved examining life experience to obtain new understandings and perspectives. Florence Nightingale had encouraged nurses to think about what they were doing. They were now asked to prepare psychological profiles of themselves, to examine uncomfortable feelings and thoughts, and to learn to know themselves.
In the past, British nurses, unlike their North American counterparts, had little opportunity to enter nursing through university education. The first degree course was established in Edinburgh in 1960. Thirty years later, 14 universities, including former polytechnics, were offering a degree course leading to a BSc in nursing studies, although the total number of places was only 400, one hundredth of the number following the diploma course. To maintain their registration, all had to conform with Post Registration Education and Practice (PREP), a new system of continuing education. PREP placed the accent on health promotion, counselling, educational development and research.319
Personnel planning
Statistics were difficult to come by and the information about motivation to enter or continue in nursing was often lacking. Because of the recession, people in the late 1980s did not change jobs, and vacancies were few. Nurses were appointed to particular wards and, as their grade and salary were determined by their experience within a particular specialty, they tended to remain in that field. After four decades of growth, the number of nurses reached a plateau, in spite of increasing NHS activity from shorter lengths of hospital stay, the intensity of patient care and the feelings of stress among nurses. In 1993, responsibility for funding nurse education was transferred from national professional boards to regional consortia of health authorities and trusts. Nursing lost its ability to protect its numbers. Each region had a target based on organisational requirements. This ‘employer-led system’ resulted in a reduction in the entry to Project 2000 courses. The English entry in 1994 and 1995 was roughly 11,000 per year, comparable to 1948.
As Britain came out of recession in the mid-1990s, wastage began to rise again to about 20 per cent.320 Nursing shortages reappeared in the headlines in 1996, and the Nursing Times carried many advertisements for vacancies. Nursing agencies flourished, especially in London, where many hospitals were heavily dependent on temporary staff. Some trusts began to examine how they could become more attractive employers, looking to Scandinavia and Australasia for staff, and offering recruitment incentives.321 Two in three nurses over 25 were married, and many others reported a partner. Almost half reported the existence of dependent children. Many had a second job – 30 per cent in the south-east.322 The 1991 census found that 140,000 nurses, midwives and health visitors in England, trained at NHS expense, had left the profession. Half were working in other jobs and 40 per cent of these did not intend to return to nursing. NHS training equipped nurses with good, transferable skills.323 A large and varied workforce, nurses could not be expected to have unswerving vocational commitment, and the disappearance of the hospital as the traditional centre of loyalty fuelled instability. They moved rapidly, selecting posts providing the experience they wanted, seldom staying for more than a year. Feminists blamed the NHS, rather than the inherent characteristics of people in modern society.
Nursing staff
England and Wales (numbers, including registered nurses, students, enrolled and pupil nurses)
1949 | 137,636
1959 | 190,946
1963 | 215,219
1968 | 255,641

England (whole-time equivalents)*
1973 | 251,778
1978 | 292,640
1983 | 329,965
1988 | 330,669

England (excluding 28,000 Project 2000 students)
1993 | 296,414
1994 | 286,093
1997 | 300,467

*The removal of Wales from the figures, and the use of whole-time equivalents, reduced the figures by about 26,000.
Nurses in senior positions, though wanting highly skilled nurses to provide the direct nursing care of patients, faced a situation in which managers needed to reduce costs.324 The large proportion of hospitals’ budgets spent on nursing, coupled with the disappearance of students as part of the labour force, led the NHS to attempt to control costs by substituting less-skilled staff for registered nurses where possible. Enrolled nurses were offered conversion courses so that they could be registered and, simultaneously, the door was opened to the replacement of nurses by health care assistants, now with National Vocational Qualifications. A second-level nurse had provided basic nursing care for many years, and it was suggested that generic carers with comparatively brief training could provide most care in the future. A new, broader-based role encompassing nursing, but not conforming to traditional job descriptions, was proposed.325 Such workers were introduced to acute wards to undertake a variety of tasks for which a skilled nurse, though desirable, might not be strictly necessary.
Since the end of the 1980s, government had favoured some local discretion in pay scales, to which the health service unions were opposed. In 1995, nurses were enraged by a Review Body offer of 1 per cent across the board plus an extra rise of between 0.5 and 2 per cent to be determined locally.326 Leaders of the RCN urged conference representatives to bring down the government, and nurses voted to end the no-strike policy. They had chosen an odd pretext, said The Times. The principle for which they were prepared to sacrifice their long-held rule was that a nurse in Aberdeen should be paid the same as one in Aberystwyth. Why should variations in cost not be reflected in different salaries across the country, as they were in the private sector?327
Nursing practice
In 1948, medicine had few answers to serious illness, many hospital patients were gravely ill, and good nursing was a major factor in recovery. Young doctors learned much from the ward sisters. They relied on the nurses’ careful observation of patients and the nurses would make the work of the junior doctor easier, particularly at night. Student nurses, generally responsible and intelligent people, provided much care that now falls to health care assistants.
Work at Brunel University had suggested that the only person in a hospital apparently responsible for individual patients was the consultant, whose name usually appeared above the bed. However, the delivery of personalised nursing care had long been one of nursing’s aims. Could nurses, who worked only 37 hours out of 168, have personal accountability comparable to that of the consultant? In 1991 the Patient’s Charter gave patients the right to a named, qualified nurse, midwife or health visitor responsible for their nursing or midwifery care. The widespread popularity of the new policy in the upper echelons of the profession owed much to an apparent conversion of government to primary nursing. It was received as an important symbol to be grasped with enthusiasm.
A senior nurse said that “Named nursing is part of the unfolding story of nursing coming into its own. It will not produce instant perfection, it is not a panacea, it will not solve nursing’s problems overnight. It is a tool which can be used to further the quality of patient care. Whether it succeeds or not will be largely in the hands of nurses themselves.”328 The problem was, however, that it was more expensive than other patterns of nursing, and that there was little clear evidence that it did, indeed, improve care.
Primary nursing was a complex system that worked best when patient turnover was low, the staff were stable and their numbers adequate. This seldom being the case, it was not generally applied in a pure form. Patients were divided among teams, often identified by a colour, each consisting of one or two staff nurses and several health care assistants. A two-team system was commonest; a greater number of teams led to problems with the organisation of shifts. A staff nurse, perhaps the one admitting the patient, would be identified as the ‘named nurse’ and that nurse’s name might now be over the bed rather than the consultant’s. When care was long term, the nurse and patient had time to get to know each other. However, when patients were acutely ill, and their condition was changing rapidly, they might be moved between hospital wards. High-technology medicine was businesslike, patients were very sick, and there was less to be gained by emphasising primary nursing.
There was more diversity of role among ward sisters, who might have to choose whether to devolve care and act as detached ward managers concerned with budgets and general issues, or to maintain a clinical leadership role. When there was an accent on primary nursing, the ward sister might have less clinical involvement and ability to maintain common standards and to teach a tradition. The Audit Commission thought that, with the increased speed and complexity of medicine, it was no longer possible for ward sisters to know everything that was happening in their ward; delegation of responsibility to staff nurses was inevitable.329
The Commission considered different patterns of ward nursing, but was handicapped by the lack of evidence as to which was best. Advised by people with progressive views, it came down firmly in favour of primary nursing. Primary nursing was less practised than was claimed, particularly at night when the small numbers on duty meant that nurses had to work as a single group. Although task allocation was anathema to nursing academics, the criticism was overdone. It worked, it did not mean the abandonment of a holistic approach, and it capitalised on the expertise of all staff – from the most junior to the most experienced. It ensured that the least-skilled carried out the simplest tasks. All patients received some attention from the most experienced, nurses were more accessible to patients and relatives, and juniors were not expected to assume responsibilities for which they were ill-prepared, having an opportunity to consolidate their skills.330
In most NHS hospitals, nurses no longer looked like those of old. American-style trouser suits were increasingly in evidence. Sometimes nurses’ casual approach to uniform verged on the scruffy; catering assistants might be better turned out. At times, patient handling was empathetic and young nurses might be deeply affected by the fate of the patients into whose world they had entered. Not always; one elderly woman who objected to a nurse using her Christian name was told she should be in a private hospital. Wards might be dirtier and noisier, with TV on late into the night. In some wards, bed sores reappeared. Feeding and washing patients was downgraded in importance.331 Nurses might dissociate themselves from apparently mundane parts of patient care – making people comfortable, keeping them clean and seeing that they ate their meals and took their tablets; if performed, these duties fell to health care assistants. As nurses became more psychology-minded, their writings conveyed the message that mere care was second best. They were at risk of selling their caring birthright for a mess of psychological pottage.332 Staff with special skills – for example, physiotherapists – might provide services that, in times past, had fallen to the nurses. Because discharge could be rapid, care in acute illness was often provided by relatives at home. The nurse – allegedly advocate, supporter and counsellor – was replacing the nurse who comforted and made comfortable. Time spent on the nursing process, nursing models and personal stress counselling reduced the time spent with patients, although the evidence that models improved clinical decision-making was largely anecdotal.
In 1992 the UKCC issued guidance on the scope of professional practice and the codes of professional conduct.333 It maintained that nurses were responsible for developing their own competence and practice, and were not dependent on doctors. The UKCC provided ethical statements of how nurses should safeguard the interests of patients, serve the interests of society, justify public trust and uphold the reputation of the nursing professions. With the exception of the prescription of drugs and the signing of death certificates, nurses were able, in law, to do almost anything, but should consider whether they had the authority to act and the necessary skills, ability and competence. Nurses should seek training when they thought they were inadequately prepared for their duties, rather than seeking many certificates. Studying allegations of misconduct, the UKCC thought they sometimes demonstrated poor management and lack of effective supervision for staff.334 In some hospitals there was little oversight of ward nursing standards. With a huge workforce of varying quality, reliance on professional autonomy in place of a nursing hierarchy was risky. It was often in the private sector that traditional approaches were maintained. A uniform code would be in force and the nurses knew that high standards were expected of them. Advertisements for private medicine frequently showed a nurse in cap and apron. Lacking junior medical staff, consultants and ward sisters were mutually dependent. The chief nursing officer was responsible for nursing standards, and these were subject to external inspection. Line management of nurses by nurses remained.
Matters were better in specialised units, where doctors and nurses worked together in a collegiate fashion, across the traditional boundary separating medicine and nursing, sharing the same objectives. There senior clinical nurses would undertake a variety of skilled tasks. They provided leadership and clinical supervision, and sometimes moved into a largely medical role. Medical budgets were tight, and when it was difficult to fill a junior post, doctors would examine the possibility of a nurse with additional training taking on the role. Nurses might take a lead in asthma, diabetes, cancer care, neonatal units and intensive care. In theatre, they might act as a surgeon’s assistant.335 The boundary between the work of doctors and nurses was changing; generally patients did not mind and there was no evidence that the quality of care was poorer.336 In South Cleveland, a nurse trained in protocols, working under supervision, replaced a house officer, admitting and diagnosing patients with medical emergencies such as stroke and intracranial haemorrhage. In Bristol, three nurse-practitioners replaced house officers in newly established units dealing with gastroenterology, urology and general surgery, and neonatal care.337 Within nursing there was tension between nurses, nurse specialists and nurse-practitioners about the theory and practice of advanced nursing. There might be confusion about training, status, working relationships, career structure and the pay of those undertaking responsibilities well beyond their traditional roles.338 Many courses in ‘advanced nursing’ appeared, to provide the background nurses needed for the new functions.
District nurses were increasingly incorporated into general practice, some feeling that there had been a loss of autonomy. Health visitors, lacking the ability to quantify what they did, were under pressure from management. Psychiatric nursing was quite altered. Many more patients went to day hospitals and returned home at night. The increasing number of community psychiatric nurses had to establish their role – the care outside hospital of major psychoses or the counselling and management of depression and minor problems. Sometimes they preferred the latter, and to divorce themselves from consultant community psychiatrists.
Nursing development units (NDUs)
Nursing development units (NDUs) had been established in the mid-1980s; in 1988, the King’s Fund financed additional ones and established a programme to evaluate their work. NDUs were found mainly in clinical areas where caring and rehabilitation dominated, rather than in acute care, a decreasing part of the NHS. The central philosophy was nursing autonomy; nurses determined how work should be carried out and the methods to be adopted. Units aimed to monitor and evaluate the care given and develop nurses personally and professionally, while offering the best standards of (nursing) care. NDU mission statements savoured of motherhood and apple pie; a statement that everyone is a worthwhile individual whose beliefs need to be respected, and who is naturally creative and effective, is laudable but is not a definition of specific aims. The literature contained much material on the interrelationship and personal development of nurses, less about the details of the nursing therapy patients received, and virtually nothing about the role of doctors.339 Some combined staff development with ‘reflective nursing’.340
Nursing administration
In 1948, sisters led the ward teams, closely aware of patients’ progress and the detailed care they received. Matrons, and superintendents in the community, considered the effects on nursing of changes in medicine and management. It was now difficult to see where professional leadership lay; the very concept was open to question. If it existed, it centred on philosophers and educationalists, not experienced and active clinical nurses. Organisational changes had eliminated many middle management posts in nursing, making it harder for nurses to move into management from clinical work. Nursing management was now often in the hands of managers who might come from any of a number of disciplines. Management wanted a flexible workforce, able to accept new roles, explore the interface between medicine and nursing, and monitor the health care assistants who often provided basic care. Many senior nurses went into posts concerned with quality; some obtained business qualifications and returned to the NHS as unit business managers. Changes in the composition of trust boards reduced the nursing voice at a senior level, although sometimes academic nurses might fill the gap. Nursing tried to adapt to the NHS reforms, Caring for people, The health of the nation, the Children Act and the Patient’s Charter. Application of the jargon of management – targets, visions, missions and performance indicators – did not work well. Rather than concentrating on the bedside, trust senior nurses tried to reflect government priorities. Some trust reports lacked data, and made up for in length what they lacked in meaning.341 Centrally, attempts were made to develop a strategy of nursing for nurses.342 The starting point was a rephrasing of the first section of Virginia Henderson’s definition of nursing.343
The philosophical basis of nursing, midwifery and health visiting is concerned with enabling the individual, whether adult or child, and his/her family to achieve and maintain their optimal physical, psychological and social wellbeing when that individual or his/her carers cannot achieve this unaided.
An attempt was made to identify the challenges facing nursing and midwifery. Nurses “must look for where they can properly take a lead, not settle into a secondary role … to advance confidently nurses need to consider who they are and what they want to be.” There was, however, no clear path ahead. Nursing was slanted towards health promotion and needs assessment rather than curative medicine.344 The nursing profession seemed to want to take the lead in the improvement of public health, rather than to work in partnership with doctors in medical care. The professions of medicine and nursing remained curiously apart; if nurses were becoming less enthusiastic about curative medicine, who would take their place? If the electronics technician and the engineer took over high-tech tasks, perhaps the nurse would once again be increasingly concerned with personal care, like her medieval predecessor.345 In the past, nursing’s acceptance of medical leadership in the NHS had the advantage that it came with medicine’s protection. Having opted for professional autonomy, nursing now had to fight its battles alone. The concentration on higher education and the distraction of high-status professionalism left nursing dangerously exposed.346
The condition of the NHS
In the fifth decade, the health service was redefined in terms of what would be provided and how the provision was to be organised. Primary care was stronger than it ever had been, ensuring ease of access. Technological progress and an international research base had transformed the hospitals. But basic problems of rationing care persisted. Cash limits on purchasing authorities and the continual pressure of efficiency savings limited the work that providers could do. By 1997, although special programmes had greatly reduced the number of patients who waited more than a year for admission, waiting lists in general were lengthening, services were being cut and operations were being postponed nationwide because purchasers had run out of money. Consultants warned that, during the winter months, the NHS might be reduced to treating emergencies only, if underfunding was not remedied. The NHS was facing its worst financial crisis for a decade and was heading for financial meltdown.347 The reforms of the 1990s had not solved the basic NHS dilemma outlined by Powell in the 1960s of increasing demand and constrained resources, to which neither political party, nor indeed any other country, had an answer.
As the decade drew to its end, and the majority of the Conservative government dwindled, the commitment to the NHS was reaffirmed in a White Paper, A service with ambitions.348 The NHS must continue to be there when we needed it. Doubts about the viability of the NHS were nothing new, but the government believed that, throughout its history, the NHS had found ways of accommodating to the pressures upon it. Others were less certain.
The new Labour government, elected with a landslide majority in 1997, came to power without a plan for the NHS ready. For the most part, Labour did not appreciate the potential advantages of the market reforms its predecessors had introduced, and the new Secretary of State, Frank Dobson, regarded most of them as anathema. Labour placed a greater emphasis on public health, appointing the first-ever Minister for Public Health (Tessa Jowell) and starting a review of inequalities in health with the assistance of the former CMO, Sir Donald Acheson. It also announced a commitment to end all forms of tobacco advertising. As regards NHS organisation, the new government moved to end the internal market that its predecessor had created. The division between purchasing or commissioning health care and providing it would be maintained, but GP fundholding would be phased out and simpler arrangements developed to replace the internal market’s contracting system. The emphasis would be on co-operation and public service rather than competition. The government also placed Labour’s traditional emphasis on improving the effectiveness of the relationship between the health service and local authority social services, perhaps by establishing a Royal Commission on Long-term Care to promote this. It embarked on a further programme of legislation and organisational change. Facing the financial problems of its predecessor, it began a further review of health service financing and undertook the policy analysis that had been missing at the election. The crisis facing the NHS was immense, and nothing substantive was done to confront it.