Chronology: the third decade

1968

Background

Czechoslovakia uprising

Moon orbited
 

NHS events

Ministry of Health and Ministry of Social Security form the Department of Health and Social Security (DHSS) – Richard Crossman, Secretary of State

Hong Kong influenza epidemic

Royal Commission on Medical Education (Todd)

Seebohm Report

First Green Paper on NHS Reorganisation

Designation of regional hospital board (RHB) hospitals for medical education

Prices and Incomes Board no. 60 on Nurses

Heart transplants

Royal College of Nursing (RCN) admits student nurses

1969

Background

Man on the moon

Age of Majority reduced from 21 to 18

British troops in Northern Ireland

Woodstock

Boeing 747 in service
 

NHS events

Bonham-Carter Report, The Functions of the District General Hospital (DGH)

Ely Hospital report published

Hospital Advisory Service

Royal Commission on Local Government
 

Twenty years on

The 1960s had started with optimism: austerity had ended, economic growth seemed assured, poverty was receding and life was improving. From 1964 the situation deteriorated as the international balance of trade swung against the UK and long-range economic forecasts proved wrong. There was severe deflation in 1966, devaluation of the pound in 1967, and the third decade of the NHS began in financial crisis. The time had come to rethink the pattern of the NHS. Alvin Toffler’s book Future Shock described the shattering stress and disorientation induced in people experiencing too much change in too short a time. Clinical development and activity were certainly increasing inexorably.1 There had been few organisational changes in the first 20 years of the NHS, which was ‘administered’ rather than ‘managed’. Resources had been tight and there were many service problems to be resolved. Central planning was in vogue and solutions were now sought in changing the management and structure of organisations. Small was not considered beautiful and there was a deep belief in the wisdom of management consultants. The arguments and assumptions about the NHS were changing. No longer was it merely a question of whether the nation could afford a health service, and what form of cost control was required. New questions required a more political solution. Were the resources of the NHS correctly deployed, north and south, and between acute and chronic care? What form of management was appropriate? What were the implications of community care? Should the education of doctors and nurses be matched less to acute disease and more to longer-term needs? Was there too great a disparity between the incomes of the high earners in the NHS and the ancillary staff? What was the place of private practice? How far should union power extend?

Royal Free Hospital

The future relationship of health and local authority services became a key issue. The closely related reports of the Royal Commission on Local Government and the Committee on the Provision of Personal Social Services (Seebohm) appeared. Seebohm recommended bringing together all personnel concerned with any aspect of personal social care into new social services departments within major local authorities. It argued for ‘generic’ social workers, which meant the transfer of highly skilled medical social workers from the hospitals to the local authorities and the destruction of bridges that were being developed between social workers and doctors.2 At the close of 1967, Kenneth Robinson, the Minister, had announced that the government’s views on the structure of the NHS would be set out in a Green Paper. After 20 years, said the British Medical Journal (BMJ), the structure was out of date, but at least modernisation and reform might be in sight. There had been widespread demand for a new service designed to meet the requirements of modern medicine. Delay in making the necessary changes had been a major cause of the emigration of medical people. The whole welfare state, and not just the NHS, needed to be recast for the changing needs of the community. The BMJ concluded that the integration of the management and financing of the NHS was necessary.3

Challenges for the future

Kenneth Robinson chaired a conference to celebrate the 20th anniversary of the inception of the service in 1948. Jennie Lee, Aneurin Bevan’s widow, opened it.4 Bevan, she said, believed that local government could not have coped with the establishment of a health service in 1948 along with all its other responsibilities. The acceptance by the state of total responsibility for a comprehensive health service for everyone was an essential first step. Nevertheless, Bevan had seen that a time might come when the relationship between health and local government would need re-examination, particularly if the units of local government were to become larger.

At the conference, Professor John Butterfield, a physician from Guy’s Hospital, traced the way patterns of disease and health care had changed. The population had increased by 10 per cent and was better housed. Technological advance had taken place, with developments in electronics and laboratory automation and an explosion in the number of drugs available. Acute and infectious diseases were on the wane, but disability caused by chronic and degenerative and debilitating diseases was increasing. John Reid, the Medical Officer of Health (MOH) for Buckinghamshire, talked of the growing interest in medical care in the community, and the development of group general practice with the attachment of local authority health visitors, social workers, nurses, midwives and auxiliaries. Artificial barriers between the three parts of the service were being removed. Henry Yellowlees, Deputy Chief Medical Officer at the Department of Health, spoke of planning and information, and the effects of different medical policies on the use of resources. JOF Davies, the Oxford Regional Hospital Board’s senior administrative medical officer (SAMO), said that reviewing clinical performance and taking advantage of operational research and statistics was important. Desmond Bonham-Carter, Chairman of the Board at University College Hospital, looked at personnel issues. Some people had suggested that the health service was on the verge of general management following an industrial model. General managers would then have authority over medical and nursing directors. His view was that, even if such a manager were paid more than leading consultants and top posts were open to all, a general manager would be unable to dictate to medical, nursing and other disciplines, because their decisions depended on clinical and professional judgements. 
Trades union representatives raised the need to value the many skills required in the NHS; the problem of recruiting, training and retaining staff; the problems of a tripartite service; and ensuring involvement of the staff side in proposals for change. The unions saw increasing government involvement in negotiations on pay and conditions of service, incomes policies and the pursuit of productivity. They did not believe that any change of government in the next 20 years would alter this fundamentally, because government was increasingly involved in the national economy.5

Much of the agenda for the next decade had been outlined. “Forward into the 1970s,” said Kenneth Robinson, closing the conference. An economic crisis in January 1968 again forced the government to cut public expenditure. Faced with the alternative of reducing hospital building or re-imposing prescription charges, Labour maintained the building programme and introduced a charge of 2s 6d (12.5p) per item, with certain categories of patient exempted.

Medical progress

Health promotion

People were living longer and the major causes of death were changing. Public health measures needed a new perspective. On 10 September 1973, Marc Lalonde, the Canadian Minister of National Health and Welfare, addressed the Pan American Health Organization conference in Ottawa. His talk, subsequent working papers and a report published the following year modestly claimed to unfold a new perspective on the health of Canadians and thereby to stimulate interest and discussion on future health programmes for Canada. The report focused attention on four main factors: human biology, environment, life style and the organisation of health care. Much premature death and disability was preventable.6 Some hypotheses might be sufficiently valid to warrant taking positive action. Being slim was better than being fat; it was better not to smoke cigarettes; alcohol was a danger to health, particularly when driving a car; and the less polluted the air and water were, the healthier we were. The report received international acclaim and led to publications in the USA, Australia and the UK. In November 1975, in response to concern about the disparity of spending on curative as opposed to preventive services, a parliamentary subcommittee began a special inquiry into preventive medicine. David Owen, then Minister of Health, hoped that a major effort to bring health promotion to the forefront of the NHS through the planning system would mark his time in the Department. In 1976, the government published Prevention and health: everybody’s business.7 The main killer diseases – coronary heart disease, lung cancer and bronchitis – were largely caused by people’s behaviour. Both individuals and government must accept responsibility for health. Cigarette smoking, lack of exercise, the fats in the diet, and obesity were part of the life style of advanced, urban, industrial high-consumption societies.
The various government publications sparked debate, for resources were limited and had to be focused on where they would do most good. Some saw the accent on life styles as only half the story, blaming the victim instead of dealing with underlying socio-economic factors affecting health. Government accepted that it had a role to play through fiscal policy, environmental controls, education and housing. But, as there was a class gradient in diet, exercise and smoking, had not the trades unions a role as well?8 Government could not move too far ahead of public opinion, but no single measure would have done more to prevent disease than making tobacco and alcohol progressively more expensive – instead of cheaper – in terms of the labour required to buy them, save perhaps the prohibition of advertising of tobacco. In April 1977, shortly before the 30th anniversary of the NHS, David Ennals, Secretary of State for Social Services in the Callaghan administration, commissioned a review of the available information about differences in health status between the social classes, the possible causes, the implications for policy, and the further research required. Sir Douglas Black, Professor of Medicine in Manchester, chaired the review and was helped by CS Smith, Secretary of the Social Science Research Council, Professor JN Morris, Director of the Medical Research Council (MRC) Social Medicine Unit, and Professor Peter Townsend, from Essex University.

Causes of increased mortality 1968–1978

Cause of death              % increase   Cause of increase
Cancer of the oesophagus    12           Alcohol
Cancer of the lung          48           Cigarettes
Cancer of the pleura        43           Asbestos
Cancer of the breast        11           ?
Cancer of the cervix        59           Multiplicity of sexual partners
Skin melanoma               37           Partly UV light
Alcoholism                  141          Alcohol
Drug dependence             263          Addictive drugs
Cirrhosis of the liver      54           Alcohol
Motorcycle accidents        58           Motorcycles

Source: R Doll (1983)9

Although everyone eventually succumbs to one condition or another, it was commonly argued that redistributing funds in favour of prevention could reduce the burden of disease and the cost of the NHS. A series of reports from the Royal College of Physicians (RCP) on smoking, atmospheric pollution, fluoridation, dietary fibre and coronary thrombosis had encouraged interest in prevention. Health promotion, Richard Doll argued, was primarily about the identification of measures proven to prevent the onset of disease, implementing them and measuring what was achieved. Many diseases that were increasingly taking their toll were amenable to prevention, in particular, cancer of the lung and heart disease. Prevention of trauma had been successful; in spite of the enormously increased number of vehicles and the rise in population, a series of regulations had held deaths on the road to around 6,000 per year for the past 50 years. Sometimes the public would resist a measure that would reduce the toll of disease greatly, while pressing for action that would not only be costly but also produce minimal benefits.9

The contribution of acute medicine to health

Thomas McKeown believed there had been undue concentration on acute medicine and the hospital services. He disagreed with the idea that improvement in health must be based on understanding the structure and function of the body and the processes of disease. At a symposium in 1970, and in his Rock Carling monograph of 1976, he examined the causes of death, the reasons for improvement in human health over the past two centuries, and the parallel development of medical science and the hospitals.11 He thought that past improvement had mainly been due to changes in behaviour and the environment, better food, cleaner water and an improved standard of living. One must look to these for further improvement. He thought that medical science and medical services were misdirected. Society’s investment in health was not well used because it rested on a false assumption about the basis of human health: that the body was a machine whose protection from disease and its effects depended primarily on intervention. The patient’s demand for acute care and the physician’s wish to provide it were the result. The requirements for health were simple: to be born healthy, to be adequately fed, to be protected from a wide range of hazards in the environment, and not to depart radically, by smoking, over-eating or leading a sedentary life, from the pattern of personal behaviour under which people had evolved. Environmental change, personal preventive measures and therapeutic intervention had to be brought together.12 It was a critique echoed by others. McKeown’s conclusion – that until the beginning of the twentieth century it was unlikely that immunisation or therapy had a significant effect on the mortality of the population – was a foundation of what became known as ‘the new public health’.

McKeown played down the contribution of clinical medicine. He hunted for evidence for his theory, ignoring the dubious nature of statistics a century old. Henry Miller, the neurologist, now Vice-Chancellor of Newcastle University, believed that public health doctors in general discounted the clinician’s contribution because much of medicine was aimed at the reduction of suffering and the improvement of function, and it was hard to identify a substantial impact on mortality rates.13 During the previous 30 years, clinical medicine had been transformed – the conquest of poliomyelitis and syphilis being examples. An effective accident service would also make a major contribution to public health. Miller had no doubt that the hospital system would survive as the functional and intellectual hub of the NHS. Julian Tudor Hart, a GP at Glyncorrwg in Glamorgan deeply committed to a socialist analysis of society, also thought clinical medicine helped people, but he thought that resources were distributed in an inefficient way. In his article, ‘The inverse care law’, published in The Lancet, he said that the availability of good medical care tended to vary inversely with the need of the population served. Those doctors most able to choose went to work in middle-class areas. Places with the highest mortality and morbidity got the rest – often doctors from abroad who had difficulty obtaining the most sought-after jobs. In the areas with the most sickness and death, GPs had more work, larger lists, less hospital support and ‘traditional’ but ineffective ways of working. The hospital doctors in these areas shouldered heavier caseloads with fewer staff and less equipment, more obsolete buildings and a shortage of beds. Tudor Hart was worried by calls on the right for a return to an insurance-based system and the marketplace. 
He believed that the NHS had brought a substantial improvement in access to health care for those previously deprived, chiefly as a result of the decision to remove the NHS from market forces.14

The quality and effectiveness of health care

Archibald Cochrane’s Effectiveness and efficiency stimulated much thought. Quality was a major theme underlying many aspects of health care rather than an isolated topic of its own, and was the subject of a Nuffield Provincial Hospitals Trust symposium.15 The direction of the quality movement in health care was affected by the country’s social and economic culture. In the UK, the NHS tended to insulate people from the need to consider costs and quality together.16 In the USA, rising costs led to the introduction of ‘utilisation review’ and a demand for more information about costs and quality. Professional Standards Review Organisations (PSROs) were introduced in 1972. Money went into research projects. Dr RH Brook and his colleagues at Baltimore City Hospital evaluated the quality of care of patients treated for urinary tract infection, uncontrolled hypertension and ulcers, all conditions in which it was relatively easy to define adequate investigation, treatment and follow-up.17 They found a wide variation in quality. David Rutstein, at Harvard, suggested that the occurrence of an unnecessary disease, disability or untimely death was a sentinel event requiring a search for remediable underlying causes. He believed that such an approach could be used more widely than deaths in childbirth or during surgery.18 Avedis Donabedian, an American academic and perhaps the most influential theorist in the field of quality assurance in health care, provided a framework that clarified thinking.19 Quality could be looked at from three standpoints:

Structure – the adequacy of facilities and equipment, the qualifications of the medical staff and their organisation, ‘proper’ settings for health care.

Process – whether what was thought to be ‘good’ care was given: the appropriateness and completeness of the history and examination, justifiable diagnosis and therapy, co-ordination and continuity of care.

Outcome – what the result was in terms of patient satisfaction, quality of life, illness and death (morbidity and mortality).

Two features, said the BMJ, must underlie any worthwhile system of medical audit: first it must be effective, and second, it must be totally independent of the state. That said, the profession should not only be concerned about its standards but also be seen to be.20 The Maternal Mortality inquiry was an example of an outcome study that revealed problems with the structure and processes of care. Disquiet at the conditions in long-stay hospitals had led to reviews. Surgeons took an increasing interest in their results.21 A call for the widening of audit came from doctors and public alike, and increasingly it was felt that people had the right to reassurance that there was monitoring of clinical standards, as there was in education and child care. Audit systems were developed. A standard of care, perhaps based on published research work, would be established. Professional activity would be compared with the standard, and the results of the evaluation used to modify clinical conduct. The audit might be conducted by the professionals themselves or by an external agency.

Enquiries into the structure, the processes and the outcome of care revealed that resources were not always used to the best purpose. In 1969, the British Medical Association (BMA) Planning Unit drew attention to the unaccountable variations in the frequency of some routine operations from city to city and hospital to hospital, in the mortality from standard surgical procedures, and the duration of stay in hospital of patients with similar diseases. Did patients with hernias or varicose veins who stayed in hospital a couple of weeks really do better than short-stay or outpatient cases? Did patients with coronary heart disease do better under continuous monitoring in hospital or at home with simple nursing care? At what point did population screening cease to pay dividends and become counterproductive? Such questions required answers that could only come from the professions; a government department could not furnish them.22 The Planning Unit also considered the complex and expensive forms of treatment being introduced, in particular organ transplantation. Some argued that these should be discouraged, in the dubious expectation that this would in some way lead to improvement in the quantity or quality of existing services. The Planning Unit thought that to slow research activity would be incompatible with professional freedom and with the enterprise expected of NHS staff.

The interests of staff and public might conflict; the medical profession was prepared to consider quality within an educational framework. People who had reason to question the quality of the care they had received looked for something with more bite. In 1976 there were 17,000 complaints about hospital treatment, one for every 300 patients admitted. Hospitals, said the BMJ, should have a simple system for handling complaints. But care was needed – the medical profession was beginning to look at ways of improving standards through voluntary medical audit, and an open-ended complaints procedure including matters of clinical judgement might postpone audit for another generation.23 In any case, financial problems were also a threat to quality. At a BMA Council meeting, Dr Appleyard, a paediatrician, said that the profession should tell Mr Ennals (then the Secretary of State) that it was no longer prepared to cover up the inadequacies of the health service. What was the position of a consultant who decided that staffing levels were insufficient and patients were at risk? A small group of the Joint Consultants Committee was appointed to consider a way to identify hospitals that were becoming dangerous to patients.24

A Parliamentary Commissioner for Administration (ombudsman) had been appointed in the late 1960s, and the parliamentary select committee reporting in 1971 recommended the inclusion within his remit of complaints about hospitals.25 The medical profession had no objection to complaints about the failure of an ambulance to arrive or the squalid condition of a casualty department. The difficulty arose, however, if a patient died and relatives thought he might have survived had the treatment been different – a matter that could already be pursued through the courts. The Health Service Commissioner, Sir Alan Marre, opened his doors in October 1973, and there were several no-go areas. Patients who had appealed to a tribunal or had gone to court, or had the right to do so, normally could not take their grievance to the Commissioner. Neither were actions that were the result of clinical judgement included.26

Changes in society

Changes in life style were affecting health and the health service. Overseas package holidays became widely available, skiing became more popular, and eating patterns altered. Increased car ownership altered leisure activities; exercise became fashionable and aerobics was introduced. The Woodstock festival was held in 1969; the young were urged to “turn on, tune in and drop out”. The following year, a festival on the Isle of Wight attracted 200,000 people over a period of a few days. As nobody knew how many would come, the catering and sanitary facilities were strained. Festivals became a regular occurrence and initially there was much goodwill, although the personal conduct of those attending attracted media interest. Local doctors and voluntary organisations gave their time generously and little drug misuse was apparent. Those attending were said to have a natural dignity, grace and happiness that were difficult to credit unless seen.27 The BMJ was less enthusiastic.28

If the festivals are not the degraded orgies that they are sometimes made out to be, nor are they quite the care-free wandervögel, healthful communing with nature that their admirers have occasionally supposed. They are commercial ventures which can succeed financially or fail, and the people who attend them pay for their entertainment – apart from the considerable number of gate crashers. A fully satisfactory public health service for the occasion should therefore be included in the cost of it.

Advances in technology

Ever more complex diagnostic techniques, multitudes of drugs, and highly complex surgery were changing the face of medical practice. Sub-specialisation increased. Some orthopaedic surgeons tended to deal with fractures, others with joint replacement. Increasingly, the treatment of a single patient required the co-operation of different specialties, as in the case of cardiac and pulmonary resuscitation, renal dialysis and transplantation. Medical laboratory work was expanding. Computers, initially linked to analytical equipment, were increasingly built into laboratory systems. The fibreoptic endoscopes, developed in Japan by Olympus and other companies, could now be used to look at the oesophagus, stomach, duodenum and colon, and to take samples for pathological examination (histology and cytology).29 ‘Spare-part’ surgery was growing, as metal, plastic or dead tissues were used to replace parts of the body with a relatively inert mechanical function, such as arteries, valves and joints. Transplants were increasingly successful.30 Most acute hospitals had intensive care units, where patients who had had heart attacks could be continually monitored by nurses watching electrocardiogram (ECG) displays, and resuscitated when necessary.31 No wonder, wrote a physician, that sympathy seemed less of a priority, for doctors were human and there was a limit to the capacity to absorb and transmit all this and sympathy too. Reversion to the gentler manner of a bygone age seemed unlikely.32

The drug treatment of disease

Genetic engineering slowly began to influence the development of new drugs. Stanley Cohen, at Stanford University, and Herbert Boyer, at the University of California, San Francisco, combined their knowledge of enzymes and DNA, and in 1973 published a method of inserting foreign genetic material into bacterial plasmids.33 The Cohen-Boyer patent, which earned Stanford an ever-increasing amount ($15 million in 1995), was one of the first steps in developing recombinant techniques.

In spite of the many new antibiotics, ward infections by strains that were difficult to treat became more common.34 The drugs that remained effective had to be used with caution and in the knowledge of the changing patterns of resistance. During the 1950s, staphylococci were increasingly found to produce penicillinase that inactivated the antibiotic. The production of penicillinase-stable penicillins such as methicillin gave clinicians a temporary respite, but then methicillin-resistant strains appeared. It seemed as if the main classes of antibiotics had now been identified, and henceforth discoveries were little more than additional members of an existing group.

The traditional treatment of stomach and duodenal ulcer had been based on diet, alkalis and, if these failed, surgery (partial gastrectomy or vagotomy and drainage). Relapse after medical treatment was almost invariable and recurrence after surgery was common.35 Now there was a new answer. Histamine had long been known to stimulate gastric acid secretion but antihistamine drugs did not relieve ulcer symptoms. In 1964, James Black examined several hundred chemicals with a slightly different pharmacological action and found some that did reduce gastric acid secretion. The first compounds to be tried were not effective by mouth, or had unacceptable side effects. However, in 1976 cimetidine, and later ranitidine, which required fewer daily doses, proved a breakthrough, helping duodenal and gastric ulcers to heal. They were so effective, and adverse reactions so few, that some GPs – instead of waiting for X-ray examinations – made a diagnosis by seeing if the new drugs gave relief. Long-term administration seemed necessary, which helped these drugs become the first products to generate US$1 billion revenue.36

Advances also occurred in the treatment of asthma. An entirely new agent was introduced – disodium cromoglycate (Intal) – which inhibited a bronchial reaction to inhaled allergens. It was best used as prophylaxis by regular administration of the dry powder in a special inhaler. Better bronchodilators, which made breathing easier, became available. For example, salbutamol partly replaced isoprenaline, which had been used for many years.37 The treatment of high blood pressure was also improved. Many people could not tolerate the side effects of the earlier drugs, but the introduction of beta-receptor blocking drugs (such as propranolol in 1969) that were effective and easier to take improved compliance.38 In 1974, Peter Elwood and collaborators at the MRC Epidemiology Unit in Cardiff published a paper on the possible use of aspirin to prevent myocardial infarction; the result was suggestive but statistically inconclusive.39

The ‘non-steroidal anti-inflammatory’ drugs were a major advance in the management of arthritis. Aspirin, the mainstay of treatment, was a discovery of the nineteenth-century chemical industry. In the 1950s, alternatives such as phenylbutazone became available, followed in the 1960s by other drugs including indomethacin, and a range of propionic acid derivatives including ibuprofen and naproxen.40 The way in which they relieved symptoms remained a mystery until 1971, when John Vane offered an explanation of their activity in blocking prostaglandin synthesis and release. Parkinson’s disease was also helped by the introduction of levodopa in 1970.

By 1968, a million women were on the contraceptive pill, and there was growing concern about its side effects. A strong relationship was reported between the use of the pill and death from pulmonary embolus or cerebral thrombosis (a stroke caused by a clot forming in a major artery to the brain).41 The Committee on Safety of Drugs recommended the use of low-dose preparations. The thalidomide disaster of 1961 was casting a long shadow. Manufacturers now had an entirely rational fear of adverse publicity and expensive litigation, and might only undertake costly research and testing on drugs that had a potentially large market.42

Misuse of drugs steadily increased. There were few, if any, valid indications for the use of amphetamines, but the Association of the British Pharmaceutical Industry was opposed to a ban. Several groups of doctors, including the Inner London Medical Committee, overrode the industry and recommended a prohibition of their use.43 Barbiturate abuse was also common. Young addicts found them lying around the home, and sometimes stole prescription pads from surgeries or burgled pharmacies. A campaign to restrict their use was also launched; benzodiazepines were just as effective and their addictive properties still seemed low.44

Drug interaction and ‘bioavailability’, the extent to which an administered product can be used by the body, became important. Digitalis had been one of the few effective cardiovascular drugs, although determining the ideal dose for an individual patient had always been difficult. In 1968 it became possible to measure plasma concentrations accurately by radioimmunoassay, using radio-isotopes. The research workers developing the technique were the first to notice that batches of digoxin manufactured after May 1972 produced twice the previous plasma levels, although the tablets had the same content. What had changed was the formulation – the fillers, buffers and stabilisers used. A warning was immediately issued about this first major example of a bioavailability problem. Drugs might also interact with each other. Anticoagulants had been used widely in the treatment of heart attacks, but careful control was needed: indomethacin, salicylates and sulphonamides enhanced their effect; sedatives and tranquillisers might inactivate them.45

The pattern of suicide changed. Suicide from coal gas and barbiturate poisoning had been common but became less so, because natural gas had a lower carbon monoxide concentration and barbiturates were less commonly prescribed. Potential suicides chose from the ever-widening range of sedatives, tranquillisers and antidepressants; suicide from prescribed drugs increased.46 Accidental overdosage could also occur, particularly in children, so protective packaging began to be introduced. Even coffee had hazards; too much produced symptoms indistinguishable from those of anxiety neuroses. Sudden withdrawal might also produce severe headaches. To what could one turn for relief?47

Radiology and diagnostic imaging

Advances in surgical treatment imposed new diagnostic demands on X-ray departments – for example, a series of pictures in rapid succession to give a moving image. More films meant more radiation and greater risks for both patients and staff. However, radiation exposure could be cut by three-quarters when rare-earth intensifying screens, which produced a brighter image, were introduced. Simultaneously, new contrast media were introduced that were safer and less unpleasant for the patient. Image intensifiers were developed further and produced clearer and more detailed images. Coupled to TV systems and cine equipment, they were rapidly applied to studies of the oesophagus, gut and heart. Because images could be recorded in digital form, they could be compared and manipulated using the ever-increasing computer power that was becoming available.

Sometimes new methods of producing pictures did not use X-rays. Another name was therefore found for X-ray departments – diagnostic imaging. Radio-isotope imaging systems were becoming better and gamma cameras were increasing in efficiency. Unlike rectilinear scanners, they could detect radiation all over the area being examined at the same time, so they were quicker in use and could show radio-isotopes moving from one part of an organ to another. Ultrasound was widely available and the quality of sensors and computing improved rapidly. Moving images could now be seen using ‘real-time’ ultrasound, and the newer scanners were smaller, easier to install and easier to use. Already widely used in obstetrics, cardiology also benefited. Blood flow and valve movement could be measured as new techniques were used, such as the Doppler effect.
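The Doppler principle mentioned above can be made concrete with a worked example. The probe frequency, beam angle and frequency shift below are illustrative values, not figures from any cited study; the equation itself is the standard Doppler relation for reflected ultrasound.

```python
import math

# Hypothetical worked example of the Doppler principle: the frequency
# shift of ultrasound reflected from moving blood gives its velocity.
def blood_velocity(f_shift_hz, f_transmit_hz, angle_deg, c_tissue=1540.0):
    """Doppler equation: v = (delta_f * c) / (2 * f0 * cos(theta)).

    c_tissue is the speed of sound in soft tissue, roughly 1540 m/s.
    """
    return (f_shift_hz * c_tissue) / (
        2 * f_transmit_hz * math.cos(math.radians(angle_deg))
    )

# A 2 MHz probe at 60 degrees to the vessel, measuring a 1.3 kHz shift:
v = blood_velocity(1300, 2e6, 60)
print(round(v, 3))  # 1.001 - about 1 m/s, a plausible arterial velocity
```

The angle term matters in practice: only the component of blood flow along the beam shifts the frequency, which is why the probe's alignment with the vessel had to be known.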

Sir Godfrey Hounsfield

The most important advance of the decade was the introduction of X-ray computed tomography (CT). Since the first X-rays in 1895, all radiographs had shared the same constraint – a two-dimensional image. The limit on progress was thought to lie in the systems producing radiation. Now orthodoxy was challenged. Interest shifted from the source of the radiation to the detection of the image. Advances in detection, combined with a finely collimated beam, allowed a 1,000-fold increase in the power of systems. Godfrey Hounsfield, an engineer at EMI (Electrical Musical Industries), announced the development of X-ray computed tomography at the British Institute of Radiology Congress in 1972.

Hounsfield modestly wrote that the technique of CT scanning might open up a new chapter in X-ray diagnosis. Tissues of nearly similar density could be separated and a picture of soft-tissue structure within the skull or body could be built up. It was a fundamental advance in diagnostic medicine. Instead of registering on film, the X-rays fell on sensitive crystal detectors. The patient was scanned by a narrow beam that was moved across the body and also rotated around it. A huge number of readings were fed into a computer, which mathematically worked out the density of each ‘pixel’ of the image. It displayed cross-sectional images in an entirely new way.48 Ian Isherwood, Professor of Diagnostic Radiology at the Manchester Royal Infirmary, said the new process opened the brain of the patient and the mind of the doctor.49 One could not only look at an image but could also formulate questions to ask of it.
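The reconstruction step described above can be shown in miniature. The following is a hypothetical toy sketch, not Hounsfield's actual algorithm: it recovers four densities in a 2x2 ‘slice’ from simulated beam sums by simple elimination. The scanner's computer solved the same problem in principle, but from thousands of readings across many thousands of pixels.

```python
# Toy illustration of the CT reconstruction problem: recovering the
# density of each pixel from readings that each sum the densities
# along one narrow beam path.
a, b, c, d = 1.0, 0.2, 0.2, 0.9   # unknown densities of a 2x2 'slice'

# Simulated beam readings; each is the total attenuation along one path.
top    = a + b    # horizontal beam through the top row
bottom = c + d    # horizontal beam through the bottom row
left   = a + c    # vertical beam through the left column
diag   = a + d    # diagonal beam through a and d

# Reconstruction by elimination: (left + diag) = 2a + (c + d), so
a_rec = (left + diag - bottom) / 2
b_rec = top - a_rec
c_rec = left - a_rec
d_rec = diag - a_rec
print(round(a_rec, 3), round(b_rec, 3), round(c_rec, 3), round(d_rec, 3))
# 1.0 0.2 0.2 0.9 - the original densities, recovered from the readings
```

With real scanners the system of equations was vastly over-determined and was solved numerically, which is why 15 minutes of computing was needed for a single early picture.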

Hounsfield worked with Dr James Ambrose, at the nearby neurosurgical unit at Atkinson Morley’s Hospital where the earliest clinical images were created. On 1 October 1971, Ambrose made medical history by carrying out the first CT scan on a live patient, revealing a detailed image of a brain tumour. It was the improvement of computer processing that made the early scans possible, but 15 minutes of computing was needed to create a single picture. Hounsfield recognised its significance in radiology but was unable to interest his company in its development; EMI was more used to marketing the Beatles’ music and did not have the infrastructure to support major medical instrumentation. Visiting radiologists understood the potential and Ian Isherwood encouraged the Department of Health to support the new technology. The Department funded the development of a head scanner and the second prototype was installed in 1971 at the National Hospital for Nervous Diseases, Queen Square. The development was commemorated on a postage stamp in 2010.

Scanners revolutionised the diagnosis of stroke and intracranial haemorrhage (bleeding within the skull). At first the new technique was used only by neurologists because the part being scanned had to be surrounded by a water jacket and remain completely still. Normal structures of the brain were beautifully shown, and the position and nature of space-occupying lesions could be seen with great accuracy.50 The scanner that neurologists required was small and the images were so good that they rapidly displaced older examinations such as cerebral arteriograms and air encephalograms that were painful and risky. With the development of larger whole-body scanners in 1975, first used by Louis Kreel of Northwick Park, the results (particularly from scanning the chest and pelvis) opened new diagnostic possibilities.51 International interest in the new technology was keen and firms based outside Great Britain with long experience in radiology (such as Philips and Siemens) developed new generations of equipment. CT scanning, unknown at the beginning of the decade, was an ambition of every district general hospital (DGH) by its end. Initially introduced to regional and neurological centres, many DGHs began to appeal for charitable funds, even though each scanner carried with it high running costs. The images from the new techniques were digital and it became possible to record digital images from conventional X-ray equipment. Magnetic disks could now store them, opening the possibility of doing away with a silver-based photographic process.52

Interventional radiology

The trend in diagnostic imaging had been towards less-invasive procedures, yet imaging and surgery began to converge as a new sub-specialty, ‘interventional radiology’, developed. Image intensifiers allowed radiologists to work in normal room lighting. The improved ability to pass fine catheters along blood vessels into the smallest branches, and to see precisely where they were, meant that it was possible for radiologists to carry out quasi-surgical procedures under radiographic control. Interventional radiology became the umbrella term covering many therapeutic and diagnostic procedures. Catheters could be manipulated to reach most parts of the body, and a wide range of lesions could be treated. The principal techniques were the obliteration of abnormal blood vessels such as angiomas with materials including gel-foam or polyvinyl alcohol foam, increasing blood flow in narrowed vessels, and dissolving blockages formed by thrombosis with clot-dissolving agents.53

Infectious disease and immunisation

Communicable diseases could still produce a surprise. What was true of one microbe, said James Howie, Director of the Public Health Laboratory Service (PHLS), was not necessarily any guide to the lifestyle of another. The diseases displayed the versatility of the microbes that caused them.54 Viruses were generally associated with acute illnesses; now evidence was accumulating that they were also responsible for a variety of subacute or chronic degenerative conditions such as Creutzfeldt-Jakob disease (CJD), and two diseases in animals, scrapie in sheep and mink encephalopathy.55 Though rare, these conditions were of interest because their causal agents were extraordinarily resistant to normal methods of sterilisation.

Countermeasures could be developed only by slow and often tedious methods. There were three main methods of control: immunisation, hygiene and chemotherapy. Immunisation had substantially reduced the common diseases of childhood. Measles immunisation became public policy in 1968, but the levels of cover were often disappointing. For whooping cough there had usually been more than 100,000 notifications a year before immunisation was introduced in the 1950s. This had fallen to around 2,400 by 1973, when the vaccination rate was over 80 per cent. In 1974/5, public concern followed the presentation of data about neurological complications associated with immunisation. Cover fell from 80 per cent to 30 per cent, and major epidemics of whooping cough followed in 1977–1979 and 1981–1983. It took ten years for the balance to be restored.

From 1968/9, the UK was affected by a world pandemic of influenza – Hong Kong ‘flu, named because of the location of the earliest cases. Though it was estimated that some 30,000 people died in the UK (about a million globally), and the workload of general practice rose greatly, there was no widespread alarm. It returned in a lesser form in 1970 and 1972.

Hygiene remained important. Salmonella food poisoning, often following the consumption of cold or partially heated chicken, milk and eggs, could be traced back to poultry-processing plants, to their suppliers, to the breeding stock and to the food mixtures that were often heavily contaminated. Was there an effective system of inspection? asked the BMJ. Animal carcasses, environments, infected raw material fed to animals, processing plants and slaughterhouses were the source of human infections, and it was improbable that salmonellosis was the only example of an animal infection important to humans and animals, and to the economics of farming and food processing.56 A major achievement of the PHLS was the discovery that many cases of diarrhoea, for which no other bacteriological cause could be found, were due to Campylobacter jejuni, which was difficult to grow in the laboratory. It soon became apparent that such infections were even more common than those due to Salmonella.

Blood transfusion had long been known to be responsible, on occasion, for jaundice. This was especially so when large donor pools were used as the starting material for dried plasma. In the late 1960s, a test had become available for hepatitis B antigen, and blood donor screening was introduced. Worldwide elimination of smallpox was now in sight and the World Health Organization (WHO) intensified its campaign. In the UK, the risks from rare but serious complications of immunisation were much greater than from the disease itself.57 Routine smallpox immunisation in childhood ceased in 1971. In 1973, however, when smallpox was no longer considered to be a risk in the UK, an outbreak originated from a laboratory at the London School of Hygiene & Tropical Medicine.58 There was a failure to follow up contacts, and secondary cases were initially missed. Spence Galbraith, a London area medical officer, had long argued the case for a centrally financed and co-ordinated national epidemiological service. The NHS lacked a centre for epidemiology, unlike the USA, where the Communicable Disease Center had existed in Atlanta since 1946. Following the report on the handling of the outbreak, the Communicable Disease Surveillance Centre (CDSC) was established in 1977 under Galbraith, as part of the PHLS, to handle outbreaks that crossed organisational boundaries. The value of the CDSC was proved in 1978, when a technician in Birmingham also contracted smallpox from a laboratory.

Marburg fever was the first of several new viral haemorrhagic fevers to be reported. In 1969 another was recognised, Lassa fever, named after the place in Nigeria where it was first seen. It was related to a reservoir of infection in sub-Saharan Africa. In 1976 a further one erupted in Zaire and southern Sudan, with appallingly high mortality. Hundreds died, including 40 hospital workers. The causal agent, resembling Marburg virus but serologically distinct, was called Ebola virus. Such untreatable and apparently easily communicable infectious diseases caused great anxiety. The risk that people might travel by air during the incubation period led to plans for high-security infectious disease units. Travellers returning to Britain with a temperature were sometimes suspected of Lassa fever, though the diagnosis was rarely confirmed. A laboratory worker at Porton Down accidentally pricked his thumb while working with Ebola virus and six days later became ill and was transferred to an infectious disease unit at Coppetts Wood Hospital in north London where a plastic isolator, developed by Trexler, was available for use.59 A permanent high-security unit was planned for Coppetts Wood but was interminably delayed by public protests and arguments about the design. The hospital was next to an infants’ school and the area medical officer said that the proper place for swamp fevers was swamps – not Haringey.

Malaria had been eradicated from Britain long before the start of the NHS, with the exception each year of a few hundred imported cases, usually in tourists, business people, children visiting parents who were stationed overseas, immigrants returning home for a visit and, to a lesser extent, new immigrants. In the 1960s, the number of cases had been low, probably because of worldwide mosquito eradication programmes. The number of cases in Britain reflected the changes taking place in the tropics. There were great hopes that DDT and other insecticides would make possible the control of malaria by eradicating or reducing mosquito populations. It was a bitter disappointment to find that mosquitoes could develop resistance to insecticides, and that organophosphate residues from the insecticides were entering human food cycles.

In 1976, an outbreak of 180 cases of severe respiratory disease with 26 deaths occurred in the USA among people who had attended an American Legion convention. The bacterium responsible for ‘Legionnaires’ disease was rapidly isolated. The first British outbreak occurred in Nottingham in 1977. The infectious agent was subsequently found in water from cooling towers and air-conditioning systems, and the infection might therefore circulate throughout buildings.60 Further British outbreaks included one in London, near the BBC in Upper Regent Street. Some episodes of illness many years previously could now be attributed to the same cause, for samples of patients’ blood had been kept.

Sexually transmitted disease

Syphilis was under control, but gonorrhoea and non-specific urethritis were still increasing. A hundred years previously, the Chief Medical Officer (CMO) had argued that venereal disease (VD) was a just retribution for sin. Now it was seen as a penalty of ignorance in the young, for which their elders were responsible.61 Ambrose King, the venereologist at the London Hospital, warned about the possible failure of the VD services. The public was ill-informed, doctors were poorly educated in the subject, the facilities for treatment were often in the poorest buildings in the hospital, laboratory standards varied, and contact tracing was inadequate. King wrote that VD did not appeal to tender hearts and swayed no votes. Most money went into researching the complex problems of a few, little into really big medical problems affecting large numbers of people.62

Orthopaedics and trauma

Moorgate

On 28 February 1975, at 8.46am, a tube train failed to stop at Moorgate and ploughed into a brick wall, compacting the first three coaches into a tangle of metal. Why the driver did not stop was never fully understood; there seemed nothing wrong with the train. At 9am, St Bartholomew’s Hospital was asked by the London Ambulance Service to send a doctor to the site. A casualty officer left in an ambulance with a medical student and a small first-aid bag and, on arrival, called for a resuscitation team. It was hard to know the magnitude of the disaster or to reach the casualties. Death was difficult to diagnose; heart sounds were inaudible because of the noise of pneumatic drills close by. Cutting equipment, lighting and ventilation were needed in the confined area of the rescue, but 74 live patients were evacuated in 13 hours. Because of the problem of getting to the wounded, it was 24 hours before it was clear that no one else was living.

Forty-three people died from head injuries and traumatic asphyxia, and two later in hospital from crush syndrome.63 Forty-one were admitted, and the slow but steady arrival of the injured meant that the small accident department at St Bartholomew’s Hospital was not overwhelmed. Barts was flooded with people wishing to give blood as a result of appeals on the radio, and while the hospital’s accident plan worked well, it was apparent that disaster planning should cover more than a single hospital. It was clear that there was no national system for handling major accidents, or for giving the recently developed specialty of accident and emergency (A&E) surgery a central place in disaster planning. Even a system of clearly marking dead victims was missing, so rescuers’ time was wasted on repeatedly confirming death. Moorgate caused the need for wider disaster planning to be taken seriously.

Fractures

Organisations such as Arbeitsgemeinschaft für Osteosynthesefragen (AO) continued to develop systems of fixation, improving the design of tapped screws to fix bone fragments, and developing the use of implants to span fragments and maintain length and alignment. Road traffic accidents remained a major feature of the work of orthopaedic departments. Because of the high incidence of head injury among motorcyclists, crash helmets were made compulsory in 1974; paradoxically the enforcement of their use meant that more motorcyclists survived their head injuries, and presented orthopaedic units with severe multiple injuries that formerly would have been seen only in the mortuary.

Hip replacement had emerged as one of the most important developments of modern surgery. A wide range of less-sophisticated operations (hip fusion, arthroplasty and new femoral heads) were now superseded by total replacement as the treatment of choice. In 1972, John Charnley reported long-term follow-up of 379 operations, carried out between 1962 and 1965, with excellent results.64 The need for surgery of the knee was at least as great as in the hip, but the mechanics of the joint were much more complex. Caution was needed because, if the operation went wrong, putting matters right was difficult. In the 1950s and early 1960s, hinge joints had been used, but they tended to loosen or break. Major effort went into development; Michael Freeman introduced a prosthesis in 1968 in which the joint surfaces alone were replaced, the upper with metal and the lower with polyethylene – the condylar knee replacement.65 It depended upon the patient’s own ligaments for stability and was not suitable in cases of gross destruction of the joint, but the design was progressively improved. In the 1970s, a two-piece prosthesis with a mechanical link was introduced. The failure rate remained higher than with the hip and, although the relief from pain was substantial, a walk across uneven ground was seldom possible.66 Initial experiments were undertaken with other joints, the shoulder and the elbow. All operations had the potential to produce problems – infection, loosening of the components or the wearing out of the prosthesis.

Cardiology and cardiac surgery

With better forms of treatment, the prevalence of heart failure, particularly from high blood pressure, fell. Heart attacks, most dangerous in the first minutes, presented an increasing problem. A big reduction in the mortality of the disease depended on prevention rather than technology. Evidence incriminated high blood pressure, smoking, obesity, a high intake of saturated fat from dairy products and physical inactivity. Many common foods seemed ‘super-saturated’ – from roast beef to bangers and mash.67

In 1971, Brighton followed Belfast in the introduction of mobile coronary care ambulances, staffed by specially trained personnel who could recognise abnormalities of heart rhythm from the ECG, and correct them by the use of a defibrillator.68 Because there might be a delay before a call was received, there were doubts about the effectiveness of such services. In hospital the technology of coronary care units steadily improved, with automatic preset alarms warning nurses if the patient’s heart beat became too fast or slow, and indwelling cardiac catheters to monitor heart function. From six weeks’ bed rest in hospital, the norm in 1948, treatment had moved on. By the 1970s, experience showed that rapid mobilisation could be advocated confidently. In an uncomplicated case, the patient could be out of bed in a day or two and discharged in a week to ten days. It was now known that, within a month, the damage to the heart had largely healed and a normal life could be resumed.69 Archibald Cochrane was responsible for a study from Bristol, which revealed that selected patients treated at home did as well as those treated in hospital, and raised a question about the effectiveness of hospital care. However, the groups studied differed in their characteristics, and the study did not alter policy on hospital admission.

After a slow start, ultrasound (echocardiography) was increasingly important in cardiology, particularly in the assessment of valve disease. Heart valve damage because of rheumatic fever had been a major problem. By the 1970s, the scene had changed for two reasons. First, there had been an astonishing fall in the incidence of rheumatic fever, probably related to better housing that reduced overcrowding, and possibly a diminished virulence of the beta-haemolytic streptococcus or better control of infections.70 Second, the surgical treatment of damaged heart valves had improved and was now a routine procedure. The death rate during operation for valve replacement was about 10 per cent and, although patients never regained the heart function of a healthy young adult, most were well satisfied with the improvement. The valves might be mechanical, for example, the Starr-Edwards caged-ball type that proved highly durable, or tissue, human ‘homografts’ or pig aortic valves, both of which were widely used but less durable.71

Infants and children with congenital heart disease and a poor chance of survival had been among the earliest cardiac surgical patients. Initially, because of the limited techniques available and the complexity of the abnormalities, full restoration of normal anatomy and function was seldom possible. Now major abnormalities were increasingly tackled in units such as Great Ormond Street, the Brompton and Guy’s hospitals, where there was great expertise in the care of small and sick infants.

The acceptance that surgery had a place in the treatment of coronary heart disease was largely the result of work at the Cleveland Clinic, Ohio. In 1962, selective coronary angiography was introduced, which showed the position of blocked arteries, and by 1967 a surgical technique was developed to use a graft to bypass obstructed coronary arteries. The ability to stop the circulation and use a heart-lung machine (perfusion) gave the surgeon enough time to perform the operation. Surgery was found to relieve angina more effectively than medical treatment. By 1971/2, increasing numbers of operations were performed in Britain and the procedure began to dominate cardiac surgery.72 By 1974, the mortality was as low as 3 per cent in patients with severe stable angina.

Surgeons had been doing experimental heart transplants in animals for some years. A human heart transplant was carried out by Christiaan Barnard in South Africa in 1967. Norman Shumway then performed two at Stanford, California. Early in 1968, the Board of Medicine of the American National Academy of Sciences issued guidance on the experience, laboratory facilities and ethical safeguards that should be in place in cardiac surgical centres considering transplantation.73 The first heart transplant in Britain was carried out at the National Heart Hospital by Donald and Keith Ross on 3 May 1968. There was a second, but both patients died. The UK surgeons had underestimated the need for careful patient selection and the problems of tissue typing, and they lacked the laboratory and pathology services that Shumway had. They misjudged the media interest, and some senior members of the medical profession were highly critical of the operations and thought them attention seeking. The third UK transplant took place at Guy’s in 1969 and the Daily Telegraph published the name and biographical details of the donor, a nurse killed in a road accident. An unrepentant newspaper rejected the hospital’s protests. The BMJ thought it breathtaking that the paper thought it knew better what was good for the patient than the doctors or relatives.74

Worldwide, many hospitals undertook a few heart transplants, but few of the patients survived. Almost all units rapidly stopped heart transplantation and a voluntary moratorium seemed to come into force. The transport of brain-dead donors in ambulances was leading to widespread distaste. Concerned about the ethics of the operation, the adverse publicity and the costs, Sir George Godber convened a group of experts, including those involved with the London patients, to ensure that the medical profession acted on a matter of public concern. Sir George said that clinical decisions about the treatment of individual patients were for the consultants concerned, but the diversion of resources from other hospital work was a matter that involved management. The expert group advised that heart transplantation was still largely experimental and there was no advantage in replicating work being done elsewhere. Regions were told not to make special resources available to support programmes and, as a result, heart transplants ceased for the time being in Britain.75 Shumway and his team in Stanford quietly continued their clinical work and their long-standing research programme. In 1971 they described their first longer-term results and success rates were improving. Of 26 patients, 13 left the hospital, of whom seven were alive a year later. A further report of 150 consecutive patients between 1968 and 1978 showed a one-year survival rate of 70 per cent – comparable with that of renal transplantation. The success was the result of teamwork with full immunological, pathological and microbiological services. In 1977, the UK Transplant Panel defined the conditions required in cardiac surgical units planning heart transplantation, and the ban was rescinded in 1979.76

Organ transplantation

Organ transplantation began as laboratory research on animals, was used experimentally in humans by a few clinics, and became accepted as a form of treatment of general application.77 Sometimes, as in liver transplantation, the mortality was extremely high at the beginning, but with experience it diminished substantially. The search for new and more specific immunosuppressive agents proved frustrating. Roy Calne, an English surgeon then in Boston, investigated the use of a derivative of 6-MP, azathioprine, used in the treatment of leukaemia, and showed that it prevented the rejection of kidney grafts in dogs. Subsequently, the introduction of cyclosporin in the late 1970s, combined with steroids, changed the whole picture.78 Getting excellent results in many patients, even with kidneys transplanted from unrelated donors, was now possible. Liver transplantation, first reported in 1963, was more difficult, particularly as the liver was sensitive to interruption of its blood supply, and had to be cooled rapidly and perfused if it was to survive until reimplantation.79 Nevertheless, the procedure was increasingly successful and a joint programme began at King’s College Hospital and Addenbrooke’s in 1968. By 1979 there had been 83 liver transplants, with steadily improving results.80 Research in basic immunology and tissue typing allowed clinicians to select donors with theoretical expectations of better results. The improvements in immunology also made bone marrow transplantation possible and, in the late 1960s, a role for it was identified in aplastic anaemia (failure of the bone marrow to produce blood cells) and leukaemia.81 Although these techniques were effective, they were costly. Clinicians did not always discuss the financial consequences with hospital management before beginning programmes locally.

Renal replacement therapy

It was now well established that active life could be prolonged in renal failure by maintenance dialysis and renal transplantation, but lack of facilities meant that most of those who would benefit still died.82 Home dialysis did little to relieve the pressure, even though two-thirds of patients on dialysis treated themselves in their own homes. The reliance on home treatment was partly the result of the slow opening of new dialysis units and partly of the risk of hepatitis B, although the control measures that were introduced in 1970 were largely effective.83

In the 1970s, outbreaks of encephalopathy and bone disease occurred in various dialysis units, and bone disease became a major problem. It often seemed that not a week went by without a dialysis patient sustaining a fracture. Alfrey and colleagues in the USA associated the encephalopathy in dialysis patients with aluminium toxicity. A geographical variation of toxicity that was associated with levels of aluminium in the water supply was found.

After five years’ effort, dialysis units were accepting only 500 patients a year out of an estimated potential caseload three times that size. Once patients were on treatment, they were there for years, blocking the units for new cases. Expansion of transplant facilities was urgently required, for a successful transplant removed the need for regular dialysis, made a place available for another patient and was probably cheaper in the long run. Kidneys were scarce. A national organ-matching and distribution service was established in 1972, and it was found that many first transplants rapidly failed, probably reflecting the low quality of donor kidneys.84 Increasingly, kidneys from living donors, often relatives, were used. The results were better and the effect of removal of a kidney from an otherwise healthy person was negligible.85 Antony Wing, a nephrologist at St Thomas’ Hospital, produced a graph showing a relationship between the total number of renal patients on programmes in different countries and the gross national product of each. The prospect of survival apparently depended on the economic productivity of the country of residence.86 Dialysis and transplantation were now both established procedures and were interrelated. Neither could stand alone, for patients might need to move from one treatment to another, for example, if a transplant failed.87

Ear, nose and throat (ENT) surgery

A new surgical approach to the internal auditory canal was introduced by Ugo Fisch in Zurich. Hopkins had developed his fibreoptic lens system in 1954, but it was not until the mid-1970s that fibreoptic endoscopy revolutionised the examination of the nose and sinuses, also making possible the development of endoscopic sinus surgery.

Ophthalmology

Developments in operating microscopes, instruments and lasers were applied to the diagnosis and treatment of eye disease, leading to operative procedures that were often time-consuming for the surgeon but made earlier discharge possible. Better anaesthesia and less-traumatic surgery led to successful surgery on older patients than ever before, and they were operated on at an earlier stage of disablement.88 Phacoemulsification, a method of breaking up the opaque lens followed by aspiration to remove the lens fragments, was introduced into cataract surgery by Kelman in 1967.89 Increasingly, lens implants were the treatment of choice.90 Once, a cataract operation had been followed by three weeks in bed. Now early mobilisation was the order of the day. Sometimes surgeons extracted both cataracts at once, not generally regarded as good practice. Patients could expect to be up and watching television two days after operation. Some diseases had previously been untreatable, for example, the retinal damage found in most diabetics after 10–20 years. The Birmingham Eye Hospital reported a controlled series showing that photo-coagulation by laser destroyed the growth of abnormal new blood vessels, and could arrest the development of the disease.91

Cancer

The late 1960s and early 1970s saw a breakthrough in cancer chemotherapy. Gordon Hamilton-Fairley, at St Bartholomew’s, saw what was happening in the USA and had the vision and drive to fight for chemotherapy and oncology in the UK. In the 1950s, acute lymphatic leukaemia had proved to be curable. In the 1960s, it was the turn of Hodgkin’s disease, which had usually been fatal, though treated for many years by radiotherapy and the early cytotoxic drugs such as nitrogen mustard. In early cases, radiotherapy had cured a few patients. Combination therapy was discovered almost by chance. In 1964, the National Cancer Institute put together four drugs, each of which had some therapeutic effect but different toxic effects. The combination they chose to use was hard to better, and consisted of nitrogen mustard, oncovin (vincristine), prednisolone and procarbazine (MOPP). There were apparent cures, which led to a redoubling of effort. The combination of vinblastine and chlorambucil raised the remission rate to 63 per cent.92 The radiological technique of lymphography, which made it possible to see affected lymph glands in the abdomen, and biopsy of spleen, bone marrow and liver, showed that the disease spread progressively. The spread could be accurately staged (assessed), and improved irradiation facilities made it possible to treat larger volumes of tissue. With increasing success, it became imperative that patients, often young adults, were handled from the outset at centres of expertise. Between 1972 and 1977 there was also a dramatic improvement in the remission rate of acute lymphoblastic leukaemia.93 Cisplatin, introduced in 1972, had been discovered accidentally after Rosenberg, studying the effect of electrical currents on bacterial growth, found that bacteria did not separate properly. It was the platinum electrode that was responsible, and the guess that something that stopped bacteria dividing might do nasty things to cancer cells led to further work. It proved useful for several tumours, including testicular cancer.

Openness with patients was now required, for the new patterns of chemotherapy were not compatible with reticence about the diagnosis.94 On both sides of the Atlantic, enthusiasm grew among oncologists. Perhaps all that was necessary was to discover the right combination of drugs and each cancer in turn would be cured. A campaign in the USA to ‘conquer cancer’ painted a rosy picture of the outlook, with little stress on the poor prognosis of the more common cancers such as breast, lung and stomach. The mortality for cancer of the breast had altered little in 50 years. Since the 1950s it had been known that some breast cancers were hormone dependent. Anti-oestrogens provided a possible line of attack but most had severe side effects, making them unacceptable to most doctors and patients. In the early 1970s, several compounds, such as tamoxifen, were reported as arresting or reversing tumour growth. They became widely used, first in advanced breast cancer, subsequently at earlier stages, and to stop recurrence in the other breast.95 Chemotherapy was already well established in advanced and inoperable disease and was used to reduce tumour size and make radiotherapy easier. With the discovery that tamoxifen and cytotoxic treatment improved the results, there was a swing away from radical surgery, partly because of a growing belief that treatment influenced survival less than the behaviour of the tumour and the extent of the disease at the time of presentation.96

The results of screening for cancer of the cervix began to be analysed and there was debate over its effectiveness. In 1976, the Canadian government published a report of its experience with mass screening. The incidence of the disease had fallen alongside the introduction of the programme, and, although it was not possible to prove that screening was responsible, the report concluded that many cases of carcinoma that were still localised to the cervix would have spread had they not been treated.97

A further report on smoking was published in 1971 by the RCP. Some 50,000 deaths a year could conservatively be attributed to it, and the list of associated diseases now extended to chronic bronchitis, coronary artery disease and cancers of the mouth, pharynx and larynx.98 Cigarette consumption continued to rise, although the prevalence of smoking was beginning to fall. Three factors were recognised as affecting consumption: health publicity, tax increases, and controls on smoking in the workplace.

Obstetrics and gynaecology

The number of maternal deaths had fallen from 67 per 100,000 in 1952 to 19 per 100,000 in 1969. The causes remained the same: abortion (other than termination of pregnancy), pulmonary embolism, haemorrhage and toxaemia.99 The improved results were not due to any one factor but to higher standards of surveillance, earlier detection of complications, and more effective preventive action and treatment.100 A healthier population played its part as well.

Hospital deliveries were rising; by 1968, the figure was 80 per cent. The average length of stay after delivery, however, continually decreased and an increasing number of maternity beds opened. As hospital confinement increased, GPs increasingly restricted their involvement to antenatal and postnatal care.101 Babies were generally delivered by midwives both at home and in hospital, but there was little contact between the two branches of the profession. In 1967 a committee was established, chaired by Sir John Peel, to consider the future of the domiciliary midwifery service and the need for maternity beds. The findings of the Perinatal Mortality Survey undertaken by the National Birthday Trust Fund influenced the Peel Report, published in 1970.102 Cranbrook had recommended a 70 per cent hospital confinement rate, Peel now advocated 100 per cent, and both assumed that hospital delivery was safer without actually establishing this. It became policy to concentrate obstetrics in properly equipped and staffed units, and to discourage isolated GP units. It was the death knell for the domiciliary midwife, who found that the work she had chosen and which she enjoyed was now branded as unsafe and inappropriate.103 District midwives were retitled community midwives and became part of the primary health care team, working with GPs. Home birth was barely viable. As the numbers fell, neither GPs nor domiciliary midwives had enough experience to give them the confidence and the skills required.

While there was agreement that a unified obstetric service was desirable, relationships between obstetricians and GPs could be touchy. Consultants were sometimes prepared to work in a service with GP obstetricians only if they could lay down strict rules of practice, whereas the GPs might want to work in a unified service only if they could enjoy unfettered clinical responsibility.104 The presidents of the Royal College of Obstetricians and Gynaecologists (RCOG) and Royal College of General Practitioners (RCGP) agreed to a joint committee to discuss the problems. Both Colleges favoured the closure of small isolated GP obstetric units and the complete integration of consultant and GP facilities. Improving the survival and quality of the fetus was thought to mean hospital delivery and fetal monitoring, a technique developed in the USA in the 1960s that spread rapidly in the UK. Fetal heart rates were measured and the acid-base balance was checked by fetal scalp blood sampling. The techniques were introduced without any substantial clinical trials, the results were not always fully understood, and mothers were often worried about the nature of the attention they were receiving. Women, however, wished to take a more active part in their care, and there was pressure to make hospitals more personal and less authoritarian. Some obstetricians had long aimed for this; for example, the baby might be placed in the mother’s arms immediately after delivery, a simple philosophy, but one sometimes difficult to imbue into those involved.105

Abortion

The Abortion Act 1967, the result of David Steel’s private member’s bill, had changed the ambiguous and unsatisfactory state of the law. Genetic medicine created a further indication for termination; prenatal diagnosis of some congenital and inherited conditions was now possible by sampling amniotic fluid at the 16th week of pregnancy, allowing termination if indicated and acceptable to the mother. Abortions could be carried out only in NHS hospitals or other premises approved by the Minister, and each had to be notified confidentially to the CMO. Neither the Ministry nor the gynaecologists had anticipated how great a change the more liberal attitude of society and GPs to abortion would bring, and no extra staff, outpatient or theatre time were funded.106 The number of abortions rose annually for the first six years to 167,000 in 1973, after which it fell slightly, perhaps because of the introduction of free contraception. Numbers rose again in 1977/8, perhaps owing to adverse publicity about the side effects of oral contraceptives.107 The whole character of gynaecologists’ outpatient work changed, and waiting times for routine procedures got longer, particularly as few terminations were treated as day cases in the NHS, as they were in private clinics. Initially about 60 per cent of operations were carried out in NHS hospitals, but there were wide regional variations in services. Only 20 per cent of women seeking a termination in the West Midlands region had NHS care in their home health region, compared with 90 per cent in the Northern region.108 In some areas, gynaecologists were not prepared to carry out many – or any – terminations, and women turned to voluntary organisations. The proportion of terminations performed in these ‘approved’ places rose because NHS beds were never increased to meet the demand. In 1968, there had been 50 deaths associated with abortion; such deaths fell steadily and, in 1978, there were only five.109 Up to a third of patients came from overseas, mainly France, Germany and the USA, although, after 1973, the number of non-residents fell as other countries liberalised their legislation. The number of foreign women coming was said to have made Britain the ‘abortion centre of the world’, and private clinics were thought to be making exorbitant profits.

Opposition to the Act did not cease once it had become law. Despite 15 attempts to change it, mainly by reducing the time limit for abortion, the Abortion Act 1967 survived its first two decades unchanged. In February 1971, the government set up a committee of inquiry, not into the principles that underlay the Act, but into the way in which it was working. Reporting in 1974, the committee supported the Act and its provisions, though it criticised the inequalities of provision and the laxity of some parts of the commercial private sector.110 Abortion was sometimes the consequence of inadequate family planning: it was estimated that there were some 200,000 unwanted pregnancies a year, yet only one local authority in six had a full family planning service, and family doctors and gynaecologists were not entitled to prescribe contraceptives for social reasons within the NHS. The voluntary sector tried to fill the gap, the Brook Advisory Service establishing clinics for teenagers. The BMJ argued that the expense of including contraception within the NHS would be small compared with the alternative – the cost of illegitimate births and abortions.111

Paediatrics

An appreciation of the unique nature of paediatric disease, and the ability to save children who might previously have died, spurred the appointment of many more paediatricians. Physicians for adults felt less comfortable managing children’s care than in the past. The newcomers were interested in neonatal problems, particularly in the lowest social classes, where perinatal and neonatal mortality was high in comparison with other developed countries. The 1960s saw a proliferation of special care baby units (SCBUs). Most of the babies admitted were not underweight; almost half were there for observation, needed no treatment and left the unit within three days.112 It proved difficult to organise an effective neonatal service. District paediatricians might prefer to maintain their own special care unit, even though it was half-empty. Obstetricians were sometimes loath to allow paediatricians to care for babies, and might oppose the transfer of mothers in premature labour to hospitals where there were good facilities for very small babies. In neonatal intensive care units, the survival rates of small babies continued to improve and, by 26–27 weeks’ gestation, the chance of survival had reached 50 per cent.113 Intensive care cots were, however, in short supply. It was now possible to use machines for long periods to help babies breathe, and to provide advanced monitoring.114 Jonathan Shaw introduced total parenteral nutrition in 1973, allowing babies to be fed intravenously with fat, protein and carbohydrates.

Some major problems were diminishing. From 1970, the perinatal deaths from haemolytic disease of the newborn fell rapidly.115 Far fewer children had orthopaedic problems. Screening shortly after birth for congenital dislocation of the hip was introduced, leading to early and successful treatment. Tuberculous bone disease became uncommon. Babies born with the malformation meningomyelocele were less frequent, as cases were identified by ultrasound at an early stage of pregnancy and termination was offered. The treatment of thalassaemia was improved by the introduction of desferrioxamine in 1975, first intramuscularly and, in 1977, by continuous subcutaneous infusion using a battery-operated pump. In 1966, the amino-acid sequence of human growth hormone was determined, and by 1970, synthetic hormone was produced in small amounts – although, to begin with, this was too impure to be given to humans.116

A three-year review of services for children, Fit for the future, chaired by Donald Court and reporting in 1976, did not have the impact that it might have done.117 The report was lengthy and made many recommendations, some of which were controversial – for example, that some GPs should specialise in providing the paediatric care for their practice, and that health visitors should have geographic rather than practice responsibilities. In the furore that followed, many sensible proposals were ignored. The report saw health surveillance, and the provision of treatment where necessary, as one of the main functions of child health services. It recommended special arrangements for the care of handicapped children, and the setting up of district handicap teams, the beginning of a structured team approach to the care of long-term disability and mental handicap in children.

Geriatrics

Improved teaching in undergraduate medical schools and better recruitment to geriatrics aided the specialty. Acute medical units might now be hard to distinguish from geriatric ones with active clinical policies. The crucial development of the decade was the recognition that most people admitted to medical wards were elderly, so the key to the care of old people lay in the acute wards. That was where unnecessary long-stay problems had been generated in the past. Many surgical procedures were predominantly required by elderly people, for example, cataract operations and hip replacements. Traditionally, the patients entering the geriatric service were selected largely by GPs or doctors in A&E departments, rather than by geriatricians. There was dispute about the pattern of geriatric care, particularly as general physicians saw a risk to their resources. Should general physicians accept patients, however old, making geriatricians unnecessary? Should there be a defined age at which everyone was admitted under the care of a geriatrician, allocating patients on the basis of age rather than clinical requirements? Should geriatricians cease to be separated from consultant colleagues? In Newcastle, the medical staff decided to integrate geriatrics with other general medical services, pooling beds as part of multi-consultant teams, all taking part in acute medical emergency work.118 In Oldham, the unit was also an intrinsic part of the DGH, and provided total medical care with virtually no waiting list. Turnover more than kept pace with demand.119 When the new Northwick Park Hospital opened, the geriatrician, Malcolm Hodkinson, decided that from the outset his department would have neither a waiting list nor a system of pre-admission assessment. The emphasis would be on active treatment and early discharge. Those who could not be discharged (roughly half) were transferred to two smaller hospitals, so that the department followed a scheme of progressive patient care. The morale of the staff improved and there was less tendency to treat geriatrics as ‘the poor relation’ of medicine.120 As geriatric departments became smaller, with short waiting lists, experimental systems of integration were possible. The aim was early discharge. There was never enough welfare accommodation, although local authority provision in England and Wales increased by a third between 1961 and 1966.121 The responsibility for the care of those not needing technological care was passing slowly to the local authorities.

New facilities were becoming available, for example, geriatric day hospitals where patients, brought by ambulance, spent four to eight hours a day. Everything the patients did in the day hospitals was planned to overcome their disabilities. Some patients were there for assessment, avoiding hospital admission, or for short periods of physiotherapy and occupational therapy before discharge to community care. Others attended regularly, and, because they were under supervision, admission might be avoided. Patients came in roughly equal proportions from GPs and hospital doctors, and the units acted as a midway point between the acute inpatient unit and the community social day centre.122 In 1960 there had only been a dozen, but ten years later there were 120.

Mental illness and the long-stay hospitals

Inpatient care for mental illness (England and Wales)

                 1954       1969

Beds             157,427    133,667

Inpatients       152,197    116,275

The Hospital Plan of 1962 had stated that some of the mental illness services for a district should be at the DGH. Earlier estimates of the need for beds had been based on a custodial approach; newer estimates looked at balanced psychiatric units working alongside other specialties in a district hospital, and in partnership with the local authority social services departments.123 Although the numbers admitted continued to rise, length of stay fell. Between 1954 and 1969, inpatient numbers fell nationally by 31 per cent, and in the Oxford region by 45 per cent, but only by 18 per cent in Liverpool. Eason and Grimes, Department of Health and Social Security (DHSS) statisticians, showed that the number of ‘old’ long-stay patients, though continuing to decline, was doing so ever more slowly. While 50 beds per 100,000 of the population seemed adequate for acute adult admissions, additional beds were required for long-stay patients and for elderly severely mentally infirm people. The large isolated hospitals still treated most admissions (77 per cent). General hospital units treated only 17 per cent, and the teaching hospitals 7 per cent. In the Manchester region, 44 per cent of patients went into DGH units. In East Anglia none did.124 A review of the functions of a DGH in 1969 suggested that all the district’s psychiatric services should be on site but, for financial and logistic reasons, it would be 25 years or more before such a policy could be fully implemented and all the large mental illness and mental handicap hospitals had closed.125

From the Manchester experience it appeared that three 30-bed units, each with its own nursing and medical team, could meet the needs of a district of 180,000. Only a few patients – for example, disturbed adolescents, drug addicts and those who were violent – would not be appropriate for a DGH. Critics said that the enterprise and enthusiasm of the staff of these units was not in question, but they had over-rated their achievements. Some units had only one consultant for populations of 200,000, there were few other medical staff and little support from social workers or psychologists. The service was cheap and the bed usage remarkably low, but the standard was satisfactory only for communities who had known nothing better. There had been no systematic descriptions or evaluations of the many DGH units established by 1960.126

Community-based psychiatric services required complementary residential and day services provided by the local authority social services department, and it helped if health and social services worked with the same population.127 ‘Sectorisation’ was advocated, each psychiatrist having a little patch, convenient to the social service department of the local authority. The development of community links was clearly worthwhile, but should the consultant be chosen by postcode? The ethos of mental hospitals had changed from a medical autocracy to a more diffuse, multi-disciplinary quasi-democratic administration operating through a multiplicity of committees.128 Until 1948, the medical superintendent reigned supreme, for better or worse; the decisions were his. As medical superintendents disappeared, problems were discussed at length and decisions were hard to come by. Some hospitals, such as Claybury, had also adopted the concept of the therapeutic community, ‘treatment by committee’ as some called it. The authority of doctors was, at times, reduced to vanishing point. The power structure was changing; nurses were being trained as therapists to work in teams and make possible the treatment of far more psychotic and neurotic patients, both in hospital and in the community.129

Scandals in mental illness and mental handicap hospitals

The effects of management – or lack of it – were nowhere more obvious than in the long-stay hospitals. The concentration of resources on DGH units, at the expense of old psychiatric hospitals where there were too few staff and standards of care were low, added to the problems. The response to the letter Lord Strabolgi, Barbara Robb and others had written to The Times, asking for examples of poor care, had been hundreds of letters releasing pent-up rage and misery, including many from nurses and social workers. In 1967, Barbara Robb published Sans everything on behalf of AEGIS (Aid for the Elderly in Government Institutions).130 Its cry of distress at the undignified suffering of so many was written in careful terms, increasing the effectiveness of its attack on the care of the elderly and of elderly mentally ill people. The book consisted largely of accounts, often by staff, of anonymised institutions: of random and mindless cruelty and thoughtlessness, petty fraud with patients’ money, and the desolation of life in large hospitals. While paying tribute to the “good hospitals” with excellent staff running interesting and optimistic programmes for patients, it noted that in others persistent staff shortage could lead to the recruitment of “nurses who, knowing little or nothing about the proper care of the elderly, came to regard defenceless patients merely as sources of disagreeable hard work for themselves”.

Long before he became Minister of Health, Kenneth Robinson knew of the bad conditions in the mental illness hospitals. He had stressed the need for improvement in hospitals with their stark conditions, double-locked doors (even in areas where patients had little tendency to wander or abscond) and appalling sanitary conditions. There were fire hazards – 24 female patients died at Shelton Hospital, Shrewsbury, in February 1968. Knowing how difficult it was to work in the hospitals, he sprang to their defence, and subsequently admitted he probably overdid it.131 The six hospitals mentioned in Sans everything were identified, and the regional hospital boards (RHBs) were asked to investigate. The reports were heavily edited before publication, and their tenor was to dismiss the accusations as inaccurate, as misinterpretation, or as isolated aberrations of individual staff, some of whom subsequently retired. Kenneth Robinson told Parliament he deeply regretted the anxiety caused to patients, relatives and hospital staff by allegations now authoritatively discredited. The BMJ was pleased that staff and hospitals had been exonerated. The Nursing Times described Sans everything as an exercise in mud-slinging; the Minister, reacting to public concern, had made careful enquiries, and allegations of cruelty by nurses were not proven.132 Some thought his response smelt of whitewash.

Richard Crossman replaced Kenneth Robinson in 1968. His interest in mental hospitals was also sincere. He discovered that the money spent on food in a district hospital was often three times that spent in a geriatric hospital, and that was more than was spent in hospitals for the mentally handicapped. He asked civil servants to defend this and was told “they wouldn’t appreciate better food if they got it”. Crossman thought that good food might be one thing that elderly and mentally handicapped people really could appreciate, and that after 25 years of the NHS, people in long-stay hospitals should expect equality of treatment.133

While the inquiry into Sans everything was proceeding, another was established that would have important consequences. In 1967 a nursing assistant at Ely Hospital Cardiff made specific allegations to the News of the World about the treatment of patients and pilfering by staff. They were forwarded to the Minister. Kenneth Robinson, though Labour, commissioned an inquiry under the chairmanship of Geoffrey Howe QC, a budding Conservative politician, thereby ensuring cross-party support. Howe and his committee worked hard. Richard Crossman was Minister by the time the report was published. It appeared in full only after some argument, but Crossman came out strongly in favour of publishing the entire report. The Report on Ely Hospital (the Ely Report) was long, detailed and dealt with shortcomings, both in the hospital and by the hospital management committee (HMC) and the RHB. It spoke of the continued acceptance of old-fashioned, unduly rough and undesirably low standards of nursing care. The report laid some blame on professional isolation. The allegations had been confirmed and the most serious accusations were directed at an inert nursing administration that had victimised staff who complained.134 Staff who, for years, had either lived with the system or got out, now sometimes spoke up, ignoring the possibility of retribution. After the publication of the Ely Report in March 1969, regions were asked to examine their own services, and increased allocations of capital and revenue were made to services for the mentally handicapped.

The Hospital Advisory Service

One recommendation of the Ely Report was for a new system of regular visiting or inspection. Richard Crossman, tired of sending memoranda that nobody read, decided to establish a Hospital Advisory Service (HAS) and received the support of the medical profession. The doctors accepted that the Howe Report on Ely established good grounds for this. The team was led by Dr Alex Baker, a psychiatrist already on secondment to the DHSS.135 It was multi-disciplinary, visiting hospitals for two to three weeks. It looked first at mental handicap hospitals, about which there was most concern, moving on to mental illness and geriatrics. Its reports went to the Secretary of State and to the regions and hospitals.136 The first annual report drew attention to the lack of communication within hospitals, between hospitals and the community, and with management. Almost all hospitals of 1,000 or more beds had major problems, and many regions were still adding to and refurbishing old hospitals for the mentally handicapped, rather than establishing modern methods of care in smaller units.137 Other scandals followed.

1970 – While the DHSS was trying to decide how to handle the Ely Report, even more serious violence was revealed at Farleigh Hospital in Somerset, a hospital for mentally handicapped boys and men. Police proceedings were brought against several nurses on charges of ill-treating patients. This led to a committee of enquiry that revealed weaknesses in hospital administration, long-standing differences between the medical superintendent and the HMC, and under-staffing by nurses. It made recommendations about the need to instruct staff in how to handle disturbed or difficult patients.138

1971 – At Whittingham Hospital, police enquiries into allegations of ill-treatment and financial irregularity led to a committee of inquiry. The report said that Whittingham was a hospital of wide contrasts, with some good features and some very unsatisfactory ones. Medical and nurse staffing was inadequate; in 1970 it had the poorest medical staffing of the 108 English mental illness hospitals submitting returns. Many allegations of ill-treatment were justified, including the ‘wet-towel’ treatment, in which a towel was twisted around a patient’s neck until he lost consciousness. There had been large-scale pilfering, if not organised corruption; complaints by nurses had been suppressed; the management structure was defective; and there had been inadequate medical supervision in some wards. There were two standards in the hospital: one for acute mental illness, and a lower one for longer-stay – mainly elderly – patients.139 The report was frank about the problems that occurred as mental hospitals were deprived of their short-stay patients. There was an accumulation of long-stay patients receiving no more than residual care, a system as demoralising for the staff as it was bad for the patients.140 Dr Russell Barton had named Whittingham in 1965 as a hospital in which ill-treatment occurred. The BMA Central Ethical Committee censured him for comments capable of being construed as adverse criticisms of a member of the hospital’s medical staff.141

1972 – At Napsbury Hospital, near St Albans, an independent professional appraisal of medical and nursing practices in one part of the hospital was undertaken after an inquest on a patient who had died of severe injuries. A consultant, supported by equally enthusiastic doctors and nurses, routinely treated schizophrenics on the theory that their mental state was the result of a crisis in interpersonal relationships.142 According to the report, his methods of treatment, which included behaviour modification and the withdrawal of conventional nursing care so that conditions became dirty and unhygienic, had been pursued in an insistent and inflexible manner with, at times, a lack of compassion and respect for the rights of patients.

1972 – At South Ockendon Hospital for the mentally handicapped, an independent committee of inquiry was set up after the deaths of two patients and a complaint by a nurse about low nursing standards on the ward where one of the patients had lived.

During the latter part of the 1970s, there were other inquiries, the most dramatic being in 1976 at Normansfield Hospital, a hospital for the mentally handicapped in southwest London. Nursing members of the Confederation of Health Service Employees (COHSE) went on strike when all other methods of bringing problems to management’s notice had failed. The subsequent public inquiry revealed an appalling quality of life for the patients and a failure of senior medical, nursing and administrative staff to co-operate with each other. The extremely low standard of patient care and the hostility between one consultant and virtually all other staff were well known to everyone at every level, and all agreed that the situation was unacceptable. However, nothing was done until the nurses’ action brought an instant response.143

The inquiries devastated the morale of the hospitals, and there was much anger among the staff. It was common for more money to be given to the hospitals criticised, sometimes at the expense of others equally pressed. To overcome the lack of psychiatric leadership, a taskforce was often assembled, and an experienced consultant and senior nurses would be drafted in to set matters right. The necessary qualities were rare, and drafting such people in denuded the hospitals from which they came of key staff. Of the hospitals involved in scandals, most were for mentally handicapped or elderly people. Common themes ran through the reports – the effects of professional isolation, the low expectations of custodial regimes, the dangers of corruption in closed societies, and the extent to which staff and management would go to stifle criticism of the quality of patients’ care. Above all, they exposed the weaknesses and superficiality of the system of lay management that had failed to exert any real influence on the quality of care, and the lack of professional leadership.144 Retrospective analysis of routine hospital returns revealed that most were badly staffed, although the returns had never been used to pinpoint hospitals where this might have made adequate care difficult to provide.145 The BMJ contrasted the approach of the HAS with that of Florence Nightingale when facing not dissimilar conditions at Scutari. She had used her intelligence and social connections to change the system, but had personally washed wounded soldiers, scrubbed tables, boiled water and burnt maggoty dressings. Action was needed but, sadly, the public and the medical profession were indifferent to long-standing problems until something went seriously wrong, a hospital burnt down or a patient was maltreated.146 Sir George Godber also had doubts about the HAS system. It involved people descending on a hospital, making a report and going away. They were not personally responsible for remedying what they had seen.

Few scandals became public after 1976. The regions and the DHSS moved fast when potential problems appeared. In the Newcastle region, a system of regional visits and continuing association was already in place. In North East Thames, in which South Ockendon was situated, it was realised that questionable practice might be widespread and a regional monitoring team was established. The team included board members and officers, and visited all the mental illness and mental handicap hospitals. It had continuing responsibility for the problems discovered, and could support staff who were sometimes working under extreme difficulties.147 Usually it spent three days and nights at the hospital, roving without warning. The reports were beautifully, indeed poetically, written by Dr Peter Camm, an officer of the authority. They were referred in confidence to the hospital and the circulation was restricted. Findings warranted this approach, for while ill-treatment was seldom found, systems of care were often ludicrously inappropriate. In one hospital, the supply of towels ran out regularly each week and the nurses were forced in turn to use draw sheets, sheets and pillow-slips to dry the patients. In a ward without a clock, it was explained that it would be pointless to have one, as the patients were disorientated and did not know what time it was. One hospital received regular complaints from British Rail that patients wandered across the tracks. Sadly a report was leaked to the press, the medical and nursing staff were outraged, and the region ceased its visits.

White Papers

Two White Papers were subsequently published and, for the first time, there were authoritative and detailed statements of national policy.148 Neither mentioned the scandals that in some measure had stimulated their publication. Keith Joseph published Better services for the mentally handicapped in 1971. It encouraged inpatient units in district hospitals, day hospitals, outpatient services and community care through local authority provision. Comparable guidance was issued about the mentally ill. The BMJ was sceptical. It believed that, for many patients, the policy was fair enough, if and when the community facilities were built. But where it was best to look after chronic schizophrenics with personality disorders and eccentric if not criminal behaviour was another matter. Perhaps the patients or their relatives might be asked. The local authority social services departments were still reeling from the implementation of Seebohm and the Local Authority Social Services Act 1970. Trust between hospital and town hall had broken down. Premature discharge of patients to social service facilities that were overloaded and creaking, or existed only on paper, did a disservice to everyone. There was a danger of patients falling through holes in the system, joining the army of the destitute.149

David Tidmarsh, at Horton Hospital Epsom, cared for substantial numbers of alcoholics, schizophrenics and drug addicts. On admission many were destitute, unemployed and without family ties. On discharge they ended in common lodging houses with little after-care or exchanged a hospital bed for a prison cell.150 Tidmarsh thought that chronic disturbed schizophrenics rapidly disrupted the psychiatric units of DGHs. A committee of the BMA, the Royal College of Psychiatrists and the Society of Medical Officers of Health came to the same conclusion – that the difficulties and disadvantages of attempting to treat nearly all types of mental disorder in a small, mixed-sex ward had not been sufficiently stressed.151 In such a setting, rehabilitation was virtually impossible. Tidmarsh argued for retention of the larger mental hospitals until adequate substitutes were available.152 Therapeutic activity was concentrated on short-stay patients in the new district psychiatric units, whilst long-stay units in the old hospitals were patched and cobbled up.

Changing the mental illness service

The government priority was to move patients from the old hospitals onto the site of the DGHs, and at first little thought was given to the development of facilities within the community or the role of the local authorities. The large hospitals were often in the country, away from the populations for which they cared. A single hospital might serve three or more districts, not necessarily even in the same region. The Epsom cluster of hospitals dealt with districts north as well as south of the Thames and the linkages were changed from time to time. Patients seldom retained contact with their home district and were accommodated at random with people from other places. If district- and community-based psychiatric services were to be established, the situation needed to be sorted out. Each district had to build up new facilities, both in district hospitals and in the community. Conversion of existing facilities was generally difficult, and the facilities at hospitals such as Hackney were poor and far more crowded than the old asylums, with little space for occupational therapy and leisure activities. Spare money for new services was one thing they did not have. The closure of the facilities in the shires might, in theory, provide the money, but the new service had to be developed before the old one had closed. The staff of acute hospitals might be loath to host a psychiatric unit, and those at the old hospitals seldom wanted to move with the patients into the community, even if they were attuned to new treatment regimens. The gradual reduction in the number of patients in the old hospitals did not save much money, for the infrastructure had to be maintained and it was the most dependent patients who remained. In annual regional reviews, the DHSS encouraged regions to make fast progress and targets were often set.

Better services for the mentally ill was published by Barbara Castle on 20 October 1975, a time of financial crisis, and the year when the recommendations of the Royal Commission on Mental Health (1957) were supposed to come to fruition.153 It was a sober document, avoiding questions of cost, but recognising that it might be 20–30 years before a wholly new pattern of service would be in place. Regions were all too well aware of the costs. One radical group described it as “Castles in the air”. The paper emphasised the provision of a comprehensive range of local services rather than the closure of asylums, not possible until their services were no longer required by patients admitted many years previously, for whom local services could not be provided. There were still 30,000 such people in 1971 and, despite clinical advances, ‘new long stay’ patients continued to emerge. It was hard to see that community services could be developed while the asylums still had to be maintained. The brave new era depended on a reduction of nearly half the number of local mental hospital beds and the development of a wide range of services collectively known as ‘community care’. The BMJ editorials, often written by Henry Rollin, were critical, not of the policy but of its practicality.154 The policy remained one of local units of 100–200 beds, serving a population of no more than 250,000, as part of a comprehensive district service in which local authorities co-operated. The DHSS Priorities document, published in 1976 to guide regional planning, reaffirmed this, although recognising that progress would be slow because of shortage of money.155 Public pressure, and groups such as the National Schizophrenia Fellowship and MIND, pressed for continuing help for people leaving mental hospitals. There was also concern about possible abuses of psychotropic drugs, electroconvulsive therapy (ECT), psycho-surgery and compulsory administration of treatments.

As time passed, fears about the policy proved justified. Support from community services might be poor, and local authorities might have different priorities – for example, children. Psychiatric units at district hospitals were selective about the patients they admitted and had difficulty in providing a wide range of services. Some large hospitals would be needed for a long time to come. What was to be done with existing staff in county hospitals, some of whom were as institutionalised as the patients?

Regional secure units

While hospitals had been moving to more liberal attitudes on restraint, judges and prison staff had become increasingly concerned about the number of mentally abnormal offenders in overcrowded prisons.156 Judges believed that some prisoners were mentally ill and needed treatment under supervision, but the mental hospitals were losing the facilities, skills and desire to contain people. As locked wards disappeared, people who could not be managed safely without security, whether the risk was to themselves or to the public, became hard to place. Far more supervision was required and psychiatric hospitals had neither the staff nor the facilities to prevent patients from absconding. The special hospitals such as Broadmoor were filled to their limits. There was an increasing number of applications from ‘normal’ mental hospitals wanting to transfer problem patients to them. Most were turned down because there was no room or it was judged that really high security was not necessary. Regions faced horrendous problems with patients who, though they might rape and attack nurses and patients, were refused transfer. The special hospitals, in turn, needed to transfer patients into NHS hospitals at an appropriate stage of recovery. An impasse was often reached. Hospital beds were under the control of psychiatrists, who might be sceptical of how far helping psychopaths was possible. Even if psychiatrists were prepared to admit a patient, the nurses, supported by their unions, might not be. While judges could send people to prison, NHS management would have been unwise to attempt to force the admission of a patient against the wishes of the nurses and doctors. Secure facilities were also required for dangerous inpatients who had not committed any crime.

In 1974, the Glancy Report recommended that each region should provide secure inpatient facilities, and the Butler Committee, established after a mentally abnormal offender committed murder following release from Broadmoor, also proposed that each region should have a unit of 50–100 beds for convicted offenders and other mentally abnormal people who needed medical treatment in secure conditions.157 Such units should be in centres of population, ideally on the site of a district hospital, and have access to the full range of diagnostic and therapeutic services. The DHSS accepted the recommendations, but regions were slow to act. A more formal request was accompanied by a capital allocation, a target of 20 places per million, and encouragement to provide interim units while permanent facilities were created. Some regional health authorities (RHAs) had little enthusiasm for the policy and spent the money on other services that they considered more important. Others found it almost impossible to obtain planning permission. Local MPs objected and nobody wanted psychiatrically disturbed criminals in their back yard. The first permanent unit did not open until 1980, others only came on stream slowly afterwards and some ‘interim’ units were never replaced. The initiative was an orphan; nobody particularly wanted it.158

General practice and primary health care

Primary care was beginning to take its place as perhaps the most important part of a planned health service.159 Julian Tudor Hart, in Glyncorrwg, coined the phrase ‘anticipatory health care’. He began to screen patients between 20 and 64 years of age for high blood pressure, mainly by checking people during a normal surgery attendance, supplemented by call-up and home visiting. He later applied the same approach to older patients and to other risk factors for coronary heart disease and stroke – smoking, cholesterol levels, obesity, diabetes, airways obstruction and alcohol problems. His aim was to move back from end-stage disease to its origins, improve the health of the whole practice by identifying treatable problems at an early, often pre-symptomatic stage, and to look for them systematically, building up a profile of patient information to track progress.160

Antibiotics had made acute infections easier to treat at home; now there was progress with the management of chronic disease. Diagnostic services such as electrocardiography and endoscopic examination became more readily available to GPs, without the need to refer first to a specialist.161 A survey carried out by the BMA Planning Unit in 1969 revealed that the financial incentives in the GPs’ Charter were already producing results; general practice was entering a phase of revolutionary improvement.162 Younger doctors were predominantly entering groups rather than smaller practices. Postgraduate centres were increasingly accessible throughout the country. Better organised and better equipped doctors questioned the need for so much home visiting; patients could be attended more quickly and probably more thoroughly at the surgery. Flexible appointment systems and better surgery organisation made same-day attendances easier. Patients more often had transport, and doctors and their receptionists attempted to reduce what they saw as unnecessary home visits, the numbers falling by at least a third.163 Some GPs had found ways of linking their practices to large, distant ‘mainframe’ computers. They became enthusiastic as they explored the problems to which a computer might be a solution. The costs were so high that GPs could not fund systems personally and general introduction was not possible, but the main applications, including registers, appointment and immunisation scheduling, prescribing and recording the nature of conditions seen in the practice were soon apparent. In 1970 Dr Preece, an Exeter GP, had shown the feasibility of keeping general practice records on a computer and in 1975 an experimental project linked the practice at Ottery St Mary with the Royal Devon and Exeter Hospital.164 It was a visionary idea, but too early both in terms of computer systems and the ethos of co-operation between primary and secondary care.

The RCGP celebrated its 20th anniversary in 1972 and could claim substantial success for its policies. The ‘College model’ of general practice was widely accepted. Research units had been established in Birmingham and Dundee. Its advocacy of university departments of general practice ensured that, though few had yet been established, most medical students saw patients at home as well as in hospital. Continuing education, self-audit and the quality of vocational training schemes were issues not yet resolved.165 In 1974, family planning services, provided previously by local health authorities, became part of the general medical services. Family doctors wished to provide them, and patients should clearly have a choice. Many clinic staff thought GPs would be inadequately trained for the work, but courses were established, fees were agreed, and patients wanting contraceptive advice began to migrate to practices that provided family planning.

Although specialist and general practitioners were often friends, inter-disciplinary friction persisted. Many specialists still felt that the problems presenting in general practice were mostly minor, that patients would really prefer to be treated at a hospital, and that it did not matter much if general practice was of a low standard because the hospitals acted as a safety net. They felt that transferring patient care to hospital did no harm and that a hospital need not be greatly concerned with the standards of primary health care in its neighbourhood.166 Those who considered primary health care to be the centre point of the NHS, with consultants supporting the GPs, were regarded with puzzlement.

Attachment of staff

Local authority nurse attachment schemes spread. ‘Attachment’ was a misnomer, for potentially it was joint medical and nursing practice. The nurses’ presence affected the GPs’ perception of primary health care; GPs were increasingly involved in health promotion and the long-term management of chronic disease. Alongside them were a steadily increasing number of practice nurses, employed by the GPs themselves, and now partly paid for by the NHS.167 Where doctors and attached nurses were carefully matched, schemes might blossom from the start. Progress was hardest in small practices where accommodation was often inadequate or when there was a shortage of nurses locally. These problems were particularly acute in urban and inner-city areas where senior community nurses were often unconvinced of the value of attachment. There might be a fundamental clash of attitudes. Nurses had no feel for the self-employed position of the GPs. Some GPs insisted that the nurse was ‘their’ nurse and failed to respect the skills of a different discipline. Nursing management had little sympathy for family doctors, nor understanding of the contribution to patient care that the joint working of nurse and doctor could make; it specified who the nurses should be, how many were available, and what work they might do.

Health centres and group practice premises

In the mid-1960s, John Fry estimated that 60 per cent of GP premises had been built before 1900. There were two ways of improving matters: local authorities could provide new health centres; or GPs could build accommodation themselves.

Local authorities could build health centres to accommodate their own staff, nurses, health visitors and dentists, and rent space to family doctors. Health centres were large, costly and slow to build, yet their popularity increased and construction began to accelerate. By the early 1970s, 100 or so were opening each year. By 1974, 15 per cent of GPs were working from them, the numbers rising about 2.5 per cent per year.168 They were a good basis for training young GPs. Health centre GPs were not necessarily in partnership with each other. Most centres had a manager and, in 1970, a design guide was issued covering standards of accommodation. Then the oil crisis produced economic difficulties and cut the money available for building. Capital spending was limited and priority was given to deprived areas. Barbara Castle’s attack on private practice alarmed GPs who thought that, if they moved to a health centre, they would lose freedom of action. Health centre popularity waned. Those centres under construction had been in the pipeline for many years and were not necessarily those most needed.169 The more enthusiastic GPs had been relocated, and many of the rest did not wish to leave their own premises or work closely with other doctors. Some centres, built in the expectation that GPs would move in, remained empty.

The alternative was for family doctors to design, fund and build premises for themselves. Until 1966, general practice was under-capitalised, mainly because GPs had to pay for any improvements to their premises themselves, although interest-free loans had been part of the Danckwerts settlement. The GPs’ Charter introduced direct payments for rent and rates and encouraged the trend towards group practice and the employment of additional staff, which in turn demanded better premises. A group practices loan scheme (subsequently operated by the General Practice Finance Corporation) made it easier to raise the money, and the system of reimbursing ‘notional rent’ or a ‘cost rent’ made the option practicable and sometimes positively desirable. Self-help became the most common way to improve premises, and the cumulative effect on standards was enormous. Only in the inner cities, where there might be planning problems, and the cost of land and building was often too much for the practice to bear, did the scheme fail.

Vocational training

Vocational training had been popular in the early years of the NHS and, in 1957, its best year to date, there were more than 400 trainees. Then entry to general practice became less competitive and rapid partnership became the rule, for security and better pay were easily obtainable. By 1968, trainee numbers had fallen. John Horder, later President of the RCGP, gave evidence to the Royal Commission on Medical Education. The College said that personal and family doctoring could survive only if it had as rigorous a training as that of the specialist services. In its report, the Royal Commission treated general practice like the other branches of medicine. It recommended that vocational training should be compulsory and last three years.170 George Godber said that the question had become not ‘whether’ but ‘how’ vocational training should operate. People should not worry unduly about where the money was to come from, but get on with vocational training as an act of faith.171 Increasingly it was realised that vocational training was coming, and the number of trainees rose to 667 in 1975. Most trainees took a newly introduced examination for membership of the College, and trainees formed their own organisations and groups. The College consistently argued that mandatory vocational training was required and, in 1974, the Conference of Local Medical Committees accepted by a slim majority that it should normally be mandatory for those wishing to be principals in general practice. It was one thing to campaign for vocational training; quite another to define it. This task was undertaken by a small and senior group of doctors at the College, helped by a group of trainers working mainly in London.

Their report, The future general practitioner: learning and teaching, showed the relationship of general practice, not only with clinical science but also with such basic sciences as physiology, pathology, epidemiology, psychology and sociology, and with the theory and practice of educational methods.172 The content of training included the study of health and disease, human development, human behaviour, medicine and society, and practice management and organisation. It was a stimulating and provocative book, setting a breathtaking pace. Trainers were now selected for their ability to teach, their facilities and their qualities. They came together, region by region, to learn how to do it. “Enthusiastic front runners in general practice,” said the BMJ, “should spare an occasional glance over their shoulders to make sure that the rest of the field is still in sight.”173

Prescribing

After the devaluation of the pound, economies were necessary, and either hospital building had to be cut or prescription charges brought in. Prescription charges were re-introduced in 1968. The number of prescriptions fell, people increasingly bought common household remedies across the counter, and GPs prescribed larger quantities. Exempt groups were established, including some with chronic diseases such as diabetes, the young, the old and people on Supplementary Benefit. Half the prescriptions issued were for these categories, reducing substantially the benefit to the exchequer.

Future primary health care policy

A vision of ideal general practice had now emerged and was often visible on the ground. The thinking of the BMA, the College of General Practitioners (which obtained its Royal charter in 1972) and the DHSS was pulled together by the Harvard Davies report on the organisation of group practice.174 It was agreed that general practice was already following the right path – well staffed and accommodated group practice. Yet there was a growing fear that some changes in general practice, though necessary for its efficiency, might have disadvantages for patients. Sir George Godber organised a working party of doctors from the RCGP and the General Medical Council, analogous to the Cogwheel group.175 It considered appointment systems, deputising services and access to diagnostic services. Since 1964, deputising services had developed widely and 64 were in operation by 1972. Though the services had been widely criticised to begin with, the working party now thought them efficient, secure and generally acceptable. Such arrangements were essential to the efficient practice of medicine and should be allowed to evolve in the way that did the least injury to continuity of care.

At their annual conference in 1977, GPs passed a resolution deploring their relatively low remuneration, and asked for a completely new charter to be negotiated. A small working group was established to look to the future. This ‘new charter working party’ was rapidly drawn into discussions about a salaried service, security of tenure, reasonable working speeds and high standards.176

Local health authority services

By 1968 medical officers of health (MOsH) had a smoothly running empire, managing community nursing services, social work services, the after-care of people who were mentally ill or mentally handicapped, the ambulances, and the child and school health clinics. Relationships with general practice and the hospitals had improved, and leading MOsH were at the forefront of thinking in their specialty. This effective structure now began to fall apart. In quick succession, the Seebohm Report177 recommended the separation of social work services from medicine, and directors of social services were appointed. The MOsH had lost a significant part of their work, as mental health, day centres and home helps went to the new departments. The community nurses – the district nurses, midwives and health visitors – were next, establishing a separate nursing hierarchy. The decision was taken to place ambulance services under the regions instead of the local authorities. Finally, the health centre programme began to wind down.

Hospital and specialist services

The economy had been healthy and, by the mid-1960s, substantial development moneys were flowing into the health service. Power lay with the person signing the cheque, and that person was often at the RHB. The regions differed in their fields of competence. Renal transplantation prospered in Newcastle, Cambridge and Hammersmith, cardiac bypass work in Birmingham, Leeds, Hammersmith and Guy’s. Newcastle and Oxford pioneered hospital building developments. Medical manpower planning and training were best developed in Wessex. Liverpool led in the better control of drugs, Sheffield in hospital libraries, Oxford on the relationship of primary health care with the hospitals, Manchester in district psychiatric units.178 Regions got the chance to influence the policy of the centre but there were leaders and laggards.

The work of the hospitals

Changes in hospital admissions

The hospitals were working harder and faster. Between 1969 and 1978, there was a 15 per cent drop in the number of medical beds, and the length of stay fell from 16 to 11 days;177 22 per cent more medical patients and 10 per cent more surgical patients were being discharged. In spite of increased hospital activity, waiting times and waiting lists continued to increase. Surgeons felt they were not being given the tools they needed for the job, and growing waiting lists were the result. Beds were reallocated to reflect changes in clinical medicine. The number of beds for respiratory diseases halved and the cardiological ones increased by 50 per cent. ENT, chest, infectious disease, mental illness and general surgical beds closed, and the number allocated to surgical sub-specialties rose. Changes in treatment, for example, the use of antibiotics, were having a continuing effect. In the surgical specialties, beds had fallen 5 per cent in number, and length of stay from 9.7 to 8.2 days.

Main increases: kidney disease; cancer; ischaemic heart disease; poisoning; osteoarthritis; strokes; leukaemia and Hodgkin’s disease; fractured neck of femur.

Main reductions: infectious disease; tuberculosis; utero-vaginal prolapse; tonsils and adenoids; hernia; peptic ulcer.

Source: Report of a study of the acute hospital sector. DHSS 1981.179

The number of doctors was rising faster than the number of patients they were treating, reflecting the intensity of care and new specialties such as transplant and dialysis units. Similarly, nursing staff increased 17 per cent between 1972 and 1978 compared with a 5 per cent increase in admissions (a shorter working week for nurses, 37.5 hours, accounted for a quarter of the rise). The conditions that were being treated more frequently tended to be more costly than those they were replacing, which were usually dealt with easily and quickly. There was increasing pressure to treat some diseases of elderly people, such as high blood pressure and heart disease, with expensive drugs. While some procedures, such as coronary artery bypass grafting, arguably could lead to savings on medical treatment in the long term, in other fields, such as kidney disease, once a patient was ‘on-the-books’ there were continuing and accumulating costs. Bone marrow transplantation similarly showed signs of being a growth industry, and a highly expensive one.

Hospital organisation and technology

As technological progress occurred, the specialties became increasingly dependent on one another. Hospital organisation had to allow for this, and patients needed admission to units with the right range of facilities. When organs failed, ‘organ support’ might be practicable. Renal failure could be handled by dialysis, respiratory failure by ventilation, and nutrition maintained intravenously – but only in centres with special expertise. Sometimes one patient required many skills; a patient in a cardiac unit who developed renal failure would be in difficulties if renal physicians were not nearby. This problem was at its most acute in early life. Few hospitals had the expertise to handle an infant with multiple problems. Similarly, badly injured people – for example, road accident victims – might need a neurosurgeon for the head injury, a thoracic surgeon and an anaesthetist for chest injuries, and an orthopaedic surgeon for the fractures. They needed an abdominal surgeon as well if they had a ruptured spleen. International studies showed that the outcome – life or death – was largely dependent on the size and facilities of the institution. It was better to travel 100 miles to the right place than die near one’s home. Technology could be deployed to maintain life when previously that would have been impossible. Clinicians such as Brian Jennett, Professor of Neurosurgery in Glasgow, looked at the clinical, ethical and financial consequences of intensive care. Like other technologies it could help patients, but used inappropriately it showed lack of humanity and wasted resources. Sometimes simpler means could achieve the same end, or intensive care was unsuccessful because the condition was beyond influence. It might be unsafe because the risks of complications outweighed the probable benefits, or unkind because the quality of life afterwards was unacceptable. In such cases it was also unwise, because resources were diverted from more useful activities.180

Hospital development and design

Enoch Powell’s Hospital Plan (1962)181 had laid out a long-term blueprint and was regularly revised. Most of the money went, as intended, on district hospitals. The old voluntary hospitals had been in the city centres and, though accessible, their sites were small. The municipal ones had usually been farther out and often had large grounds. No longer required for their original purpose, they provided an incalculable asset to the NHS. Hospitals such as St George’s and the Royal Free stand on the ground once occupied by London County Council fever hospitals. Increasingly, people had modern or modernised hospitals offering good acute care locally. The inclusion of geriatric and psychiatric units within general hospitals was slower, partly because they were generally in second or third phases. Because of overambitious planning and shortage of money, some hospitals were never finished, later phases being postponed. They were left with oversize boiler-houses and kitchens. Concentration on acute facilities sometimes undermined policy to close down large long-stay hospitals, and imposed planning and maintenance blight on small and medium-sized hospitals.182

The huge building programme was uncharted territory. RHBs all had their own architects’ departments, although some commissioned private architects, such as Llewelyn Davies, Weeks and Partners. Changes in clinical science required changes in design. Outpatient departments were ever more important, as were the service departments such as radiology and pathology. Nineteenth century hospitals had spacious ward blocks separated from one another to maintain good ventilation. Now other patterns were tried, for example, a high-rise ward block sitting on a podium of service departments. The Nuffield studies had suggested a break with the 30-bed Nightingale ward in favour of single rooms combined with four- to six-bed bays. The ‘racetrack’ ward had patients’ rooms arranged around a central core of services. ‘Sister’s desk’ was replaced by a nursing station, where staff could sit instead of being on their feet all day by their patients. A senior architect at the Ministry, later admitted to a new ward at St Thomas’, said that he had committed one of the worst architectural crimes by failing to think through what this meant to the patient. Privacy was fine for those admitted for elective surgery, but when seriously ill, the constant observation and presence of the nurses were more important. As the new hospitals opened, problems became apparent. They might provide better facilities but they cost more to run than the ones they replaced. The principle of ‘revenue consequences of capital spending’ (RCCS) was therefore introduced. When a hospital was approved, agreed running costs for the new building were allocated to meet the additional requirements, so that those designing hospitals no longer had to worry unduly about them. RCCS inexorably swallowed nearly all the health service’s growth money, leaving little for developments that were not capital led.
Had the building programme been perfect, this would have mattered less, but inequalities between regions were often perpetuated and sometimes aggravated. A key problem was to use the money in the year in which it was available, but in some regions the planning staff were of poor quality and ineffective. They were late with their proposals. The money went to the southern regions that planned promptly and well, and had ready access to ministerial staff. This adversely affected Sheffield, Manchester and Leeds. To try to better use the available money, a new allocation system, the ‘Crossman formula’, was introduced. This allocated money partly on population size, but also on bed numbers and caseloads, ‘need’ playing only a minor part in the allocation process. The new system did not greatly improve matters; indeed it had bizarre – even biblical – results, for unto him that had was given; regions that were over-bedded were further endowed.
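The arithmetic of an allocation formula of this kind can be sketched as follows. The weights and regional figures here are illustrative assumptions for demonstration, not the historical values the DHSS used; the point is only that weighting bed numbers and caseloads alongside population rewards regions that are already well provided for.

```python
def crossman_shares(regions, w_pop=0.5, w_beds=0.25, w_cases=0.25):
    """Share of the national budget for each region, weighted across
    population, bed numbers and caseload.  Weights are illustrative."""
    totals = {k: sum(r[k] for r in regions.values())
              for k in ("population", "beds", "cases")}
    return {
        name: (w_pop * r["population"] / totals["population"]
               + w_beds * r["beds"] / totals["beds"]
               + w_cases * r["cases"] / totals["cases"])
        for name, r in regions.items()
    }

# Two hypothetical regions with equal populations; the southern one
# is better bedded and treats more cases.
regions = {
    "North": {"population": 3_000_000, "beds": 9_000, "cases": 400_000},
    "South": {"population": 3_000_000, "beds": 12_000, "cases": 500_000},
}
shares = crossman_shares(regions)
```

With equal populations, the over-bedded region still receives the larger share – the ‘unto him that had was given’ effect the text describes.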

The hospital building bureaucracy burgeoned. To reduce the number of planning disasters, escalating costs and contracts that overran, a capital building code was developed (CAPRICODE), which inevitably delayed the start of building. The ‘functional contents’ of the hospital were determined in relation to the population size. An appraisal of alternative sites was carried out. The design had to be approved by the DHSS at each stage, and by the Treasury for larger projects. Eventually it was possible to place a contract. By that time, life had moved on and modifications to the original plans were needed. Redevelopment involved the planned closure of smaller, older hospitals. There was no money to keep an old hospital open once a new one had been commissioned. However, it was one thing to tell a community and its MP that a new hospital would be built on a green field nearby, quite another to shut the well-beloved institution where the locals were born and died. A new industry was established in protesting and delaying closure. Action groups with strong staff representation would demand to see ministers, who had to go through lengthy (although self-imposed) procedures before the hospital could be shut. The lengthy time taken to build and open an NHS hospital, compared with a private one, was only too apparent.

Economic problems, and the appreciation that London had more hospital beds than could be justified, led to a series of closures of small hospitals where the accommodation was poor and other hospitals nearby could pick up the load. As an example, in east London, the Connaught Hospital was closed in 1977, the site being sold in 1979 for £365,000.

The Bonham-Carter Report

In 1966 the Central Health Services Council had held a conference on DGHs, and decided to review the concept, now many years old. The result was the report of a subcommittee, chaired by Sir Desmond Bonham-Carter, published three years later.183 The subcommittee included many eminent people: nurses such as Muriel Powell and Catherine Hall, and doctors such as Tom McKeown, John Reid, Charles Fletcher and Donald Irvine. It was an example of how the application of logical principles can lead to an impracticable solution. Sometimes called the ‘Noah’s Ark’ report, because of the recommendation that each hospital should have at least two consultants in each specialty, it proposed much larger hospitals.184 Economic provision of support services, such as pathology, laundries and sterile supplies, led the committee to see 200,000–300,000 as the appropriate population, double that on which the Hospital Plan was based. The range of specialties to be provided, the desirability of single sites, and the provision of services for elderly people and those who were mentally ill, and sometimes regional specialties as well, increased the size further. Such hospitals clearly could support university medical schools and nurse training schools. In his preface to the report, Richard Crossman was ambivalent about hospitals with 1,000–2,000 beds. Had medical considerations been given excessive weight? Too little consideration had been given to patient accessibility, the difficulty of managing large organisations and the possibility that economies of scale would not be realised. Indeed an unpublished study by the Department’s operational research unit in 1971 concluded that the most economic size was far smaller – between 500 and 800 beds, depending on whether the site was rural or urban. Specialisation had not yet reached the point at which this measure of concentration was essential.

‘It seemed a good idea at the time’

The hospital building programme provided an opportunity to test new ideas. Some worked but others did not. High-rise hospitals required banks of lifts that were expensive and slowed movement. Lifts affected the way staff met and talked. People did not meet each other as they did in, for example, the St Thomas’ long corridor. Progressive patient care was a system of moving a patient during recovery from high-intensity nursing and observation into a quieter environment, where there could be day rooms to which patients could go. This system could be applied to the ‘racetrack’ ward, which extended round four sides of a square, with nursing of different intensity. The racetrack could be combined with four-bed bays to give patients additional privacy. Greenwich was a ‘test-bed’ for some of these ideas. However, as the average length of hospital stay fell, patients were discharged and did not go to the low-intensity area. The nurses had to walk vast distances to get things. At Truro, one of the Department’s operational research staff spent the night in a ward, and discovered that, when the night nurses came on duty, they moved the sickest patients out of the four-bed bays into the corridor, as there was no time to go into the bays to see how they were.

Central treatment areas, incorporated into the ‘Best Buy’ hospitals (discussed below), also misfired. Nursing treatment was centralised and staffed by specialist nurses next to the day-care unit. The additional cost of porters pushing trolleys round the hospital, the reduced experience of the nurses remaining on the wards, and the impossibility of moving the sickest patients – who had to have their procedures on wards with no facilities for preparing intravenous fluids or trolleys – brought the idea into question. New tower-block hospitals were designed in the years of low energy costs. They were large and confusing to staff, let alone patients. Charing Cross had to be closed briefly, immediately after it opened, as nobody had told the staff how to find the different departments. Tower blocks required expensive heating and air-conditioning. They had power-driven lavatories. Had the Thames flooded, those at Guy’s would have ceased to work. An emergency planner suggested breaking the windows and shouting ‘Gardez l’eau’. Sometimes a project turned into a financial horror story. The Liverpool teaching hospital, expected to cost £12 million and be open by 1974, had reached a cost of £54 million by 1977 when the Public Accounts Committee investigated it, and was still not open.185

Standardising design

The opportunity to build anew comes once in a professional lifetime, so local people were invariably inexperienced.186 Staff had often retired before their new hospital opened. When the new Royal Free Hospital opened, it was found to have large areas devoted to recovery from ECT, a form of treatment largely abandoned. The DHSS prepared hospital building notes to crystallise good practice and help architects. Standard designs were developed to avoid the cost of each RHB creating essentially similar buildings. First came the ‘Best Buy’ hospitals at Bury St Edmunds and Frimley, billed as ‘two for the price of one’. Best Buy was a compact low-rise building, economic to build, developed in a single phase, with the wards mainly on the first floor in a band round the outside. To keep the allocation flexible, beds were not allocated to particular specialties, and there was a progression from high- to low-dependency areas. A ‘ring-main’ corridor separated the wards from the central core of highly serviced departments, theatres, central treatment rooms, intensive care and the maternity delivery suite. Supporting facilities, the laundry, sterile supply and pharmacy, were provided off-site. It was a tight design that was difficult to expand because the departments most likely to need extra space – the service departments – were in the centre.
East Anglia was the only region to re-use it, building modified Best Buys at King’s Lynn, Great Yarmouth and Huntingdon.187 On the basis of epidemiological studies, the hospital was sized to provide two acute beds per 1,000 population, on the assumption that there would be full support from community care teams, local authorities and GPs, and early discharge policies.188 The national norm was then 3.3 per 1,000; had the Best Buy standard been adopted nationally, 50,000 of the 150,000 beds allocated to acute specialties would have been lost, and the Liverpool and North-West Metropolitan Hospital Regions would have had their existing beds cut by half. More rapid use of beds did not save on clinical staff but, potentially, money might be released for advances in medical technology and the neglected areas of long-stay care.189

The needs of medicine were more varied than Best Buy permitted. Despite pleas for standardisation, most of the hospitals built were one-offs. Many followed a general fashion described as the ‘matchbox on the muffin’: a tower block containing wards (the matchbox) that stood on a low-rise building containing the service areas.190 In 1972, it was thought that about 70 DGHs still needed building. There was a generous spirit abroad; most regions had been designing their own standard departments and the DHSS decided to co-ordinate the work, using the same set of operational policies and standard dimensions. The departments would be ‘harnessed’ together by a framework of communications and engineering works. The project was highly ambitious and the seeds of destruction lay in its costliness. Few were built and there was relief when the project was ditched after the oil crisis. But without ‘Harness’ the building programme might have suffered more than it did. ‘Harness’ was raided to produce ‘Nucleus’.190

Nucleus hospitals

‘Nucleus’ was, in comparison, a runaway success. It was designed to provide 300 beds at a works cost of less than £6 million at 1975 prices. The basic pattern was not rigidly standard and provided a range of departments to suit most needs. It was viable as a first phase and could be expanded later to 600–900 beds. Designed as a low-rise building, there were never more than three floors, reducing the need for lifts. Windows could be opened for ventilation. It resembled a Lego set of interchangeable ‘templates’, each the shape of the Red Cross emblem. The templates might be wards, a laboratory or an outpatient department. The use of the same template for all functions made for flexibility but, in some places, the squeezing showed. The space for patients was ample, but not for staff. The modules could be piled on each other three high and coupled at the ends or sides. The Department’s building division produced a small ‘build your own Nucleus’ package with movable pieces that made a good party game. Modules could be arranged to suit sites of different shapes and gradients. The order of phases was a local decision; acute care might come first, the psychogeriatric unit later. Regions could use standard and tested designs, or create their own internal arrangements and modify the external appearance as they chose.182 The first was built at top speed from 1976 onwards, partly to satisfy the local lobby that had opposed the closure of the Poplar Hospital near David Owen’s home. Maidstone opened shortly afterwards, and had the reputation of being one of the prettiest. The design was modified to reduce the energy requirement, and the first low-energy Nucleus was built at St Mary’s on the Isle of Wight.

Hospital management and clinical budgeting

The first report of the Joint Working Party on the Organisation of Medical Work in Hospitals (1967) or ‘Cogwheel’ was an early attempt to involve hospital doctors in management and ensure that power and responsibility rested in the same place. The Cogwheel reports had suggested that consultants should form clinical divisions, each with a chairman, to decide how they would work with each other to produce the best outcome for patients. Although staff were not always enthusiastic, Cogwheel divisions were progressively organised and quite a lot was achieved. More information was available to clinicians, decisions were taken sooner than before, and all specialties had a voice, although the links with public health and general practice were often inadequate. By 1972, consultants in half the country’s hospitals had revised the organisation of medical work in a quiet revolution.193

Bed usage was a perennial problem.194 Hospitals forever seemed under pressure, yet nationally there were always tens of thousands of beds unoccupied at any one time. Statistical systems such as hospital activity analysis (HAA) showed that the intensity of bed use varied widely from hospital to hospital, and many methods were used to improve efficiency. Progressive patient care, programmed investigation units, admission wards, pre-discharge wards, pooling of beds and better hospital information systems were all tried.
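The measures that HAA-style statistics turned on are simple to compute. The sketch below uses the standard definitions of percentage occupancy and turnover interval with illustrative figures (not historical returns), and approximates occupied bed-days as discharges multiplied by mean stay.

```python
def bed_statistics(available_beds, days, discharges, mean_stay_days):
    """Bed-use measures of the kind hospital activity analysis made
    visible.  Occupied bed-days are approximated as discharges x mean
    length of stay."""
    available_bed_days = available_beds * days
    occupied_bed_days = discharges * mean_stay_days
    # Percentage occupancy: share of available bed-days actually used.
    occupancy = 100.0 * occupied_bed_days / available_bed_days
    # Turnover interval: average days a bed stands empty between patients.
    turnover_interval = (available_bed_days - occupied_bed_days) / discharges
    return occupancy, turnover_interval

# A hypothetical 100-bed unit over a year, with a mean stay of 11 days
# (the figure the chapter quotes for medical beds by 1978).
occupancy, gap = bed_statistics(available_beds=100, days=365,
                                discharges=2_900, mean_stay_days=11)
```

Comparing such figures hospital by hospital was precisely what exposed the wide variation in intensity of bed use.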

Medical science raced ahead with new discoveries, leading to fresh and often expensive treatments. Deciding which ‘advances’ should be introduced widely, used on an experimental basis only, or rejected was given less attention.195 Iden Wickings at the Westminster Hospital in 1973/4 experimented by giving clinicians responsibility for the budgets. Doctors were involved in agreeing service and expenditure plans, providing an incentive for them to screen out unnecessary or expensive prescriptions, and to scrutinise clinical demands more closely. Although there were expenses in setting up costing systems and in the time of the doctors involved, those who had experienced the process came to support it. It made it easier to review priorities and spend money to best effect.196
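At its simplest, a clinical budgeting system of the kind Wickings piloted compares each division’s actual spend with its agreed budget, so clinicians can see the consequences of their own decisions. The division names and figures below are hypothetical.

```python
def variance_report(budgets, actuals):
    """Spend against agreed budget for each clinical division.
    A positive variance means overspend, negative means underspend."""
    return {division: actuals[division] - budgets[division]
            for division in budgets}

# Hypothetical agreed budgets and year-end spend, in pounds.
budgets = {"medicine": 120_000, "surgery": 150_000, "pathology": 60_000}
actuals = {"medicine": 131_000, "surgery": 144_000, "pathology": 61_500}
report = variance_report(budgets, actuals)
```

Even so crude a report gives each division an incentive to question expensive prescriptions and clinical demands, which was the scheme’s purpose.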

Health service information and computing

Planning and managing the NHS, and forecasting expenditure, was an imprecise affair because the information systems in existence had been designed for accounting rather than management purposes. In 1970, the King’s Fund established a working party on the application of economic principles to health service management. In its report, Accounting for health, the group considered that major changes would be needed if an information system were to be medically useful, outcome orientated and provide a basis for management and planning.197 Linking clinical activity and the costs incurred would be essential, and the only way would be to build up from data for each individual patient. Brian Abel-Smith, who chaired the group, proposed an individual patient identifier, perhaps a unique number, so that all activities could be linked. The starting point would be data from hospitals, GPs and the prescriptions that they issued. Scotland also saw a need for more broadly based health service information systems, and commissioned SCICON to explore the requirements. SCICON similarly believed that the key was the collation of information about patients, their illnesses and their environment. The report stressed the need for a basic register of people in each area, based on a unique identification number, which would link activities such as hospital discharges, the immunisation programme, and special disabilities or risks. SCICON envisaged a major interlinked information system covering nearly all health service activities, including finance, personnel and supplies, which would also provide a basis for health services research.198

The experimental computer programme

In 1966, Robert Rowe and Don White, two DHSS officers, toured US hospitals, to look at computer systems. The following year, the ‘experimental programme’ began to explore the use of computers in hospital management. The BMJ said that the practice of medicine by every individual doctor was going to be influenced over the next few years by computer developments, but was worried about costs and confidentiality, and that the unsolicited generosity of government might be intended to serve a national computer industry rather than the interests of the NHS.199

Computers were already being used by RHBs for paying staff and running their finances. They were also processing the 10 per cent sample of admissions (the Hospital Inpatient Enquiry) and 100 per cent of the HAA. It was decided to standardise these functions to minimise the costs of each RHB writing its own software. Several clinical scientists, for example, Professor Whitehead in Birmingham and Professor Flynn at University College Hospital, were connecting auto-analysers to small computers and developing systems to calculate the results and print reports. MOsH, such as Tom Galloway in West Sussex, were using local authority machines to run immunisation programmes. A few general practices were using distant computers to list their patients, maintain records of contacts, and run immunisation programmes. Some hospitals were also beginning to consider the possibility of computerisation.

At first the experimental programme was funded generously. It explored unknown territory, attracted people of vision, and was based on real-time computing compared with the batch processing then standard. The administrators at the hospitals chosen were often high-flyers and many did well in their later careers. Twelve sites were chosen, each exploring different applications. Most systems dealt with administrative rather than clinical problems, reducing the commitment of the professionals. All projects discovered that the starting point had to be a master index of patients. King’s College Hospital attempted to develop computer-based medical records. The project failed to accomplish this, but important lessons were learned about how doctors took histories and came to a diagnosis. The London Hospital believed it was better to computerise a single activity across the entire hospital than to go deeply into an activity in one department only, and began with admissions and discharges. At Exeter, an attempt was made to link a practice in Ottery St Mary and the Royal Devon and Exeter Hospital. Four teaching hospitals worked on a co-ordinated project; the main lesson to emerge was the difficulty of co-ordinating teaching hospitals.

Substantial expenditure needed to be justified. Sizeable evaluation teams were engaged, and they faced immense problems. How did you show that computers were saving money, time or lives? Systems analysis revealed poor organisation and lack of management information that had to be sorted out before computerisation – a benefit, but not one attributable to the computer. So inefficient was one hospital that the systems analysts declared it was impossible for it to be functioning at all! ‘Before and after’ comparison would not work because hospitals were continually changing. They could not easily be compared with each other – would Charing Cross ever admit that it was comparable with the Royal Free or vice versa? The main achievement of one team was to show that they could divide variables into those that could be costed, those that could be counted but not costed, and those (such as patient satisfaction) that one could neither count nor cost.

It was too early for success. British hospitals lacked the need to invoice patients, which underpinned all US hospital computer systems. Software and hardware were primitive. Mainframe computers cost so much that they could not be duplicated; however, they had to be taken off-line at night to update the files, so the systems could not be used round the clock. As hospitals worked day and night, this was a substantial problem. The Public Accounts Committee in 1976 had harsh things to say about some projects,200 but the debacle was no greater than often occurred in industry. Ultimately the experiments were bequeathed, with a moderate dowry, to the RHAs. Yet visits to the USA showed that effective systems could be developed, and that hospitals such as El Camino, in California, which were well organised and had defined their information requirements, could introduce hospital-wide systems.201 By the mid-1970s, more mundane systems were clearly going to be worthwhile. National standard immunisation and child surveillance systems, and systems to support the functions of family practitioner committees (FPCs), were going ahead. The expertise in computing, though dearly bought, was an asset to the service.

The special problems of London

London presented a particular problem. The pattern of hospitals inherited in 1948 included a heavy concentration of acute hospital beds in inner London that subsequent development had done little to improve.202 Substantial rebuilding of teaching hospitals had accentuated the imbalance and, alone among the regions, the four Thames regions had their specialised services eccentrically placed, all cheek by jowl in the centre of London. Elsewhere the teaching centres were near the geographic centre of their region. The division of the capital among four regional health authorities made co-ordinated planning difficult, and the presence of the postgraduate hospitals, still directly responsible to the Department of Health, added to the conceptual difficulties. In 1975, a London Co-ordinating Committee was established to assist in solving the problems, run by Albertine Winner, a recently retired deputy CMO of the Department of Health. The regions differed widely in their views and their ethos, and the Committee lacked the power to make contentious decisions bite, highlighting the problems of developing generally acceptable London-wide strategies.

The policy of financial reallocation from the southeast to the north of the country made action imperative for hospitals and medical schools in London. A second attempt was made in 1977, and an officers’ group was established, the London Health Planning Consortium (LHPC).203 It faced two main problems. First was the need to reduce the level of acute hospital services in central London to bring it in line with the population and the money likely to be available in the future. Second, it was widely accepted that there were too many small and medium-sized units in specialties such as radiotherapy and cardiac surgery, and a degree of rationalisation was clinically desirable.

Medical education and staffing

The Royal Commission on Medical Education

Royal Commission on Medical Education (1968)

  • Medical education a university responsibility
  • Three years’ general professional education after qualification
  • Further professional education thereafter, continuing education for life
  • Substantial increases in medical school intake; new medical schools
  • Teaching hospitals to be placed under regions
  • Great change in London; pairing medical schools.

Chaired by Lord Todd and appointed in 1965, the Commission reported on 4 April 1968.204 There were two underlying themes. First, the undergraduate course produced, not a finished doctor, but a person who could become one with further training. (There was, in comparison with Goodenough, little new here.) Second, medical education should continue throughout professional life. The Commission made recommendations on the structure and organisation of postgraduate training, proposing general professional training, followed by more specialised posts and vocational registration, both for hospital specialties and for general practice. It saw the future NHS as fewer but larger hospitals, in association with large groups of GPs working, possibly, in health centres.

The Commission suggested the abolition of boards of governors, placing teaching hospitals within the regional hospital service. Each group of teaching hospitals should be under the immediate charge of a governing body with strong university representation, and should be an integral part of a wider structure, again with appropriate representation. Teaching hospitals and their medical schools opposed these recommendations. Todd personally favoured salaried general practice and his vision was similar to the Kaiser-Permanente system of health maintenance organisations in the USA.

The Commission differed from the Willink report on the future number of doctors in assuming continued emigration, at a higher level than proved to be the case. It also believed that, with economic growth, there would be an increase in the doctor/patient ratio. Both reports assumed that there would be no fundamental change in the organisation and delivery of health care, and neither considered that other disciplines might take over work traditionally done by doctors.205 The Commission also differed from Willink in recommending an early increase in the medical school intake to 3,500 and to 5,000 by 1985, a new undergraduate clinical school at Cambridge, and further new medical schools at Leicester, Swansea, and possibly at Keele, Hull and Warwick in the future. It favoured large schools with an average intake of 200 students and insisted that they should be part of a multi-faculty university. The BMJ welcomed this ‘new look in medicine’.206 The government agreed that student numbers should be increased, partly by the expansion of existing schools and partly by the new schools. Two were in the Sheffield region, at Leicester and Nottingham, the first Nottingham students graduating in 1975. The third was in Southampton, where the first students graduated in 1976. In the new schools, the curriculum was designed to blur the distinction between teaching and non-teaching hospitals and between hospitals and the community. Students saw the full impact of illness in the home, the community and the hospital from early in their course. The first Dean at Southampton was Donald Acheson. With a population of 200,000, that city was too small to support an annual intake of 130 students. However, Wessex region had a population of 2,000,000, a highly developed system of postgraduate education and a regional authority keen to work with the university. Many hospitals in the region were therefore involved in clinical teaching.207

In London, where many big problems lay, the Commission made six major recommendations:

  • The intake should be increased from 800 to 1,200 to allow maximal educational use to be made of the country’s largest concentration of medical resources.
  • The schools should be reduced in number from 12 to six by pairing them, so each school could have a full range of clinical departments with academic staff.
  • Paired medical schools should be associated with a multi-faculty college, to enable contact with teachers in other disciplines.
  • Postgraduate institutes should associate with the paired medical schools.
  • Money should be provided to fill academic gaps.
  • Implementation should be in the hands of a committee able to ensure that short-term convenience did not nullify long-term planning.

The essence of pairing was the formation of joint academic units, often in emerging subjects. Many schools resisted closer association. Postgraduate institutes in particular were united and effective in their resolve to resist the proposal to associate them with a general medical school. Most pairing schemes proved costly or led to an inappropriate pattern of services and, once the economic crisis was obvious, the Commission’s proposals were increasingly seen as unrealistic.

Medical staffing

Hospital specialist care for patients had improved substantially, but often at the cost of sacrificing the postgraduate training of young doctors, a dangerous reliance on overseas doctors, who might not always want to come to Britain for training, and an excessive workload for regional consultants.208 Britain trained more doctors than was needed to replace the 2,500 GPs and consultants who died or retired each year, but far too few to staff the hospitals. Medical immigration from the Indian sub-continent had provided many hospitals with their junior staff. The central problem was that hospital doctors took about ten years to train in house officer, registrar and senior registrar posts. If appointed as a consultant, they then remained in a career grade for 30 years until they retired at 65; so, in theory, having four consultants to one junior doctor was desirable, or at most two to one. As it was, fully trained doctors waited years for a vacancy, often in an area that did not appeal to them. To create a system that provided an effective service, in which training and career posts were balanced and that provided an effective and forward-thinking consultant body, was a formidable task. A consultant grade lacking people in their early 30s also lacks a degree of initiative, and the influence of clinical advances.

Juniors were vocal about their career prospects, the length and quality of the training they received, and the hours of duty. Three ways of solving the problem had been canvassed: first a permanent sub-consultant grade, such as the old senior hospital medical officers, or a newly created medical assistant grade; second, a part-time sub-consultant grade drawn from general practice such as the hospital practitioner posts some GPs occupied; and third, an expansion of the consultant establishment compared with training grades. Discussions began with the medical profession, in a group appointed by ministers and chaired by Sir George Godber. To the puzzlement of the consultants, two junior doctors were included. The group’s proposals were set out in The responsibilities of the consultant grade209 and were accepted by the Secretary of State, Richard Crossman.

The responsibilities of the consultant grade (1969)

  • Increase the number of consultants faster than trainees
  • Adjust the geographical balance
  • No permanent sub-consultant grade
  • Training about eight years, no longer than necessary
  • Provide a responsible post for trained specialists.

Source: Department of Health and Social Security and Department of Health for Scotland209

The idea of a permanent sub-consultant grade was dismissed. Instead, it was proposed that training in a specialty should take no more time than necessary, ordinarily about eight years. Immediately training was completed, the opportunity should be given to assume responsibility, instead of waiting a further six or seven years for a consultant post. Training and career posts would have to be brought into line with each other, and it was proposed to increase the number of consultants more rapidly than the training grades: 4 per cent as against 2.5 per cent. Adjusting the distribution of doctors across the country would also be necessary. The metropolitan regions had far more juniors than regions such as Birmingham, Sheffield and East Anglia. Established consultants would lose some of their juniors, as would hospitals in the south. Consultants in regional hospitals, in which 90 per cent of the work of the NHS was carried out, saw the proposals differently from their medico-political colleagues in London and the teaching hospitals. A series of articles written by ‘unheard voices’ appeared in the BMJ.211 Buildings were decaying, staff were poorly paid, and morale was low. Quality junior staff were scarce, making delegation difficult. Merit awards that seemed routine at teaching hospitals were elsewhere as unexpected as winning a premium bond prize. A physician said: “Consultants have been told that they’ll have to make do with fewer registrars and housemen than at present; they must roll up their sleeves and do more of the humdrum work. This patronising directive has infuriated doctors in the periphery. We’re already fully stretched. The whole division is always short of one registrar or houseman away on study leave.” “Like most of my friends,” said an orthopaedic surgeon, “we’ve read all these reports, Godber, Cogwheel and so on. 
We’re all agreed that they show a total lack of any idea of what actually goes on in a district hospital, and the sheer problems of our workload. When I’m on my way between one hospital and another and it’s my registrar’s day off, I’m constantly on tenterhooks that an emergency, say a multiple road smash, will come into one of the hospitals that nobody is competent to cope with. We’re taking unjustifiable risks with our patients.”212 Once consultants seemed to inhabit paradise compared with the junior doctors. Regional consultants, seeing the progress made by the juniors and GPs, became increasingly vocal, convinced that the proposals were unfair and that the establishment at BMA House had dominated and mismanaged their affairs. The leaders of the profession backed off from proposals that would have balanced the workforce, to the detriment of regional consultants.

Michael Freeman, an orthopaedic surgeon and one of the authors of The responsibilities of the consultant grade, said that consultants had three objections to the solution proposed. First, they would see fewer cases of interest and have to do more routine work; second, they would have to share out the beds, operating time and outpatient sessions with more consultants; and third, their earnings from private practice would probably fall. Though complaining of overwork, they might be reluctant to solve the problem by the appointment of a colleague.213 Any radical change in the balance of consultant work was resisted, while the Treasury hampered expansion of the consultant grade for financial reasons. To some it seemed that what was at issue was rewarding and satisfying lives for doctors rather than an effective patient-orientated health service. Although there was agreement that the existing system was wrong, there was none on how to put it right. The suggestions in George Godber’s report were delayed, opposed and undermined by the profession’s representatives, both centrally and in the regions. The opportunity was lost and would not come again until the mid-1980s. The regions spent years struggling, to little avail and with less support, to move registrar posts out of the teaching hospitals, and from London to the north.214 In 1972, a new Central Manpower Committee was established; it had a long haul ahead.

In April 1970, the Labour government held up the publication of the doctors’ 12th Review Body report and its recommendation of increases of 30 per cent. The government referred half the award to the National Board of Prices and Incomes, criticising some of the reasoning behind recommendations. The Chairman of the Review Body, Lord Kindersley, and his fellow members resigned. The BMA applied sanctions. After the 1970 election, the newly elected Conservative government, led by Edward Heath, withdrew the reference.215 A new review body was announced and the BMA sanctions were lifted. The new body could work as it chose and review pay any time, asking for any information it needed. Although the doctors remained suspicious of government, the combination of an independent body and a vociferous, strong and united (over the question of pay) profession seemed as invulnerable to the whims of government as anything could be.

The aspirations of seniors and juniors were different. Young doctors wanted more senior posts to be available, better training, and not to be used simply as ‘pairs of hands’. Junior hospital doctors were gaining power and better representation, and they pressed for a contract that recognised the long hours they worked. An extra duty allowance was introduced in 1970. In 1973, the juniors were granted an independent ‘craft committee’ within the BMA, and with it a stronger voice in negotiation. In 1975, a new contract, in the wake of industrial action, gave them a basic 40-hour week, while ensuring that they would still work whatever extra hours their posts required. They were paid overtime and, although this was at a lower hourly rate, it increased their earnings by a third. The changes were moving junior doctors away from a professional system of remuneration and closer to an industrial one.216 Slowly shift systems were introduced, particularly at night. Junior doctors might have only short-term responsibilities for patients whom they would never see again. They would find themselves on call with consultants with whom they never did a ward round. Working relationships were less close and the mutual support within the team was diminished.217 The traditional ‘firm’ system began to break down.

Pay beds

A crisis erupted in 1974 over the beds in NHS hospitals to which consultants could admit private patients – ‘pay beds’. Trades unions disliked them, there was more than a suspicion that their existence allowed queue jumping, and Labour had pledged in its 1974 manifesto to phase them out. To the medical profession, pay beds were an issue of principle. National Union of Public Employees (NUPE) members refused to care for patients in new wards at Charing Cross. Negotiations over a new consultant contract were fouled up, and many consultants began to ‘work to contract’. Barbara Castle, Secretary of State, ploughed ahead. Ultimately the Prime Minister, Harold Wilson, was involved and asked a distinguished solicitor, Arnold Goodman, to arbitrate. Two weeks of quiet negotiations took place in his flat. Henry Yellowlees, the CMO, was kept in ignorance, perhaps because it was so political, or perhaps because Ministers were uncertain about his loyalty to them. The CMO was, inevitably, livid, his role with the profession having been demeaned. A compromise agreement was reached.218

Nursing

Nurse education and staffing

Hospital nursing staff (England and Wales)*

                                                1949       1968
Total                                        137,636    255,641
Registered nurses                             46,300     85,898
Student nurses (studying for the Register)    46,386     53,148
Enrolled nurses                               16,076     38,725
Pupil nurses (studying for the Roll)           1,515     18,406
Others                                        27,355     59,464

* excludes hospital midwives

Source: NHS Executive

Nurses and midwives were the largest group of NHS staff and their numbers had grown steadily over the previous decade; only community midwives were getting fewer. The demand from specialised units was rising. There were fewer school leavers, student entry was static, and there were never enough recruits of high quality. The post-war expansion of higher education provided wider opportunities for young women, and nursing had to compete. Degree courses were introduced for the small minority of students with the necessary academic qualifications who wanted them, so that such people were not lost to the profession. They had time to go more deeply into a subject and relate it to other areas of knowledge.219 A Scottish study showed that the better the educational qualifications of the entrants, the less likely they were to give up. Some students were now married. By the time they took the final state examination, nurses were often planning a family, travel or a new career. Wastage remained high – 36 per cent at the Central Middlesex, where a careful study showed that the cause was not just the hard work; most entrants were attracted to nursing precisely because it involved looking after people in a practical way. Anxiety about the responsibility, being away from home, emotional involvement with patients, problems in studying and lack of confidence in their own skills played a part as well.220 The problems identified by nurses on the wards had hardly changed over the years. They did not relate directly to quality of care, but to shortages of nursing, ancillary and secretarial staff, questions of status between various groups, travelling, accommodation, working hours and retraining.221 The Department of Health issued a hospital memorandum. It was a pink one, which meant it was for action, not information.
Richard Crossman sent it to chairmen of boards of governors and HMCs, and the CMO to medical advisory committees.222 Because nurses were scarce, they should nurse, and not spend valuable time on non-nursing duties, chores that others could do, such as sterilising, distributing meals and taking messages. Intensive care units should be brought together in one place. Nearly everything doctors decided to do for their patients had nursing implications. The best way to help was to bring nurses fully into management.

The need for a second grade of practical nurse remained. Over the first 20 years of the NHS, the number of nursing auxiliaries more than doubled – a greater increase than students or registered nurses – and the numbers continued to grow. To the nursing profession, this was seen as a threat to standards and a continuing dilution of the profession, and there was a move to remove the word ‘nursing’ from their title. To management, it was the answer to many staffing problems. Not only were auxiliaries making a major contribution to patient care, but they were increasingly holding positions of responsibility, carrying out a wide range of nursing that registered nurses were either unable or unwilling to do in some areas of the country.223 After 1974, health authorities had many local authority members, some of whom saw political as well as practical reasons for recruiting widely from ethnic minority groups and the Pacific Rim; often they funnelled these recruits into the auxiliary grades. The nursing profession in the USA had constantly upgraded its educational requirements, creating a role for a support worker. The same process in the UK maintained the need for the auxiliary, the state-enrolled nurse (SEN), distinguishable by badge and uniform.

Conflicts within nursing

Exclusive profession      Extensive workforce
Profession                Trade union
Pure nursing              Management responsibilities
Holistic practice         Specialist practice
Senior clinicians         Senior managers
Female                    Male

Source: Owens and Glennerster (1990)224

The development of a more open society, a larger group of nursing academics, and unionisation intensified the conflicts. Should senior nurses concern themselves with nursing alone or become involved in the wider environment of health care? Militant trade unionism and professional aspirations clashed. Nurses were seen in street protests. The Nursing Times carried a poem on ‘Thoroughly Militant Millie’ whose reward was a post, first at Region and then the Ministry.225 There was also the gender issue; male nurses had dominated the mental illness and mental handicap hospitals, and reorganisation in 1974 saw a substantial increase in the number of male chief nursing officers, particularly in the north. In tune with the times, discipline was relaxed. The nurse’s ‘mystery’ was diminishing and at times patients were encouraged to use the nurse’s first name. To maintain recruitment and morale, attempts were made to give nurses time off-duty convenient to their social life; when planning duty rotas ward sisters had to bear in mind the requests the nurses had made for time off, as well as ward staffing requirements. The question of uniform would always stimulate debate among nurses; apart from a shorter hemline, nurses’ and sisters’ uniforms had changed little in some hospitals since the times of Florence Nightingale. Rules governing dress were less strictly applied; sometimes caps were not worn. Now nurses rushed home in their uniform; it could hardly protect patients against cross-infection, although it still made clear the occupation and status of the wearer.226

In 1968, the Prices and Incomes Board was asked to examine nurse pay structure, levels of pay, and conditions of service. The Board frequently strayed far outside its terms of reference. In its report, it considered nurse training and management and suggested a new salary structure as an incentive to efficiency, translating existing posts into Salmon grades.227

Prices and Incomes Report no. 60

  • Lower entry age to 17
  • Larger training schools independent of hospital management
  • Implement Salmon in January 1969
  • Replace RHBs/HMCs with single-tier authorities
  • Modify off-duty restrictions
  • A general increase of 9 per cent in pay, with many other improvements, including enhanced overtime, mental and geriatric ‘leads’.

Higher rates would be paid where staff shortages existed. The Royal College of Nursing (RCN) initially welcomed the report for it offered the chance to revolutionise training. Later there were criticisms of its proposals and the size of the pay award. The government recognised that nurses were ‘an exceptional case’ and recommended the report to the Whitley Council negotiating body. In a subsequent debate in the Lords, Lord Amulree suggested that there should be a review body for nurses, as there was for doctors. The BMJ said that nurses had been exploited for a century; their medical colleagues should help them achieve economic justice.228

The Briggs Report (1971)

In 1970, when health service reorganisation was clearly coming, Professor Asa Briggs was asked to review the roles, education and training of nurses and midwives.229 The driving forces were the problem of recruitment, education and the conditions of work in nursing; there was little feminist pressure at that time. It was immediately apparent that many recommendations of past studies had not been followed through. The Committee chose to address issues that had immediate topical importance and to be forward looking.

  • Two 18-month modules
  • Relate theory to practice in four clinical areas: medicine, surgery, psychiatry and community
  • A continuous process of education within a profession in the process of change
  • Colleges of nursing
  • More undergraduate nursing degree courses
  • Creation of a unified statutory body for nursing.

It reported the following year and it was not the easiest report to understand. Though it was produced by an eminent historian, there was little reference to the past; Briggs wrote out of a desire to obtain action. The most immediate problems were often long standing; no solution was offered for some of them. Pointing to the changing social and medical context, Briggs thought nursing must also change and would be judged by the quality of care individuals received. Dividing lines between hospital and community services would have to be crossed if reorganisation was to improve patient care. In future, hospital beds would be used only when necessary, and be linked with outpatient and domiciliary services, based on group practices and health centres. Nurse attachment to practices was commonplace and health centres were spreading. Services for the mentally ill and handicapped were increasingly based in the community.

The recommendations were concerned more with the structure and organisation of nurse education than with the role of nurses and the nature of nursing. Asa Briggs thought the current system of training inadequate: it did not always provide a satisfying range of opportunities for nurses. He proposed fewer but larger colleges of nursing and midwifery, which should recruit from a wide range of intelligence. It was, however, NHS reorganisation in 1974 that provided an opportunity to halve the number of schools to about 200, each usually relating to one of the new area health authorities (AHAs). The report said that basic nursing could be learned thoroughly only in clinical settings, and theoretical instruction should be related step by step to this. The course should be in two stages, each of 18 months’ duration. Continuing education, specialist and back-to-nursing courses should be developed, and there should be a drive to produce more teachers. Conditions of work should be improved and the hospital and community services should work more closely. The Nursing Times commented on the extent to which Briggs followed Miss Nightingale’s ideas, that nursing could only be taught at the bedside, that students should be encouraged to have an esprit de corps, that education was a continuing process, and students needed library facilities.230

Asa Briggs wished that his report had appeared before Salmon and Seebohm. He felt that Salmon’s elaborate grading structure took little account of the varied aspects of the nursing profession, and Seebohm dealt with matters that affected some nursing roles, particularly health visitors. The report was presented before Britain joined the European Communities (EC), but it subsequently became muddled up with the EC directives, and its implementation was repeatedly delayed. The Treaty of Rome meant that the UK had to be in compliance with the directives. Important Briggs recommendations could be implemented only if there was a single body controlling nurse education – and there were 13 or 14. Briggs proposed changes in the statutory framework of the profession and a single central body responsible for professional standards, education and discipline. This meant lengthy discussions with the many nursing professional bodies, each of which had a desire to fight its corner. Roland Moyle, the Labour Minister, not fully realising what he was letting himself in for, agreed to chair a group bringing the conflicting interests together. Each had its own ethos and the loss of individual statutory bodies was therefore a cause for anxiety. Each wanted maximal representation on the new United Kingdom Central Council (UKCC) so ‘horse trading’ led to a top-heavy organisation and accusations that the representation of some countries and nursing disciplines was inappropriate. Financial and legislative problems delayed action on Briggs, which was allowed to mature until Barbara Castle, in May 1974, announced that she would accept the main recommendations concerning training: £18 million would be found for tutors, clinical teachers and ward sisters; in the event, it was put off again because of an economic crisis. 
It took seven years before the legislation set up the statutory framework; every time legislation reached the House of Commons, a general election was called, to the despair of those responsible.

Nursing practice

Theories of nursing

Developments in medicine were forcing changes onto nursing, and there was a greater advance in medical than in nursing knowledge. Over the years, nursing had followed medicine in the pursuit of cure, and hospital nursing was largely synonymous with tasks associated with diagnosis and treatment. Much in nursing depended on the relationship between patient and nurse, and that took time to develop. Patients, however, now left hospital more rapidly; what could be done to maintain this central feature of the profession? High technology, the development of intensive care units and electronic monitoring placed new burdens on nurses’ shoulders. There was little room in hospital for the ambulant patient. In a crisis, nurses needed both to diagnose and to act in a way outside the traditional nursing role, and in the process developed skills and knowledge in specialised fields beyond the competence of nursing management and not covered in student training, becoming an integral part of medical teams.231

The first English university department of nursing studies was established in 1970 in Manchester, although there were university degrees available elsewhere (as at Southampton) associated with nursing or health visitor registration. Nurse educationalists looked for autonomy and a role less dependent on the processes of diagnosis and treatment of the acutely ill. Doctors who had traditionally taught student nurses were invited to do so less often. A partnership of trust, working to a common purpose, began to be replaced by mutual wariness and attempts to define territory. New ideas in nursing were sometimes unattributed adaptations of sociological or educational concepts, or medical systems of history taking, diagnosis and record keeping. Many came from the USA.232 Technical concepts often transferred well, but those relating to how people thought and behaved were culturally based. The fundamental assumptions on which the ideas were founded might not fit the culture into which they were being transferred. For example, in the USA, nurses had long distinguished between ‘nursing therapy’ and ‘patient care’, the latter often not being the nurse’s concern but something left to non-professional workers.233 Nursing was more technical, and nurses worked within a context of defensive medicine. New theories of nursing, for example, the nursing process, were developed at least partly by American nurses in an attempt to retake some of the basic nursing territory occupied by nursing assistants. They were not ideally suited to the UK, with its tradition of bedside nursing. Virginia Henderson, the American nursing academic and, for 25 years, editor of The principles and practice of nursing, published her definition of nursing in 1955, which gave the nurse a role as the authority on the maintenance of daily living activities, the doctor the authority on the diagnosis and treatment of disease, and stressed partnership between the two.

The unique function of the nurse is to assist the individual, sick or well, in the performance of those activities contributing to health or its recovery (or to peaceful death) that he would perform unaided had he the necessary strength, will, or knowledge. And to do this in such a way as to help him gain independence as rapidly as possible. This aspect of her work, this part of her function, she initiates and controls; of this she is master. In addition she helps the patient to carry out the therapeutic plan as initiated by the physician. She also, as a member of the medical team, helps other members, as they in turn help her, to plan and carry out the total program whether it be for the improvement of health, or the recovery from illness, or support in death.234

Professional divisions

Changes in the way health care was provided altered the way doctors and nurses interacted. Because patients were discharged with ever greater rapidity, and beds were allocated more flexibly, a consultant would have patients on many wards, and each ward would be visited by many consultants. The traditional ward teams were disappearing. Formal ward rounds by ‘the chief’ became less a feature of ward life; some sisters ceased to accompany the consultants when they were there, and, where team nursing had been introduced, it was the staff nurse leading the team who was best informed. Continuity of care seemed fractured beyond repair. A consultant wrote “I turned to ask sister a question during my regular ward round and she had gone – off duty, I was told; Miss Nightingale had fled and the long admired dedication with her. Similarly, the younger doctors were good competent professionals when on duty, but when they were off, they were off.”235 Doctors and nurses were beginning to speak different languages, not necessarily to the advantage of patients. Doctors could never understand why those who taught nursing did not practise it; or why there was so much concentration on nursing as a profession, its education and structure, and so little research on patient care.

Virginia Henderson was critical of some developments, believing that too much time was being spent on theoretical approaches, and that nurses were using terminology that was vague and could not be understood. She wished that some theoreticians were at least part-time practitioners.236 She thought the distinction between the role of doctors and nurses was “pretty absurd” considering the varying patterns of clinical practice in different parts of the world. In her textbook of nursing, she placed equal emphasis on the technical and the holistic. Nursing, always an amalgamation of different groups with different values, was dividing. On the one hand were nurses who were claiming professional autonomy. On the other were those who enjoyed the challenge of intensive care, dialysis and transplantation, and active clinical work in all disciplines. Doctors, recognising the contribution that nurses made, encouraged them and trained them. The nurses saw themselves as valued partners, often substituting for the doctor, and as key people in a unit run under protocols agreed by all professionals.

Community nursing staff (England and Wales)

                       1949      1959      1968
Home nurses           5,776     7,087     8,803
Health visitors       3,753     4,278     5,409
Midwives            No data     4,820     4,861

Source: NHS Digest of Statistics

Community nursing was also altering. The growing numbers of elderly people made increasing demands on district nurses, who were enjoying the experience of working with the better general practices. Domiciliary midwives saw their home deliveries disappearing and, from 1967 onwards, their numbers fell rapidly. Health visitors, with their background in social and environmental issues, and a concept of health linked to local authority social services and voluntary organisations, had greater readjustments to make. The place of health visiting within primary health care, the health visitor’s role in relation to general practice and how – within the attachment schemes – she could maintain her public health role, were constant topics of debate. The supporting structure of the MOH’s department had been lost. Health visitors were now fully part of the NHS, part of a group of community nursing services.

Nursing administration

The Salmon Report

Brian Salmon’s Committee had recommended an organisational structure for nursing, with a nurse at every level in the hospital hierarchy, and this reduced the historic power of the matrons of the large acute hospitals. Salmon wanted to pilot implementation gradually over a period of years, so that the new structure would be well understood. Instead, in 1968, shortly after pilots had begun, the DHSS responded to the report of the Prices and Incomes Board and decided to implement the new structure nationally. Nurses’ pay would be increased at all levels, and simultaneous implementation of the staffing structure was required to avoid the anomalies of two parallel pay structures. In Salmon’s view, 80 per cent of senior nurses were placed in the new structure with no management training, a foolish way of introducing a new system. By 1970, approvals had been given to 166 Salmon schemes and the comparable Mayston Report on community nursing was implemented simultaneously.

The interpolation of additional levels of management reduced the traditional supervision of the wards by ‘matron’s office’, ultimately to the detriment of patients’ care. Problems on the ward were no longer instantly known to hospital management. Chief nursing officers (grade 10) were in a weak and lonely position, with few contacts and without the intimate hospital involvement matrons previously had. Their tasks were recruitment, finance, the appointment of senior staff and the representation of nursing at the highest levels.237

The Salmon Report aroused many passions. The two senior nursing grades (10 and 9) concerned with policy decisions, two middle grades (8 and 7) dealing with programmes to apply policies and two front-line grades (6 and 5) at the bedside had, according to the BMJ, emerged like a trident with one prong amputated. There were beckoning heights for nurses who became administrators, but for ward sisters who preferred nursing patients (the Salmon no. 6s) prospects appeared depressingly flat.238 A follow-up report, Progress on Salmon,239 was confident that the proposals were right, while admitting that there was lack of understanding of the tasks of the different grades, particularly grade 7, the new nursing officer. Those in grade 7 were to be experienced clinical nurses able to take responsibility for the standards of nursing. They would combine clinical and managerial roles and be based in the unit, not in matron’s office. Sometimes grade 7s had to cover units with mixed functions, such as general medicine, coronary care, renal dialysis and children’s surgery. They were unlikely to have expertise in all these fields. Salmon resulted in the promotion of experienced ward sisters to grade 7, leaving many wards with younger, less-experienced staff. Consultants noticed the change and complained that the most highly trained clinical nurses were being promoted away from patients and into management. Salmon said that expertise should stay on the ward – and ward sisters should be properly rewarded for their work.240 The new structure did not survive for long because NHS reorganisation in 1974, and subsequent management changes, killed the concept of Salmon, and ruined the morale of senior nurses. Simultaneously Cogwheel was changing the organisation of the medical staff. Nursing was hierarchical but medicine was not, and frictions developed between the professions.

The path to NHS reorganisation

Industrial unrest

Industrial action had been rare in the NHS, and no more than 3,000 staff had been involved in any year until 1971. Thereafter, inflation and industrial action became significant. Union power was strong. Income policies affected hospital ancillary staff, and management might concede unnecessary overtime to increase earnings. Previously a small pay award might last two or three years but no longer. The introduction of strict government control of wages at a time of high inflation, and the breaking of traditional links with local government workers who had agreed terms just before the start of a pay freeze, led to the first major national dispute, the ancillary workers’ strike in 1972. The numbers involved in industrial action rose to 97,000.241 The last months of 1973 saw the Yom Kippur war and the decision by the Organization of the Petroleum Exporting Countries (OPEC) to raise the price of oil substantially. This produced a financial crisis and, in December 1973, the Conservative Chancellor, Anthony Barber, introduced a package of cuts concentrated on capital projects, supplies and services.242 Attempts to resist wage demands led to further industrial unrest. In January 1974, a ban on overtime working by the miners led the government to take emergency powers and impose a three-day week throughout industry. The Lancet (which had to fly copies into the country from the USA) thought the time was now ripe for a governmental retreat with dignity, by giving the miners a large increase of pay in compensation for the injuries and ill-health associated with mining.243 The crisis led to the fall of the government.

Organisational issues

Alongside the economic problems, there was increasing concern about the division of the NHS into three parts – hospital, GP and local authority services – which organisationally and financially seemed to have little to do with each other.244 Because so many people with long-term problems required both the NHS and social services, co-operation between the two was desirable. Within the NHS, the medical profession, critical from the time of the Porritt Report, argued increasingly for structural change to improve co-operation and co-ordination. A hospital-orientated NHS was said to be anti-GP and out of contact with the community services run by the local authority.245 Past planning had been based on consultant numbers or on hospital building, taking little account of developments in primary health care or objective criteria of patients’ needs. Yet the diseases with which the NHS dealt increasingly had multiple causes and required long-term care, mostly outside hospital. Health education, hospital services, GP and community services all needed to be brought together. Walter Holland, Professor of Public Health Medicine at St Thomas’, believed that a planning process was necessary, considering the ideal future, forming an objective, setting a target, allocating resources and implementing a programme that was followed up and evaluated. This would provide a clear and coherent framework for all the myriad decisions, small and large, within the new organisational structure on the horizon.246 From the patient’s perspective, or from that of the GP, the problems were less apparent. At ‘grass roots’ people were blissfully unaware that the tripartite nature of the service was considered unsatisfactory. 
John Reid, the MOH for Buckinghamshire, himself an advocate for unification, was planning the new health services for the future Milton Keynes effectively, even though the managerial division into hospital, local authority and GP services was deemed to make co-ordination impossible.247

A multitude of reports and proposals

The pace was set by the existence of the Royal Commission on Local Government and the Seebohm Committee on Local Authority Social Services. Local government boundaries were going to change. In principle, Labour and Conservative parties were agreed that a unified health and local authority system would be ideal but was not practical politics. Both the medical profession and the local authorities had, in effect, a veto. Similarly there was tacit agreement that, if amalgamation was not possible, alignment of the boundaries of health authorities and the local authorities was desirable. Between 1968 and 1972 there were several attempts to create a better organisational structure. The plethora of reports was itself confusing. In November 1967, Kenneth Robinson announced that he intended to review the administrative machinery of the NHS, looking particularly at the tripartite structure. In 1968 the report of the Royal Commission on Medical Education and the Seebohm Report were published.248 Robinson’s Green Paper on the Administrative Structure of the Medical and Related Services in England was published after them, in July 1968. Its central theme was the unified administration of services in each area, in place of the nearly 700 separate authorities currently existing. Regions would go and there would be 40–50 area boards in England and Wales, a proposal criticised on the grounds that they would be too remote from the field and too many for the Ministry to handle effectively.

The Future structure of the National Health Service

In 1968, Richard Crossman succeeded Kenneth Robinson to head a newly merged Department of Health and Social Security (DHSS). As Secretary of State, he appointed Brian Abel-Smith, the economist who had worked on the Guillebaud Committee, as a special adviser. Abel-Smith maintained close contacts with Professor Jerry Morris, at the London School of Hygiene & Tropical Medicine. Crossman had to conduct the negotiations on Kenneth Robinson’s proposals and to deal with the additional criticisms that the hospital service would dominate the area boards, and the advantages of regional planning would be lost. He established a committee to examine possible regional functions, and favoured a bottom-up system in which regions would essentially be a federation of district representatives. In March 1969, he announced that a new Green Paper would be issued, that there would be a two-tier system with some 200 district committees and above them a second tier of about 20 regional authorities.249 The report of the Royal Commission on Local Government, published in June 1969, recommended that consideration be given to unifying responsibility for the NHS within a new system of local government, proposing 58 unitary local authorities and a two-tier system for Birmingham, Liverpool and Manchester similar to that in Greater London.250 Crossman’s Green Paper of February 1970 set out the two main objectives of integration. The first was to establish a unified administration at all levels of the service, to facilitate distribution of resources between the three sectors of the NHS, and to secure better balance between hospital and community. The second was to promote continuity of care, better communication between health service personnel and more flexible use of staff to secure a better quality of service from existing expenditures.251

Neither of two obvious solutions for reorganising the NHS would work:

  • One could integrate all the local authority community services that were even marginally connected with health into the NHS – for example, home help services and old people’s homes. This was the BMA solution, but it would continue a hospital orientation and increase neither local authority nor GP influence. It might increase the aloofness of the service.
  • One could integrate the whole NHS into the new large local authorities. There were two problems with this.252 First, local authorities would need a new and growing source of finance. Was there a Chancellor who would allocate a tax that naturally expands, like income tax, to the local authorities? Second, there was the medical veto, for the doctors had made it clear that they would not permit it to happen, fearing that local authorities might give health a lower priority than other services, for example education, and that standards would differ from place to place. To work under a local authority was seen as an end to clinical freedom and the doctors’ standing in society.

By the time the Green Paper appeared in 1970, Crossman had changed his views and there would now be Area Health Authorities varying in population between 200,000 and 1.3 million, which would establish district committees with a chair and half the members drawn from the area and the other half from those working or living in the district.253 The district’s committee would be served by the area officers and would not have a separate budget. No powers would be delegated to them. Regional Health Councils would reflect the areas grouped in the region, drawing their membership from the areas. They would not supervise or control the areas, which would relate directly to the department that would manage capital building, taking advice from regional councils.

Crossman thought government was being forced into a ‘miserable middle way’ and local authorities would be alienated if the boundaries for health authorities and local government were not the same. He believed the NHS would wobble between the two simple options.254 George Godber opposed coterminosity, as it might open the door to a future take-over of the NHS by local government; he thought their boundaries were all wrong for the NHS. However, he argued for an elected component to health authorities. In the event, the reorganised NHS neither took over those local government services that were essentially community services nor was it taken into local government. Crossman tried to wobble it as near the local authorities as he could. Area authorities were proposed that would match the local authorities. They would have a substantial membership of local people, partly councillors, partly representatives of local doctors with GPs strongly represented, and nurses and others who would want to develop community services. There would be regional councils to co-ordinate the areas, but they would have little power as the areas would be directly accountable to the centre.

Conservatives’ proposals for NHS reorganisation

The Conservative victory in the 1970 election stopped consultation on the second Green Paper. The BMJ wasted no time in reminding Keith Joseph, the new Secretary of State, that the fundamental problem was a shortage of resources, not the organisation of the service.255 Keith Joseph spoke of a search for alternative sources of money but said that the NHS would continue to be paid for very largely out of taxes and contributions. Although the service would be reorganised under health authorities and outside local government, he did not think Richard Crossman’s system would be efficient; managerial and financial efficiency mattered to him. The regional tier would be maintained with its power intact, and there would be new areas that would match the local authorities. This added another tier and, in retrospect, it was a major error, inevitable given the extent of local authority influence and the absence of anyone as tough as Nye Bevan to say ‘No’. Keith Joseph issued a further consultative document in May 1971.256 It was brief and largely confined to issues where his views differed from those of Crossman. The objective was a unified and efficient management, and members of authorities would be chosen for their management ability, not because they represented different interests. The BMA supported the continuation of regions as it would favour proper organisation of hospital services. Line management and chains of command now became dominant. Much attention was paid to recommendations from the management consultants McKinsey’s, who were told by Joseph what they were to do, and from Brunel University. The Lancet thought they were given far too much attention. Both bodies added greatly to the richness of the jargon used in the NHS.

“The exclusion of representatives,” said The Lancet, “is defended on the ground that management should not be confused with the community’s reaction to management – administering the health service is too serious a matter to be shared with the citizenry.”257 Crossman believed that Keith Joseph did not appreciate that one problem of the service was its remoteness from the public, and his ideas would maintain the self-perpetuating regional oligarchy.

The White Paper appeared in August 1972.258 It was full of management jargon which the medical profession was becoming tired of and sceptical about. The sections on private practice would certainly have been different had it been Richard Crossman’s rather than Joseph’s document, because private practice was seen as ‘giving people an opportunity to exercise personal choice’. The Lancet had now mellowed; Theodore Fox had retired and the journal was less radical.

This week’s White Paper on the reorganisation of the NHS (in England) is welcome and wise … the picture is emerging and it looks none too bad. The call for integration has been heard for so long that some forceful utterance such as this was needed to save the plea from becoming a dim echo … Ever since 1948, the imperfections of the NHS have sprung mainly from three faults: lack of money; shortage of skilled staff in many areas (partly the result of fault no. 1); and too little enthusiasm among some of the service’s members – enthusiasm to make it really work. The White Paper, should it become the foundation of a new NHS structure, backed by ample resources, should revivify the whole scene. Some critics will regret the dearth of research and experiment before action; others will rightly point to the scarcity of able administrators; others to the continuing separation of health and social services … Yet all these will be in a minority … The future looks brighter.259

NHS reorganisation

Shortly before reorganisation, Sir George Godber retired as CMO at the Department of Health; it was the end of an era. A professional civil servant since the late 1930s, he had taken part in the shaping of the service and, with his close contacts in the profession, increasing seniority and unswerving loyalty to a succession of governments, he had used his influence to mould and improve the NHS. The BMJ said that his unceasing effort to make the NHS a harmonious organisation for doctors and patients, his knowledge, fair judgement, patience and courtesy had endeared him to a profession not notably tolerant of bureaucracy.260 The quality of CMOs could now be measured in units known as ‘the godber’; none exceeded 1.0. The power he exercised was not permitted to his successors.

More work and consultation took place in the run-up to the 1974 reorganisation than on any structural change before or since. Public interest was virtually nil and there were no fanfares from doctors. There had been no bitter political battles and no threats of doctors walking out. Neither the health needs of the population nor the staffing of the service were going to change. Perhaps the absence of drama accounted for the general lack of enthusiasm for the first major facelift in 25 years.261 The failure of TV and radio ‘to fulfil their duty to stimulate informed public debate on important issues’ was raised by the Social Morality Council, which pointed to the absence of any serious television programme on the topic in the two years after May 1971 when government’s intentions became clear. The BBC replied that there had been a programme on Radio 4 in September 1972.262

Community medicine

Increasingly, the term ‘community medicine’ was used to describe the branch of social medicine that dealt with matters relating to the health of groups rather than individuals. Such doctors were anxious and in a state of uncertainty. Reorganisation was moving the MOH out of local government, and there was a wish to form a single specialty to include the public health doctors, the medical administrators and the epidemiologists in academic departments. There were debates about the definition, aims and methods of their disciplines.263 Professor JN Morris, influential at the London School of Hygiene & Tropical Medicine, believed that public health practice should be firmly based in epidemiology, advocacy, health promotion and the control of disease. The community physician “should be teacher, watchdog and trouble-maker … In promoting the people’s health, the community physician must be directly concerned with the mass problems of today and be able to draw on the community’s resources to deal with these, not be limited to the categories of need or service that history happens to have deposited in his office.”264 Others emphasised planning and management, in a service crying out for the evaluation of services and prioritisation. The groups came together and established the Faculty of Community Medicine, uniting the factions as recommended by the Royal Commission on Medical Education. Max Rosenheim, a great leader of the RCP, helped the creation of the faculty within the Colleges of Physicians of London, Edinburgh and Glasgow. Archibald Cochrane became its first President. Crossman had established a working party to look at the work of medical administrators in the reorganised health service, chaired by Dr R B Hunter, Vice-Chancellor of the University of Birmingham. 
Keith Joseph continued it and, in 1972, the report suggested a pattern of training and staked out a place for the community physician of the future at region, area and district levels.265 The future community physicians would be key in assessing the health needs of the population, assisting integration of the health services, linking administrators with clinicians and co-ordinating the work of the NHS with that of the local authorities. ‘Community’ came to have two meanings; those in the new specialty saw it as meaning the whole population, while others increasingly spoke of the community as the non-hospital services. Many were uncertain about the role of the new community physicians; they seemed to be involved in decisions affecting the curative work of clinicians, just as the MOH had been before the NHS began. The crux of it appeared to be health service planning, priorities and the interpretation of statistical and epidemiological information.266

The loss of the MOH with reorganisation was widely regretted. The Second Report of the Parliamentary Select Committee on Health (2001) stated:

The post of MOH was abolished in 1974 and the responsibility for monitoring environmental determinants of health passed to Directors of Environmental Health who were employed by local authorities. Doctors trained in public health medicine became Community Medicine Specialists employed by health authorities to monitor the health status of the population and advise health authorities on how best to tackle the health problems of their community. Before 1974, the MOH had responsibility for the provision of some personal health services and in addition, was able to influence, as an officer of the local authority, social and environmental aspects of health. These functions were lost as a result of the transfer of the MOH into the Health Service. Community Medicine Specialists fulfilled three basic functions: they were medical administrators who assisted in planning and managing clinical services; they were advisers on the medical aspects of environmental health to the local authority; and they continued to have a role in epidemiology and the evaluation of health status and programmes of health care.

The structure of the reorganised NHS

Key features of NHS reorganisation

  • Coterminosity of health and local authorities
  • 14 regional health authorities (RHAs), 90 area health authorities (AHAs) and family practitioner committees (FPCs), 192 districts
  • Integration of health services in districts with a population of 250,000–300,000
  • Participation of clinicians in management
  • Clear allocation of responsibilities of an authority and its officers
  • Consensus decision-making
  • A planning system with decentralisation of decision-making, balanced by national and regional strategies
  • Better use of resources by greater efficiency.

Source: Adapted from the BMJ, 1975.267

The English RHBs were reconstituted, with minor boundary changes, as 14 RHAs. Their role was strengthened and they were responsible for strategy, the building programme, staffing matters and the allocation of resources to their 90 subordinate AHAs. Local authority health departments, hospital management committees and the teaching hospital boards of governors were replaced by the AHAs, each coterminous with one of the 90 new local authorities. Universities with medical schools could nominate the area within which they mainly worked, which became teaching area health authorities, with modified membership to reflect their academic role. Where the new areas encompassed two or more major DGHs, each with its own territory, a multi-district area was established, each district having its own management team. There were 192 districts, each of which was divided, in turn, into sectors. There were acute hospital sectors, and community sectors that were generally seen as lower in status. The boundaries of the London areas were bitterly contested, for here there was little relationship between the realities of health service provision and local authority boundaries.268 Reorganisation had little effect on general practice, and once GPs realised this, they took little interest in it. The executive councils that had dealt with family practitioners – the doctors, dentists, pharmacists and opticians – were abolished. In their place were 90 FPCs coterminous with the AHAs and, in theory but not in practice, subordinate to them. Keith Joseph wanted no fights with the GPs. They still dealt directly with the DHSS. A clearly defined pyramid now existed, with the Secretary of State for Social Services at the top. ‘Maximum delegation downwards’ was matched by ‘accountability upwards’.

Membership of area health authorities

  • Chairman – appointed by the Secretary of State
  • 15 members; 16 in teaching areas
  • 4 members representative of local authorities
  • Others appointed by RHAs after consultation with universities associated with the region, bodies representative of the professions and any federation of workers’ organisations.

The responsibilities of the disciplines involved, the administrators, doctors, nurses and finance officers, were analysed. Detailed role specifications and job descriptions were prepared in an attempt to reflect the clinical autonomy of professionals and the need for effective management. The results appeared in Management arrangements for the reorganised NHS (the ‘Grey Book’).269 For the first time, the NHS officers had clearly defined duties. There was no flexibility to meet local circumstances. It was all down in ‘the Grey Book’ in black and white. Consensus management was introduced, an idea derived from the work of Elliot Jaques, a management scientist and psychoanalyst who was on the steering committee of the reorganisation management study. It was the choice of a Study Group of DHSS and NHS officers working with the consultants McKinsey and Co and the Health Services Organisation Research Unit of Brunel University. There was inadequate emphasis on the need for someone to bring the disciplines to consensus or, if that were not possible, to report the failure to the authority. If one member of the team did not like a proposal or decision, that tended to be the end of the matter; agreement was at the lowest common denominator. The officer teams consisted of an administrator, a community physician, a nurse, a finance officer and, at district and area, a consultant and a GP. The consultants, recruited from the greatest pool of experience and talent in the NHS, rapidly learned the managerial game. To co-ordinate health authorities with local authorities there were joint consultative committees. Then there were joint care planning teams, responsible for part of the AHA allocation earmarked for local authority schemes that would benefit both the health and the local authorities. 
In response to the criticism that patients had little influence on the new system, community health councils (CHCs) were created in the hope that, if they were involved in planning, it would reduce pressure on ministers. A gesture to consumer participation, they were regarded with suspicion by members of authorities and the medical profession. Appearing just as Schumacher published Small is beautiful, the new structure was, to quote The Times 20 years later, “a bureaucratic structure of mind-boggling complexity”.270

1974 shared one characteristic with 1948: the outcome was the best available compromise. Many groups had to be placated by the addition of another tier, committee or special interest group. Attempts were made to be fair to the staff who were having to apply, if not for their own jobs, for something like them. MOsH who, in 1948, had stayed in their posts now had to compete to be area medical officers. The number of top posts was smaller, many moved to new areas and much talent was lost. There were personal tragedies and at least one suicide. There was similar turbulence among nursing and administrative officers.

Some were converts; one quoted Peter Drucker as saying that the NHS was grotesquely over-administered and dangerously under-managed.271 Most, including clinicians, probably agreed with a writer in The Lancet who said:

We are at last coming in sight of the Great Day when, according to the prophets, a second coming of the NHS is going to cause the rooting out of all that is bad in the present system and lead us into some therapeutic Heaven, where all will be perfection, peace and light. But the whole business is being viewed with much less than fervent optimism by many of us who actually come into contact with patients – the ‘grass roots’ of the service. And let’s face it, grass roots puts us in our place, as low down as you can get. From our lowly viewpoint the NHS looks like a particularly nervous colony of ants which has just had a particularly large garden fork shoved in and stirred around. Individuals race hither and thither, carrying little schemes with them and giving them to others, who carry them a little further and pass them on in their turn. Of course the trouble is we don’t understand. We are unable to share the enthusiasm of our administrative colleagues as, with the schoolboy eagerness of modern Druids waiting for the midsummer sun to rise at Stonehenge, they prepare for the New Day.272

Florence Nightingale said that hospital management was important and difficult to learn, requiring experience.273 Senior staff whose length of service had given them pride and loyalty to their institution now either left or became disillusioned. Expertise in the management of outbreaks of infectious disease was lost, as over 100 MOsH left the NHS. Few of the new managers were committed to the hospitals, regarding their jobs as stepping stones to something better. They were so busy with paperwork and meetings that they were never seen about the place. Their future lay at the upper levels of an over-heavy bureaucracy and they deserted their posts in droves, leaving huge general hospitals to be run by a succession of juniors, to the dismay of medical staff. Merely to run a hospital did not seem of significance. It was the ‘big picture’ that mattered, strategic planning at region, area or district. A few trouble-shooters, walking round their hospital, inviting complaints and criticisms and pointing out possible improvements would have helped morale. Instead, to the clinician’s eye, there were unrealistic committees run by grey people of second-rate ability with a natural reaction to say ‘no’.274 Already there was a common and rational belief that only two tiers were really necessary and that, in time, one of the three would dwindle in importance; in applying for new jobs, officers could only make their own guesses.275 It was the ‘night of the long knives’; the joke was repeated of the old administrator, displaced by a young manager, who left three sealed envelopes containing advice to use when crises broke. The young man, confident in his ability, nevertheless ran into trouble. The first read ‘blame your predecessor’, a strategy that worked. The second read ‘reorganise’, and this was also successful for a time. The third merely said ‘prepare three envelopes’.

A failed solution

The new Labour administration

A snap election in March 1974 saw the election of a minority Labour administration led by Harold Wilson, and a new Secretary of State, Barbara Castle. The economic situation was grim and consultants were discontented to the point of considering sanctions. They felt that they were being expected to do more work in deteriorating physical facilities with diminishing support, reducing living standards, all the time beset by an unimaginative and over-centralised administration.276 Labour had made its priorities clear, ending prescription charges, banning private practice in NHS hospitals, reducing the bias towards hospital medicine, encouraging consultants to work full-time for the NHS, and strengthening local democratic control of the NHS.277 The ranks of health service workers were sown with the seeds of mutiny. Clive Jenkins, leading the Association of Scientific, Technical and Managerial Staffs, said he intended to raise hell in the health service about the highly qualified and poorly remunerated people who worked in it. In May 1974, nurses marched with banners from the RCN to Hyde Park and Barbara Castle acknowledged their grievance, saying that everyone accepted that they deserved priority. She announced a rapid independent inquiry into the pay of nurses and the professions supplementary to medicine. It was chaired by Lord Halsbury and recommended an average increase of 30 per cent. But the hospital ancillaries also had a strong case and could not be held back for long. Bleak times, said The Lancet, for strikes were also taking place throughout industry. The NHS stood in more peril than ever before.278 In August, Mr Wilson pledged that more money would be found for pay awards.

Although the new NHS structure had never seemed satisfactory to the Labour Party, to tamper with it at such a late stage would create chaos. For all its manifest faults, wrote David Owen the Labour junior minister, the structure of the reorganised NHS could provide a framework for achieving good health care, sensitive to the consumer and at reasonable cost. The new administration’s policy was therefore to make changes in an evolutionary way. The objective would be to devolve more power, only possible, to Labour thinking, if there was a strong democratic element locally.279 A paper on Democracy in the NHS was published in May 1974 that added local government representatives to the new RHAs and increased their proportion on the AHAs to a third.280 It also proposed to extend the role and influence of CHCs by aiming for a high calibre of secretary and seeking to give them better access to information and planning processes.281 The equitable distribution of money was moved higher on the agenda. Two initiatives flowed from this – a new method of allocating resources nationally and a further attempt to tackle the organisation of health care in London.

Private practice

An increasingly unionised nursing staff resented the time spent on caring for private patients, although money received for patients’ stays fed back centrally into NHS funds.282 Matters came to a head at the Charing Cross Hospital. Led by Mrs Esther Brookstone, a NUPE branch secretary, staff went on strike in an attempt to close private facilities. The medical profession had seen union power wielded the previous year with disruptive effect to demand politically motivated changes, and believed that they too would be heard only if they were prepared to back their views with action. The BMA at once threatened to withdraw from contractual discussions with the Department and to impose sanctions from midnight on 8 July 1974 unless Mrs Castle intervened. Immediate talks took place and the unions backed off.283 Private practice, ideologically significant to Barbara Castle, and of fundamental importance to the consultants, was far less important to the health of the public than many other issues that could only be solved in co-operation with the doctors. The nurses chose this moment to strike for more pay, and the NHS was sinking into chaos.

In October 1974, another election was called and Labour was returned with a slim majority. In parallel was a change in the professional power structure. In 1947/8 the Royal Colleges had acted as mediators between Bevan and the profession, receiving little thanks for their efforts. Subsequently, there was a deliberate reduction in the Colleges’ medico-political role because of the risk to their recognition as ‘charities’ for tax purposes. This coincided with a stiffening of resolve in the BMA leadership to maintain the position of the profession, just as the political left was becoming more militant.

Labour moved to phase out pay-beds from NHS hospitals and to negotiate a new consultant contract. The existence of pay-beds had been part of a concordat between Bevan and the consultants from the beginning of the NHS. It had advantages for all concerned: if consultants were going to undertake private practice, it was better for NHS patients for them to be in the main hospital rather than at a private unit down the road. Nevertheless, strong opposition to private practice was traditional Labour Party policy, and phasing it out had twice been a manifesto commitment. It was an ideological blot on an otherwise pure NHS landscape and, at times, it allowed queue-jumping while NHS patients waited months or years for treatment.284 Barbara Castle believed that the facilities of the NHS should be available on medical priority alone and not made available to those able to pay.

Consultants work to rule

Matters were made yet worse by the simultaneous attempt to negotiate a new consultant contract, a process that had begun under the Conservative administration. The consultants believed that they were over-worked and under-paid, and wanted a work-sensitive contract or item of service payment. Barbara Castle saw an opportunity to achieve her own goals and proposed a joint working party between the profession and the Department, chaired by David Owen.285 Doctors hoped their discontents would be resolved peacefully, and were in for a shock. Barbara Castle was prepared to offer an extra 18 per cent pay to consultants who took whole-time contracts, which would have reduced private practice and placed consultants under tighter control. The consultants rejected the proposals. In addition it was proposed that merit awards, which went disproportionately to high-tech medicine and the staff of teaching hospitals, should increasingly recognise service in unfashionable specialties and unfashionable places. The distribution of merit awards was a long-standing problem which required a solution, but in the prevailing atmosphere consultants saw Barbara Castle’s move as an unacceptable intrusion into professional territory. Meetings between ministers and the medical profession ended in acrimony and broke down dramatically.286 In December 1974, consultants took industrial action, and for 16 weeks many ‘worked to contract’. Restrictions were placed on the numbers seen in outpatient clinics and waiting lists rose. Clinics became a pleasure rather than a chore for the doctors, and there was little prospect of a return to the old ways of rush and scramble, working long hours at a frantic pace.287 Doctors began to feel less personal responsibility for the NHS. Chaos was all around. GPs were signing undated resignations; junior doctors were preparing for battle.
However, several small improvements were made to the consultant contract and the Review Body made a pay award of more than 30 per cent. Mrs Castle honoured the recommendation and the medical profession settled.

Immediately after, in May 1975, Mrs Castle announced that the government would not only phase out pay-beds but would also seek powers to regulate private practice more closely. The medical profession rejected Mrs Castle’s proposals.288 The BMA felt that patients had a right to choose private medicine; that government should maintain the right that consultants had under their contract to private practice, if it did not encroach on their NHS duties; and that the availability of private practice was essential because it made it possible for consultants to achieve incomes above the comparatively low salaries that the NHS was prepared to pay. An alliance was formed between the BMA, the Royal Colleges, the Hospital Consultants and Specialists Association and the private insurers to oppose the proposals, employing Lord Goodman, an eminent lawyer, as adviser. In November, a Royal Commission on the NHS was announced. Barbara Castle had looked to Harold Wilson for support on a manifesto commitment, but her fierce commitment had come into conflict with the cold determination of Anthony Grabham, the consultants’ leader. To attempt to resolve the deadlock over private practice, Lord Goodman was asked to mediate. Wilson met the leaders of the profession, with Lord Goodman and Barbara Castle. During the meeting an attempt was made to entice individual College presidents into debate and break the impression of professional unity. It failed, and Anthony Grabham presented the united views of the profession.289 A compromise followed that radically altered Labour’s proposals for private practice. Facilities would be maintained where they were needed, although some unoccupied private beds would be shut. The dispute slowly subsided but the issue of principle had not been resolved. Labour maintained that it had never wished to abolish private practice, but merely to separate it from the NHS. 
There were no winners; the medical profession was seen to be prepared to strike, waiting lists had risen, the government had been forced to compromise, and the NHS as a whole had been undermined. Barbara Castle believed that some senior medical civil servants supported the profession rather than her, and she barred them from key meetings. The damage she inflicted on the NHS is hard to overestimate and, when James Callaghan became Prime Minister in April 1976, David Ennals succeeded her.290 Pay restraint, introduced in July 1975, increased the doctors’ sense of grievance.

A Bill to begin to phase out private practice more slowly, still opposed by the BMA, received Royal assent in November 1976. Private beds and outpatient facilities would remain if there was a reasonable demand for them, facilities would be withdrawn only if there were reasonable alternatives, and continued authorisation would depend on steps having been taken to meet reasonable demands outside the NHS. Within six months 1,000 private beds would be eliminated and a Board would be established to consider further proposals.291 Labour policy achieved the opposite of what was intended; there were only small reductions in NHS private beds, but the alarm led to a substantial increase in the number of private hospitals throughout the country. The numbers taking out private medical insurance (including some in the trades unions) rose, as did the demand from foreigners for treatment.292 London already had private hospitals that were ‘disclaimed’ under the original 1946 legislation because they were religious foundations or hospitals associated with a particular group such as the Royal Masonic, King Edward VII’s Hospital for Officers, or the trades unions’ Manor Hospital. Nuffield Nursing Homes ran others with charitable status. The new expansion came from the commercial sector. Some were started with American investment (e.g. the Princess Grace Hospital). The Wellington Hospital, opened in 1974, became American-owned. Others had Middle-Eastern financial backing. The Cromwell aimed to become the home of the super-specialties, and the Portland Hospital was developed on the site of the old Royal National Orthopaedic Hospital to provide services for women and children. The proximity of these developments to Harley Street was no coincidence; the development of the special hospitals a century earlier had the same raison d’être. In the territory of the Kensington, Chelsea and Westminster Area Authority, 25 per cent of the beds were in the private sector.

Two further years of incomes policy brought the medical profession to the point of rebellion again. Doctors had acquiesced for a time to government policy, believing that bringing inflation under control was vital. By 1977, the Review Body reported that doctors had fallen 15 per cent behind comparable groups in the previous two years. Others could take advantage of increased productivity and overtime to an extent that doctors could not.293 Pay policy was still in place and an appeal to the Prime Minister was fruitless. Morale, said the BMJ, was lower 12 months after Mrs Castle had left office than when she was Secretary of State. There was growing hopelessness that the medical profession would ever be treated fairly again.294 The new consultant contract, the product of years of work, was not agreed until 1978, and in the event was never implemented.295

Planning and priorities

Increasingly the way money was spent was driven by planning processes. The DHSS issued guidelines on the staffing and organisation of services, guidance that was expensive to follow and, in the wake of the problems in mental illness and mental handicap, was taken seriously. The NHS had suffered from a surfeit of reports on how services should be staffed and organised, calling attention to unmet need and pressing for more money. These optimal standards, published in good faith and often with political support, left regions with an impossible bill. At the sharp end of exhortation, they rebelled. When Keith Joseph, as Secretary of State, visited the South-East Metropolitan RHB in 1973, he challenged the assertion that the money available would not pay for his policies. Malcolm Forsythe and his colleagues assembled the detailed guidance that had been issued on staffing levels and calculated the costs of implementation for their population of 3.5 million. The region would have needed two and a half times the revenue and five times the capital allocation to do everything required of them, even over a ten-year period. David Crouch MP, a member of the RHB, asked to see Keith Joseph, and the findings were presented. The region thought that, instead of normative planning, it would be better to give the money available to the health authorities with the fewest possible strings attached and expect them to produce the best mix of services possible.

The Priorities document

The high point for RHAs and for corporate planning was 1976. The RAWP Report appeared, as did Prevention and health: everybody’s business. Barbara Castle published Priorities for health and personal social services in England, which made it clear that, because of the economic limitations, choices had to be made.296

Priorities for health and personal social services in England (1976)

Service                                      Indicative change in funding
Services for elderly people                  +3.2%
Services for mentally ill people             +1.8%
Services for mentally handicapped people     +2.8%
Services for children                        +2.2%
Acute and general hospital services          +1.2%
Hospital maternity services                  -1.8%

Source: Department of Health and Social Security.297

The BMJ referred to it as a “document of despair”. At a time of economic recession, current expenditure would continue to increase in real terms but the capital programme would be halved.298 It was increasingly clear, said the journal, that the NHS could not balance its books and stood no chance of doing so. Growth was concentrated on primary care, health promotion and services for children, the elderly and the mentally ill and handicapped, putting “people before buildings”, and ignoring the decaying hospitals and their effect on staff morale. The Priorities document translated general objectives into specific financial policies. It looked coldly at the money likely to be available, calculated programme budgets and named the winners and losers. It was a mine of information about what was happening, reviewing virtually all NHS policies, looking at costs and trends, and taking account of local authority services. Although consultative, it was intended that the strategy would provide authorities with a basis for planning and was to be applied at once. Low-cost solutions must be sought, levels of provision examined and redeployment of resources, in discussion with the professions, would be required in the acute sector. Yet the acute services were to reduce waiting times and geographical disparity, and facilitate medical advance. The priorities, the Royal Commission on the NHS was later to say, were not the result of objective analysis but of subjective judgement; they were not the only possible choices but were broadly correct.299

The planning system became a central feature of the reorganised NHS. It required good information and, in spite of the effort that had gone into HAA, there were well-founded criticisms of the accuracy and timeliness of the information available to management. Planning was all-embracing and of a new order of complexity. The system swept into action just as growth money was disappearing, so ‘planning for negative growth’ and ‘zero-based planning’ became watchwords. Ten-year strategic plans aimed to redress the perceived imbalance between hospital and the community, acute medicine and the long-stay services. Almost immediately, further reduction in the funds likely to be available forced some regions to rewrite their proposals. Sir George Godber rightly predicted that planning would be a shambles for some time. He thought the procedure was too formalised, allowing little for the change inherent within the service. It failed to recognise that what happened was a continuous moulding exercise, and elaborate production of new plans each year was wasteful of time and money.

The system aimed to produce ten-year strategic plans every three years and a shorter-term operational plan. District, Area, Region and the Department all had to interdigitate, the ‘superior’ authority providing priorities and an indication of the resources for the next four years, the ‘inferior’ authority producing plans after consultation. Clinical staff spent long hours constructing ideal paediatric or mental handicap services that were generally impracticable or unaffordable. Some regions, like North East Thames, produced a central plan and imposed it on their areas. South East Thames RHA pointed to the difficulty in planning ten years ahead without clear guidance about the money likely to be available. Many existing policy commitments were unattainable, given existing budgetary constraints. Areas such as Medway, where the population was expanding rapidly, had problems both with general practice and the hospital sector, and there were no funds available to help. The aims in the Priorities document could be achieved only by putting acute services in jeopardy and, in South East Thames, that would mean closing one of its three central London teaching hospitals. The BMJ thought that regional strategic plans, though variable in quality, should be welcomed, for they prompted a debate about fundamental issues.300

The results of consultation on the Priorities document were published in 1977 by David Ennals in The way forward.301 Though less explicit, it maintained the same general principles and programme budgets. Ennals was convinced that, with skilled planning, faster progress could be made. Regions, particularly those losing money, were less convinced and the losing RHAs in the south maintained that money first had to be spent to rationalise the acute services before funds could be released for transfer to the priority sectors. North East Thames gave priority for capital spending to five acute hospital projects, such as Homerton Hospital and The London.

Complaints about the complexity of the planning system spurred review but the modifications increased its complexity. Consensus management did not work out as planned. Unless a strong leader emerged locally – often the administrator – matters tended to drift. Within the consensus teams, some disciplines flourished. Many consultants, in management for the first time, learned the necessary lessons. Community physicians varied widely in their impact; some made their mark but many did not. GPs were unable to speak authoritatively for their colleagues who were independent contractors; nurses could speak for their own discipline but seldom showed much interest in matters of wider policy.

Financial problems

In July 1974 the BMA, the Royal Colleges of Nursing and Midwives and the British Dental Association met the Prime Minister and asked for £500 million extra for the NHS and for an independent inquiry into its financing. Harold Wilson promised that extra money would be provided to meet the cost of pay claims and inflation but refused an inquiry.302 In October 1974, the Royal Colleges and Faculties took an unprecedented step and sent a joint statement to the government, again asking for careful scrutiny of NHS funding and clear recognition of the extent of the shortfall. The gap between the care provided for patients and what might now be achieved was widening and, denied the resources they needed, the morale of doctors was low.303 The time had come, said the BMJ, for realism. If the NHS was to remain short of money, intelligent use must be made of what there was. The medical profession itself, at a conference held in Winchester in 1974, recognised the effects of economic stringency and the importance of cost-effectiveness.304 There was certainly waste – unnecessary drugs prescribed and investigations performed. Much conventional practice, for example, the length of stay in hospital after operation, was governed more by custom and convenience than by efficient use of scarce resources. If each area were given a budget and told it could keep the savings from economies, said the BMJ, there would be a real incentive.305 It would, however, be necessary to rectify the regional disparities. The claim that every patient could be offered all available treatments had always been something of a fraud. It was no longer possible even to go on pretending. Choices would have to be made. What advances in treatment could be afforded and how many doctors were needed?

BMA’s proposals for reform (1975)

  • Prune the extravagant and elaborate administrative structure
  • Doctors should take a critical look at the costs of their treatment
  • Better use could be made of skilled staff, for example, outpatient surgery
  • Health education should be encouraged.

Source: BMJ.306

Reorganisation was losing its sparkle. The BMJ said it had suffered from a failure to define what ‘delegation downwards’ meant; there were fights for power between regions, areas and districts. The NHS was trying to operate with too few staff in hospitals scheduled for rebuilding 25 years previously, in an atmosphere of resentment and despair.306 Social services had, in effect, been excluded from the integrated health service. GPs, valuing their status as independent contractors, had maintained independence through the creation of family practitioner committees. Disastrously, in 1974, the oil crisis pitched the nation into retrenchment, so all plans for expansion had to be frozen or cut. Clinicians found that the new structure had led to a plethora of reciprocating committees. “Take the simple matter of getting approval for a new registrar – in the old days it went straight to the Department of Health. Now it has to go through at least 14 stages – it’s a sort of mad administrators’ Monopoly.”287 In October 1975, Harold Wilson announced the setting up of a Royal Commission to consider the best use and management of the financial and staff resources of the NHS. Sceptics saw it as a public relations exercise to pacify doctors angered by the private health care issue. It was, however, just the sort of inquiry that the profession had been demanding. The BMA began to prepare its submission, drew attention to the dangerous state of doctors’ morale, and argued for the abolition of area health authorities.309

Resource reallocation

Health service funds had always been distributed unfairly. When the NHS was established in 1948, the regional hospital boards and the boards of governors were allocated the expenditure of the previous year. For 20 years, estimates and forward looks operated largely from that baseline.310 The resources available were far higher in the south than in the north of England, although this was not clear from publications that usually presented revenue allocations on a functional rather than a geographic basis. The significant exceptions were Wales, which had caught up a 25 per cent deficit on England, and Scotland, which had moved from a deficit to a substantial surplus on a population basis. When a new hospital was built, the region received additional revenue to run it (the revenue consequences of capital schemes, RCCS). Most new hospitals were in the south, skewing allocation even further. “Territorial justice” was absent.

From 1969, it became policy to attempt to achieve equity over ten years, but progress was slow. A paper by M H Cooper and A J Culyer, published by the BMA in 1970, was among many making the academic case for change. Richard Crossman introduced a new formula based on the number of beds in existence, the patients admitted and the needs-weighted population. As this took into account existing services, redistribution did not occur. He despairingly said,311

In order to fulfil this demand for equality, in England for example, you have to have a revolutionary change in the relationship between the health services provided in the south east and the services provided in the rest of the country. If we look, for instance, at the standard of service which a Londoner can get and which somebody in Sheffield can get, they are poles apart. Measured in terms of access to a GP, access to hospital, or standards of nursing, London does far better. The reason is that when we took over the health service the London hospitals were the dominant hospitals of the country because the wealthy lived in the south east and the GPs and hospitals were therefore concentrated in that area. The standard of service has indeed gone up outside the south east but the gap, judged even by the money they receive in their annual budget, is not much better than it was.

Crossman showed that fair shares would mean taking 10 per cent away from London, in which case wards would be closed and there would be a political problem. It could only be done on an expanding budget from which a higher percentage of the gross domestic product was allocated to the NHS. That was why there had been virtually no change in 25 years. Successive governments had given reallocation no priority and seemed to have been pulled between the vociferous claims of the regions and the teaching hospitals.312

The grouping of teaching hospitals under the new RHAs made the extent of the problem more obvious. ‘District profiles’, produced to help new AHAs, showed that the crude variations between areas were even bigger than between regions. Within Trent RHA, one area was 39 per cent above the regional average and another 62 per cent below.313 The incoming Labour government in 1974 was keen to redress inequality but could hardly have chosen a worse time. David Owen, Minister for Health, believed that the NHS should be a tool to achieve a more uniform standard of medical care. He commissioned studies by the Department’s economists. His worst fears were reinforced: even allowing for approximations, there were quite horrendous differences in the level of provision in different parts of the country that could not be explained by differences in need. In the mid-1960s, the expenditure per head on the hospital service varied from £9.32 in Trent to £15.33 in North East Thames. In 1971/2 the spread was as big as ever.

By the end of 1975, there was a deep-seated political imperative to redress the inequalities in provision. A system had to be created, so that planning in the reorganised NHS could be on an equitable basis. Answers had to be found that did not require massive injections of additional cash or jeopardise developments already under way. The exercise was launched against a background of fiscal crisis. Above all, the solutions had to be acceptable to everyone concerned as a fair way of achieving, over time, more equitable distribution. Ministers set up a working party in a way that ensured it would be “owned by the Department and the NHS”, as any process would take many years to come to fruition. Forsythe and Gentle, from South East Thames RHA, suggested that revenue should in future be allocated on the basis of population, with weighting to take account of the size, age and sex of the population, teaching commitments and cross-boundary flow.314

The Resource Allocation Working Party

The formation of the Resource Allocation Working Party (RAWP) was announced in July 1975, and its membership included Walter Holland and Malcolm Forsythe. The BMA agitated in favour of resource reallocation, because consultants outside the southeast were tired of seeing all the money going to London. The task of the working party was to devise a formula for England that would: allocate regional resources in relation to health care need, rather than supply, demand and historical factors; be robust and relatively stable from year to year; be dependent on valid, reliable and readily available health data; and be understood by most of those affected. The working party accepted that major determinants of health included deprivation and poverty in all its aspects (e.g. housing). However, the formula was based on health data to ensure that the health service was not seen by government as a means of correcting deprivation. It published its report in 1976 as Sharing Resources for Health in England.315

RAWP, as the policy became known, was concerned with allocations rather than the way money was used – that was a matter for the authorities locally. The system was based on the residential populations adjusted for differences in age, gender and death rates. There was no simple way of taking account of the costs of long-term morbidity that were often higher than mortal illness. Calculations included allowances for patients who lived in one area but received treatment in another, and for the cost of teaching and research, which had an impact on hospitals that was not spread evenly over the country. Some problems were impossible to solve. There was little information about the cost of individual procedures (e.g. heart surgery), which made it difficult to be fair to districts that collected complex and expensive cases. A national system was established for the central funding of ‘supra-regional’ units in fields such as paediatric liver transplantation, in which only three or four units were either necessary or desirable. The major weakness was that no link was created between resource allocation and medical staffing policy. The BMJ criticised the report as dealing only with resources and not the reasons for high expenditure in London. The fantasy solution would be to move some of the London teaching hospitals to the underprivileged regions of the north and west; that was improbable, but the survival of centres of excellence in London and the transfer of resources would be possible only if the number of major hospitals were reduced.316
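The weighted-capitation principle described above can be illustrated with a short sketch. All figures, region names and weights below are invented for illustration; the actual RAWP formula rested on detailed age/sex utilisation rates, condition-specific standardised mortality ratios, and separately calculated adjustments for teaching and cross-boundary flows.

```python
# Illustrative sketch of RAWP-style weighted capitation.
# All numbers are invented; this is not the DHSS formula or its data.

def weighted_population(age_bands, national_rates, smr, net_inflow=0.0):
    """Need-weighted population for one region.

    age_bands:      {band: resident population}
    national_rates: {band: relative national utilisation of services}
    smr:            standardised mortality ratio (1.0 = national average),
                    used as a proxy for relative need
    net_inflow:     population-equivalent of patients treated on behalf
                    of other regions (cross-boundary flow)
    """
    base = sum(pop * national_rates[band] for band, pop in age_bands.items())
    return base * smr + net_inflow

def allocate(budget, regions):
    """Share a national budget in proportion to weighted populations."""
    weights = {name: weighted_population(**r) for name, r in regions.items()}
    total = sum(weights.values())
    return {name: budget * w / total for name, w in weights.items()}

regions = {
    "North":  {"age_bands": {"0-64": 2.8e6, "65+": 0.5e6},
               "national_rates": {"0-64": 1.0, "65+": 3.0},
               "smr": 1.10, "net_inflow": 0.0},
    "Thames": {"age_bands": {"0-64": 3.0e6, "65+": 0.4e6},
               "national_rates": {"0-64": 1.0, "65+": 3.0},
               "smr": 0.92, "net_inflow": 2.0e5},  # net importer of patients
}

targets = allocate(1_000_000_000, regions)  # a notional £1bn to share
for name, target in targets.items():
    print(name, round(target))
```

Even on these invented inputs the sketch shows the property that mattered politically: each region’s target depends on the measured need of its resident population, not on the historical pattern of services, so a region with an older population and higher mortality draws a larger share despite having inherited fewer hospitals.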

The RAWP formula set target allocations that showed the distance of each region’s allocation from what it would be if equity were to rule. The Thames regions were considerably over-target. Trent, Northern and North Western were well below it. Additional ‘growth’ money available each year would be used to move regions nearer to equality of treatment. Barbara Castle thought that the policy of redistribution was possible, even at a time of world recession, for public expenditure and a high public sector deficit would help to sustain employment.317 The broad principles of RAWP were introduced in the 1977/8 allocations, and survived the later transfer of power to the Thatcher government, whose instinct was to favour decentralisation over national planning. Targets were recalculated each year and growth money gradually used to redress the situation. Each region was asked to apply the same principles to its areas so that, in a gaining region, some areas would gain more than average. Provincial regions, and particularly North Western and Trent, could expect to do well. There was ill-disguised glee in the north at the difficulties facing clinical staff in the metropolis. In a losing region, some areas would lose proportionately more if their resources seemed large compared with their resident population. Central London teaching areas were hit three ways: by national redistribution, by regional redistribution and by the need to move funds away from acute services towards long-stay specialties. The priority services for people who were elderly, mentally ill or mentally handicapped were now unambiguously the responsibility of the same district authorities as were the teaching hospitals and came into direct conflict with new initiatives in acute treatment.

RAWP remained contentious and much intellectual effort was spent on proving that justice was not being served in particular districts. The annual meetings of regional chairmen with the Secretary of State to learn of their allocations were tense affairs. Allocations aimed to bring the poorer regions closer to their target while giving headroom to the richer Thames regions to help them switch funds to their poorer areas. David Ennals was told by the Chairman of the Northern Region that he could not go back to Newcastle with only 3 per cent. ‘Living in London is very expensive,’ Ennals told him. Sir Francis Avery Jones said that inequality was the price of progress, a comment that had a kernel of truth as far as teaching districts were concerned.

The University of London and its medical schools, themselves under financial pressure, rapidly appreciated the potential effect on acute hospital services and medical education, and established a working party to look at the problem. The financial pressures on London teaching districts were severe, but their ability to react was constrained by the need for public consultation and by local opposition to any reduction in services. Some of the districts facing the greatest problems also had left-wing local authorities that considered any reduction in service anathema.

The state of the NHS

A decade that had started well ended in disarray. NHS reorganisation was not a success. The oil crisis had led to recession and the building programme was cut to save jobs. Devolution downwards was long in coming. Far more rapid was the increasing centralisation of powers and the issue of immense amounts of detailed guidance, epitomised by a ‘turkey circular’ that advised hospitals to ‘cook the Christmas fowl fully’. In 1976, lack of enthusiasm for NHS reorganisation, its cumbersome nature and the costs of management led David Ennals to ask three RHA chairmen to comment on the relationship of the regions with the Department, and to invite area health authorities to comment on AHA/RHA relationships.318 They believed that the DHSS had become too large and complicated and the absence of a clear division between the Department and NHS resulted in duplication of activities. The BMJ was pessimistic.

Any future historian looking at the NHS is likely to see the 1970s as the decade of the decline of the hospital service. Mrs Barbara Castle shattered the political confidence of consultants as effectively as Henry II slighted his opponents’ strongholds. Next came the war of attrition in which the hospital unions undermined medical authority, a long, drawn-out and covert process helped by the administrators’ appeasing tactics which kept as many incidents hidden as possible. Finally, freedom itself was eroded, not only by the clamour of numerous pressure groups and watchdog associations but also by the intrusion of an ombudsman into areas where he had neither knowledge nor competence.319

Yet clinical care continued to develop apace. Enoch Powell’s contrast between ever-expanding demand and limited resources was increasingly relevant. The public recognised that much had been achieved, and they prized the NHS. However, politicians and the media fuelled public expectations and the sums did not add up. The gap between what was possible and what was provided seemed to be widening all the time.
