In the Whole System Demonstrator (WSD) trial, a team of researchers studied the impact of installing telehealth technologies in patients’ homes to monitor their vital signs such as blood sugar levels.
Debate continues over whether the findings justify the Government’s policy of encouraging the NHS to invest more in telehealth. At the same time, the trial has raised a potentially even more significant discussion.
How useful are randomised controlled trials for evaluating new, rapidly evolving ways of delivering health care, given the length of time that can elapse between study design and evaluation?
In a randomised controlled trial, patients are assigned randomly to either receive the new form of health care or an alternative. Randomisation means that the two groups of patients should be similar, so any differences in their outcomes should be due to the intervention.
In other words, randomisation is useful to ensure that findings are not biased. In the case of WSD, groups of patients received either telehealth or usual health care, and the findings challenge some common assumptions, for example, that telehealth reduces costs.
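The randomisation step itself is mechanically simple. As a minimal sketch (illustrative only — not the WSD allocation procedure, and the patient identifiers are invented), assigning patients at random to two equal arms might look like this:

```python
import random

def randomise(patient_ids, seed=42):
    """Randomly assign patients to a 'telehealth' or 'usual care' arm.

    A fixed seed is used here only so the example is reproducible;
    a real trial would use a concealed allocation sequence.
    """
    rng = random.Random(seed)
    ids = list(patient_ids)
    rng.shuffle(ids)  # random order removes any systematic pattern
    half = len(ids) // 2
    return {"telehealth": ids[:half], "usual care": ids[half:]}

arms = randomise(range(1, 101))
print(len(arms["telehealth"]), len(arms["usual care"]))  # 50 50
```

Because chance alone decides each patient's arm, known and unknown patient characteristics should balance out between the two groups, which is what lets differences in outcomes be attributed to the intervention.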
The WSD project took six years to complete, but the achievement was enormous. New telehealth services were implemented in three parts of England, and some 3,000 patients were recruited, making it the largest trial of telehealth in the world.
Just recruiting patients into the trial and getting the kit installed took 17 months, and then the evaluation team had 12 months to observe what happened to patients while they were in the trial.
After the last patient finished the trial, the evaluation team had to extract over a billion rows of data from 250 health and social care organisations.
It has been argued that this length of time is a disadvantage in a field like telehealth, where technology changes all the time. For example, home-based telehealth may soon be superseded by solutions using mobile phones.
But telehealth is still only used by a relatively small number of patients in the UK. The trial is producing insights that will be relevant regardless of the precise form that the technology takes, such as about why some patients might not want to use it.
Speeding up randomised controlled trials is possible, but not without drawbacks. The evaluation team could have done an interim analysis and reported on hospital use after, say, six months of the 12-month trial.
But the findings may not have been statistically significant, so the cost of earlier information could have been greater ambiguity in the results, and therefore greater scope for the wrong decisions to have been made.
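The trade-off here is essentially about sample size: with fewer patient-months observed, estimates are less precise. A back-of-the-envelope sketch (the admission rate and arm sizes below are invented for illustration, not WSD figures) shows how the uncertainty around a difference in proportions widens when the effective sample is halved:

```python
import math

def se_diff_props(p, n_per_arm):
    """Standard error of the difference between two proportions,
    assuming equal arm sizes and a common underlying rate p."""
    return math.sqrt(2 * p * (1 - p) / n_per_arm)

p = 0.4  # illustrative hospital admission rate
for n in (1500, 750):  # full follow-up vs roughly half the information
    half_width = 1.96 * se_diff_props(p, n)
    print(f"n={n} per arm: 95% CI half-width ~ {half_width:.3f}")
```

Halving the information inflates the confidence interval by a factor of about √2, so a real but modest effect that would be statistically significant at full follow-up can easily look inconclusive at an interim look — which is the ambiguity the text describes.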
Another idea raised by some commentators is to move from pre-publication to post-publication peer review.
This could have reduced the length of the project by one-sixth, but I’m still undecided about what’s best. In this case, the Department of Health chose to release some of the results before peer-reviewed publication, and so did not give the full picture.
Randomised controlled trials have a place along with many other types of research. For example, researchers at the Nuffield Trust are using retrospectively matched controls to evaluate the impact of telehealth in North Yorkshire.
The retrospective approach means that we can capture the impact of telehealth in routine practice, outside of a trial setting. Other research is using ethnographic methods to design telehealth to be centred more closely on individual needs.
By drawing on evidence from a variety of sources, including randomised controlled trials, we might be able to improve health care.
This article also appears on the Health Service Journal website.
Steventon A (2012) ‘How useful are randomised controlled trials in evaluating new ways of delivering care?’. Nuffield Trust comment, 24 August 2012. https://www.nuffieldtrust.org.uk/news-item/how-useful-are-randomised-controlled-trials-in-evaluating-new-ways-of-delivering-care