AI: how can general practice make the best use of it?

Alongside our new report, Becks Fisher looks at how GPs are using AI, what worries them the most about it, and what should happen next if general practice is to harness the benefits from AI. This article was originally published in GP Online on 5 December and was written for a GP audience.

Blog post

Published: 12/12/2025

Artificial intelligence (AI) seems to be everywhere. From how we shop to the way we plan holidays, AI’s tentacles are reaching into our personal lives. It’s reasonable to think that AI is creeping into our professional lives too. The government certainly hopes that it is. The 10 Year Health Plan for England – published in July – puts AI at the centre of a technological revolution in the NHS: one that will free up clinician time, deliver better patient care and save money. But relatively little is known about how GPs are currently using AI – or what they view as barriers or enablers to the wider use of AI in general practice.

This summer, the Nuffield Trust partnered with the Royal College of General Practitioners to find answers. We put questions on AI use into the annual GP Voice Survey, and we ran focus groups to get into more detail with smaller groups of doctors. The results – revealed in our new report – suggest that policy-makers have significant work to do if AI really is to transform our working lives in general practice.

Overall, 28% of the 2,108 GPs we surveyed across the UK are currently using AI in their clinical practice. And while some GPs (13% overall) are using AI tools supplied by their practice, a sizeable minority (11%) are using self-obtained tools (4% use a mix). Male GPs are more likely than female GPs to be using AI at work, younger GPs are more likely to be AI users than older GPs, and GPs working in richer areas are more likely to be using AI (including practice-supplied tools) than GPs working in poorer areas.

How are GPs using AI?

We can all imagine horror-headlines about GPs being replaced by AI, or treating their patients based on advice from ChatGPT, but it’s pretty clear that’s not what’s happening. By far the most common use of AI tools by GPs is for clinical documentation and note-taking. Plenty of GPs we spoke to are also using AI for administrative tasks and for professional development, and although some are testing tools to support clinical decision-making, that’s not currently a common use.

GPs are also clear about what they want AI to help them with: handling routine, time-consuming tasks reliably, freeing them to focus on patients and on complex clinical reasoning. Perhaps mindful of the array of (often disconnected) software we already work with, GPs want AI tools to integrate seamlessly into patient records – not be yet another ‘bolt-on’.

What are GPs worried about?

If policy-makers want wider uptake of AI tools in general practice, they’ll need to allay GPs’ fears about using AI. Regardless of whether they’re already using AI, GPs at all career stages are worried about patient safety, professional liability, patient consent, data privacy, the impact of using AI on the doctor-patient relationship, and risks around digital exclusion. We heard concerns that AI use would deskill clinicians, that GPs might miss errors that AI had introduced – and be responsible for patient harm as a result – and that patient-related data entered into AI tools could be used for commercial purposes. Perhaps unsurprisingly, these worries were more common in GPs not currently using AI than the 28% who already do.

Priorities for action

If general practice is to harness benefits from AI, policy-makers will need to act fast. First, we need more national guidance. Some integrated care boards (ICBs) are encouraging practices in their areas to test tools and share learning, but others are actively forbidding surgeries from using AI tools. This postcode lottery approach to AI needs to end, and clear national guidance should specify the role of ICBs in defining the range of AI tools approved for general practice. Expecting individuals – or individual GP surgeries – to make these decisions isn’t rigorous enough.

Second, policy-makers and regulators should clarify professional liability and governance frameworks for AI use in general practice. GPs are clearly concerned about their personal accountability for failure of AI tools – and many will be unwilling to use them in clinical practice until they’re more convinced of their safety, and have clarity on who is at fault if things do go wrong.

Third, many GPs want bespoke training on how to use AI tools. We consistently heard that GPs feel unprepared to navigate the AI landscape in a professional context, don’t have time or headspace to invest in detailed self-teaching, and want tailored rapid training.

Can AI undermine the government’s best-laid plans?

General practice has long led the way in implementing technological change in the NHS. Some 28% of GPs in the UK are already using AI in their practice, but for AI to truly change care in the ways the government hopes it will, that percentage needs to rise fast. Understanding how patients engage with AI as part of their care, and what they find acceptable, is crucial.

There are also some signs (including our finding that GPs working in practices in richer areas are more likely to be using AI tools) that AI use in general practice could exacerbate problems – like health inequalities – that the government hopes to tackle. The Secretary of State has been clear that he expects AI to free up time for GPs, enabling us to deliver more appointments. We heard very clearly that GPs are using the time they save to go home earlier, reduce burnout and spend time with their families. That may not be such a bad thing either.


Suggested citation

Fisher R (2025) “AI: how can general practice make the best use of it?”, Nuffield Trust blog
