Artificial intelligence (AI) is becoming increasingly ingrained in the world around us, from the cutting edge of science and industry to our most mundane internet searches. Governments worldwide are looking to AI for economic growth and efficiency savings. This is evident in the NHS’s 10-year plan, which bets heavily on AI’s transformative potential.
While adult social care currently lacks its own overarching strategic vision, many leaders in the sector, including the Minister for Care, have highlighted AI as a strategic priority. But it is often discussed in very abstract terms. Can we take a pragmatic view on how it fits into the realities of social care today?
What can AI in social care look like?
AI is a very broad term, and AI tools can take many different forms in social care. They can assist with generic backend business functions, elements of workforce planning such as the rostering of homecare teams, or the development of care plans. AI can be used to monitor things like people’s sleep patterns, falls or toilet visits, to flag risks to their physical or psychological health. AI assistants can even act as companions and aides for people who spend a lot of time alone. There are plenty of products on the market, but there is a poor understanding of how, or whether, different care providers are currently using AI.
Whether AI can offer a positive return on investment is something many providers may be weighing up from both a commercial and a quality-of-care perspective. Success stories are easy to come by, ranging from examples of homecare staff being deployed more efficiently, to avoided hospital admissions, to people being enabled to live more independent lives.
But evidence is piecemeal, and it can be challenging to distinguish genuine opportunities from PR hype. Plenty of legitimate concerns exist, including around privacy, workforce replacement and a general degrading of the human element of care. A coalition of key actors in the space has co-designed a pledge on the responsible use of AI, which is a step in the right direction, but more action is needed to guarantee sector-wide safe and effective use.
Why is sector-wide implementation going to be difficult?
Despite evidence of effectiveness often being in short supply, the expansion of the use of AI in social care feels inevitable. But certain factors make social care a particularly challenging sector in which to roll it out at scale.
Firstly, while the government has begun taking steps to standardise workforce development pathways in social care, there is still huge inconsistency in skills and training. Many staff are likely to feel they lack the digital skills to implement AI tools well. Additionally, most care providers are small and do not have the back-office staff needed to carry out sufficiently rigorous procurement processes in this crowded market, which means they risk adopting tools that are not safe, useful or cost-effective enough. Many commissioners are similarly ill-equipped to support their local provider markets in this pursuit.
Digital infrastructure is another hurdle. Many providers have only recently adopted digital care records. While these may be necessary facilitators of the implementation of some tools (such as AI scribes or care planning aides), their late arrival speaks to the limited capacity many providers have to leverage data and technology. A suite of cutting-edge AI tools would be a dramatic leap forward.
Beyond this, the affordability of some products (prices will vary massively by product) will undoubtedly be a barrier for the many care providers whose financial precarity makes it very difficult to go beyond covering existing day-to-day costs.
This means that if the AI revolution carries genuine promise for people who receive care and support, uneven capacity to implement AI risks widening inequalities. Providers that work in more affluent areas and serve a higher proportion of self-funders are more likely to be able to purchase the best products, and to use them most effectively. Meanwhile, groups who are already more deprived could end up on the wrong side of a widening quality gap.
What next?
These are just some of a mountain of challenges, many of which reflect broader structural issues in England’s social care sector. While they cannot be solved overnight, policymakers and commissioners should be actively thinking about certain issues.
Firstly, there is a need for a formalised strategic approach to AI in social care. AI should be embedded thoughtfully into any plan for a future system that emerges as part of the Casey commission. Policymakers should be considering the capacity building and regulatory infrastructure that is needed to enable safe and effective AI use in social care.
Additionally, national guidance or registries of suppliers could give providers better clarity on which tools might work for them (the MHRA has taken some steps to develop similar resources for approved AI medical devices). This would offer some help to providers with minimal procurement capacity. Relatedly, clarity and consistency are needed on how care providers can and should use AI tools.
Finally, a strategic approach should be built on a robust evidence base. More data is needed on opportunities for AI use in social care, on the enablers of successful implementation, and on the effectiveness of tools. Stakeholders need to be engaged constructively at all levels. In particular, this must include people who deliver and receive care and support, to ensure innovations align with what they want and need. Opportunities to improve quality of life should be at the forefront of a national approach. Better evidence will, hopefully, help to clear the fog and facilitate more level-headed thinking on an issue that needs proactive action from the government.
Suggested citation
Lobont C (2026) “Clear the fog: charting a course towards an AI-enabled future that works for social care”, Nuffield Trust blog