Artificial intelligence is rapidly becoming the infrastructure that modern societies run on. It shapes how healthcare systems operate, how public services are delivered, and increasingly how decisions about people’s lives are made.
Yet in the UK, most of this infrastructure is not being built here.
We are becoming dependent on AI platforms developed elsewhere, within ecosystems shaped by different priorities, different accountability structures, and different political realities. American technology companies dominate the landscape. Tools like ChatGPT are embedded across government, business and public-sector workflows. Companies such as Palantir now sit at the centre of some of our most sensitive public data infrastructures.
The clearest example is Palantir’s £330 million contract with NHS England to build the Federated Data Platform. The platform integrates large volumes of health system data to improve planning, logistics and patient flow. Critics and health justice organisations have raised serious concerns about centralising such vast amounts of health data within a system operated by a private foreign technology company, and about what could happen to that data if governance protections were weakened or political conditions changed.
This is not a hypothetical risk. It is a structural one.
The geopolitical reality of AI
Technology companies do not operate independently of the states they come from.
Palantir was originally funded by In-Q-Tel, the CIA’s venture capital arm, and built its reputation on data integration systems designed for national security, counterterrorism and law enforcement. OpenAI has worked with US government agencies as the American AI sector becomes increasingly intertwined with defence priorities. Across the United States, AI research is closely tied to Department of Defense initiatives and military technology investment.
None of this is remarkable in itself. Governments everywhere collaborate with technology companies.
But it raises an unavoidable question for Britain.
If the technologies underpinning our healthcare systems, government services and data infrastructure are developed within ecosystems closely tied to foreign defence institutions, what does that mean for national sovereignty, public trust and long-term resilience? AI is not just software. It is infrastructure. And infrastructure always carries political and strategic implications.
The case for building here
Technology reflects the values, priorities and regulatory environment of the societies that build it. When AI systems are developed overseas, they carry assumptions shaped by those environments. Those assumptions may not always align with British law, democratic oversight or public expectations.
The NHS is one of the most trusted institutions in the country. The data it holds is among the most sensitive datasets anywhere in the world. Decisions about how that data is processed, analysed and governed should be shaped by British priorities and British accountability structures, not outsourced to platforms built for different markets and different political contexts.
British AI must also be ethical by design. That means actively working to identify and mitigate bias at every stage of development, and building systems that are tested, scrutinised and proven to work equitably across different populations, communities and demographics. AI that works well for some people but not others is not a technical success.
It is a failure dressed up as progress.
As AI becomes more deeply embedded in healthcare and government decision-making, technological sovereignty is no longer an abstract concern. It is a practical one.
Do we want the core infrastructure of our public services to depend on external technology ecosystems? Or do we want to build that capability here, on our own terms?
Why I founded Astronomical AI
This question is one of the reasons I founded Astronomical AI.
My goal has always been to build British AI that addresses real societal challenges, starting with healthcare. Astronomical AI focuses on improving lung cancer diagnosis and treatment pathways. Lung cancer remains the leading cause of cancer death in the UK. Delays in diagnosis and treatment have devastating consequences for patients and families, and the pressure on radiologists and oncologists is immense.
The technology we are developing supports clinicians by analysing CT scans and assisting with radiotherapy planning, helping to reduce delays and free up clinical time for patient care. Critically, it is being built with bias mitigation at its core, because we know that healthcare AI that has not been rigorously tested across diverse patient populations risks deepening existing health inequalities rather than reducing them.
The wider ambition goes beyond a single product. It is about demonstrating that Britain can build an ambitious, ethical and clinically useful AI of its own, developed here, accountable here, and designed around the needs of British institutions and patients.
Who gets to build
Building an AI company in Britain is not equally accessible to everyone.
Over more than twenty years in AI and technology, I have encountered barriers that many women and ethnic minority founders will recognise immediately. I have been told that women cannot code and should do something else. I have had my technical expertise doubted, been talked over in meetings, and watched people reinterpret my own ideas back to me as if they had just thought of them. As a South Asian woman, I have also faced the assumption that you are there to support rather than lead, that you may be technically capable but not visionary, that you are part of the team but not the one with the idea.
I have written more about these experiences in my post on being a female founder, which you can read on my website.
Research consistently shows that women founders receive around 2% of venture capital funding in the UK. Female-only founding teams represent less than 10% of funded deals. For ethnic minority founders the numbers are lower still: studies of UK angel investment data show that roughly three quarters of funded founders are white, with South Asian and other minority founders significantly underrepresented.
These are not abstract statistics. They determine who gets heard, who gets funded and who gets the opportunity to build the technologies that will shape our future.
Despite all of this, I have kept building. That resilience, the refusal to be pushed out of spaces I belong in, is something I share with many founders from underrepresented backgrounds who are quietly doing extraordinary work with very little structural support.
Progress has often depended more on persistence than on institutional backing. Some of that is beginning to change, which is encouraging. But the barriers remain very real, and they matter, because a Britain that cannot support diverse founders in building technology is a Britain that will build worse technology.
A different vision
The conversation around AI tends to focus on capability: larger models, faster systems, more automation. But capability without accountability is dangerous, and often beside the point.
The more important questions are simpler.
Who builds these systems? Who benefits from them? Who governs them? And who is left out?
A genuinely British AI ecosystem should prioritise public good, transparency, safety and fairness. It should be built ethically, with rigorous attention to bias and a genuine commitment to ensuring these systems work for everyone, not just the majority or those whose data was most abundant in the training set. It should support diverse founders and researchers, because innovation is stronger when more perspectives shape the technology. And it should ensure that critical infrastructure, particularly in healthcare, is not quietly handed over to platforms developed entirely outside the country, within ecosystems we cannot fully audit or govern.
AI will shape the future of Britain.
The question is whether Britain will help build that future itself, or simply rent it from others.
Written by Femma – CEO