It was our first week back at Vineyard English School after the summer break. Many familiar faces were absent, but one young Eritrean was eager to see us – he’d just received a letter about his asylum claim.
The letter was dense, bureaucratic, and impenetrable. It was a far cry from the content design that so many advocate as a central plank in reimagining the relationship between the state and its users.
He looked to us for an explanation. But even the fluent English speakers among us had to confer to make sure we understood it correctly. Hardly surprising: according to The First Word’s readability test, the letter was on par with reading Nietzsche.
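If you’re curious, you can get a rough sense of a letter’s reading level yourself. Here’s a minimal sketch in Python using the textstat package – my choice for illustration only; The First Word’s test may well use a different formula, and the filename is hypothetical:

```python
# Rough sanity check of a letter's reading level.
# Assumes: pip install textstat; "asylum_letter.txt" is illustrative.
import textstat

letter = open("asylum_letter.txt").read()

# Flesch-Kincaid grade: roughly the school year needed to follow the text.
print("Grade level:", textstat.flesch_kincaid_grade(letter))

# Flesch reading ease: 0-100, higher is easier; dense academic prose
# tends to score below 30.
print("Reading ease:", textstat.flesch_reading_ease(letter))
```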
The power of AI
I reached for ChatGPT.
And in moments we had a simple set of bullet points. Moments later, those bullet points were translated into Tigrinya. A letter that had defeated even fluent English speakers was now in a form our guest could understand, in a language he could read. Relief washed over him.
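For what it’s worth, the same two steps could be scripted rather than done by hand in ChatGPT. A minimal sketch, assuming the openai Python client; the model name, prompts and filename are illustrative, not what I actually typed:

```python
# Sketch of the two-step rescue: summarise a dense letter into
# plain-English bullet points, then translate the bullets into Tigrinya.
# Assumes the `openai` client library and an OPENAI_API_KEY in the
# environment; model name, prompts and filename are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(instruction: str, text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

letter = open("asylum_letter.txt").read()

# Step 1: simplify first, so the translation has plain English to work from.
bullets = ask(
    "Summarise this letter as short, plain-English bullet points "
    "that a reader with basic English could follow.",
    letter,
)

# Step 2: translate the simplified bullets, not the original legalese.
tigrinya = ask("Translate these bullet points into Tigrinya.", bullets)

print(bullets)
print(tigrinya)
```

Translating the simplified bullets rather than the raw legalese is deliberate: the plain-English step does most of the work, and, as I come to below, anything machine-translated still needs a careful human check.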
In that situation, AI provided almost instant comfort. It was a powerful reminder of the genuine potential this technology has to make a real, human impact. Sure, I’ve used AI to make this website more accessible and to write the code to help pray for MPs. Those things were good, but seeing the direct impact on someone’s mental well-being was profoundly moving.
Handle with care
In my last blogpost I quipped about ChatGPT misattributing a ’90s catchphrase. While that’s an amusing example of AI’s imperfections, there are more concerning instances that underscore the need for vigilance. Incidents like the one reported by Terence Eden highlight how AI can sometimes go astray. Yes, it can do impressive things, but we must be cautious about the breathless excitement over technology that can so easily get it wrong.
The trust chasm
People are the single most important part of any conversation about AI – whether they’re on the receiving end or involved in developing it. Unlocking AI’s value is therefore deeply intertwined with implementing it in ways that retain the public’s trust.
I’ve written before about the apocalyptic state of public trust in the UK. We’re not just facing a trust deficit – we’re staring into a full-blown trust chasm. In this context, we must be absolutely meticulous in how we develop and deploy new technologies. Until we seriously address those wider issues of trustworthiness, even the most promising AI initiatives risk being drowned in scepticism.
But how do we bridge this trust chasm? How do we ensure that technology serves as a tool for empowerment rather than a source of further alienation? These are questions many people are grappling with, especially in the public sector, and I think a broad consensus is emerging.
A couple of months ago I joined a discussion on a Roadmap for Progressive Tech Policy. Just the day before my experience at the asylum accommodation, Stefano and I published our article on PublicTechnology.net, focusing on the importance of human foundations for AI. That same evening, I attended The Secrets of Delivering AI at Scale (and received a complimentary copy of a book that might hold the answers), and while we were wrapping up English classes, the Institute for Government was asking How Should Government Use AI?. Across all these conversations, similar themes are emerging: governance, ethics, infrastructure. But if there’s one thing everyone agrees on, it’s that people – those impacted by these technologies and those delivering them – are the priority.
As we advance technologically, it’s imperative that we don’t lose sight of the people these technologies are meant to serve. By focusing on trustworthiness and transparency, as well as skills and capability, we’ll get a lot closer to unlocking the value of AI to make meaningful, positive impacts on individuals and society as a whole.
Retaining our humanity
Back at the asylum hotel, the simple act of translating a letter was a small victory. But it is a reminder that trust, whether in a person or a machine, isn’t automatic; it must be earned. This is why, when we talk about AI in government, the conversation can’t just be about shiny technology or efficiency gains. It has to focus on the people building these systems, and the people they’re built to serve. That’s particularly true for those navigating complex, often opaque processes.
This ties back to the time I’ve spent thinking about how we welcome asylum seekers and refugees, and help them feel at home. Material support would immediately make things a lot better, but the longer-term, more sustainable transformation recognises that valuing people with compassion, dignity and respect is The Most Important Thing. Even our most advanced technologies must be grounded in these simple human values. Good policy should never be driven by the technology we build, but by the people it’s being built to support.
The palpable relief in that hotel lobby underscores for me that the success of AI won’t be measured by how efficiently it processes information but by how well it earns and sustains public trust. AI must serve people. That service begins with ensuring that the people it impacts can trust it to work in their best interests.