It was our first week back for Vineyard English School after the summer break¹. Many familiar faces were absent, but one young Eritrean was eager to see us – he’d just received a letter about his asylum claim.

We were back in the hotel today after stopping over the summer (more volunteers would allow for doing this year round). Here's a photo of a letter that had been received by one of the hotel residents. Two native English speakers had to check with one another that we actually understood it.

Benjamin Welby (@bm.wel.by), 11 September 2024

The letter was dense, bureaucratic, and impenetrable: a far cry from the aspirations of content design that so many advocate as a central plank in reimagining the relationship between the state and its users.

He looked to us for an explanation, but even the fluent English speakers among us had to confer to be sure we understood it correctly. Hardly surprising: according to The First Word’s readability test, this letter is on a par with reading Nietzsche.

[Image: book covers arranged by The First Word’s difficulty score, from “Very Easy” (0–20) to “Very Challenging” (61–100); Nietzsche’s Beyond Good and Evil stands out in yellow in the 20–30 band.]
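The First Word doesn’t publish the mechanics of its test, so purely as an illustration, here is a minimal Python sketch of a classic readability measure, the Flesch Reading Ease score, which rests on the same intuition: long sentences and long words make text harder to read. The sample sentences below are invented, not quotes from the actual letter, and note that Flesch runs the opposite way to the scale pictured above (higher means easier).

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher = easier to read.
    (The opposite direction to the 0-100 difficulty scale pictured above.)"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Crude heuristic: count vowel groups, drop one for a trailing silent 'e'.
        count = len(re.findall(r"[aeiouy]+", word.lower()))
        if word.lower().endswith("e") and count > 1:
            count -= 1
        return max(count, 1)

    total_syllables = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (total_syllables / len(words)))

# Invented officialese vs. plain English:
dense = ("Your claim for asylum has been considered in accordance with the "
         "applicable provisions of the Immigration Rules and it has been "
         "concluded that further evidence is required in this connection.")
plain = "We looked at your asylum claim. We need more evidence from you."
print(flesch_reading_ease(dense))  # low score: hard to read
print(flesch_reading_ease(plain))  # high score: easy to read
```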

The power of AI

I reached for ChatGPT.

And in moments we had a simple set of bullet points; moments later, they were translated into Tigrinya. Content that had demanded fluent English was now in a form our guest could comprehend, in a language he could read. Relief washed over him.
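I did this in the ChatGPT app, but the same two-step workflow (simplify into plain English, then translate) is straightforward to script. Here is a minimal sketch against the OpenAI Python SDK; the model name and prompts are my own illustrative choices, not a record of what I typed that day.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify_and_translate(letter: str, language: str = "Tigrinya") -> str:
    """Step 1: rewrite officialese as plain-English bullet points.
    Step 2: translate those bullets into the reader's language."""
    bullets = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable model would do
        messages=[{
            "role": "user",
            "content": ("Rewrite this letter as short, plain-English bullet "
                        "points that a non-expert can follow:\n\n" + letter),
        }],
    ).choices[0].message.content

    return client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Translate these bullet points into {language}:\n\n{bullets}",
        }],
    ).choices[0].message.content
```

Keeping the two steps separate means an English speaker can sanity-check the simplified version before anyone relies on the translation, which matters for exactly the reasons that follow.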

Whatever the ins and outs of how you do #AI at scale to really make an impact on some of the intractable policy challenges, it really can do amazing things in moments (not that I could vouch for the quality of its Tigrinya).

Benjamin Welby (@bm.wel.by), 11 September 2024

In that situation, AI provided almost instant comfort. It was a powerful reminder of the genuine potential this technology has to make a real, human impact. Sure, I’ve used AI to make this website more accessible and to write the code that helps people pray for MPs. Those things were good, but seeing the direct impact on someone’s mental well-being was profoundly moving.

Handle with care

In my last blogpost I quipped:

It is a truth, universally acknowledged, that a person in possession of a question must be in want of a chatbot. Yet, one must also remember, that a chatbot, no matter how clever, is not always in possession of the truth.

While ChatGPT misattributing a ’90s catchphrase is an amusing example of AI’s imperfections, there are more concerning instances, like the one reported by Terence Eden, that underscore the need for vigilance. Yes, it can do impressive things, but we must be cautious about the breathless excitement over technology that can so easily get it wrong.

The trust chasm

People are the single most important part of any conversation about AI – whether they’re on the receiving end or involved in developing it. Unlocking AI’s value is therefore deeply intertwined with implementing it in ways that retain the public’s trust.

I’ve written before about the apocalyptic state of public trust in the UK. We’re not just facing a trust deficit – we’re staring into a full-blown trust chasm. In this context, we must be absolutely meticulous in how we develop and deploy new technologies. Until we seriously address those wider issues of trustworthiness, even the most promising AI initiatives risk being drowned in scepticism.

But how do we bridge this trust chasm? How do we ensure that technology serves as a tool for empowerment rather than a source of further alienation? These are questions many people are grappling with, especially when it comes to the public sector, and I think a broad consensus is emerging.

A couple of months ago I joined a discussion on a Roadmap for Progressive Tech Policy. Just the day before my experience at the asylum accommodation, Stefano and I published our article on PublicTechnology.net, focusing on the importance of human foundations for AI². That same evening, I attended The Secrets of Delivering AI at Scale (and received a complimentary copy of a book that might hold the answers), and while we were wrapping up English classes, the Institute for Government was asking the question How Should Government Use AI?. Across all these conversations, similar themes are emerging: governance, ethics, infrastructure. But if there’s one thing everyone agrees on, it’s that people – those impacted by these technologies and those delivering them – are the priority.

As we advance technologically, it’s imperative that we don’t lose sight of the people these technologies are meant to serve. By focusing on trustworthiness and transparency, as well as skills and capability, we’ll get a lot closer to unlocking the value of AI to make meaningful, positive impacts on individuals and society as a whole.

Retaining our humanity

Back at the asylum hotel, the simple act of translating a letter was a small victory. But it is a reminder that trust, whether in a person or a machine, isn’t automatic; it must be earned. This is why, when we talk about AI in government, the conversation can’t just be about shiny technology or efficiency gains. It has to focus on the people building these systems, and the people they’re built to serve. That’s particularly true for those navigating complex, often opaque processes.

This ties back to the time I’ve spent thinking about how we welcome asylum seekers and refugees, and help them feel at home. Material support would immediately make things a lot better, but the longer-term, more sustainable transformation recognises that valuing people with compassion, dignity and respect is The Most Important Thing. Even our most advanced technologies must be grounded in these simple human values. Good policy should never be driven by the technology we build, but by the people it’s being built to support.

The palpable relief in that hotel lobby underscores for me that the success of AI won’t be measured by how efficiently it processes information but by how well it earns and sustains public trust. AI must serve people. That service begins with ensuring that the people it impacts can trust it to work in their best interests.

  1. We would love to offer classes year-round, but to come close to meeting demand across the Borough we’d need a lot more volunteers, so compromises have to be made. ↩︎
  2. This blog post was originally conceived as a very brief self-promoting piece, signposting to that other article. ↩︎