In a world increasingly obsessed with the capabilities and adoption of automated systems, it can feel like humanity – and empathy, that most essential human quality – is being lost. But is that actually true?
The answer is more nuanced than we might assume. In this post, I’ll give an overview of the data on global perceptions of AI from various trustworthy sources, including the Alan Turing Institute, The Ada Lovelace Institute, and Ipsos.
I’ll primarily focus on Large Language Models and similar AI systems, as they are the most relevant to the topic at hand. However, broader applications of AI will also be discussed, providing additional context where necessary.
We don’t need any studies to tell us that generative AI is incredibly useful, and we see evidence every day of humans being replaced by AI workflows in all sectors and professions. What effect does this have on our interactions with companies and colleagues? Are we losing something important when we replace people with systems, or is AI so powerful that we can’t really tell the difference?
Perceptions are divided, not only along national lines but also by generation and sector. According to the globally focused 2025 Ipsos AI Monitor report, respondents from the “Anglosphere” (the UK, the United States, Canada, Australia, New Zealand, and so on) were more cautious in their perceptions of AI than respondents from Southeast Asia.[1]
A more tightly focused 2023 report from the Ada Lovelace Institute and The Alan Turing Institute, How do people feel about AI?, took its data pool exclusively from the United Kingdom. The executive summary includes one passage of particular relevance:
People also note concerns relating to the potential for AI to replace professional judgements, not being able to account for individual circumstances.[2]
The concern is evident: judgement and accounting for individual circumstances are human factors; they rely on empathy. Current AI models are capable of a great deal, but feeling is not among their capabilities. AI, as it stands, is iterative, not innovative. It cannot “think” or “feel” in the way a human being can. Indeed, put that question to one of the more popular AI models and the model itself will tell you as much. I encourage you to try if curiosity strikes.
Of even greater interest, however, is a report recently published in JMIR Mental Health, the mental health journal of the Journal of Medical Internet Research (JMIR). The report, entitled Empathy Toward Artificial Intelligence Versus Human Experiences and the Role of Transparency in Mental Health and Social Support Chatbot Design: Comparative Study (a bit of a mouthful, I know), found that respondents in the test:
significantly empathized (sic) with human-written over AI-written stories in almost all conditions, regardless of whether they are aware.[3]
In my opinion, this seals the issue: empathy does matter, human voices matter, and human interaction matters.
There is one curious counter to this: most respondents trusted AI to show less bias than human beings. I’d posit that it’s too early to tell. AI language and art models are a product of data fed to them by humans or scraped from human sources. Even the models used for medical analysis (generally viewed more positively than language models, even by AI sceptics, in the reports I’ve cited) rely on preexisting data pools and pattern recognition. This implies, to me at least, that bias is inherent, because there is high potential for bias in the data.
The situation is not cut-and-dried. AI models are still an immature technology, and their impact has yet to be fully felt in a multitude of ways. But even from my relatively brief analysis of publicly available data, the trends are clear: trust in institutions, especially private institutions, to use AI responsibly is low, particularly in developed economies and among working-age individuals.
AI is a tool, and a potentially useful one, but a spanner can’t think, and nor can these systems. That is worth remembering when considering integrating AI into a workflow.
[1] The Ipsos AI Monitor 2025: A 30-Country Ipsos Global Advisor Survey, p. 2 (June 2025)
[2] Modhvadia, Roshni: How do people feel about AI? A nationally representative survey of public attitudes to artificial intelligence in Britain. The Ada Lovelace Institute and The Alan Turing Institute, p. 5 (2023)
[3] Shen J, DiPaola D, Ali S, Sap M, Park HW, Breazeal C. Empathy Toward Artificial Intelligence Versus Human Experiences and the Role of Transparency in Mental Health and Social Support Chatbot Design: Comparative Study. JMIR Ment Health. 2024 Sep 25;11:e62679. doi: 10.2196/62679. PMID: 39321450; PMCID: PMC11464935.