The Value of Human Insight in the Artificial Age

Image created by ChatGPT, edited by Me

AI has been heralded as a potential substitute for traditional methods of gathering and analysing data, promising speed, efficiency, and a wealth of knowledge at our fingertips, without the time and cost of engaging with real-world users. However, there is a critical distinction to be made between recontextualising existing knowledge and uncovering new, actionable insights.

Many AI tools rely on synthesising patterns from existing data, offering solutions that assume past findings apply universally to new contexts. While this can sometimes be useful, it introduces significant risks. Familiar problems or findings are rarely an exact match for your business or product. Over-reliance on AI can result in decisions built on shaky assumptions, undermining the development of truly user-centred solutions.

I am going to look at three key topics today that influence the need for human insight in the age of artificial intelligence: authenticity, regurgitation, and the role of AI.

Authenticity

Direct engagement with users provides authentic insights into how actual people experience your product or designs. This authenticity is rooted in real-world, specific feedback that can be interrogated, expanded upon, and reliably reproduced. Unlike AI, human insight doesn’t hallucinate, confabulate, or generate fabricated conclusions.

The power of authenticity goes beyond the findings themselves. Direct quotes, video clips, or the ability to observe live testing (in-lab or remote) can breathe life into the design process. These moments of genuine interaction often become pivotal in engaging stakeholders and product teams, ensuring that the user-centred design journey feels tangible and meaningful. Human feedback captures nuances and emotions that algorithms simply can’t replicate, offering clarity that AI often lacks.

Regurgitation

AI’s strength lies in its ability to process vast amounts of data quickly, but even the most advanced systems are constrained by their foundations: existing knowledge, and the algorithms and choices that determine how that knowledge is used. At its best, AI can recontextualise relevant insights from similar products or services in ways that feel tailored and useful. But this process is opaque. While some tools - Perplexity in particular - are getting better at citing their specific sources, this is still not ubiquitous, and the process remains a black box to a large extent. Businesses are left to trust blindly that the data the AI draws from is accurate and applicable.

In practice, this can mean AI regurgitates irrelevant or outdated information - or worse, fabricates connections altogether. Known as hallucination (though the term confabulation may be more accurate), this phenomenon is a significant limitation of current AI systems. For businesses striving to innovate and differentiate, relying solely on AI’s opaque outputs risks embedding errors or assumptions into critical decisions.

The Role of AI

This isn’t to dismiss AI’s potential outright. I’ve been exploring how AI might assist with analysing complex briefs, generating content, moderating data, and managing large datasets. These applications hold promise, particularly when paired with robust safeguards for client and participant confidentiality.

However, we must remain discerning about where AI adds value and where it falls short. While it excels in areas like data processing and content production, it is not yet a substitute for human judgement in research or design. Studies have even shown that using AI in this fashion can reduce critical thinking and leave us susceptible to groupthink or appeals to authority. As practitioners, designers, and product teams, our responsibility is to understand AI’s limitations and deploy it thoughtfully, ensuring it complements rather than compromises our work.

The Bottom Line

AI is an extraordinary tool, but it cannot replace the authenticity, depth, and reliability of human insight. Real feedback - gathered directly from users - provides context, emotion, and specificity that no machine can fabricate. As we continue to explore the potential of AI in research and design, the key is balance: leveraging AI’s strengths while recognising the irreplaceable value of genuine human connection and understanding.

Relevant Further Reading

“AI-Powered Tools for UX Research: Issues and Limitations” – Nielsen Norman Group
NNG breaks down the fundamental shortcomings of AI in UX research, highlighting why AI struggles with interpreting user behaviour beyond text-based inputs. If you’re curious about where AI fails to capture the human experience, this is an essential read.

“AI in User Research: Benefits, Risks, and Tools Explained” – Marvin
This piece explores how AI can enhance efficiency but warns against blindly trusting its outputs. It unpacks how AI can miss crucial details and misinterpret context, making a strong case for keeping human intuition at the heart of research.

“User Research with Humans vs. AI” – UX Tigers
UX Tigers tackles the growing temptation to replace user interviews with AI-generated personas. They reveal why AI can’t replicate the unpredictability and emotional depth of real users - making this article a must-read for anyone considering AI-driven research shortcuts.

“Using Generative AI in Research: Limitations & Warnings” – University of Southern California
USC’s research lays bare the risks of over-relying on generative AI, including its tendency to fabricate facts. If you want a well-researched, academic perspective on why AI still needs human oversight, this is the one to read.

“The Illusion of Artificial Inclusion” – arXiv
This thought-provoking research paper challenges the idea that AI can replace human participants in research. It dives into why true representation and authenticity can only come from real people - perfect for anyone questioning the ethics of AI-driven decision-making.