Qualzy Blog

Balancing the Promise, Pitfalls
and Progress of AI in Qual Research

The qual research industry is at a tipping point. ChatGPT only burst into public consciousness a few years ago - and already it's reshaping how we work. An honest look at what's working, what isn't, and how to stay grounded.


Can you believe that ChatGPT only burst into public consciousness less than three years ago? No, nor can we. The world has changed so much.

Since that time, AI has dominated conversations in every industry that relies on gathering or using knowledge, and most others too.

The self-evident commercial opportunity of AI had a powerful impact on platform developers. We saw a flood of new tools, endless hype, and constant promises about speed and automation from our software peers.

It rapidly became clear that the real question was never whether to use AI for qual research, but how. Fast forward to today, and researchers face a harder question still: what is actually working, and working well?

The qual industry finds itself at a tipping point. To shape its future, it must balance the promise of AI, the pitfalls of misuse, and the real progress already being made.

The journey from AI hype to AI reality

The arrival of LLMs created a wave of change. Suddenly, mundane tasks like transcription, summarisation and reporting seemed as though they should be effortless, while there were high hopes for AI-assisted idea generation.

Many certainly hoped AI would take care of routine day-to-day tasks so researchers could focus on the value-added stuff: creativity, human interaction, problem solving.

The reality has been less straightforward. Ever since LLMs hit the mainstream, analysts have been warning about the perils of over-reliance. For researchers:

  • Some tools promised instant insight - but delivered pretty shallow outputs.
  • Developers rushed to add AI features before thinking about researcher workflow.
  • The results weren't brilliant - meaning researchers and moderators had to spend a fair amount of time checking and correcting.
  • Clients assumed AI assistance could instantly change the economics and costs of a project - and didn't grasp that there were real risks to manage.

The excitement was real. The challenges were (and still are) just as real.

Facing the challenges

Researchers across agencies, client teams and consultancies share similar struggles.

1. Quality and authenticity

AI can too easily misinterpret nuance, flatten tone, or miss cultural signals buried in written text. Extracting them from other feedback media, such as video, is even harder. For qualitative insight gathering, this is dangerous.

On top of that, fake participants and AI-generated responses make it harder to trust data - and they often slip through unnoticed. Industry bodies have highlighted this: QRCA members have identified fraud and respondent authenticity as major problems, calling for stronger tools and standards to protect data quality.

Qualzy addresses this directly with its Qualzy Protect AI response detection feature, which uses a mix of content and timing signals to detect suspect responses and warn moderators.
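To make the idea of combining content and timing signals concrete, here is a purely illustrative sketch. The thresholds, phrase lists, and function names are hypothetical assumptions for demonstration, not Qualzy Protect's actual logic:

```python
# Illustrative only: a toy detector combining content and timing signals
# to flag possibly AI-generated responses. All heuristics are hypothetical.

STOCK_PHRASES = ["as an ai language model", "in conclusion,", "it is important to note"]

def suspicion_score(text: str, seconds_to_respond: float) -> float:
    """Return a 0-1 score; higher means more likely AI-generated."""
    score = 0.0
    # Timing signal: a long, polished answer submitted implausibly fast.
    chars_per_second = len(text) / max(seconds_to_respond, 0.1)
    if chars_per_second > 10:  # faster than plausible human typing
        score += 0.5
    # Content signal: boilerplate phrasing typical of LLM output.
    lowered = text.lower()
    if any(p in lowered for p in STOCK_PHRASES):
        score += 0.3
    # Content signal: very long responses with no informal markers at all.
    if len(text) > 400 and not any(m in lowered for m in ("lol", "tbh", "!", "...")):
        score += 0.2
    return min(score, 1.0)

def flag_for_moderator(text: str, seconds_to_respond: float,
                       threshold: float = 0.6) -> bool:
    """True if the response should be surfaced to a moderator for review."""
    return suspicion_score(text, seconds_to_respond) >= threshold
```

A real system would weigh many more signals and keep the human moderator as the final decision-maker, which is exactly the researcher-first point: the tool warns, the person judges.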

2. Speed versus depth

Of course, agency clients will always demand faster, cheaper service. Internal clients will always want results yesterday. Yet accurate, in-depth understanding still takes time to gather and interpret. Yes, AI can help with summarising, but to produce a true report or analysis a researcher must still check, edit and interpret. Time saved in some areas is often lost again in others.

3. Researcher identity

When AI takes on project setups, suggests probes, and interprets inputs, the question becomes: what is left for the human? The answer is pretty clear: contextual understanding, real empathy, curiosity and interpretation. These skills just cannot be automated - they need to stay at the centre of the work.

We really don't believe AI will ever understand the flippant, tongue-in-cheek or purely humorous comments that can often hide vital emotional undercurrents and motivations.

4. Workload pressure

Every shiny new research tool feature promises relief from something - often centred around speed and efficiency. In reality, some just add to a researcher's workload. Building new research features on top of any of the LLMs requires a lot of effort on the part of the developer - otherwise the training, querying, testing and oversight burden falls onto the researcher. Without care, AI can become just another layer of work rather than a help.

5. Ethics, trust and security

Participants share personal stories that must be handled with care. If AI is used for summarisation or translation, it needs to be transparent to community members and incorporated faithfully into analysis. Trust is at the heart of qualitative research, and it must not be undermined - which also shines a light on data security. Qualzy worked hard to become ISO 27001 certified for good reason.

How the industry is responding

It's great news that professional bodies such as MRS, AQR and QRCA are now drafting rules for responsible AI use. MRS and AQR have both stressed that while AI can accelerate processes, empathy and storytelling must remain central. MRS draft AI guidelines emphasise transparency, consent and protecting participants.

The common theme is simple: AI should support the researcher, not replace them. Hybrid models are starting to show their value - but the researcher still decides what it all means.

There are also a few warnings to heed. As AI transforms the landscape of social research, some worry that qualitative research is drifting into "quasi-quant" territory, with everything reduced by AI analysis to counts, clusters and trends. That risks stripping away much of the human richness that is distinctive of qual research.

Putting researchers first

At Qualzy we have always had a suspicion that some research software AI features were built for show - not really for the needs of researchers. Our approach has always been different, and we call it researcher-first AI.

This means two things:

  • AI should follow the natural flow of research. It should support immersion, reflection, iteration and sense-making. It needs to feel like a partner, not a controller.
  • AI should protect the essentials. Empathy, nuance, participant dignity and personal data security must come first. AI should flag risks and provide transparency - but leave the researcher in charge.

This thinking is what shaped the design of Maizy Chat, our AI research assistant. She is built as a true research co-pilot, not a replacement.

Maizy Chat in action

We hear too often from researchers who feel let down by AI tools. Maizy Chat was built to avoid the traps and irritations they talk about:

  • Context-aware support: She understands project goals, not just raw text.
  • Layered summaries: She provides themes, quotes and confidence markers so researchers can decide.
  • Probe generation: She suggests context-aware follow-ups for moderators.

And Maizy Chat is available from the moment your first responses arrive - during fieldwork, not just after it closes.

Where are we now?

Today, the responsible rollout of AI in research has to centre on helping researchers adopt it properly and get real benefit from it. What responsible means in practice:

  1. Transparency is key: document and communicate AI's role and where human moderators sit.
  2. Hybrid models are ideal: let AI build the project scaffolding, but let real humans interpret and tell the story they uncover.
  3. Ethical guardrails are essential: protect participant privacy and ensure accuracy.
  4. Gradual adoption is best: pilot and test new tools before rolling out widely.
  5. Human-AI skills matter: researchers need training in effective oversight and critical use of AI tools.
  6. Positive culture: AI can't be used effectively in any business unless it's presented and treated as a helper, not a threat.

AI is now a fundamental, inextricable part of qualitative research processes and businesses.

The way we use it will decide whether our field becomes sharper and more valuable - or diluted, commoditised and poorer as good researchers are lost. Used well, AI can be an incredible time-saver and efficiency booster. Used badly, it risks stripping out the meaning of inputs and damaging participant trust.

AI should be a trusted co-pilot, not a crutch. With Maizy Chat, we are showing how researcher-first AI can make projects faster and smarter without losing all the wonderful things that make qual so powerful.

About the author
Paul Kingsley-Smith

Paul Kingsley-Smith is a qualitative research professional with over two decades of experience. He specialises in online research methodology, community design, and bridging the gap between technology and qual practice.

View LinkedIn profile →
See it in practice

Ready to see researcher-first AI in action?

Book a discovery call and we'll show you how Maizy Chat and the full AI feature set work on real research data.