What does the future hold for Conversational AI and Market Research?
In our blog posts over the last couple of months, we’ve provided an introduction to Conversational AI: what it is, its advantages, and how it uses AI to probe for rich feedback from people in their own words and to theme that feedback at speed. We’ve done this through the lens of exploring a few of the ways that inca already uses conversational AI to enhance the process of understanding people. In this final blog of our series, I’m handing over to my colleague and CTO, Josh Seltzer, who will discuss what the future might hold for conversational AI and its application to market research.
As we mentioned, the magic sauce of conversational AI goes by the technical term of natural language processing (NLP), along with techniques from related fields such as information retrieval and statistics. In the last few years, the field of NLP has seen major advances, which some experts argue constitute a paradigm shift in what is possible with language technology. The technology is new and faces plenty of hurdles before it sees widespread adoption across industries, but already tech startups are attracting venture capital and enterprise R&D teams are ramping up, all racing to understand and leverage the competitive edge to be gained as these technologies mature.
Eventually, one could imagine a world of virtual agents regularly conversing with humans, with at least some ability to understand (and even influence) people while emulating the voice of a real human being. While this might sound a bit scary, especially given the growing prominence of political propaganda and false information on the web, even with the latest advances we are still a long way from having language models that can hold deep, nuanced conversations on their own. To keep things a bit more grounded, for this blog I’d like to focus on some concrete examples of how conversational AI can transform market research even within the next decade.
Quantitative Research
Although some platforms have already started to move towards an increased focus on text analytics and other NLP-powered features such as sentiment analysis, the industry has seen very little use of conversational AI capabilities to date. While it is probably safe to say that humans will always lead the research design process, in terms of deciding the research objective and relevant demographics, the traditional paradigm of devising and analyzing fixed questionnaire structures will be challenged by conversational AI. As we have seen in the previous weeks, a conversational agent can not only increase engagement but also elicit richer information from respondents.
Conversational AI will take on the role of a virtual moderator, providing an extra layer of interaction that brings surveys closer to in-depth (or semi-scripted) interviews.
Virtual moderators will be trained to detect and respond to fundamental consumer motivations and opinions such as emotions, brand awareness or perception, impressions and judgments of products and services, and expected behaviour. Virtual moderators will use conversational cues to pick up on these dimensions, but will also proactively probe consumers to explore and clarify their perspective (see the sketch below).
Nuance, rationales, and unanticipated opinions will become commonplace expectations of quantitative research, ultimately blurring the lines between quantitative and qualitative.
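To make that probing idea concrete, here is a minimal Python sketch of the kind of rule-based logic a virtual moderator might start from. The probe templates, keyword lists, and word-count threshold are all invented for illustration; a production moderator would rely on trained NLP models rather than keyword rules.

```python
from typing import Optional

# Illustrative probe templates; these are invented examples, not a real product's.
PROBE_TEMPLATES = {
    "short_answer": "Could you tell me a bit more about that?",
    "emotion": "How did that make you feel?",
    "reason": "What makes you say that?",
}

def choose_probe(answer: str) -> Optional[str]:
    """Pick a follow-up probe for a respondent's open-ended answer."""
    lowered = answer.lower()
    if len(answer.split()) < 5:                # very brief answers get a generic probe
        return PROBE_TEMPLATES["short_answer"]
    if any(word in lowered for word in ("love", "hate", "annoying", "frustrated")):
        return PROBE_TEMPLATES["emotion"]      # emotional language invites elaboration
    if "because" not in lowered:               # no rationale offered yet
        return PROBE_TEMPLATES["reason"]
    return None                                # answer is already rich enough

print(choose_probe("It was fine."))            # -> "Could you tell me a bit more about that?"
```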
Transforming Qual
With COVID-19 having forced qualitative research online, digital qualitative research has grown and this trend is expected to continue. Although qualitative research will never lose its quintessential human dimension, the digital nature of online interviews and focus groups provides ample opportunities for conversational AI to improve researchers’ workflows.
Moderation Assistant — AI will increasingly be used to automate the tedious and painful parts of digital qualitative research, for example prompting participants to provide more information or asking common follow-up questions.
Tagging / Analysis / Reports — basic features like sentiment and keywords will be replaced by more sophisticated thematic and semantic analyses of conversations, allowing researchers to derive and aggregate insights across many different conversations (a simple sketch follows after this list).
Multimodal machine learning — this is a hot area of research which will extend conversational AI beyond the text-or-voice modalities, allowing truly integrated analyses of human facial expressions, vocal cadence and intonation, and the semantic meaning of utterances to produce a holistic understanding of consumers at scale.
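As a concrete illustration of that thematic analysis, here is a minimal sketch that groups open-ended responses by semantic similarity. It assumes the sentence-transformers and scikit-learn libraries; the model name, example responses, and cluster count are illustrative choices, not a prescribed pipeline.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Invented example responses standing in for open-ended survey answers.
responses = [
    "The checkout process was confusing",
    "I couldn't figure out how to pay",
    "Delivery was fast and the packaging was great",
    "My order arrived a day early, nicely packed",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder
embeddings = model.encode(responses)              # one vector per response

# Group semantically similar responses into candidate themes.
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(embeddings)
for label, text in zip(kmeans.labels_, responses):
    print(label, text)
```

In practice a researcher would then review and name each cluster; the point is that the grouping happens across many conversations at once, rather than one transcript at a time.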
Social Media
Conversations on social media constitute a huge portion of the data that large language models are trained on.
Social media monitoring is already a key tool for marketers seeking to understand consumer opinions (and dissent).
As NLP advances, we can expect sentiment models applied to social media data to become more accurate and nuanced, resulting in a much better understanding of the opinions people voice online and what they mean for brands: for example, a more accurate read on the emotions behind the language people use.
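To give a rough sense of where the baseline sits today, the sketch below scores two invented posts with an off-the-shelf sentiment pipeline from the Hugging Face transformers library; the nuance described above would go well beyond this kind of binary positive/negative output.

```python
from transformers import pipeline

# Downloads a default pretrained sentiment model on first use.
classifier = pipeline("sentiment-analysis")

# Invented example posts for illustration.
posts = [
    "Absolutely loving the new flavour, well done!",
    "Third time my delivery has been late. Done with this brand.",
]

for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {post}")
```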
Fraud, Fraud Detection and Simulated Conversations
Large language models such as GPT-3 make it possible to generate texts that are convincingly human (at least, until held up to a fairly high level of scrutiny).
The downside is that the technology is ripe for abuse by fraudulent respondents and, more worryingly, at scale by bot farms. Though it may still be prohibitively expensive to abuse large language models in this manner today, in the very near future fake respondents will be able to use NLP to submit plausible-sounding open-ended responses.
To counteract the above, researchers are hard at work developing methods for detecting when text has been generated by a language model. In the interim we may see more use of methods such as asking respondents a nearly identical pair of questions at the start and end of a survey, to ensure that their answers are consistent (logical inconsistency and limited long-term memory are among the major constraints of current language models).
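For illustration, here is a minimal sketch of how that paired-question check might be automated by comparing the semantic similarity of the two answers. It assumes the sentence-transformers library; the model choice is illustrative, and since embedding similarity is only a rough proxy for consistency, a production system might add a natural language inference model to detect contradiction directly.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder

def consistency_score(first_answer: str, second_answer: str) -> float:
    """Cosine similarity between answers to near-identical questions asked
    at the start and end of a survey; unusually low scores suggest a bot or
    an inattentive respondent and warrant review."""
    embeddings = model.encode([first_answer, second_answer])
    return util.cos_sim(embeddings[0], embeddings[1]).item()

# Invented example answers: a respondent contradicting themselves.
score = consistency_score(
    "I mostly shop online because it saves me time.",
    "I rarely shop online; I much prefer visiting stores in person.",
)
print(f"similarity = {score:.2f}")  # a low score here would flag the respondent
```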
It’s possible that companies will emerge offering ‘respondent-less’ market research: a proliferation of low-cost services for simulated “panels”, perhaps called “aggregate personas”, built on language models trained on specific demographics of participants. We should be wary of such services: the illusion of real-sounding conversations specific to your research objectives could be created with the click of a button, but it’s important to understand that these are statistically plausible inventions based on existing data (typically scraped from the internet). In other words, language models can generalize from conversations they have previously seen, but if your research objective is unique or nuanced then you need genuine human understanding and judgment.
What does this mean for market researchers?
Statistical and analytical methods will need to be extended to incorporate insights derived from open-ended conversations, in order to make sense of huge amounts of conversational data.
Blended methodologies which interweave quantitative and qualitative insights will become the norm.
The industry of conversational design, which has already grown to support customer support agents and other uses of conversational AI, will bleed over into market research: research will increasingly involve designing dynamic conversations around targets of interest.
The impact will be deeper, richer understanding of people and, consequently, a stronger base of insight for businesses to make better decisions. This can only serve to strengthen the importance of research and insight to organisations.
That’s it for now, we hope you’ve enjoyed our series of blogs about conversational AI. We’re delighted to say that we’ll be bringing the blogs together in the next few weeks into an eBook, which will be available via the fantastic Insight Platforms. We’ll post on LinkedIn when it’s available.