ChatGPT and Implications for Market Research
What is ChatGPT?
ChatGPT is the latest large language model released by OpenAI, following the earlier models in the same family that also share the name GPT. GPT stands for Generative Pre-trained Transformer. The first part, “Generative”, refers to the type of language model it is: at a basic level, given some prompt, it tries to figure out what text would be most plausible to generate next, based on all the text it has already seen.
Then the “Pre-trained” part refers to the fact that it has been trained on an almost incomprehensible amount of data from across the internet. Finally, “Transformer” refers to the specific neural network architecture it uses, introduced in 2017 and underlying most large language models.
Taking these three things together, you have a model trained on so much data, with such a large capacity for representation, that it’s able to generate text that sounds eminently plausible alongside the human-generated text of the internet. This means that GPT, and ChatGPT in particular, is able to do a very good job of improvisation.
It’s able to take on roles really well. For example, if you give ChatGPT a personality and say, ‘this is your personality, what are you going to say?’, then ChatGPT is very good at adapting. So even if it’s going to make things up (and that’s a big concern), it’s very good at pretending. It’s also very good at repeating plausible things it has memorized that are relevant to the prompt you’ve given it. So essentially, we can think of ChatGPT as a model that’s really good at generating text based on whatever role or persona it adopts from the user’s prompt.
What are the implications of ChatGPT for market research?
A key implication of ChatGPT for market research is the further advancement of Conversational AI as a key methodology. Conversational AI has been talked about for a while within market research as an emerging technology but, until recently, the AI wasn’t capable of sustaining conversations of sufficient quality. So more than anything else, ChatGPT exemplifies the quality of conversation that AI can now achieve, and demonstrates that the technology which will inform the future of market research is already here!
However, there’s a lot more to be done. The personas that ChatGPT can take on might be useful in some situations, but consider the human parallel: plenty of people know how to ask questions within a conversation, yet market researchers, especially qualitative moderators, go through years of professional training and experience to learn to ask the right questions in the right context. Getting technology like ChatGPT to the point where it reliably asks questions that are appropriate, that aren’t leading, and that are context-specific is where further innovation is needed.
So what needs to be done to make the AI effective for market research? That is, to be good at asking questions, as well as answering them?
A lot of what ChatGPT has been trained on, and trained for, is information extraction: people ask ChatGPT questions and it uses the knowledge base accumulated through its training to answer them. But as I said, ChatGPT can also take on personas, so even just giving it the right kind of prompt can nudge it towards asking questions rather than answering them.
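As a minimal sketch of what that kind of prompting can look like, the snippet below uses OpenAI’s Python client to cast the model as an interviewer rather than an answerer. The persona text, model name, and participant message are all illustrative assumptions, not a production setup:

```python
# Minimal sketch: prompting the model to ask questions instead of answering them.
# The system-prompt wording and model name are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {
        "role": "system",
        "content": (
            "You are a qualitative research interviewer. Never answer questions "
            "or offer opinions. Reply only with one short, open-ended, "
            "non-leading follow-up question about what the participant said."
        ),
    },
    # In a real interview loop, the prior turns would accumulate here.
    {"role": "user", "content": "I switched to the new brand because it felt more natural."},
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)
# e.g. "What does feeling 'more natural' mean to you in a product like this?"
```

Even a crude role prompt like this changes the model’s behaviour noticeably, but it knows nothing yet about the research objectives, which is the gap discussed next.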
That said, that’s just the starting point. For the purposes of market research, there is a lot more we would want from an AI interviewer. We want it to be able to incorporate the context of the research project being conducted: not just having some subject-matter knowledge of what’s being discussed, but also understanding the specific research objectives.
We’ve seen this a lot in all the R&D we’ve been doing related to natural language processing: it’s often quite easy to ask some question about what’s being discussed, but that doesn’t mean the question is going to be useful or advance the researcher’s objectives. That’s definitely one of the big challenges: nudging the AI so that it asks questions that are relevant to the research objectives.
That relates to the analogy with human researchers, particularly qualitative researchers: not just anyone can ask questions that get to deep insight. They need to be trained; they need to know how to phrase questions appropriately and how to probe. Something analogous needs to be done to large language models like ChatGPT to make them suitable for asking relevant questions in conversational AI.
Another issue to overcome is that with large language models there’s a certain lack of control: because the model is generative, the user can’t know exactly what it is going to say. Models also have what I call hallucination properties, i.e. they may just make things up, bring up topics which were never discussed, and even put words into research participants’ mouths.
So there’s a danger which needs to be avoided there as well. But back to the analogy: all these things are also possible if a non-expert human is conducting an interview. It’s therefore necessary to augment models such as ChatGPT with the goals of a market researcher and the objectives of a particular project. Doing this in a project-specific, real-time, and cost-effective way is very tricky!
Achieving this takes a lot of R&D on representations that can be integrated with language models such as ChatGPT, so that we can prompt or nudge the model to ask relevant questions. In particular, this means representing what the researcher wants to know within a given market research context, that is, encoding their research objectives in some form. For example, if someone brings up a certain phrase that the researcher is particularly interested in, we can make sure the model asks questions about that specifically.
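To make that example concrete, here is one very simple way such a representation could work: a mapping from phrases of interest to probing instructions, checked against each participant reply before the next model call. The topics and instructions are hypothetical, and a real system would need something far richer than substring matching:

```python
# Minimal sketch: research objectives as phrase -> probing-instruction pairs.
# The topic phrases and instructions are hypothetical, for illustration only.
from typing import Optional

OBJECTIVES = {
    "packaging": "Probe which specific aspects of the packaging shape their impression.",
    "price": "Probe what price point would make them stop buying.",
}

def probe_instruction(participant_reply: str) -> Optional[str]:
    """Return an extra instruction for the interviewer model if the reply
    mentions a phrase the researcher cares about."""
    reply = participant_reply.lower()
    for phrase, instruction in OBJECTIVES.items():
        if phrase in reply:
            return instruction
    return None

reply = "I like the taste, but the packaging feels a bit cheap."
extra = probe_instruction(reply)
if extra:
    # In practice this would be appended to the interviewer's system prompt
    # before the next model call, nudging its question toward the objective.
    print(extra)
```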
There are other features that can be built in. One of the weaknesses of language models is that they don’t have a symbolic representation system. That’s why, for example, they often make arithmetic mistakes: they manipulate numbers as though they were just more language, rather than symbols with their own meaning.
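One common remedy, sketched below, is to delegate arithmetic to a small symbolic layer that evaluates expressions exactly, rather than letting the model generate the answer token by token. This is a generic illustration of the idea, not any particular product’s implementation:

```python
# Minimal sketch: a symbolic "calculator" layer for exact arithmetic, instead of
# letting a generative model produce the digits itself.
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expr: str) -> float:
    """Safely evaluate a basic arithmetic expression via its syntax tree."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(evaluate("17 * 24 + 3"))  # 411, exact every time
```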
The lack of structured representation presents a challenge for researchers wanting to do quantitative analyses. Likewise, it presents a challenge for eliciting information in a structured manner, to ensure that participants are consistently asked for the information of interest to the researcher. To address these concerns, we can also build symbolic representations (you can think of them as opinion networks) which are integrated with language models and informed by market research principles.
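As a rough illustration (the fields, topics, and stance labels here are assumptions for the sketch, not an actual implementation), such an opinion network can be a small symbolic structure that sits alongside the language model, recording what each participant has said about each topic and flagging what still needs to be elicited:

```python
# Minimal sketch of an "opinion network": a symbolic structure kept alongside
# the language model, recording each participant's stance per topic and
# flagging what hasn't been elicited yet. Topics and labels are illustrative.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class OpinionNode:
    topic: str
    stance: Optional[str] = None  # e.g. "positive", "negative", "mixed"
    evidence: List[str] = field(default_factory=list)  # verbatim quotes

@dataclass
class OpinionNetwork:
    nodes: Dict[str, OpinionNode] = field(default_factory=dict)

    def record(self, topic: str, stance: str, quote: str) -> None:
        node = self.nodes.setdefault(topic, OpinionNode(topic))
        node.stance = stance
        node.evidence.append(quote)

    def uncovered(self, required: List[str]) -> List[str]:
        """Topics the researcher needs but the interview hasn't covered yet;
        these can drive the next question the model is prompted to ask."""
        return [t for t in required if t not in self.nodes]

net = OpinionNetwork()
net.record("packaging", "negative", "the packaging feels a bit cheap")
print(net.uncovered(["packaging", "price", "taste"]))  # ['price', 'taste']
```

Because the structure is explicit, the same network can also feed quantitative analysis, such as counting stances per topic across participants, which free-text transcripts alone make difficult.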