Artificial Intelligence (AI) is a hot topic across industries, and market research is no different. In this article, Dan Buckley and Jacob Marshall discuss how AI affects recruitment for market research projects and how we can protect our studies from respondents using AI.
Large Language Models (LLMs) like ChatGPT and other 'AI' solutions have flooded into various industries at an unprecedented rate, and market research is no exception. We have already seen many market research companies start to use LLMs for automating transcription, developing PowerPoint slides and graphics, and quickly jump-starting desk research. LLMs can be highly beneficial: they allow researchers to analyze large volumes of data in less time and to brainstorm alternatives they wouldn't have otherwise come up with. There are drawbacks and considerations relevant to any industry using these tools (keeping data confidential, crediting those whose data was used in the model, and verifying the accuracy of responses, to name a few), but market researchers and recruiters face a unique challenge with the rise of these tools: the respondents we are recruiting can use the language models just as easily as we can.
Fraudulent respondents have always been a significant issue in the market research industry, and with the help of LLMs it is becoming easier than ever for fake respondents to bypass the traditional security techniques that keep them out of our research studies. In one recent study on a rare condition, we had an extended conversation with a 'patient' about their diagnosis story, how their condition impacted their family, and which treatments gave them the best results. On the surface, they seemed like a perfect fit for the study. At least they would have been, had they not been creating these stories with AI assistance. In another study, we believe a respondent used AI image generation to create documentation of their specialty. Researchers and recruiters must take the necessary steps to ensure they are connecting with real respondents, not those aided by LLMs. What, then, do we do to prevent this new rise in fraudulent respondents?
Double down on our traditional techniques for confirming respondent legitimacy. While these techniques won't be sufficient in and of themselves, they will support your other efforts. We have outlined below how some of these techniques are helpful specifically for combating AI-powered respondents.
Check for things you would expect from a machine. These can show up in speech patterns (like irregular time gaps before answering questions) or in content (incorrect or odd information). Here are a few suggestions for checking for these signals.
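To make the timing idea concrete, here is a minimal sketch of how a recruiter's screening tool might flag machine-like answer delays. Everything here is illustrative: the function name, the input (per-question delays you would have to log yourself), and the thresholds are assumptions, not calibrated values from any real screening system.

```python
from statistics import mean, stdev

def flag_suspicious_timing(delays_seconds, min_std=1.5, max_mean=45.0):
    """Return True if the gaps before a respondent's answers look machine-like.

    delays_seconds: seconds between each question being shown and the answer
    being submitted, for one respondent. Thresholds are illustrative only.
    """
    if len(delays_seconds) < 3:
        return False  # too little data to judge fairly
    uniform = stdev(delays_seconds) < min_std  # near-identical gaps every time
    slow = mean(delays_seconds) > max_mean     # long pauses (generating/pasting?)
    return uniform or slow

# Near-identical ~30 s gaps before every open-ended answer: flagged
print(flag_suspicious_timing([30.1, 30.4, 29.8, 30.2]))  # → True
# Natural, varied human pacing: not flagged
print(flag_suspicious_timing([4.0, 12.5, 7.1, 22.0]))    # → False
```

A flag like this should prompt a human follow-up question, not an automatic rejection, since legitimate respondents can also pause for long stretches.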
These are just a few of the things we are already doing to get ahead of respondent use of AI and language models. Though it takes more time and more thoughtfulness, we believe it is worthwhile to achieve the best results.
It is worth mentioning that as AI language models become increasingly ubiquitous, it would not surprise us to see even some legitimate respondents using these tools to support their answers. Differentiating those respondents from the 'fakers' will have to be a topic for another day. And who knows, maybe one day we will be recruiting an AI to give us insight into its preferences and decisions.
Email [email protected] or use our contact us form below for bids, questions or further information.