AI and Market Research: How do we prevent the rise in fraudulent respondents?

Dan Buckley, 26 July 2023

Artificial Intelligence (AI) is a hot topic in every industry, and market research is no different. In this article, Dan Buckley and Jacob Marshall discuss how AI impacts recruitment for market research projects and how we can protect our studies from respondents using AI.


Large Language Models (LLMs) like ChatGPT and other 'AI' solutions have flooded into various industries at an unprecedented rate, and market research is no exception. We have already seen many market research companies start to use LLMs to automate transcription, develop PowerPoint slides and graphics, and quickly jump-start desk research. LLMs can be highly beneficial: they allow researchers to analyze large volumes of data in less time and help brainstorm alternatives they wouldn't otherwise have come up with. While there are drawbacks and considerations relevant to any industry using these tools (keeping data confidential, crediting those whose data was used in the model, and verifying the accuracy of responses, to name a few), market researchers and recruiters face a unique challenge: the respondents we are recruiting can use these language models just as easily as we can.

Fraudulent respondents have always been a significant issue in the market research industry, and with the help of LLMs it is becoming easier than ever for fake respondents to bypass the traditional security techniques that keep them out of our research studies. In one recent study on a rare condition, we had an extended conversation with a 'patient' about their diagnosis story, how their condition impacted their family, and which treatments gave them the best results! On the surface, they seemed like a perfect fit for the study. At least they would have been, had they not been creating these stories with AI assistance. In another study, we believe a respondent used AI image generation to create documentation of their specialty. Researchers and recruiters must take the necessary steps to ensure that they are connecting with real respondents, not respondents aided by LLMs. What, then, do we do to prevent this new rise in fraudulent respondents?

Double down on our traditional techniques for confirming respondent legitimacy. While these techniques won't be sufficient on their own, they will support your other efforts. I have outlined below how some of these techniques are particularly helpful for combating AI-powered respondents.

  • Verbally rescreen every respondent. Before moving forward with an interview, recruiters can conduct a verbal rescreening to re-ask key open-ended questions and compare the answers to the respondent's previous ones. Identical, word-for-word responses and drastically different responses are both red flags.
  • Confirming respondents' details is still important as well. Reviewing IP addresses, phone numbers, mailing addresses, and reported locations can surface helpful flags. Finally, where appropriate, requiring respondents to verify their identities for incentive payments can help ensure that respondents are not fake.
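
The rescreen comparison described above can be partly automated. The sketch below is a minimal illustration using Python's standard-library `difflib` to compare a respondent's original screener answer against their verbal rescreen answer; the function name and both thresholds are assumptions chosen for illustration, not calibrated values.

```python
from difflib import SequenceMatcher

def rescreen_flag(original, rescreen, high=0.95, low=0.30):
    """Compare a screener answer to its verbal rescreen and return a
    red-flag label, or None when neither threshold is tripped.
    The 0.95 and 0.30 thresholds are illustrative assumptions."""
    ratio = SequenceMatcher(None, original.lower().strip(),
                            rescreen.lower().strip()).ratio()
    if ratio >= high:
        return "near-identical (possibly scripted or pasted)"
    if ratio <= low:
        return "drastically different (possibly fabricated)"
    return None

# A word-for-word repeat trips the 'near-identical' flag.
print(rescreen_flag("I was diagnosed in 2019 after months of fatigue.",
                    "I was diagnosed in 2019 after months of fatigue."))
```

In practice a recruiter would run a check like this over every key open-ended question; a mid-range similarity, meaning a consistent story retold in fresh words, is what an honest rescreen usually looks like.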

Check for things that you would expect from a machine. These can show up in speech patterns (like irregular time gaps before answering questions) or in content (incorrect or odd information). Here are a few suggestions for checking for these things.

  • Analyze responses for consistency. AI has a long way to go before it can replicate consciousness, so it will often give inconsistent answers throughout screening. We can use AI-powered tools to monitor responses for consistency and identify the suspicious patterns mentioned above. For example, text analytics tools can analyze responses and flag inconsistencies or contradictions in respondents' answers.
  • Add AI-check questions. These can vary, but one thing we have found helpful is questions that a real person probably wouldn't have the 'right' answer to, but that an AI model might answer with ease. What these questions are will vary case by case, but they could include questions about the nature of their condition or medication.
  • Ask the model yourself first. If you are already familiar with the types of responses a large language model would give, you will know what to keep an ear out for as you speak with potential respondents.
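
One machine-like pattern mentioned above, irregular time gaps before answering, can also be screened programmatically. The sketch below flags answers that arrived faster than a plausible human pace; the function name, the data shape, and the 0.2-seconds-per-word threshold are all assumptions for illustration rather than calibrated figures.

```python
def latency_flags(answers, min_secs_per_word=0.2):
    """Flag answers that arrived faster than a plausible human pace.
    `answers` is a list of (text, seconds_to_respond) pairs; the
    pace threshold is an illustrative assumption."""
    flags = []
    for i, (text, secs) in enumerate(answers):
        words = len(text.split())
        if words and secs / words < min_secs_per_word:
            flags.append((i, "answered implausibly fast for its length"))
    return flags

# Flags the long answer that arrived in only 1.5 seconds.
print(latency_flags([
    ("Yes.", 2.0),
    ("My diagnosis journey began in 2019 when I first noticed "
     "persistent fatigue and joint pain that my doctor initially "
     "dismissed.", 1.5),
]))
```

The same idea applies to audio interviews if response timestamps are logged: a long, polished answer delivered almost instantly is worth a second look.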

These are just a few of the things we are already doing to get ahead of respondent use of AI and language models. Though it may take more time and more thoughtfulness, we believe it is worthwhile to achieve the best results.

It is worth mentioning that as AI language models become increasingly ubiquitous, it would not surprise me to see even some legitimate respondents using these tools to support their answers. But differentiating those respondents from the 'fakers' will have to be a topic for another day. And who knows, maybe one day we will be looking to recruit an AI to give us insight into its preferences and decisions.

 


Contact us today

Email [email protected] or use our contact us form below for bids, questions or further information.
