Are you being catfished by your consumer research?

Michael Nestrud, Ph.D.
9 min read · Mar 4, 2021


Catfishing traditionally means pretending to be someone else online (and is typically associated with false-pretense online dating). If you are led to believe your consumer recruit is exactly who you specified, but in reality it isn't, you are being catfished. How do you know if this applies to you? It probably does. Read on!

I specialize in a particular type of consumer research, taste tests, which are themselves a special type of product testing. A few years ago I was reviewing open-ended responses as they came in during a live consumer taste test. The response was triggered to explain a low rating on a previous measure. In it, the consumer said, “I have never purchased [product] before and only tasted it once before today.” Our recruiting specifications explicitly required frequent usage of the product; this person should never have made it into the taste test. I kicked them out of the test on the spot and told the agency that my client would not be paying for that consumer. Then I saw it happen again, and again. Panic set in. On the fly, I implemented a series of quality control procedures, both qualitative and quantitative. By the end of the study, I had removed close to 15% of the respondents. The stakes were high, too: the results were to be used to inform potential M&A activity. What if I hadn't looked? Do you?

My subsequent investigation into the matter led me to believe that the single biggest risk to consumer research is the accuracy and integrity of the consumer recruit. Without the right consumer, nothing else matters. I further believe this isn't well known. As a consultant, I have found that it isn't a strong selling point to tell potential clients that I implement measurable systems to ensure quality respondents; the assumption is that this is already happening. I assure you: if you aren't specifically aware of how QC is happening, chances are it isn't happening in a way that protects your interests, if at all.

This article will cover the following hypothesized reasons and propose solutions:

  1. The Focus Groupie Effect
  2. The Downfall of Malls
  3. Monetary incentivization against quality

The Focus Groupie Effect

The focus groupie effect is one that many people who own a panel or have attended panels have seen. This is the consumer who scours the internet for survey panels to join. They've participated in tens or hundreds of studies. In a written survey they are harder to identify, but in focus groups they are identifiable by their “leadership”: trying to influence other participants, and a desire to help the company by speaking for everyone, not only themselves. They might even start talking to the people behind the mirror. Good moderators can work around the Focus Groupie, and good researchers can identify them and remove them from quantitative and long-form surveys. The problem is that in both circumstances you've wasted a costly seat, and if they aren't identified, you've tainted your results unknowingly.

The Downfall of Malls and other Suburban Research Centers

If you've ever contracted with a research firm that doesn't own a panel with the characteristics or physical location you need, your work has probably been subcontracted to a panel provider in a mall at some point in your career. This isn't always transparent; you have to ask where the people come from. Start doing that. (If they tell you it's proprietary, run away.)

At peak mall culture in the US, mall recruiting firms thrived. Thanks to the massive amount of foot traffic, they leased off-the-beaten-path units in malls and other highly trafficked centers, seemingly always in a back hallway, at relatively low cost, and then used that foot traffic to intercept people and either survey them on the spot or invite them back to the facility to sit down for lengthier 1:1 interviews, focus groups, surveys, taste tests, or other research. They also used intercepts to fill their own panels with tens of thousands of consumers whom they profiled and could call on for research. Research firms across the US subcontracted (and still subcontract) to these agencies. There was always a large influx of new consumers, and the people represented were truly a cross-section of America.

Malls are in decline, and this impacts in-person research significantly

You can start to see the problem. As malls declined, so did the health of these panels. Days now go by without research. The staff comes under immense pressure to conduct research quickly and efficiently. Quality control slips due to short-staffing or poorly trained staff. Panels fill with respondents who are never going to do research, and it becomes harder and harder to find people. The cost of a poor-quality panel is passed on to the researcher via the “incidence” metric: the fraction of contacted panelists who actually qualify for a study. Incidence is driven by two things: the screener restrictions and the quality of the panel. If incidence is low, research costs increase, no matter the reason.
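To make the cost mechanics concrete, here is a minimal sketch of how the number of contacts needed, and therefore the recruiting budget, scales inversely with incidence. All the numbers and the cost model are hypothetical, for illustration only:

```python
# Illustrative only: how low incidence inflates recruiting cost.
# The per-contact cost model and all figures below are hypothetical.

def recruiting_cost(completes_needed, incidence, cost_per_contact):
    """Expected cost to reach a target number of qualified completes.

    incidence: fraction of contacted panelists who qualify (0 to 1).
    """
    contacts_needed = completes_needed / incidence
    return contacts_needed * cost_per_contact

# A healthy panel (40% incidence) vs. a degraded one (10%),
# for the same 150-complete study:
healthy = recruiting_cost(completes_needed=150, incidence=0.40, cost_per_contact=5.0)
degraded = recruiting_cost(completes_needed=150, incidence=0.10, cost_per_contact=5.0)
print(healthy)   # 1875.0
print(degraded)  # 7500.0
```

Same study, same screener, but a panel whose incidence has slipped from 40% to 10% quadruples the recruiting bill, which is exactly the cost that gets passed along to the researcher.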

The incentivization against quality

“Follow the money” is good advice, and it applies here. I'm talking about unconscious bias and the misalignment of priorities, not overt scamming.

The client-side researcher is paid to deliver research (really, a report) that delivers against objectives and helps make some sort of business decision, on time and on or under budget. The research agency is paid to manage the research and create the report. The panel provider is paid to get “completes” as fast as possible with as little staff as possible. The panel participants are paid to attend studies and answer questions. Everyone gets paid whether the research is good or bad, and good research costs more money.

Further, and this is just a point of education: agencies often take margin on subcontracted panel costs, as high as 100-150% over the actual cost of the research. Almost everyone costs out research this way, but to me it creates an uncomfortable conflict of interest because it threatens accountability (I only do it when the client specifically requests it).

What incentives are in place for ensuring quality? Everyone SAYS that they deliver quality. But with the entire system rigged against it, don’t believe it. Do the following.

Survey Quality Control

I have had lengthy conversations with on-site managers, off-site facility leadership, and client agencies, and I believe the people who work at and with these facilities are doing the best they can within a less-than-perfect system. Owning both a panel and facilities is a high-cost, low-margin business, which creates a race to the bottom on costs. Further, if the client is happy at the end of the day, and that is the metric for success, what they don't know won't kill them, right? My advice is to not outsource the protection of the integrity of your research. You can identify and implement a few simple systems to help everyone, or insist that those you work with do the same.

This requires some up-front conversations letting your research partners know what you are doing and that you are happy to pay for accuracy but not for poor quality. Here are some things you can do:

Attend at least day 1 of your tests

Day 1 is when the test is new and the staff (often hourly) is new, and it carries the most risk. It is best if you are there. Get involved. Give the research brief personally to the staff. Watch the setup, watch the flow, watch the responses come in live. Take occasional walks around the research room and watch respondents fill out your survey. Check on the stimuli. Look out for agencies that push back against observers. They will tell you that the presence of a client means they can't run two studies at once, and charge you extra to disincentivize you. They trust hourly workers at minimum wage to handle confidential products all the time, but not a client covered under an NDA? They will tell you they have to hire staff to bring you lunch. Do you really need someone to order Cheesecake Factory for you and fill a mini fridge with old cans of Squirt? Resistance like this is a sign that attention is in the wrong place, and a huge red flag. You need to be there, or send someone on your behalf who is transparent and whom you trust.

Ask for a re-screen on-site

Focus Groupies are a little less likely to game the system in person. But this isn’t perfect — I’ve personally witnessed multiple agencies skip this step, even when required. If I didn’t personally insist on attending taste tests, how would I know? I prefer the next recommendation and implement it in all of my surveys.

Build the screener into your survey

Everybody hates this one, which is why you need it. It gives you back control of the screening. On a recent taste test, I picked out the most critical screener questions and implemented them at the beginning of the survey. If respondents answered one of these incorrectly, they were branched to an exit screen and a staff member escorted them out (doing this at the beginning also allows the facility to not pay them, which they appreciate, to an extent). Between this and post-hoc analysis (below) I kicked out more than 10% of respondents in a recent survey. It was close to 30% for the “make-up” panelists, the ones brought in last minute to fill no-shows. Yikes. My response to the pushback from the facilities is that it is not up to me or my client to pay for a low-quality panel.
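The branching logic itself is simple, whatever survey platform you use. As a rough illustration (the question IDs and qualifying answers below are invented for the example), an in-survey screener amounts to checking each early answer against a set of allowed responses and exiting on the first failure:

```python
# Hypothetical sketch of an in-survey screener with an exit branch.
# Question IDs and the qualifying answers are made up for illustration;
# real criteria come from your recruiting specifications.

REQUIRED = {
    "q_purchase_frequency": {"weekly", "monthly"},  # must be a frequent buyer
    "q_category_user": {"yes"},                     # must use the category
}

def screen(responses):
    """Return (passed, failed_question_ids) for one respondent's answers."""
    failed = [q for q, allowed in REQUIRED.items()
              if responses.get(q) not in allowed]
    return (len(failed) == 0, failed)

# A respondent who has never bought the product fails immediately and
# would be branched to the exit screen before tasting anything:
passed, failed = screen({"q_purchase_frequency": "never",
                         "q_category_user": "yes"})
print(passed, failed)  # False ['q_purchase_frequency']
```

The point is not the code but the placement: because the check runs at the top of your own survey, a disqualified respondent is caught before they touch a sample, regardless of what the facility did (or didn't do) at the door.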

Implement post-hoc data analysis

One strategy, the use of fake questions (e.g. about past purchase of fictitious products), is somewhat well known, though I don't love it because authentic people have poor memory for brands. My favorite is simply reading verbatims for red flags (“I have never purchased [category]” when purchase is required is an all-time favorite) and implementing a screening strategy based on a metric called the coefficient of variation (CV). The CV is the ratio of the standard deviation to the mean (σ/μ). Calculate this at the individual respondent level for a group of related questions; in the sensory world, for example, calculate the CV separately for all of your Likert measures, your overall liking measures, and your just-about-right measures. Outliers on the high end are likely responding randomly (or in the classic “Christmas tree” response style, zig-zagging answers down the page), and outliers on the low end, with a CV near 0, are straight-lining. Screening out consumers should be done carefully, however, as it is a recipe for survey manipulation via unconscious bias. I don't like screening people out unless they obviously have problems.
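Here is a minimal sketch of the CV screen, assuming responses on a numeric scale; the three respondent profiles are fabricated for illustration:

```python
import numpy as np

def respondent_cv(ratings):
    """Coefficient of variation (sigma / mu) for one respondent's answers
    to a group of related scaled questions (e.g. all Likert items)."""
    ratings = np.asarray(ratings, dtype=float)
    mean = ratings.mean()
    if mean == 0:
        return np.nan  # CV is undefined for a zero mean
    return ratings.std(ddof=0) / mean

# Three hypothetical respondents answering nine 1-9 Likert items:
straight_liner = [7] * 9                        # CV = 0: same answer every time
engaged        = [6, 7, 5, 8, 6, 7, 6, 5, 7]    # modest, plausible variation
random_clicker = [1, 9, 2, 8, 1, 9, 3, 9, 1]    # extreme zig-zag pattern

for respondent in (straight_liner, engaged, random_clicker):
    print(round(respondent_cv(respondent), 2))
```

Computed across all respondents, the distribution of CVs makes the outliers easy to flag: the straight-liner sits at the bottom (CV of exactly 0) and the random or “Christmas tree” responder at the top, well above the engaged respondent in between. Per the caution above, treat these as flags to investigate, not automatic removals.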

Start poking around price transparency

You’ll be shocked at the resistance. I personally develop contracts that align my own economic incentives with my client’s. Is your research expensive because you’re getting that much value? Or do you think it has more value because you’re getting fleeced?

Are you panicking yet?

I founded MNC to help organizations of all sizes and stages approach consumer research, food choice, and especially taste tests with the best scientific and culinary design, matched to the needs of the modern, connected organization and consumer. I have numerous scientific publications and presentations but am most proud of helping bring new thinking to marketing and R&D organizations. Prior to MNC, I matriculated at and taught at the Culinary Institute of America, led the sensory team at both a $2B FMCG organization and a small boutique market research firm, and completed a postdoc at the U.S. Army Natick Labs.


Michael Nestrud, Ph.D.

Founder of MNC; Sensory Scientist; Culinary Psychologist; Connects organizations to the hearts, minds & taste buds of their consumers. www.mnestc.com