
"...There's a lot we can do as research practitioners to maximize the effectiveness of explicit, conscious techniques (such as the common survey), to make them as predictive as possible..."
In my last post, I talked about the importance of continuing to strive, as researchers, to measure and understand non-conscious decision making, while noting that many of the current approaches out there are over-hyped and not quite ready for prime time. I'm also getting the sense that, due to the attention behavioral economics principles have been getting lately, some marketers and researchers have reached the point where they don't believe you can get any accurate information about consumer behavior by asking people questions. This is absolutely not the case. There's a lot we can do as research practitioners to maximize the effectiveness of explicit, conscious techniques (such as the common survey), to make them as predictive as possible and avoid many of the pitfalls that drive their weaknesses. In this post I provide six tips that can go a long way toward doing just that. In general, the key theme is to make surveys as behavioral as possible and to avoid relying on people's self-stated perceptions of how they think they would act in certain situations.
1. Where applicable, use techniques that simulate the shopping environment
A good example of this would be choice-based conjoint, where you try to mirror the consumer experience as closely as you can by including a competitive environment and simply having people shop. This approach forces people to make trade-offs just as they would when shopping, and it doesn't rely on stated measures of what people say they would do under different circumstances. It also leverages System 1 subconscious decision making, because it's simply a shopping exercise and you're not asking people to think about why they are making their decisions.
You must keep in mind, though, that the way you set up the design and present it to respondents will create biases and could affect their decisions without them knowing it. My next post will go into this in more detail and offer conjoint best practices.
If you want even more realism, you can use simulated test market approaches that literally put consumers in a virtual shopping environment. This gets about as close as you can get to a live market test but is much more controllable, much cheaper, and has been validated.
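To make the trade-off mechanics behind choice-based conjoint concrete, here is a minimal sketch of how choice shares are typically simulated from estimated part-worth utilities under a multinomial logit model. The attributes, levels, and utility values below are hypothetical, purely for illustration, not real study data.

```python
import math

# Hypothetical part-worth utilities, as would be estimated from a
# choice-based conjoint exercise (illustrative numbers, not real data).
partworths = {
    ("brand", "Brand A"): 0.6, ("brand", "Brand B"): 0.2,
    ("price", "$3.99"): 0.5,   ("price", "$4.99"): -0.3,
}

def utility(product):
    # A product's total utility is the sum of its attribute part-worths.
    return sum(partworths[(attr, level)] for attr, level in product.items())

def choice_shares(products):
    # Multinomial logit rule: each product's share is exp(utility)
    # normalized across the competitive set, mimicking the trade-offs
    # a shopper makes against the whole shelf.
    exp_u = [math.exp(utility(p)) for p in products]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# A two-product "shelf" for the simulated shopping task.
shelf = [
    {"brand": "Brand A", "price": "$4.99"},
    {"brand": "Brand B", "price": "$3.99"},
]
shares = choice_shares(shelf)
```

Note how the simulator never asks "how important is price to you?" — importance falls out of the choices themselves, which is exactly the behavioral quality the tip describes.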
2. Try to focus on the present and not the past
Despite the criticisms of explicit/conscious techniques you may hear, behavioral scientists have confirmed that people are pretty good at telling us what they like or how they feel about a product or idea at the present time. However, we've learned that people are often not very accurate in explaining why they acted the way they did in the past, or in predicting how they would act in certain future situations. This is because people are oblivious to the priming, biasing, and emotional factors that feed their quick System 1 decisions. They will post-hoc rationalize past decisions to make them seem sensible. The brain does not want to admit it's acting irrationally, so it's very good at coming up with explanations and justifying them to itself.
3. Use smart test/control designs and benchmarks
This is related to the last tip. Rather than framing the study around how a stimulus (message, ad, product, image, etc.) makes people feel about a brand or product and how they would act, try to bring it to the present: show them the stimulus, ask them how they feel right now, and then compare the results to a control cell that captures the attitudes of a similarly recruited group that does not see the stimulus. Use relevant benchmarks of existing products, messages, brands, etc. to help assess the potential of the stimulus. If possible, try to hide what you're testing from respondents. A pre-post design is an option instead of test-control, but the "pre" questions can themselves introduce bias.
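A minimal sketch of the test/control comparison described above, using hypothetical 5-point purchase-interest ratings (illustrative data only; real cells would be far larger, and you would run a proper significance test rather than eyeballing one statistic):

```python
import math
from statistics import mean, stdev

# Hypothetical 5-point purchase-interest ratings (illustrative data):
# the test cell saw the stimulus; the matched control cell did not.
test_cell    = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]
control_cell = [3, 3, 4, 2, 3, 4, 3, 2, 3, 3]

def welch_t(a, b):
    # Welch's t-statistic: gauges whether the lift from the stimulus
    # exceeds what chance variation between the two cells would explain.
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

# The "lift" is the whole point of the design: stimulus effect relative
# to an otherwise-identical group, not relative to people's predictions.
lift = mean(test_cell) - mean(control_cell)
t_stat = welch_t(test_cell, control_cell)
```

The design choice here is the one the tip argues for: the stimulus's effect is inferred from a between-group difference measured in the present, not from respondents' own guesses about how the stimulus would change them.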
4. Use derived importance or max-diff rather than stated importance techniques
Stated importance is easy and tempting, and it can be useful to an extent, but it tends to overemphasize functional benefits over emotional ones, such as the impact of the brand. A better approach is derived importance: a statistical model that correlates brand or product attribute ratings with an overall measure, such as purchase interest. This way you don't have to ask people what is important; you can derive it from how they feel about products or brands in the category. Max-diff is another great alternative to stated importance that forces people to make trade-offs, similar to conjoint. The only drawback of these techniques is that they require greater survey length than simple stated importance, so you have to plan for them.
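As one simple way to sketch derived importance, the example below correlates each attribute's ratings with an overall purchase-interest measure, respondent by respondent. The attributes and ratings are hypothetical illustrative data, and a real analysis would typically use a regression or driver model across a much larger sample; a plain Pearson correlation stands in for the statistical model here.

```python
from statistics import mean

# Hypothetical ratings (illustrative data): each attribute is rated by
# the same six respondents, alongside an overall purchase-interest measure.
ratings = {
    "tastes great":  [5, 4, 5, 3, 4, 5],
    "low price":     [3, 5, 2, 4, 3, 2],
    "trusted brand": [5, 3, 5, 2, 4, 5],
}
purchase_interest =  [5, 3, 5, 2, 4, 5]

def pearson(x, y):
    # Pearson correlation, computed from scratch so the mechanics are visible.
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Derived importance: correlate each attribute with the overall measure
# instead of asking respondents what matters to them.
derived = {attr: pearson(scores, purchase_interest)
           for attr, scores in ratings.items()}
ranked = sorted(derived, key=derived.get, reverse=True)
```

Attributes that move in lockstep with purchase interest surface as important, whether or not respondents would ever say so — which is how derived importance lets emotional drivers like brand compete fairly with functional ones.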
5. Don’t overly rely on people’s memory
Rather than ask people what they did in a certain situation in the past, put them in that situation in the study and assess their behavior. Even better would be to measure the behavior as it happens, which is becoming more possible with the advancement of mobile research and big data methodologies. When designing a study, you might need to use some common sense – is what you’re trying to measure really possible with a survey or is it a stretch?
6. Be careful to avoid bias and priming effects
I know this one is pretty obvious, but it is just so critical, and it is made even more evident by some of the crazy experiments behavioral scientists have come up with. We need to realize that every single question in a survey is biased by every question, stimulus, and piece of copy that comes before it, as well as by the way the question itself is worded. We can't prevent bias, but we can manage it. Do this by putting the most important questions toward the front of the survey and evaluating the flow of information people receive as they move through it. If you make this a priority, you'll spot the issues. I think a lot of researchers just don't think as hard or as often about this as they should.
In summary, let’s keep exploring new ways to leverage new learning in behavioral science to improve our research techniques. Until we find a non-conscious approach that is practical, affordable, and scalable, let’s try to make the most out of traditional approaches. Rather than just criticize them, let’s recognize their limitations but be open about their strengths, and use that knowledge to make them as accurate as possible by using smart and clean research designs and techniques.
About the Author
Rob Riester is Founder and Partner of Peel Research Partners, Inc., a market research firm. Rob leads market research engagements to help companies effectively manage risk and make better business decisions. Find out more about Peel Research Partners.