Show and Tell: Applying Behavioral Economics to Surveys

“What impact will this ad have on your purchase likelihood?”

“Do you prefer to get a $10 bill credit, or two free on-demand movies?”

“Do you find the current offer attractive, or this new one we are unveiling?”

Have you asked these kinds of questions in surveys? Or have you seen them in other surveys and wondered whether this was the right way to go? You are not alone. Many clients wonder about the suitability of certain types of questions in the standard survey format. And they are right to wonder.

Typical survey questions are good at getting at issues directly. So if we want to know whether a consumer is satisfied with a product, we just ask it outright. Yes, there are plenty of discussions about the number of scale points, anchor-point wording, use of mid-points, etc., but ultimately it is still a direct question. We expect the respondent to understand it, perhaps think a little, and then answer it as best she can. Other questions may require more effort. Ranking a set of features or suggesting a new feature (in an open-ended format) may require deeper thinking on the part of the consumer, but it is still the same basic process. A question is asked directly; the respondent understands the question and its intent, and answers it. To put it another way, the consumer tells us what she thinks.

Now consider the first question we posed – What impact will this ad have on your purchase likelihood? There is more going on with this question than with a direct satisfaction question. We are not asking if the respondent likes the ad, but whether it will influence something else (in this case her purchasing decision). So the respondent has dual tasks – deciding what she thinks about the ad, and then expressing whether it will influence her purchase decision. The former is easy, but the latter is more fraught. Why? Because she now has to expose herself in some way that might make her uncomfortable.

Many people do not want to think (or say) that they are influenced by (trivial) things like advertising. They may even genuinely believe that (“Of course not – what kind of person do you think I am, to be affected by an ad that showed a dancing monkey?”). But we know that reality is more complex than that. Research in Behavioral Economics has shown that purchase decisions are affected by all kinds of environmental stimuli. More importantly, it has shown that people are not conscious of this happening, of how fast it can happen, or of how easily it can happen. Consider a study done to test the impact of the Apple logo. You remember – the one that looks like a bitten apple? People exposed to the Apple logo subsequently behaved more creatively than those exposed to the IBM logo.

Think about that for a second. Mere exposure to a corporate logo (nothing more) actually made people more creative. But, of course, no consumer is going to think that can happen. In other studies, people have purchased more simply after being put in a certain mindset by a simple manipulation (see the shopping momentum effect research by our friends at Yale School of Management). And even if consumers knew that such effects were possible, they would never admit to them.

So, how do we deal with these issues in a survey? We follow what researchers in Behavioral Economics do. Rather than asking consumers to tell us the answer, we ask them to show us. Let’s go back to the first question – What impact will this ad have on your purchase likelihood? Rather than ask this question directly in the survey, let us set up a simple between-subjects experimental design (also commonly known as a monadic design). One group of randomly chosen respondents sees ad treatment A, while a second group sees ad treatment B (or serves as a control cell with no ad treatment). Allocating respondents randomly to these treatments is crucial; otherwise, we cannot control for the impact of extraneous factors. Each group sees only its own treatment, and both are asked the same question – not the one about the influence of the ad on their purchase decision, but a more straightforward question about their likelihood to purchase the product. That is, we show the ad, have them tell us what they would do, and thereby infer the impact of the ad.

Let’s break that down. The middle part about getting respondents to tell us what they think perfectly fits the survey framework (we ask, they understand, think, and respond). But the crucial difference comes in the first and third parts. In the first part we frame the question differently by showing them something they can react to. This is a classic move in Behavioral Economics research, where the question is framed in a way that gets at the researcher’s objective, but is not transparent to the respondent. That is, in our example, the respondent has no idea that another cell (or cells, for that matter) is being tested and hence does not know our true intention. Instead, they respond as they normally would to a survey question. In the third part, we infer the impact of the ad by comparing the relative purchase likelihood scores for the two ad treatments.
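To make the mechanics concrete, here is a minimal sketch in Python of how the data from such a monadic test might be analyzed. Everything specific in it – the cell names, the 400 respondents, the 1-to-10 purchase-likelihood scale, and the two-sample t-test – is an illustrative assumption, not part of the design described above.

```python
import random
from statistics import mean
from scipy.stats import ttest_ind

random.seed(42)  # reproducible simulation

# Hypothetical between-subjects (monadic) ad test. The cell names and
# the 1-10 purchase-likelihood scale are assumptions for illustration.
CELLS = ["ad_A", "ad_B"]
scores = {cell: [] for cell in CELLS}

for _ in range(400):  # 400 hypothetical respondents
    # Random allocation is what lets us attribute any difference in
    # scores to the ad treatment rather than to extraneous factors.
    cell = random.choice(CELLS)
    # Each respondent sees only their own cell's ad and answers the
    # same direct question: "How likely are you to purchase this
    # product?" (1-10). A random number stands in for that answer here.
    scores[cell].append(random.randint(1, 10))

# We infer the ad's impact by comparing the cells, never by asking
# respondents whether the ad influenced them.
t_stat, p_value = ttest_ind(scores["ad_A"], scores["ad_B"])
print(f"Mean purchase likelihood, A: {mean(scores['ad_A']):.2f}")
print(f"Mean purchase likelihood, B: {mean(scores['ad_B']):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

In a real study the scores would come from actual survey responses, and a significance test on the difference between cell means tells us whether the ad moved purchase likelihood – without the respondent ever being asked about the ad’s influence.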

The second question – Do you prefer to get a $10 bill credit, or two free on-demand movies (each valued at about $5)? – has a different problem. If the monetary value of the two options is equal, respondents may react negatively (“Aren’t they the same thing? Why is one option better than the other?”). Again, we know from research that such small things do in fact affect people’s perceptions and behavior. For instance, some people may see more than $10 of value in two free movies, but only if the options are not presented concurrently. So constructing the choice as an experiment can get respondents to show us the true value they place on each option.
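One way to run that experiment is sketched below in Python: each randomly assigned cell sees a single offer in isolation and answers a simple take-rate question, and we compare the take rates across cells. The yes/no framing, the cell size, and the two-proportion z-test are assumptions made for illustration.

```python
import math
import random

random.seed(7)  # reproducible simulation

# Hypothetical monadic test of the two offers. The yes/no take-rate
# question and the cell size are assumptions for illustration.
OFFERS = ["$10 bill credit", "two free on-demand movies"]
N_PER_CELL = 300

take = {}
for offer in OFFERS:
    # Each cell sees ONE offer in isolation and answers "Would you
    # take this offer?", so nobody is prompted to compare dollar
    # values head-to-head. A coin flip stands in for each answer.
    take[offer] = sum(random.random() < 0.5 for _ in range(N_PER_CELL))

# Compare take rates across the two cells with a two-proportion z-test.
p1, p2 = (take[o] / N_PER_CELL for o in OFFERS)
p_pool = (take[OFFERS[0]] + take[OFFERS[1]]) / (2 * N_PER_CELL)
se = math.sqrt(p_pool * (1 - p_pool) * (2 / N_PER_CELL))
z = (p1 - p2) / se
print(f"Take rate, {OFFERS[0]}: {p1:.1%}")
print(f"Take rate, {OFFERS[1]}: {p2:.1%}")
print(f"z = {z:.2f}")
```

A meaningfully higher take rate for one offer would show us that respondents place more value on it, even though a direct head-to-head question would likely have produced a shrug.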

The third question – Do you find the current offer attractive, or this new one we are unveiling? – is in the same vein. If the two versions of the treatment are significantly different (such as a redesigned, more user-friendly bill), then a straight comparison in a regular survey will work well. But if the differences involve more peripheral factors (colors, design, the addition of small benefits, etc.), or factors that may not be immediately noticeable, then we will have a problem. Respondents may insist that the substance is the same and hence that there is no differential impact on them. The experimental approach will get at the true impact.

Surveys are a good way of gathering consumer feedback, and they have clear advantages – they are easy to administer and respond to, quick, relatively cheap, and, when proper sampling is done, their results can be generalized. However, they also have clear limitations and are often misused, resulting in low-quality information. One way this misuse occurs is by asking questions that surveys were not meant to answer. Insights from Behavioral Economics allow us to use the simple, robust survey framework to answer such questions, if we are willing to think a little. The key is to understand what cannot be asked as a direct question and then devise an appropriate experiment that will get us the answer. The results are more interesting and useful, the respondent is not unduly taxed, and the study does not cost more or take longer. All that is required is the ability to recognize when this approach is appropriate.

A great deal of research has been done in the world of marketing and consumer behavior, both academic and applied. If we can apply some of those ideas, there is no reason we cannot get better-quality insights from survey research.