For at least a decade now, the research industry has been facing two conflicting challenges. Even as the representativeness and quality of the data we collect is being called into question, our clients are asking us to make our results tie to, and be predictive of, the real world. I believe that even with the limitations of response rates and respondent behavior, we can achieve good results by asking questions in the right manner. We need to mirror the way people make decisions in the real world: by making choices.
The easiest way to understand the power of making choices is to look at political polling. In the days before the 2008 election, Realclearpolitics.com tracked 15 different polls predicting the outcome. These polls were conducted by news organizations, partisan firms and independent firms. They used different screening techniques, data collection methods (Web, phone, IVR) and weighting schemes to predict likely voters. With such a mishmash of firms, methods and other factors, one would expect results to vary greatly. Instead, we find a high degree of correlation among them and, with rare exception, results within the margin of error of the actual election outcome. Further, state-by-state polls and Senate-race polls are similarly accurate. So why, given all of these complications, did the polls come in so accurately?
There may be many reasons, but I contend one of them is that the questions asked mirrored the real-life decisions that voters make. Most voters don't list out all the issues, decide which candidate is closest to their position on each, weight the issues by importance, and finally tally up the results to decide whom to vote for. Asking them in a survey to sort out their decision making in that fashion causes them to struggle to answer and requires them to be totally honest with themselves and with us (we know that many voters cast their vote along straight party lines, but when asked directly many of them will claim otherwise). The combination of asking them to be self-aware enough to understand their own decision-making process and expecting a level of honesty they might not even have with close friends leads to flawed data.
All too often, however, we do exactly this when conducting marketing research. We ask people to break their preferences down using attribute batteries to rate importance or perceptions. We then trust that these answers accurately reflect respondents' opinions and use regression and other key-driver techniques to figure out which attributes matter most. Often this provides valuable information, but isn't there a better way?
If we were limited to telephone data collection and the analytical methods that have been in use for 20 years, then the answer would in fact be no. The reality is that we no longer have to ask respondents to sort out why they make the decisions they do. Instead, we can use a variety of choice techniques (discrete choice, MaxDiff, configurators, ranking, etc.) to collect the data and then use advanced analytical techniques to sort out why.
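To make the contrast concrete, here is a minimal sketch of how one of these choice techniques, MaxDiff, can be analyzed with simple best-worst counting. The items, tasks, and function name are hypothetical illustrations, not from any specific study; real MaxDiff work typically uses more sophisticated models (e.g., hierarchical Bayes), but counting shows the basic idea: respondents never rate importance directly, they just pick the best and worst item in each small set.

```python
# Hypothetical sketch of MaxDiff "counting" analysis.
# Each task shows a subset of items; the respondent picks the best
# and the worst. Score = (times chosen best - times chosen worst)
# divided by times shown.
from collections import defaultdict

def maxdiff_scores(tasks):
    """tasks: list of dicts with keys 'shown', 'best', 'worst'."""
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for t in tasks:
        for item in t["shown"]:
            shown[item] += 1
        best[t["best"]] += 1
        worst[t["worst"]] += 1
    # Normalize by exposure so items shown more often aren't favored.
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Illustrative responses from one hypothetical respondent:
tasks = [
    {"shown": ["price", "quality", "brand", "service"], "best": "quality", "worst": "brand"},
    {"shown": ["price", "quality", "design", "service"], "best": "price", "worst": "service"},
    {"shown": ["brand", "design", "price", "quality"], "best": "quality", "worst": "brand"},
]
scores = maxdiff_scores(tasks)
# Items picked "best" more often than "worst" end up with positive scores.
```

The key point is that relative importance falls out of the choices themselves; nothing here depends on the respondent's ability to introspect and self-report a weight for each attribute.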
It is worth saying that choice techniques have limitations (though many of them are more perceived than real) and that attribute ratings have their place. My issue is, and forgive the pun, that far too often we make the wrong choice: we use attribute batteries simply because they are easy to set up and familiar to the end users of the results. As researchers, we need to consider their limitations as well as the potential that choice methods offer.
Rich brings to his blog entries a passion for quantitative data and the use of choice techniques to understand consumer behavior. His unique perspective has allowed him to muse on subjects as far afield as dinosaurs and advanced technology, with insight into what each can teach us about doing better research.