The surprising result of the election has lots of people questioning the validity of polls…how could they have so consistently predicted a Clinton victory? Further, if the polls were wrong, how can we trust survey research to answer business questions? Ultimately, even sophisticated techniques like discrete choice conjoint or MaxDiff rely on these same data, so this is no small question.
As someone whose firm conducts thousands and thousands of surveys annually, I thought it made sense to offer my perspective. So here are five reasons that I think the polls were “wrong” and how I think that problem could impact our work.
1) People Don’t Know How to Read Results
Most polls had the race in the 2-5% range and the final tally had it nearly dead even (Secretary Clinton winning the popular vote by a slight margin). At the low end, this range is within the margin of error. At the high end, it is not far outside of it. Thus, even if everything else were perfect, we would expect that the election might well have been very close.
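To make the margin-of-error point concrete, here is a minimal sketch (my own illustration; the sample size of 1,000 is an assumption for a typical national poll, not a figure from any specific survey) of the standard 95% margin of error on a single proportion:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical national poll of ~1,000 respondents, candidate near 50%:
moe = margin_of_error(0.50, 1000)
print(f"Margin of error: +/-{moe * 100:.1f} points")  # roughly +/-3.1 points
```

Note that this is the error on a single candidate's share; the uncertainty on the *gap* between two candidates is larger still, which is why a 2-5 point lead is thinner than it looks.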
More important from the standpoint of market research is the fact that a difference of 0-5% is rarely meaningful. Imagine you are testing two positioning statements and one is preferred by 45% and the other by 48%. Are you likely to choose the 48% message based on these data alone? I think it more likely that you will see these data as telling you that there is no difference and look for other data (either within the survey or outside it) to drive the decision.
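To illustrate why a 45% vs. 48% split is rarely decisive, here is a quick two-proportion z-test (a hypothetical sketch; the 400-respondents-per-message sample size is my assumption, not from the article):

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z statistic for H0: the true proportions are equal."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

z = two_prop_z(0.45, 400, 0.48, 400)
print(f"z = {z:.2f}")  # |z| < 1.96 means not significant at the 95% level
```

With 400 respondents per cell the z statistic comes out well under 1.96, so the 3-point gap alone cannot tell you the 48% message is truly preferred.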
2) The Respondents Were Targeted Incorrectly
Prior to the election, my friend and mentor posted a question on Facebook about whether the reported large number of new registrations was potentially skewing the results of polls. This brings up an important aspect of polling…the goal is not to represent the opinions of the country, but rather to represent the opinions of people who will actually vote. Since this group changes with each election (for example, the past five Presidential elections have seen turnout of 49-57% of eligible voters), this is very challenging. Different firms use different methods to predict who will vote, but ultimately there is a high degree of “art” as opposed to science. Early reports are that rural voters came out in big numbers and most polls didn’t fully account for that.
For some types of research, we too can have this problem. For example, if we want to figure out the preferences of people who are going to buy a car in the coming year, how do we know whom to talk to? Typically, like pollsters, we will ask about their intentions, but as with voters, consumers don’t always follow through on them. While we can’t eliminate this problem, we know that intentions correlate heavily with behavior. We did research that asked about future buying behavior and then checked back with respondents to see what they actually did. We found that the higher the purchase intent, the more likely they were to follow through. Certainly, some with high intent didn’t buy and some with low intent did, but the correlation was very strong. As such, any error this creates should be relatively small…perhaps large enough to throw off an election poll showing a 3-point race, but not large enough to undermine our trust when making business decisions.
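The follow-up analysis described above can be sketched roughly like this (the numbers below are purely illustrative, not from the study the article mentions; the pattern of rising purchase rates by intent level is the assumption being demonstrated):

```python
# Hypothetical follow-up data: stated purchase intent on a 1-5 scale
# (1 = definitely not, 5 = definitely will) vs. the share who actually
# bought within the year. Illustrative values only.
intent_levels = [1, 2, 3, 4, 5]
purchase_rate = [0.05, 0.12, 0.25, 0.45, 0.70]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(intent_levels, purchase_rate)
print(f"Correlation between stated intent and purchase: {r:.2f}")
```

When purchase rates climb steadily with stated intent, as in this toy data, the correlation is very strong, which is the shape of result the article describes.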
3) Can We Trust Respondents?
When doing surveys we rely on respondent honesty and the same is true of polls. In both cases, however, there are situations in which you should be on the lookout for potential dishonesty.
In political polls, one major possibility for this is often referred to as “The Bradley Effect”. I like to think of it as “The Embarrassment Effect”. Imagine you are voting for a candidate that your family and friends dislike intensely. When the discussion turns to the election you might decide to keep quiet rather than draw the ire of people you like. Now imagine you are doing a survey about the same race. You might answer the vote question as “undecided” even though you are in fact decided. In this election, there was an unusually high number of undecided voters and they broke overwhelmingly for Mr. Trump. Given his position in the polls and often negative news about his campaign, it is not hard to imagine people being cautious in admitting he had their vote.
In product research, passions are rarely as strong as they are in politics, but that doesn’t mean we can ignore this. For example, I am always careful with pricing research, because respondents are also consumers. We know that techniques like laddering invite bias: if I offer you a product at $10 and you say “no thanks,” and then I offer it at $8, some respondents will be savvy enough to guess that another “no thanks” will produce an even lower price. That’s why I prefer conjoint or monadic designs over laddering whenever possible. When I do have to use laddering, I use other means to try to keep respondents honest.
4) Non-Response Bias and Other Targeting Issues
This is not a problem unique to elections. Cell phone-only homes, caller ID and so on have been an issue in our industry for decades now, and the move to the web didn’t change that. In essence, a shrinking minority are participating in surveys in any form, and our results could be badly skewed if responders’ opinions differ from those of non-responders. There is no way to know if this is the case…we can only surmise that it might be.
This is not a new problem and thus far it has not proved to be a crippling one either. Both polls and market research have proven their value even as response rates have dropped. It is well worth wondering if non-responders to a certain survey are likely to think differently than responders. This is why we should seek to draw the most representative sample we can AND make our surveys engaging so that a higher percentage of people participate.
I also think that the potential for bias differs depending on the subject. For example, people who value privacy are far less likely to do surveys (they screen calls, they don’t join panels). If those privacy-minded non-responders also tend to share particular political views, then polls are likely to be skewed a bit as a result.
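A tiny simulation makes the non-response mechanism visible (all numbers here are invented for illustration: a hypothetical population split 50/50 on some opinion, where one group responds to surveys at half the rate of the other):

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical population: 50% hold opinion A, 50% hold opinion B,
# but B-holders respond to surveys at half the rate of A-holders.
population = ["A"] * 5000 + ["B"] * 5000
response_rate = {"A": 0.20, "B": 0.10}

responders = [p for p in population if random.random() < response_rate[p]]
share_a = responders.count("A") / len(responders)
print(f"True share of A: 50%, observed among responders: {share_a:.0%}")
```

Even though the true population is evenly split, the survey of responders reports opinion A at roughly two-thirds…exactly the kind of silent skew the paragraph above warns about.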
5) Some of These Polls Are Just BAD Research
The polls were not universal in their results. The Investor’s Business Daily poll was (as it has been for several cycles now) pretty close; its final poll had the race within a point. Others were farther off, with most missing by around 3 points. These polls used different methods of data collection (from live phone to automated phone to web and so on) and different sampling schemes. Some were done by partisan organizations and others by news organizations. Some were transparent about how they developed their results; others were not.
There is little doubt that some of these were intentionally done badly (to make a particular candidate look good) or not done with care. Presumably, our clients wouldn’t work with a firm that doesn’t dot the i’s or cross the t’s.
The failure of polls is not a universal condemnation of survey research. Rather it is a warning that you need to get the fundamentals right AND you need to put those results in the proper context. I think this is something that quality research firms do well and thus clients can have confidence in the research we do.
Note: I mentioned a Facebook discussion started by Dan Bernard prior to the election. Many of the thoughts and ideas above were influenced by this robust discussion. So I’d be remiss if I didn’t thank those who participated: Lenny Murphy, Jess Sandstrom Buchanan, Gregg Kennedy, Peggy Stranner Handelman, Deb Smith Portz, George Wilkerson, Sandy McReynolds, Mary Jo Emery, Steve McFadden, Bruce Z Bortner, Tom Ramsburg Sr, and especially Dan Bernard for getting the ball rolling.
Rich brings a passion for quantitative data and the use of choice modeling to understand consumer behavior to his blog entries. His unique perspective has allowed him to muse on subjects as far afield as dinosaurs and advanced technology, with insight into what each can teach us about doing better research.