The Economist did an analysis of political book sales on Amazon to see if there were any patterns. Anyone who uses social media will not be surprised that readers tended to buy books from either the left or the right...not both. This follows an increasing pattern of people looking for validation rather than education, and of course it adds to the growing divide in our country. A few books managed a good mix of readers from both sides, though often these were books where the author found fault with his or her own side (meaning a conservative trashing conservatives or a liberal trashing liberals).
I love this use of big data and hopefully it will lead some to seek out facts and opinions that differ from their own. These facts and opinions need not completely change an individual's own thinking, but at the very least they should give one a deeper understanding of the issue, including an understanding of what drives others' thinking.
In other words, hopefully the public will start thinking more like effective market researchers.
We could easily design research that validates the conventional wisdom of our clients.
• We can frame opinions by the way we ask questions or by the questions we ask beforehand.
• We can omit ideas from a max-diff exercise simply because our "gut" tells us they are not viable.
• We can design a discrete choice study with features and levels that play to our client's strengths.
• We can focus exclusively on results that validate our hypothesis.
So much has been written about conducting research for new product development. That's not surprising, since this is an area of research almost every organization, new or old, has to face day in and day out. As market research consultants, we deal with it all the time and thought it would be beneficial to provide our audience with our own recommendations for some useful sources that explain conjoint analysis – a method most often used when researching new products and conducting pricing research.
This is a relatively brief article from Sawtooth Software, the makers of software widely used for conjoint, that explains the basics of the technique. It uses a specific example of golf balls to make it easy to understand.
I read an interesting story about a survey done to determine if people are honest with pollsters. Of course such a study is flawed by definition (how can we be sure those who say they always tell the truth are not lying?). Still, the results do back up what I’ve long suspected…getting at the truth in a survey is hard.
The study indicates that most people claim to be honest, even about very personal things (like finances). Younger people, however, are less likely to be honest with survey takers than others. As noted above, I suspect that if anything, the results understate the potential problem.
To be clear, I don’t think that people are just being dishonest for the sake of being dishonest…I think it flows from a few factors.
First, some questions are too personal to answer, even on a web survey. With all the stories of personal financial data being stolen or compromising pictures being hacked, it shouldn’t surprise us that some people might not want to answer some kinds of questions. We should really think about that as we design questions. For example, while it might be easy to ask for a lot of detail, we might not always need it (income ranges, for example). To the extent we do need it, finding ways to build credibility with the respondent is critical.
Second, some questions might create a conflict between what people want to believe about themselves and the truth. People might want to think of themselves as being “outgoing” and so if you ask them they might say they are. But their behavior might not line up with reality. The simple solution is to ask questions related to behavior without ascribing a term like “outgoing”. Of course, it is always worth asking it directly as well (knowing the self-image AND behavior could make for interesting segmentation variables, for example)....
My daughter was performing in The Music Man this summer and after seeing the show a number of times, I realized it speaks to the perils of poor planning…in forming a boys band and in conducting complex research.
For those of you who have not seen it, the show is about a con artist who gets a town to buy instruments and uniforms for a boys band in exchange for which he promises he’ll teach them all how to play. When they discover he is a fraud they threaten to tar and feather him, but (spoiler alert) his girlfriend gets the boys together to march into town and play. Despite the fact that they are awful, the parents can’t help but be proud and everyone lives happily ever after.
It is to some extent another example of how good we are at rationalizing. The parents wanted the band to be good and so they convinced themselves that it was. The same thing can happen with research…everyone wants to believe the results so they do…even when perhaps they should not.
I’ve spent my career talking about how important it is to know where your data have been. Bias introduced by poor interviewers, poorly written scripts, unrepresentative samples and so on will impact results AND yet these flawed data will still produce cross tabs and analytics. Rarely will they be so far off that the results can be dismissed out of hand.
The problem only gets worse when using advanced methods. A poorly designed conjoint will still produce results. Again, more often than not these results will be such that the great rationalization ability of humans will make them seem reasonable....
While there is so much bad news in the world of late, here in Philly we’ve been captivated by the success of the Taney Dragons in the Little League World Series. While the team was sadly eliminated, they continue to dominate the local news. It got me thinking about what it is that makes a story like theirs so compelling and of course, how we could employ research to sort it out.
There are any number of reasons why the story is so engrossing (especially here in Philly). Is it the star player, Mo’ne Davis, the most successful girl ever to compete in the Little League World Series? Is it the fact that the Phillies are doing so poorly this year? Or do we just like seeing a team of various ethnicities and socio-economic levels working together and achieving success? Of course it might also be that we are tired of bad news and enjoy having something positive to focus on (even in defeat the team fought hard and exhibited tremendous sportsmanship).
The easiest thing to do is to simply ask people why they find the story compelling. This might get at the truth, but it is also possible that people will not be totally honest (for example, the disgruntled Phillies fan might not want to admit it) or that they don’t really know what it is that has drawn them in. It might also identify the most important factor but not make note of other critical factors.
We could employ a technique like Max-Diff and ask them to choose which features of the story they find most compelling. This would provide a fuller picture, but is still open to the kinds of biases noted above.
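To make the Max-Diff idea concrete, here is a minimal sketch of the simplest way to score such an exercise: count how often each story feature is picked as "most compelling" versus "least compelling" and normalize by how often it was shown. The feature names and responses below are invented for illustration, not real data, and a production study would typically use a model-based estimation rather than raw counts.

```python
from collections import Counter

# Hypothetical MaxDiff responses: each task shows a subset of story
# features and records which the respondent found most and least
# compelling. (Illustrative data only.)
tasks = [
    {"shown": ["star player", "underdog team", "diversity", "good news"],
     "best": "star player", "worst": "good news"},
    {"shown": ["star player", "diversity", "sportsmanship", "good news"],
     "best": "diversity", "worst": "sportsmanship"},
    {"shown": ["underdog team", "diversity", "sportsmanship", "good news"],
     "best": "good news", "worst": "underdog team"},
]

best = Counter(t["best"] for t in tasks)
worst = Counter(t["worst"] for t in tasks)
shown = Counter(f for t in tasks for f in t["shown"])

# Simple count-based score: (times chosen best - times chosen worst)
# divided by times the feature appeared.
scores = {f: (best[f] - worst[f]) / shown[f] for f in shown}
for feature, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {score:+.2f}")
```

With real sample sizes, these scores give a rank ordering of features that is harder to get from direct questioning alone.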
Perhaps the best method would be to use a discrete choice approach. We take all the features of the story and either include them or exclude them from a “story description”, then ask people which story they would most likely read. We can then use analytics on the back end to sort out what really drove the decision....
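The include/exclude design described above can be sketched in a few lines. This is a toy illustration with made-up feature names: it enumerates every possible story description and pairs them into choice tasks. A real study would use a fractional factorial design and fit a choice model (e.g., multinomial logit) to the picks.

```python
import itertools
import random

# Features of the story that might drive interest.
features = ["star player", "underdog team", "team diversity", "feel-good angle"]

# Build every possible "story description" by including or excluding
# each feature: 2**4 = 16 profiles. Full enumeration is shown here
# for clarity; real designs use a fraction of this space.
profiles = [
    dict(zip(features, combo))
    for combo in itertools.product([True, False], repeat=len(features))
]

# Pair profiles at random into choice tasks: "which story would you
# most likely read?" Respondents' picks then feed the back-end model
# that recovers each feature's weight.
random.seed(1)
random.shuffle(profiles)
tasks = [(profiles[i], profiles[i + 1]) for i in range(0, len(profiles), 2)]
print(len(profiles), "profiles,", len(tasks), "choice tasks")
```

The point of the design is that no respondent is ever asked "why"; the feature weights fall out of the pattern of choices.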
Some months ago, Lily Allen mistakenly received an email containing harsh test group feedback regarding her new album. Select audience members believed the singer to be retired and threw in some comments that I won’t quote. If you are curious, the link to her Popjustice interview will let you see them in a more raw form. Allen returned the favor with some criticism on market research itself:
“The thing is, people who take part in market research: are they really representative of the marketplace? Probably not.” –Lily Allen
The singer brings up a valid concern, and one of the many questions I pondered five months ago when I first took my current researcher-in-training position with TRC. Researchers are responsible for engaging a representative sample and delivering insights. How do we uphold those standards to ensure quality? Now that I have put in some time and have a few projects under my belt, I have assembled a starter list to address those concerns:
In order to complete any research project, there needs to be a clear objective. What are we measuring? Are we using one of our streamlined products, such as Message Test Express™, or will there be a conjoint involved? This may seem obvious, but it is also critical. A team of people is behind each project at TRC, including account executives, research managers, project directors, and various data experts. More importantly, the client should also be on the same page and kept in the loop. Was the artist the main client for the research done? My best guess is no; the feedback given was not meant to be a tool to rework the album.
Was the research done on Lily Allen’s album even meant to be representative? Qualitative interviews can produce deep insights among a small, non-representative, group of people. This can be done as a starting point or a follow-up to a project, or even stand alone, depending on the project objectives....
You may have heard about the spat between Apple and Samsung. Apple is suing Samsung for alleged patent infringements that relate to features of the iPhone and iPad. The damages claimed by Apple? North of 2 billion dollars. The obvious question is how Apple came up with those numbers. The non-obvious answer: partly by using conjoint analysis – the tried and tested approach we often use for product development work at TRC.
Apple hired John Hauser, Professor of Marketing at MIT’s Sloan School of Management, to conduct the research. Prof. Hauser is a very well-known expert in the area of product management. He has mentored and coauthored several conjoint-related articles with my colleague Olivier Toubia at Columbia University. For this case, Prof. Hauser conducted two online studies (n=507 for phones and n=459 for tablets) to establish that consumers indeed valued the features that Apple was arguing about. Details about the conjoint studies are hard to get, but it appears that he used Sawtooth Software (which we use at TRC) and the advanced statistical estimation procedure known as Hierarchical Bayes (HB) (which we also use at TRC) to get the best possible results. It also appears that he may have run a conjoint with seven features, incorporating graphical representations to enhance respondent understanding.
There are several lessons to be learnt here for those interested in conducting a conjoint study. First, conjoint sample sizes do not have to be huge. I suspect they are larger than absolutely necessary here because the studies are being used in litigation. Second, he wisely confined the studies to just seven attributes. We repeatedly recommend to clients that conjoint studies should not be overloaded with attributes. Conjoint tasks can be taxing for survey respondents, and the more difficult they are, the less attention will be paid. Third, he used HB estimation to obtain preferences at the individual level, which is the state-of-the-science approach. Last, he incorporated graphics wherever possible to ensure that respondents clearly understood the features. When designing conjoint studies it is good to take these (and other) lessons into consideration to ensure that we get robust results.
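The second lesson, limiting the number of attributes, is easy to see with a little arithmetic. The level counts below are hypothetical (the actual levels in the Apple study aren't public); the point is just how fast the design space grows.

```python
# Why attribute count matters: the full factorial of a conjoint design
# grows multiplicatively with each attribute. Level counts here are
# assumed for illustration, not taken from the actual litigation study.
levels_per_attribute = [3, 3, 3, 3, 3, 3, 3]  # seven attributes, 3 levels each

full_factorial = 1
for n in levels_per_attribute:
    full_factorial *= n
print(full_factorial)  # 3**7 = 2187 distinct possible profiles

# Add just three more 3-level attributes and the space grows 27-fold:
print(full_factorial * 3 ** 3)
```

No respondent sees anywhere near the full factorial, of course; designs sample a small, balanced fraction of it, but a bigger space means each respondent's choices carry less information per attribute.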
So, what was the outcome?
As a result of the conjoint study, Prof. Hauser was able to determine that consumers would be willing to spend an additional $32 to $102 for features like slide-to-unlock, universal search and automatic word correction. Under cross-examination he acknowledged that this was stated preference in a survey and not necessarily what Apple could charge in a competitive marketplace. This is another point that we often make to clients, both in conjoint and other contexts. There is a big difference between answering a survey and actual real-world behavior (where several other factors come into play). While survey results (including conjoint) can be very good comparatively, they may not be especially good absolutely. Apple used the help of another MIT-trained economist to bring in outside information and finally ended up with a damage estimate of slightly more than $2 billion....
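For readers curious how a dollar figure falls out of a conjoint at all, here is the basic arithmetic: divide the utility gain a feature provides by the utility cost of a dollar, both estimated from the choice model. All numbers below are invented for illustration; they are not the figures from Prof. Hauser's study.

```python
# A minimal willingness-to-pay (WTP) calculation from conjoint
# part-worth utilities. All values are assumed, purely illustrative.

# Estimated utilities (e.g., from HB) for having vs. lacking a feature:
utility_with_feature = 1.2
utility_without_feature = 0.4

# Price part-worths give utility per dollar. Suppose moving from a
# $199 device to a $299 device costs 2.0 utils of preference:
price_utility_per_dollar = 2.0 / (299 - 199)  # 0.02 utils per dollar

# WTP = utility gained from the feature / utility cost of a dollar
wtp = (utility_with_feature - utility_without_feature) / price_utility_per_dollar
print(f"Implied WTP: ${wtp:.0f}")
```

This is exactly where the caveat from the cross-examination bites: the WTP is measured in stated survey preference, and translating it to what a firm could actually charge requires the kind of outside economic information Apple's second expert brought in.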