Market Research

In my last blog I talked about the value of market research even if all it does is validate what you thought you already knew. A further question might be: should we encourage our clients to hypothesize? My answer is a definitive "YES!"

My answer is likely biased by the fact that we work with Hierarchical Bayesian (HB) Analytics so frequently (mainly using choice data such as that created by conjoint). After all, HB requires a starting hypothesis (a prior). But the reality is that even if we don't use HB, a hypothesis is a useful thing.
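Since "HB requires a starting hypothesis" can feel abstract, here is a minimal sketch of the underlying idea: a prior belief updated by data. This is a one-parameter Beta-Binomial update, not a real HB choice model, and every number in it is invented for illustration.

```python
# A minimal sketch of "starting hypothesis + data": a Beta-Binomial update.
# All figures are invented for illustration; a real HB model for choice data
# is far richer (individual-level utilities, MCMC estimation, and so on).

def posterior_mean(prior_mean, prior_strength, successes, trials):
    """Combine a Beta prior with binomial survey data; return the posterior mean."""
    alpha = prior_mean * prior_strength        # prior "successes"
    beta = (1 - prior_mean) * prior_strength   # prior "failures"
    return (alpha + successes) / (alpha + beta + trials)

# Hypothetical client hypothesis: 60% of buyers prefer concept A, held with
# the weight of about 20 observations. Survey: 70 of 100 prefer concept A.
updated = posterior_mean(prior_mean=0.60, prior_strength=20, successes=70, trials=100)
print(round(updated, 3))  # prints 0.683: the data pulls the estimate above 0.60
```

Even this toy version shows why eliciting the client's hypothesis matters: the prior sets both the starting point and how much evidence it takes to move it.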

First, understanding what our clients EXPECT to find is a great way to understand what they NEED to find. They need to validate or reject their prior thinking, so the more we understand their thought processes, the more we know where to focus. In addition, this understanding often leads to insight into their firm's business decision making. This helps us to present results that tell a story that resonates with them. This is true even if the findings contradict their thinking.

Second, by presenting results in this way we help our clients not only to meet the objectives of the current study, but also to walk away with a better understanding of what to expect in the future. Seeing the flaws in their logic will help them avoid those flaws when similar issues come up.

Of course, purists will point to the risk that starting with a hypothesis may bias our results. We might be inclined to design our research and reporting to match the narrative we expected to find. We might also be tempted to avoid the "kill the messenger" problem by sugar-coating the truth.

These are fair points and well worth guarding against. They do not, however, undercut the premise that having a starting hypothesis makes for better market research and likely better use of results.


Market Research Data, A Love Story


If you have read my blog, you know that I love digging through data to find new insights, and I’m a believer that choice questions (such as those used in Discrete Choice Conjoint or MaxDiff) are the best way to engage respondents and unlock what they are thinking. Given that, a book called “Data, a Love Story” should be a natural fit for me because it is about the ultimate choice…choosing the right person to marry. Ultimately I decided against buying the book (I wasn’t sure my wife would see it as purely a curiosity). At the same time, the review I read made me realize that some of the issues the author faced are the same as those we face as researchers.

The premise is that dating websites can be gamed to find the right mate. Having never used one (my marriage pre-dates them), I assumed that these sites use complex algorithms to match compatible people. The trouble is that while this is true, these algorithms can break down.

First off, many people are not honest in their profile. They might be looking for someone to sit around with and watch television but admitting that is tantamount to saying “I’m really lazy” so they fudge a bit. Some go beyond this and tell whoppers like “I’m not married”. Obviously any bad data will lead to bad matches.

Second, aligning profiles is only a first step…it determines which profiles an individual sees. At that point the individuals are free to contact each other or not. Thus, how that profile reads is more important than the questions that determine the “match”.  

The author, Amy Webb, decided to gather her own data. After crunching the numbers she was able to both better attract invitations from the right men AND figure out which of them she should be talking to.  

...

Asymmetry and the Lottery


If the lottery can accurately be called a “tax on the stupid”, does my playing it make me stupid? To understand (or perhaps rationalize) the answer, you need to understand the principles of Asymmetry.

As usually happens when the jackpot on PowerBall goes into the stratosphere (in this case it reached nearly $600 Million), someone here at TRC started a collection to play as a group. A pretty high percentage of our staff decided to play, even those with the most advanced degrees in statistics. So given the chances of winning are something like 1:175 million per ticket, why did we do it?

It certainly wasn’t that buying so many tickets (nearly 50) made the odds anything near a slam dunk. In fact, they were easy enough to calculate (1:3,650,489.79), so there was no doubt in my mind that I would lose, and yet I still played.
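As a sanity check, the group's odds can be reproduced in a few lines. The figures below are back-solved from the 1:3,650,489.79 quoted above (which implies 48 tickets at per-ticket odds of 1 in 175,223,510); treat them as assumptions for illustration, not official lottery numbers.

```python
# Back-solving the group's odds: 1:3,650,489.79 for the pool implies 48
# tickets at per-ticket jackpot odds of 1 in 175,223,510. Both figures are
# assumed here to match the post, not independently sourced.

p_ticket = 1 / 175_223_510     # probability one ticket hits the jackpot
tickets = 48                   # "nearly 50" tickets in the office pool

# With distinct number combinations, the probabilities simply add:
p_group = tickets * p_ticket
print(f"1:{1 / p_group:,.2f}")  # prints 1:3,650,489.79
```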

The reason was simple. I had to choose to play or not to play and consider the likely outcome if we won or didn’t win:

  • I play and lose (A small $6 loss and an outcome that my brain expected all along)
  • I play and win (A massive win with my share being $10 million…despite expecting to lose, my brain is now elated)
  • I don’t play and they lose (I have some very minor bragging rights, but ultimately I missed out on the fun and only saved $6)
  • I don’t play and they win (Even as I console myself that the odds were with me, I feel like a complete idiot)

In other words, playing offered only upside and not playing only downside. That is exactly why we consider Asymmetric effects whenever we do analysis. Otherwise we may miss what really drives consumer decision making.
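The four outcomes above can be sketched as a quick expected-value calculation. The cash figures come from the post; the "elation" and "regret" dollar-equivalents are invented purely to illustrate the asymmetry.

```python
# The cash expected value of playing is negative, so the decision only makes
# sense once the asymmetric psychological payoffs are counted. The elation
# and regret dollar-equivalents below are invented purely for illustration.

p_win = 48 / 175_223_510   # group's jackpot probability (from the post)
p_lose = 1 - p_win

# Pure cash view: a small expected loss.
ev_cash = p_win * 10_000_000 + p_lose * (-6)
print(f"cash EV of playing: ${ev_cash:.2f}")   # about -$3.26

# Asymmetric view: add made-up dollar-equivalents for elation and regret.
elation, regret = 50_000_000, -1_000_000       # hypothetical utilities
ev_play = p_win * (10_000_000 + elation) + p_lose * (-6)
ev_skip = p_win * regret + p_lose * 6
print(ev_play > ev_skip)
```

The point is not the specific numbers: once the payoff matrix is asymmetric, a choice with a negative cash expectation can still be the rational one.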


Was the election outcome a surprise for you? It wasn’t for me.

In some ways election night was quite boring. And I blame Nate Silver, Sam Wang and others who predicted the outcome with such stunning accuracy that (at least for me) the drama was completely missing. While conventional pundits and partisans were making all kinds of predictions ranging from “Toss-up” to “Romney landslide”, a group of analysts (nerds, if you choose) were quietly predicting that Obama had a small but consistent and predictable lead. Turns out they were spot-on in their predictions (and were predictably smeared by vested interests).

In my last post I talked about Nate Silver and the approach he uses. This time I want to draw your attention to another analyst, Sam Wang of the Princeton Election Consortium. He is a neuroscientist who has been forecasting for the last three presidential election cycles and has been doing a remarkably good job of it. He nailed the Electoral College vote in 2004 and missed by just one in 2008. How did he do this time? Well, he had two predictions. One of them (based on his median estimator) was 303 for Obama, which is where the tally currently stands, subject to Florida being officially called. The second one (based on his modal estimator) was 332 for Obama which is where the tally is likely to end up if/when Obama wins Florida. Excellent calls whichever way you look at it, given the extremely close race in Florida.
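The difference between Wang's two numbers is just two ways of summarizing the same distribution of simulated electoral-vote outcomes. Here is a toy illustration with made-up tallies (not his actual simulations):

```python
import statistics
from collections import Counter

# Toy illustration of why a median and a modal estimate can differ: summarize
# one made-up distribution of simulated electoral-vote totals two ways.
# These tallies are invented; they are not Sam Wang's actual simulations.
simulated_ev = [270] * 8 + [290] * 12 + [303] * 31 + [332] * 40 + [347] * 9

median_call = statistics.median(simulated_ev)            # the "middle" outcome
modal_call = Counter(simulated_ev).most_common(1)[0][0]  # the most frequent outcome
print(median_call, modal_call)  # prints 303 332
```

When the distribution is lopsided, the two summaries legitimately disagree, which is why reporting both, as Wang did, is more informative than either alone.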

Like any research, market research has always recognized that to be confident the results can be projected to an entire population, you need to eliminate any bias. We worried about things like:

  • Representativeness Effects – We needed not only to select a random, representative sample, but also to do everything possible to maximize the percentage of people who completed the survey.
  • Interviewer Effects – Surveys needed to be done identically. If done by mail, all should use identical forms. If done by phone, interviewers needed to be careful not to lead respondents and to keep pacing at a consistent rate.
  • Framing Effects – If responses to one question are likely to bias a later response, the order should be changed to reflect it. In cases where changing the order merely changes which question biases which, use rotation or split samples so that bias effects can be measured and softened.
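The rotation idea in the last bullet can be sketched as a simple random assignment: each respondent sees one of the two question orders, so differences between the groups estimate the order effect. The question names here are hypothetical placeholders.

```python
import random

# Sketch of a split-sample design for measuring order effects: randomly
# assign each respondent one of two question orders, then compare answers
# across the groups. Question names are hypothetical placeholders.
QUESTIONS = ["Q_brand_opinion", "Q_price_opinion"]

def assign_order(rng=random):
    """Return the question order for one respondent (A-B or B-A, 50/50)."""
    order = list(QUESTIONS)
    if rng.random() < 0.5:
        order.reverse()
    return order

# With enough respondents, roughly half see each order, letting us quantify
# the framing bias and average it out of the overall results.
```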

I know this is a simplified view of things, but the above three do get at the major forms of bias that we seek to eliminate in market research. In this blog, I'll focus on representativeness and at some point in the future I'll cover the other two.
