
For years now, my colleague Jessica would solicit donations to the American Cancer Society through its annual Daffodil Days® campaign. Each year I'd give Jessica my donation and a few weeks later I'd receive 10 daffodil buds. I'd arrange them in a vase in my office and watch as they opened up into beautiful blooms over the course of a few days. And in doing so I'd be reminded that my donation was being used to find ways to eradicate cancer and help people in need.

It was announced that this year would be the final year for Daffodil Days®.


I have to admit, my first thought was not, "how will I donate to ACS now?" My first thought was that something was being taken away from me! Which, of course, irritated me. My second thought was that I'll have to look for another way to get daffodil buds next spring. And then it dawned on me that by cancelling the daffodils promotion, the ACS could be losing a long-time supporter.

Businesses are faced with product optimization decisions all the time – what will happen if I remove a product, service or distribution channel from the market? Will customers be lost? What will the short- and long-term effects be?

In my last blog I talked about the value of market research even if all it does is validate what you thought you already knew. A further question might be, "Should we encourage our clients to hypothesize?" My answer would be a definitive "YES!"

My answer is likely biased by the fact that we work with Hierarchical Bayesian (HB) Analytics so frequently (mainly using choice data such as that created by conjoint). After all, HB requires a starting hypothesis. But the reality is that even if we don't use HB, a hypothesis is a useful thing.
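To make the prior-meets-data idea concrete, here is a toy Beta-Binomial update in Python. This is not our HB machinery, and every number below is invented for illustration; it simply shows how a confident starting hypothesis gets pulled toward what the choice data actually say.

```python
# Toy Beta-Binomial update (not TRC's HB model); all numbers invented.
prior_alpha, prior_beta = 8, 2   # hypothesis: ~80% prefer feature A
chose_a, chose_b = 12, 18        # observed choices contradicting the prior

post_alpha = prior_alpha + chose_a
post_beta = prior_beta + chose_b
posterior_mean = post_alpha / (post_alpha + post_beta)

print(f"Prior mean: {prior_alpha / (prior_alpha + prior_beta):.2f}")   # 0.80
print(f"Posterior mean: {posterior_mean:.2f}")                         # 0.50
```

The same logic scales up in HB: the hypothesis supplies the starting point, and the data decide how far to move away from it.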

First, understanding what our clients EXPECT to find is a great way to understand what they NEED to find. They need to validate or reject their prior thinking so the more we understand their thought processes the more we know where to focus. In addition, this understanding often leads to insight into their firm's business decision making. This helps us to present results that tell a story that resonates with them. This is true even if the findings contradict their thinking.

Second, by presenting results in this way we help our clients to do more than meet the objectives of the current study, but to walk away with a better understanding of what to expect in the future. Flaws in logic will help them to avoid those flaws when similar issues come up.

Of course purists will point to the risk that starting with a hypothesis may bias our results. We might be inclined to design our research and reporting to match the narrative we expected to find. We might also be tempted to avoid the "kill the messenger" problem by sugar coating the truth.

These are fair points and well worth guarding against. They do not, however, undercut the premise that having a starting hypothesis makes for better market research and likely better use of results.

I read an article about the discovery of the Higgs boson at CERN. This is the so-called "God particle," which explains why matter has mass. While the science generally is beyond me, I was intrigued by something one of the physicists said:

"Scientists always want to be wrong in their theories. They always want to be surprised."

He went on to explain that surprise is what leads to new discoveries whereas simply confirming a theory does not. I can certainly understand the sentiment, but it is not unusual for market research to confirm what a client already guessed at. Should the client be disappointed in such results?

I think not for several reasons.

First, certainty allows for bolder action. Sure there are examples of confident business people going all out with their gut and succeeding spectacularly, but I suspect there are far more examples of people failing to take bold action due to lingering uncertainty. I also suspect that far too often overconfident entrepreneurs make rash decisions that lead to failure.

Second, while we might confirm the big question (for example in product development pricing research we might confirm the price that will drive success) we always gather other data that help us understand the issue in a more nuanced way. For example, we might find that the expected price point is driven by a different feature than we thought (in research speak, that one feature in the discrete choice conjoint had a much higher utility score than the one we thought was most critical).

...

Okay, so it wasn’t really just the two of us – there were a few hundred others involved. Still it was a very memorable evening that I think is worth sharing.

The day started innocently enough. I was heading out to Yale for a guest lecture in the MBA Marketing Research class taught by Jiwoong Shin, as I have done for several Spring semesters now. I like this trip a lot as it allows me to catch up with many of my friends in the Yale Marketing Department. One of those is Shane Frederick, and I had emailed him to see if he was around. He replied asking if I was attending Kahneman’s lecture. I had no idea that Daniel Kahneman, Nobel Prize winner and godfather of behavioral economics, was giving a lecture there. The day was already getting better! I quickly changed my Amtrak ticket to a later time and told Shane I would come by his office so we could walk over.

My guest lecture went off very well with the students asking plenty of interesting questions. Then I had lunch with Zoe Chance who is doing some very interesting work with leading companies, applying ideas from behavioral economics. After a couple more meetings, I went to see Shane and we walked over early knowing there would be a big crowd. And we were glad we did, as the auditorium was overflowing by the time the lecture started.

Daniel Kahneman (Danny to his friends) was introduced by another notable person from Yale, Professor Robert Shiller (yes, he of the Case-Shiller Index you may have heard about during the housing crisis). Shiller talked about the widespread impact of Kahneman’s work, especially after the publication of his best seller Thinking, Fast and Slow. Trying to find Kahneman’s connections to Yale, Shiller pointed out that two of his coauthors (Shane Frederick and Nathan Novemsky, both in the marketing department) were at Yale.

And then it was time for Kahneman to speak. His humility, thoughtfulness and eloquence came through pretty much from the first few words. He started by saying that he doesn’t do university speeches anymore since he is not actively doing any research (he is retired), but could not say no to Bob Shiller. Most of his recent speeches have been about his book, and there had been so many that as a consequence he seems to have forgotten everything else he ever did (laughter!). And that, he said, makes sense because as he points out in the book, we like things that are familiar (more laughter!).

...

Market Research Data, A Love Story

Posted in Market Research

If you have read my blog, you know that I love digging through data to find new insights and I’m a believer that choice questions (such as those used in Discrete Choice Conjoint or MaxDiff) are the best way to engage respondents and unlock what they are thinking. Given that, a book called “Data, a Love Story” should be a natural fit for me because it is about the ultimate choice…choosing the right person to marry. Ultimately I decided against buying the book (I wasn’t sure my wife would see it as purely a curiosity). At the same time, the review I read made me realize that some of the issues the author faced are the same as those we face as researchers.

The premise is that dating websites can be gamed to find the right mate. Having never used one (my marriage pre-dates them), I assumed that these sites use complex algorithms to match compatible people. The trouble is that while this is true, these algorithms can break down.

First off, many people are not honest in their profile. They might be looking for someone to sit around with and watch television but admitting that is tantamount to saying “I’m really lazy” so they fudge a bit. Some go beyond this and tell whoppers like “I’m not married”. Obviously any bad data will lead to bad matches.

Second, aligning profiles is only a first step…it determines which profiles an individual sees. At that point the individuals are free to contact each other or not. Thus, how that profile reads is more important than the questions that determine the “match”.  

The author, Amy Webb, decided to gather her own data. After crunching the numbers she was able to both better attract invitations from the right men AND figure out which of them she should be talking to.  

...

Our utilities clients have raised the issue of infrastructure improvements on more than one occasion. These improvements are often expensive to implement, but the average customer sees no tangible benefit -- the water ran yesterday and it’s running again today.

Yet maintaining the pipes, lines and wires is critical to keeping water and power flowing to our homes and businesses. When it comes time to invest in these improvements, it’s hard to rally support when communities face other issues that produce more visible outcomes when addressed.

Just how far apart are community leaders and residents about the importance of improving their communities’ infrastructure?

We decided to find out.

Asymmetry and the Lottery

Posted in Market Research

If the lottery can accurately be called a “tax on the stupid”, does my playing it make me stupid? To understand (or perhaps rationalize) the answer, you need to understand the principles of Asymmetry.

As usually happens when the jackpot on PowerBall goes into the stratosphere (in this case it reached nearly $600 Million), someone here at TRC started a collection to play as a group. A pretty high percentage of our staff decided to play, even those with the most advanced degrees in statistics. So given the chances of winning are something like 1:175 million per ticket, why did we do it?

It certainly wasn’t that by buying so many tickets (nearly 50), the odds became anything near a slam dunk. In fact, they were easy enough to calculate (1:3,650,489.79), so there was no doubt in my mind that we would lose, and yet I still played.
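For the curious, the group odds are a one-line calculation. The sketch below assumes the standard per-ticket Powerball odds of that era (1 in 175,223,510) and 48 distinct tickets, which together reproduce the 1:3,650,489.79 figure above:

```python
# Assumes per-ticket odds of 1 in 175,223,510 (the Powerball odds of
# that era) and 48 independent, distinct tickets in the pool.
SINGLE_TICKET_ODDS = 175_223_510
TICKETS = 48

group_odds = SINGLE_TICKET_ODDS / TICKETS
print(f"Group odds of winning: 1 in {group_odds:,.2f}")  # 1 in 3,650,489.79
```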

The reason was simple. I had to choose to play or not to play and consider the likely outcome if we won or didn’t win:

  • I play and lose (A small $6 loss and an outcome that my brain expected all along)
  • I play and win (A massive win with my share being $10Million…despite expecting to lose, my brain is now elated)
  • I don’t play and they lose (I have some very minor bragging rights, but ultimately I missed out on the fun and only saved $6)
  • I don’t play and they win (Even as I console myself that the odds were with me, I feel like a complete idiot)

In other words, playing offered only upside and not playing only downside. That is exactly why we consider Asymmetric effects whenever we do analysis. Otherwise we may miss what really drives consumer decision making.
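As a sanity check on the argument, here is a back-of-the-envelope in Python using the approximate figures above (a $6 stake, a roughly $10 million share, roughly 1 in 3.65 million group odds). The expected dollar value of playing is still negative; the asymmetry lives in regret, not in dollars:

```python
# Back-of-the-envelope using the post's approximate figures; this is
# an illustration, not a payout calculation.
stake = 6
share_if_win = 10_000_000
p_win = 1 / 3_650_490

ev_play = -stake + p_win * share_if_win
print(f"Expected dollar value of playing: {ev_play:.2f}")
# Negative in dollars: the asymmetry is in regret, not expected value.
```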


Was the election outcome a surprise for you? It wasn’t for me.

In some ways election night was quite boring. And I blame Nate Silver, Sam Wang and others who predicted the outcome with such stunning accuracy that (at least for me) the drama was completely missing. While conventional pundits and partisans were making all kinds of predictions ranging from “Toss-up” to “Romney landslide”, a group of analysts (nerds, if you choose) were quietly predicting that Obama had a small but consistent and predictable lead. Turns out they were spot-on in their predictions (and were predictably smeared by vested interests).

In my last post I talked about Nate Silver and the approach he uses. This time I want to draw your attention to another analyst, Sam Wang of the Princeton Election Consortium. He is a neuroscientist who has been forecasting for the last three presidential election cycles and has been doing a remarkably good job of it. He nailed the Electoral College vote in 2004 and missed by just one in 2008. How did he do this time? Well, he had two predictions. One of them (based on his median estimator) was 303 for Obama, which is where the tally currently stands, subject to Florida being officially called. The second one (based on his modal estimator) was 332 for Obama which is where the tally is likely to end up if/when Obama wins Florida. Excellent calls whichever way you look at it, given the extremely close race in Florida.
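A median and a modal estimate can legitimately differ when one close state (such as Florida) makes the distribution of electoral-vote totals lumpy. The simulation below illustrates the idea with invented states and probabilities; it is not Wang's actual model:

```python
# Illustrative Monte Carlo of electoral-vote totals; the swing states,
# probabilities, and safe-vote base are all invented for this sketch.
import random
from statistics import median, mode

random.seed(7)
# (electoral votes, probability the leading candidate carries the state)
swing_states = [(29, 0.50), (18, 0.85), (20, 0.90), (13, 0.75), (16, 0.95)]
SAFE_EV = 237  # electoral votes treated as safe, for illustration only

totals = []
for _ in range(10_000):
    ev = SAFE_EV + sum(v for v, p in swing_states if random.random() < p)
    totals.append(ev)

# The median and the most common (modal) total need not agree.
print("median:", median(totals), "mode:", mode(totals))
```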

A friend of mine posted on Facebook that she’d taken a web quiz to tell her which presidential candidate best lined up with her stand on the issues. She was outraged at the candidate the website matched her with. I’m not surprised (by the outrage, not her choice)…it is a case of a badly applied choice technique.

Basically the quiz worked by asking a series of questions to see where she stood on the issues. It then aligns her choices against the stand taken by the candidate (if you want to try one, here is one from the GOP Primaries this year). In essence it is a Configurator. Instead of building the perfect product for you (as you would with a Configurator) you build the perfect candidate. There are a couple of problems with this application.

First, Configurators allow you to build the ideal but generally don’t give a clear idea of what choices you might make if that ideal were not available (our proprietary Texo™ helps overcome that issue). In politics it is not unusual for voting decisions to hinge on a single issue and unlike products you can’t decide to add or subtract an important feature.  

I’ll give you the simple answer. Surveys!

No, I don’t mean looking at whatever survey happens to catch your eye or tickles your (or your favorite network or blog’s) ideological fancy. I mean using a system that is powered by old-fashioned surveys and making very, very good explanations and predictions based on them. There is someone who has been doing exactly that for several years now, and it makes sense for anyone interested in surveys to understand how he does it. I’m talking, of course, about Nate Silver at fivethirtyeight.com.

Interestingly, Silver does not actually conduct a single survey himself. Instead, he has built a database containing thousands of surveys and used some simple and clear rules to analyze them. Based on these rules and the statistical models he has built, he is able to provide the best unbiased view of the race. All this from survey data. How does he do it? Let’s take a look at some (and by no means all) of his rules.
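One general principle behind aggregation of this kind is weighting surveys rather than treating them all equally. Here is a hedged sketch of that idea, weighting each poll by sample size and recency; the polls, the half-life, and the sqrt(n) weighting are my own invented illustration, not Silver's model:

```python
# Invented illustration of weighted poll averaging (not Silver's model).
import math

# (candidate share in %, sample size, days before election day)
polls = [(49.0, 800, 2), (51.0, 1200, 5), (48.0, 600, 12)]
HALF_LIFE_DAYS = 7  # assumed: a poll's influence halves every 7 days

def weight(sample_size: int, age_days: int) -> float:
    recency = 0.5 ** (age_days / HALF_LIFE_DAYS)
    return math.sqrt(sample_size) * recency  # sqrt(n): diminishing returns

total_weight = sum(weight(n, d) for _, n, d in polls)
weighted_avg = sum(s * weight(n, d) for s, n, d in polls) / total_weight
print(f"Weighted average: {weighted_avg:.1f}%")
```

The point of the sketch is simply that a big, fresh survey should move the average more than a small, stale one.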

When we dropped my daughter off for her first year of college a few weeks back, my parting words were “Be true to yourself”. I thought this reflected both my accepting that my influence on her was now very limited and my hope that whatever good I’ve done would be put into practice. It strikes me that researchers too should heed the advice.

Our industry has changed and continues to change. Many of the old rules either no longer work or can’t be easily applied to the new tools at our disposal. So how can we apply what we know? A philosophy like “be true to yourself” allows us to do just that.

Personally it has allowed me to accept that representative sampling is no longer the most critical rule (it can’t be in a world where truly representative sampling is too slow and costly). It doesn’t mean I take any respondents I can get…care in trying to get as representative a sample as we can remains important. It just isn’t a stone cold requirement of quantitative research.  

The Olympics of Statistics

Posted in A Day in a (MR) Life
Watching sports provides a lot of great entertainment. The thrill of victory, the agony of defeat and all that. It also provides many great opportunities for never-ending arguments about just how great various sports achievements are. Often these arguments are bolstered by the misuse of statistics. One such example was the constant reference to Michael Phelps as “the Greatest Olympian Ever”, which was based on the fact that he’d won more medals than any other athlete in history.

To be clear, I’m sure an argument can be made that he is the greatest ever, but the use of one number, medal count, to determine that really bothers me. As often happens in the media, the number is looked at in only one context (compared to the number of medals other athletes have won) rather than considering a great number of other factors:

Protecting the environment is in our collective best interest. Certainly, that’s a given, but people individually don’t always act in their long-term best interests (as behavioral economists posit), so why do we think companies would do so?

Turns out, employers are doing a lot to conserve and protect the environment and natural resources – at least according to TRC’s online panelists we surveyed this spring.

Nearly three-quarters of our panelists who are employed full or part-time told us their employer was actively doing at least one of five activities related to conservation and energy preservation. The larger the employer, the greater the participation. While we can't project our findings to corporate America as a whole, this is certainly encouraging news for our planet.

On vacation I read a number of books (love my Kindle), including Why Nations Fail by Daron Acemoglu and James Robinson and Imagine by Jonah Lehrer. While the two are clearly quite different (one on what has allowed some nations to grow and endure while others fail, the other on unlocking the creative processes of the brain), I took away lessons for my work from both.

Why Nations Fail isn’t a business book. It is more of a history book than anything, but I saw parallels with what we are facing. The book details a long string of historical examples of nations that either failed outright or saw some success but then reversed course. The central thesis is that nations that succeed over time always share the same factor: truly inclusive systems, meaning everyone has a chance to succeed on an equal footing.

With the recent Supreme Court ruling, it appears that HealthCare Reform is here. Regardless of which side of the fence consumers fall on, there is important information they should understand about HCR in order to make critical choices for their care and coverage. We were interested in finding out how well informed they are now, to see how far we need to go in educating them about their healthcare choices in the coming years. Just under half consider themselves to be slightly knowledgeable, which is about where we’d expect consumers to be at this stage. One quarter consider themselves knowledgeable and a third report that they are not knowledgeable.

Over the past couple of years there have been few topics as hot as “bank fees”. The financial collapse of 2008 started a chain reaction that included lots of consumer outcry and intense regulatory scrutiny. As a result, banks got squeezed…hard. Whether they deserved it or not is a debate for others who are smarter and better informed than I am, but what even I can figure out is that when a business starts to lose money and has its revenue streams cut, it has to identify ways to stop the bleeding. In bank-speak, that means raising fees.

As a consumer, I don't like fees any more than anybody else does, but I also recognize that a business is in business to make money. Rather than curse the fates, or fees in this case, I did what I do best...I researched the issue.

I spent a good bit of last week at the MRA conference in San Diego. The weather was overcast and cloudy for the first couple of days, a perfect metaphor for the general mood of the industry and the uncertain outlook the future holds for us. But as always, I saw a lot to be optimistic about. In particular, the first and second-to-last presentations I watched featured experienced researchers who are enthusiastically embracing the opportunities that exist today.

Hal Bloom of Sage Software talked about their satisfaction research using a standard likelihood to recommend approach. They attempt to survey every customer every year and succeed in getting 20% of them to respond. This means tens of thousands of surveys with a multiple of that in terms of open ended responses. Sage makes extensive use of text recognition software to determine sentiment and help sort out who their most vocal promoters and detractors are. A great use of new technology, but what struck me even more was what they do next.

Shane Frederick (Associate Professor at Yale University’s School of Management) gave a talk on Behavioral Economics at our recent research conference that got me thinking. But before we tap into the scary place that is my brain, let’s consider what behavioral economics is. Most of us with a formal business education have taken at least one if not several economics classes, during which we were exposed to market theories based on assumptions that sounded reasonable in principle but that didn’t really represent how things worked in real life. Behavioral economics, Shane explained, is the study of economics when those assumptions are relaxed, and the relaxation of one of them, that people act rationally, is what got my attention.

One of the examples Shane used to make his point involved a pivotal moment late in a 2009 football game between the New England Patriots and the Indianapolis Colts. Bill Belichick, the coach of the Patriots, decided to go for it on 4th and 2 deep in his own territory. The attempt failed, the Colts scored after the ensuing change of possession and won the game, and nearly everyone in the sports world pointed to Belichick’s seemingly insane decision. But was it really insane?

Like any research, market research has always recognized that to be certain the results can be projected to an entire population, you need to eliminate any bias. We worried about things like:

  • Representativeness Effects – Needed to not only make sure we selected a random representative sample, but then do everything possible to maximize the percentage of people who completed the survey.
  • Interviewer Effects – Surveys needed to be done identically. If one was done by mail, all should use identical forms. If done by phone, interviewers needed to be careful not to lead respondents and to keep pacing at a consistent rate.
  • Framing Effects – If responses to one question could bias a later response, then the question order should be changed to account for it. In cases where changing the order merely changes which question biases which, use rotation or split samples so that order effects can be measured and mitigated.

I know this is a simplified view of things, but the above three do get at the major forms of bias that we seek to eliminate in market research. In this blog, I'll focus on representativeness and at some point in the future I'll cover the other two.
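The rotation idea mentioned above is straightforward to implement. Here is a minimal sketch, with placeholder question text, that gives each respondent a reproducible random question order:

```python
# Minimal sketch of per-respondent question rotation; the question
# wording is a placeholder, not from any real survey.
import random

QUESTIONS = [
    "Q1: overall satisfaction",
    "Q2: price fairness",
    "Q3: likelihood to recommend",
]

def ordered_for(respondent_id: int) -> list:
    """Return the question order shown to one respondent."""
    rng = random.Random(respondent_id)  # seeded: reproducible per respondent
    order = QUESTIONS[:]
    rng.shuffle(order)
    return order

# Different respondents see different orders; the same respondent always
# sees the same one, so order effects can be analyzed after fielding.
print(ordered_for(1))
print(ordered_for(2))
```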

Advertisers and researchers do a lot of testing to determine how effective their advertising is prior to launching a campaign or message. We look for ways to get inside consumers’ heads, and as technology improves, we are afforded interesting glimpses into how consumers process information and make decisions. As my colleague Rajan pointed out in his blog, different areas of the brain lead to different types of decision-making. Nobel Prize winner Daniel Kahneman posits that human thinking can be classified into two forms: System 1, which operates automatically, and System 2, which requires mental effort (I paraphrase). Jonah Lehrer, author of How We Decide, asserts in his blog: “Our best decisions are a finely tuned blend of both feeling and reason and the precise mix depends on the situation. When buying a house, for example, it’s best to let our unconscious mull over the many variables. But when we’re picking a stock, intuition often leads us astray. The trick is to determine when to use the different parts of the brain, and to do this, we need to think harder (and smarter) about how we think.”

With all of this exciting work being done in the field of neuroscience and behavioral economics, I wondered what kinds of answers we would get if we simply asked consumers directly what they think motivates them in considering advertising. Do they believe they respond to characters like the Geico gecko? Or is it really just a function of what they need at the time?
