Recent blog posts

We at TRC conduct a lot of choice-based research, with the goal of aligning our studies with real-world decision-making. Lately, though, I’ve been involved in a number of projects in which the primary objective is not to determine choice, but rather awareness. Awareness is the first, and arguably the most critical, part of the purchase funnel. After all, you can’t very well buy or use something if you don’t know it exists. So getting the word out about your brand, a new product or a product enhancement matters.

Awareness research presents several challenges that aren’t necessarily faced in other types of research. Here’s a list of a few items to keep in mind as you embark on an awareness study:

Don’t tip your hand. If you’re measuring awareness of your brand, your ad campaign or one of your products, do not announce at the start of the survey that your company is the sponsor. Otherwise you’ve influenced the very thing you’re trying to measure. You may be required to reveal your identity (if you’re using customer emails to recruit, for example), but you can let participants know up front that you’ll reveal the sponsor at the conclusion of the survey. And do so.

The more surveys the better. Much of awareness research focuses on measuring what happens before and after a specific event or series of events. The most prevalent use of this technique is in ad campaign research. A critical decision factor is how many surveys you should do in each phase. And the answer is, as many as you can afford. The goal is to minimize the margin of error around the results: if your pre-campaign awareness score is 45% and your post-campaign score is 52%, is that a real difference? You can be reasonably assured that it is if you surveyed 500 in each wave, but not if you only surveyed 100. The more participants you survey, the more secure you’ll be that the results are based on real market shifts.
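
To make the arithmetic concrete, here is a minimal sketch in Python of the two-proportion test behind that judgment, using the 45%/52% figures from the example above; the function name and the 95% cutoff are illustrative choices, not a prescribed TRC procedure:

```python
# Two-proportion z-test: is a 45% -> 52% awareness lift real,
# or within the margin of error at a given sample size per wave?
from math import sqrt

def lift_is_significant(p1, p2, n1, n2, z_crit=1.96):
    """Two-sided test of p2 vs. p1 at roughly 95% confidence."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return abs(z) > z_crit, z

for n in (100, 500):
    sig, z = lift_is_significant(0.45, 0.52, n, n)
    print(f"n={n} per wave: z={z:.2f}, significant at 95%? {sig}")
```

Run it and the 7-point lift clears the bar at 500 completes per wave but not at 100, which is exactly the point about affording as many surveys as you can.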

Match your samples. Regardless of how many surveys you do in each wave, it’s important that the samples are matched. By that we mean that the make-up of the sample should be as consistent as possible from wave to wave. Once again, we want to make certain that results are “real” and aren’t due to methodological choices. You can do this ahead of time by setting quotas, after the fact through weighting, or both (a simple weighting sketch follows below). Of course, you can’t control for every single variable. At the very least, you want the key demographics to align.
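
For illustration, here is one minimal way to compute after-the-fact weights so that a wave matches a target make-up on a single demographic; the gender targets and counts are invented for the example:

```python
# Post-stratification weights: make wave 2's gender mix match wave 1's.
# Targets and counts below are invented for illustration.
wave1_share = {"male": 0.48, "female": 0.52}   # target make-up
wave2_counts = {"male": 280, "female": 220}    # achieved wave-2 sample

n2 = sum(wave2_counts.values())
weights = {g: wave1_share[g] / (wave2_counts[g] / n2) for g in wave2_counts}
for g, w in weights.items():
    print(f"{g}: weight {w:.3f}")   # male ~0.857, female ~1.182
```

Real studies typically balance several variables at once (rim weighting), but the principle is the same: divide the target proportion by the achieved proportion.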

...

As a market researcher who has studied the health insurance industry for over two decades, I’ve seen a dramatic shift in consumer thinking this year: consumers are being forced to think harder than ever before to determine which health insurance plan is best for their family.

While I have the benefit of working directly with health insurance companies on product development, numerous articles are published each week illustrating some of the challenges consumers are facing with their new plans. These experiences make it clear why conjoint (Discrete Choice) is such a strong tool to understand consumer preferences for different health insurance plan components.

As I digest all of this information, a number of themes continue to surface:

High deductibles – consumers need to know what is subject to the deductible and what isn’t (preventive, Rx etc.). It’s unlikely that insurers want consumers to avoid preventive care as a way to manage their costs, and yet that is exactly what some consumers are doing.

Limited Network – consumers are learning the hard way that you get what you pay for. There are many stories of consumers having to drive ridiculous distances to get treatment from an in-network provider, or those who validated that their physician accepts their carrier to later learn that they don’t accept all plans offered by the carrier. Many are also having difficulty getting an appointment within a reasonable period of time, and some have even visited an in-network hospital and received a bill from an out-of-network physician who treated them there.  

...

I begin every weekday by driving through a toll plaza on the Pennsylvania Turnpike to get to work. By this time, I haven’t usually had my morning cup of coffee yet; therefore, my mathematical skills are probably not always up to par. So, I take the easy way out and use my E-ZPass, which saves me the daily burden of counting out change to make my way through the toll booth.

Overall, the E-ZPass system seems relatively straightforward. You use a credit card to open an account and you receive an electronic tag, or transponder, that has your personal billing and vehicle information embedded in it. You mount the transponder on the dashboard or windshield of your vehicle; as you drive through the toll booth, a receiver detects the tag’s signal, registers your information and charges your account accordingly. When all is said and done, you see the polite green light that says “Thank You” (unless you have a low balance, of course) and you are on your merry way. Quick and simple, right?

Before I began working in market research, I wouldn’t have thought much more about the E-ZPass system other than that it gets me where I need to go quickly. Now that I’m almost a year into my market research career, with more of a research-oriented point of view, I got to wondering in more depth about the E-ZPass system and how its operators researched the toll-user market to find out whether the new toll system would prosper. After a little digging, I found that they used the ever-reliable conjoint analysis.

The scholarly article, Thirty Years of Conjoint Analysis: Reflections and Prospects by Paul Green, Abba Krieger and Yoram Wind, discusses the use of conjoint analysis in an abundance of studies over the past 30 years. One of the studies the article focuses on is the research done prior to the development and implementation of the E-ZPass system. E-ZPass has been in the works for about 12 years now; market research on it began in 1992, when two states, New Jersey and New York, conducted conjoint research with a sample of about 3,000 to gauge the system’s potential. The study used seven attributes, including the number of lanes available, tag acquisition, cost, toll prices, invoicing and other potential uses of the transponder. Once the respondents’ data were collected, they were analyzed in total as well as by region and facility. The study yielded an estimated 49% usage rate, while the actual usage rate seven years later was a close 44%. Though neither percentage was extremely high, the usage rate was expected to continue increasing in the future.

Green, Krieger and Wind make a fair point in their article when they say that conjoint analysis has the ability “to lead to actionable findings that provide customer-driven design features and consumer-usage or sales forecasts”. The E-ZPass study is a great example of this: the projected usage rate ended up remarkably close to the actual one. Many of the studies we execute here at TRC use conjoint analysis because of its dependable predictive power. Whether clients are looking to launch a new product or service or to improve an existing one, conjoint analysis gives them direction for a successful plan.
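
As a rough illustration of how a conjoint turns estimated part-worth utilities into a usage forecast like the one above, here is a minimal logit share-of-preference sketch; the attributes and utility values are invented, not the actual E-ZPass numbers:

```python
# Logit share-of-preference from conjoint part-worths. All utilities are
# invented for illustration; a real study estimates them from choice data.
from math import exp

partworths = {
    "tag_cost": {"free": 0.8, "$25": 0.0},
    "discount": {"10% off tolls": 0.6, "no discount": 0.0},
    "lanes":    {"dedicated lane": 0.5, "shared lane": 0.0},
}

def utility(profile):
    return sum(partworths[attr][level] for attr, level in profile.items())

offers = {
    "full offer":     {"tag_cost": "free", "discount": "10% off tolls",
                       "lanes": "dedicated lane"},
    "stripped offer": {"tag_cost": "$25", "discount": "no discount",
                       "lanes": "shared lane"},
}
exp_u = {name: exp(utility(p)) for name, p in offers.items()}
total = sum(exp_u.values())
for name, eu in exp_u.items():
    print(f"{name}: {eu / total:.0%} share of preference")
```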

...

Truth or Research

Posted in New Research Methods

I read an interesting story about a survey done to determine whether people are honest with pollsters. Of course such a study is flawed by definition (how can we be sure those who say they always tell the truth are not lying?). Still, the results do back up what I’ve long suspected…getting at the truth in a survey is hard.

The study indicates that most people claim to be honest, even about very personal things (like finances). Younger people, however, are less likely to be honest with survey takers than others. As noted above, I suspect that if anything, the results understate the potential problem.

To be clear, I don’t think that people are being dishonest just for the sake of being dishonest…I think it flows from a few factors.

First, some questions are too personal to answer, even on a web survey. With all the stories of personal financial data being stolen or compromising pictures being hacked, it shouldn’t surprise us that some people might not want to answer certain kinds of questions. We should really think about that as we design questions. For example, while it might be easy to ask for a lot of detail, we might not always need it (income ranges instead of exact figures, for example). To the extent we do need it, finding ways to build credibility with the respondent is critical.

Second, some questions might create a conflict between what people want to believe about themselves and the truth. People might want to think of themselves as “outgoing,” and so if you ask them, they might say they are. But their behavior might not line up with that self-image. The simple solution is to ask questions related to behavior without ascribing a term like “outgoing.” Of course, it is always worth asking directly as well (knowing both the self-image and the behavior could make for interesting segmentation variables, for example).

...

My daughter was performing in The Music Man this summer, and after seeing the show a number of times, I realized it speaks to the perils of poor planning…in forming a boys’ band and in conducting complex research.

For those of you who have not seen it, the show is about a con artist who gets a town to buy instruments and uniforms for a boys’ band, in exchange for which he promises he’ll teach them all how to play. When they discover he is a fraud they threaten to tar and feather him, but (spoiler alert) his girlfriend gets the boys together to march into town and play. Despite the fact that they are awful, the parents can’t help but be proud and everyone lives happily ever after.

It is to some extent another example of how good we are at rationalizing. The parents wanted the band to be good and so they convinced themselves that they were. The same thing can happen with research…everyone wants to believe the results so they do…even when perhaps they should not.

I’ve spent my career talking about how important it is to know where your data have been. Bias introduced by poor interviewers, poorly written scripts, unrepresentative samples and so on will impact results, and yet these flawed data will still produce cross tabs and analytics. Rarely will they be so far off that the results can be dismissed out of hand.

The problem only gets worse when using advanced methods. A poorly designed conjoint will still produce results. Again, more often than not these results will be such that the great rationalization ability of humans will make them seem reasonable.

...

While there is so much bad news in the world of late, here in Philly we’ve been captivated by the success of the Taney Dragons in the Little League World Series. While the team was sadly eliminated, they continue to dominate the local news. It got me thinking about what it is that makes a story like theirs so compelling and, of course, how we could employ research to sort it out.

There are any number of reasons why the story is so engrossing (especially here in Philly). Is it the star player Mo’ne Davis, the most successful girl ever to compete in the Little League World Series? Is it the fact that the Phillies are doing so poorly this year? Or do we just like seeing a team of players from various ethnic and socio-economic backgrounds working together and achieving success? Of course, it might also be that we are tired of bad news and enjoy having something positive to focus on (even in defeat, the team fought hard and exhibited tremendous sportsmanship).

The easiest thing to do is to simply ask people why they find the story compelling. This might get at the truth, but it is also possible that people will not be totally honest (for example, the disgruntled Phillies fan might not want to admit that motivation) or that they don’t really know what has drawn them in. Direct questioning might also identify the most important factor while missing other critical ones.

We could employ a technique like Max-Diff and ask them to choose which features of the story they find most compelling. This would provide a fuller picture, but is still open to the kinds of biases noted above.

Perhaps the best method would be to use a discrete choice approach. We take all the features of the story and either include them or leave them out of a “story description,” then ask people which story they would most likely read. We can then use analytics on the back end to sort out what really drove the decision.
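
Here is a toy version of that exercise with simulated rather than real respondents; the story features, their “true” appeal values and the simple counting analysis are all invented for illustration:

```python
# Toy discrete choice on story features: build story descriptions by
# switching features on/off, show pairs, record picks, then compare
# pick rates when each feature is present. All values are invented.
import itertools
import random

random.seed(7)
features = ["star girl pitcher", "hometown team",
            "underdog run", "sportsmanship"]
true_appeal = {"star girl pitcher": 1.2, "hometown team": 0.9,
               "underdog run": 0.5, "sportsmanship": 0.2}

profiles = list(itertools.product([0, 1], repeat=len(features)))

def appeal(profile):
    base = sum(on * true_appeal[f] for on, f in zip(profile, features))
    return base + random.gauss(0, 0.8)      # respondent noise

tallies = {f: [0, 0] for f in features}     # [chosen, shown] when present
for _ in range(2000):                       # simulated paired choice tasks
    a, b = random.sample(profiles, 2)
    chosen = a if appeal(a) > appeal(b) else b
    for shown in (a, b):
        for on, f in zip(shown, features):
            if on:
                tallies[f][1] += 1
                if shown is chosen:
                    tallies[f][0] += 1

for f in features:
    c, s = tallies[f]
    print(f"{f}: chosen {c / s:.0%} of the times it appeared")
```

A production study would fit a proper choice model rather than simple counts, but even the counts recover the feature ranking built into the simulation.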

...

I'm a runner and enjoy participating in races. Last May I ran the Delaware Half Marathon and had my worst race ever. What happened? Poor planning. I failed to put together a training plan to prepare me for my race.

This can sometimes happen in market research. Poor planning can lead to disastrous results that provide little insight or fail to answer the objectives of the research. Planning is especially important when advanced analytics are used, for example conjoint, which is often used in product development or pricing research. There are many questions to ask during the planning phase of a conjoint design. How should we frame the exercise? How many features should be evaluated? How many levels for each feature? How many product choices should be presented to a respondent at a time? How should each feature and level be described? Should any prohibitions be used? Sometimes we can lose sight of the research objective amid all these details; a good conjoint plan will keep all parties focused on the end goal. These are all issues I’m contemplating as I take the time to properly plan and design my own conjoint exercise (stay tuned for results in my next blog!).
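
As one small example of this kind of planning arithmetic, here is a back-of-envelope sizing check built on a rule of thumb often attributed to Johnson and Orme of Sawtooth Software (respondents times tasks times alternatives per task, divided by the largest number of levels, should be at least 500); the attribute list is hypothetical:

```python
# Back-of-envelope conjoint sizing using the n * t * a / c >= 500
# rule of thumb. The attribute list here is hypothetical.
levels = {"brand": 4, "price": 5, "warranty": 3, "color": 3}

params = sum(k - 1 for k in levels.values())   # effects-coded part-worths
print(f"{params} part-worth parameters to estimate")

tasks, alts = 10, 3
min_n = 500 * max(levels.values()) / (tasks * alts)
print(f"Suggested minimum sample: about {min_n:.0f} respondents "
      f"({tasks} tasks of {alts} alternatives; largest attribute has "
      f"{max(levels.values())} levels)")
```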

A well thought out plan ensures quality results just as a well thought out running plan ensures a good race! After my half marathon disaster I planned for my next race the same way I would for a conjoint. I considered a number of questions while designing my training plan. How far in advance should I train? How many times a week should I run? Should I enlist a running buddy for the longer runs? My goal was to run a good race. I'm happy to report the planning paid off as I completed the Marine Corps Marathon (my first marathon!) in the time I was hoping for.


Some months ago, Lily Allen mistakenly received an email containing harsh test group feedback regarding her new album. Select audience members believed the singer to be retired and threw in some comments that I won’t quote. If you are curious, the link to her Popjustice interview will let you see them in a more raw form. Allen returned the favor with some criticism of market research itself:

“The thing is, people who take part in market research: are they really representative of the marketplace? Probably not.” –Lily Allen

The singer brings up a valid concern, and one of the many questions I pondered five months ago when I first took my current researcher-in-training position with TRC. Researchers are responsible for engaging a representative sample and delivering insights. How do we uphold those standards to ensure quality? Now that I have put in some time and have a few projects under my belt, I have assembled a starter list to address those concerns:

Communicate: All Hands on Deck

In order to complete any research project, there needs to be a clear objective. What are we measuring? Are we using one of our streamlined products, such as Message Test Express™, or will there be a conjoint involved? This may seem obvious, but it is also critical. A team of people is behind each project at TRC, including account executives, research managers, project directors and various data experts. More importantly, the client should also be on the same page and kept in the loop. Was the artist the main client for the research done? My best guess is no; the feedback was not meant to be a tool for reworking the album.

Purpose

Was the research done on Lily Allen’s album even meant to be representative? Qualitative interviews can produce deep insights among a small, non-representative group of people. This can be done as a starting point or a follow-up to a project, or even stand alone, depending on the project objectives.

...

I read a blurb in The Economist about UFO sightings. They charted some 90,000 reports and found that UFOs are, as they put it, "considerate". They tend not to interrupt the work day or sleep. Rather, they tend to be seen far more often in the evening (peaking around 10 PM) and more on Friday nights than other nights.
The Economist dubbed the hours of maximum UFO activity "drinking hours" and implied that drinking was in fact the cause of all those sightings.
As researchers, we know that correlation does not mean causation. Their analysis is interesting and possibly correct, but it is superficial. One could argue (and I'm sure certain "experts" on the History Channel would) that it is in fact the UFO activity that causes people to want to drink, but by limiting their analysis to two factors (time of day and number of sightings), The Economist ignores other explanations.
For example, the low number of sightings during sleeping hours would make perfect sense (most of us sleep indoors with our eyes closed). The same might be true for the lower number during work hours (many people don't have ready access to a window and those who do are often focused on their computer screen and not the little green men taking soil samples out the window).
As researchers, we need to consider all the possibilities. Questionnaires should be constructed to include questions that help us understand all the factors that drive decision making. Analysis should, where possible, use multivariate techniques so that we can truly measure the impact of one factor over another. Of course, constructing questions that allow respondents to express their thinking is also key...while a long attribute rating battery might seem "comprehensive," it is more likely mind-numbing for the respondent. We prefer to use techniques like Max-Diff, Bracket™ or Discrete Choice to figure out what drives behavior.
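
A small simulation shows why the multivariate view matters. Below, sightings are driven entirely by being awake and outdoors, and drinking merely tags along; a two-predictor regression sorts this out where the simple correlation cannot. All numbers are invented:

```python
# Simulated example: sightings depend only on being awake and outdoors;
# drinking is correlated with that but has no effect of its own.
import numpy as np

rng = np.random.default_rng(0)
n = 500                                    # simulated evenings
awake_outdoors = rng.uniform(0, 1, n)      # the true driver
drinking = 0.8 * awake_outdoors + rng.normal(0, 0.2, n)
sightings = 5 * awake_outdoors + rng.normal(0, 1, n)

# Bivariate view: drinking looks predictive.
print("corr(drinking, sightings):",
      round(float(np.corrcoef(drinking, sightings)[0, 1]), 2))

# Multivariate view: with both predictors in the model,
# drinking's coefficient collapses toward zero.
X = np.column_stack([np.ones(n), awake_outdoors, drinking])
coefs, *_ = np.linalg.lstsq(X, sightings, rcond=None)
print("intercept, awake_outdoors, drinking:", np.round(coefs, 2))
```
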
Hopefully I've given you something to think about tonight when you are sitting on the porch, having a drink and watching the skies.


Rita’s Italian Ice is a Pennsylvania-based company that sells its icy treats through franchise locations on the East Coast and several states in the Midwest and West.

Every year on the first day of spring, Rita’s gives away full-size Italian ices to its customers. For free. No coupon or other purchase required. It’s their way of thanking their customers and launching the season (most Rita’s are only open during the spring and summer months).

Wawa, another Pennsylvania company, celebrated 50 years in business with a free coffee day in April.  

Companies are giving their products away for free! What a fantastic development for consumers! I patronize both of these businesses, and yet, on their respective free give-away days, I didn’t participate. I like water ice (Philadelphia’s term for Italian ice) and I really like coffee. So what’s the problem?

In the case of Rita’s, the franchise location near me has about 5 parking spots, which on a normal day is too few. I was concerned about the crowds. On the Wawa give-away day, I forgot about it as the day wore on. That made me wonder what other people do when they learn that retailers are giving away their products. So, having access to a web-based research panel (a huge perk of my job), I asked 485 people about it. And here are the 4 things I learned:

...

In my previous post I applauded Matthew Futterman’s suggestion that two key changes to baseball’s rules will produce a shorter, faster-paced game, one that will attract younger viewers. While I may not be that young, I’m certainly on board with speeding up the game. I believe that faster-paced play will lead to greater engagement, and greater engagement will lead to greater enjoyment.

In some sense this is similar to our position on marketing research methods. We want to engage our respondents because the more focused on the task they become, the more considered their responses will be. One of our newer tools, Bracket™, allows respondents to prioritize a long list of items in a tournament-style approach. Bracket™ has respondents make choices among items, and as the tournament progresses the choices become more relevant (and hopefully more enjoyable).
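
For the curious, here is a generic single-elimination prioritizer in the spirit of, though certainly not the actual implementation of, Bracket™; the message list and the stand-in “respondent” rule are invented:

```python
# A generic tournament-style prioritizer: items face off in rounds,
# winners advance, so later choices pit stronger items against each
# other. Illustrative only, not TRC's actual Bracket(TM) implementation.
import random

def run_tournament(items, prefer):
    """prefer(a, b) returns whichever item the respondent picks."""
    items = items[:]
    random.shuffle(items)
    while len(items) > 1:
        next_round = []
        for i in range(0, len(items) - 1, 2):
            next_round.append(prefer(items[i], items[i + 1]))
        if len(items) % 2:                 # odd item out gets a bye
            next_round.append(items[-1])
        items = next_round
    return items[0]

messages = ["saves time", "saves money", "easy to use",
            "trusted brand", "award-winning support"]
random.seed(3)
# Stand-in for a live respondent: always pick the shorter message.
print("Top message:", run_tournament(messages,
                                     lambda a, b: min(a, b, key=len)))
```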

Meanwhile, back to baseball. The rule changes Futterman suggests are very simple ones:

Once batters step into the box, they shouldn't be allowed to step out. Otherwise it's a strike.

If no one is on base, pitchers get seven seconds to throw the next pitch. Otherwise it's a ball.

...

Sandy Hingston wrote an article in the March 2014 Philadelphia Magazine about Millennials’ lack of interest in history, specifically as it relates to baseball (read the abridged version here). Later in the article, she quotes Matthew Futterman, who posited in the Wall Street Journal that two key changes to baseball’s rules would produce a shorter, faster-paced game that will attract more youngsters. This notion didn’t sit well with Sandy Hingston.

But it did sit well with me. Very well, in fact. I’m a Boomer like Hingston, not a Millennial, but I find myself increasingly frustrated by things that, put simply, take too long. Baseball is one of them. In fact, my TV viewing of the Phillies (go Phils!) decreased as my viewing of another professional sport, golf, was on the rise.

Anybody who watches golf on TV, or attends an event live, will attest that players can take a very long time between shots, which is essentially the same criticism lobbed at pitchers who take too long between throws. Slow play in golf is a hot topic, and the golf powers-that-be are quite willing to put players “on the clock” for taking their good sweet time. So to be fair, both sports are grappling with this issue.

A first or second round of professional golf will take the better part of a day to televise. A 9-inning baseball game, in contrast, lasts around 3 hours. Given the disparity between how long each event takes, one would think that I, as someone interested in fast action, would prefer watching baseball. But that’s just not the case.

This got me thinking about an issue that we grapple with in market research: respondent tedium. Long attribute batteries of low personal relevance can tax a respondent’s patience. Even being compensated doesn’t always overcome the glaze that forms over their eyes when faced with mundane, repetitive tasks. That’s why we do our best to keep respondents engaged by having them make choices (our Bracket™ technique is a good example of this). In Bracket™, the choices become more relevant as the task progresses – not unlike how play at the end of a close game or match becomes more exciting to the viewer.

...

Recently I had lunch with my colleague Michel Pham at Columbia Business School. Michel is a leading authority on the role of affect (emotions, feelings and moods) in decision making. He was telling me about a very interesting phenomenon called the Emotional Oracle Effect, through which he and his colleagues examined whether emotions can help people make better predictions. I was intrigued. We tend to think of prediction as a very rational process: collect all relevant information, use some logical model for combining the information, then make the prediction. But Michel and his colleagues were drawing on a different stream of research that showed the importance of feelings. So the question was, can people make better predictions if they trust their feelings more?

To answer this question they ran a series of experiments. As we researchers know, experiments are the best way to establish a causal linkage between two phenomena. To ensure that their findings were solid, they ran eight separate studies in a wide variety of domains, including predicting a Presidential nomination, movie box-office success, the winner of American Idol, the stock market, college football and even the weather. While in most cases they employed a standard approach to manipulating how much people trusted their feelings, in a couple of cases they looked at differences between people who naturally trusted their feelings more (or less).

Across these various scenarios the results were unambiguous. When people trusted their feelings more, they made more accurate predictions. The box-office showing of three movies (48% vs. 24%), the American Idol winner (41% vs. 24%), the NCAA BCS Championship (57% vs. 47%), the Democratic nomination (72% vs. 64%) and the weather (47% vs. 28%) were some of the cases in which people who trusted their feelings predicted better than those who did not. This, of course, raises the question: why? What is it about feelings and emotion that allows a person to predict better?

The most plausible explanation they propose (tested in a couple of studies) is what they call the privileged-window hypothesis. This builds on the theoretical argument that “rather than being subjective and incomplete sources of information, feelings instead summarize large amounts of information that we acquire, consciously and unconsciously, about the world around us.” In other words, we absorb a huge quantity of information but don’t really know what we know. Thinking rationally about what we know and summarizing it seems less accurate than using our feelings to express that tacit knowledge. So, when someone says they did something because “it just felt right,” it may not be so much a subjective decision as an encapsulation of acquired knowledge. The affective/emotional system may be better at channeling the information and making the right decision than the cognitive/thinking system.

So, how does this relate to market research? When trying to understand consumer behavior through surveys, we usually try to get respondents to use their cognitive/thinking system. We explicitly ask them to think about questions, consider options and so on, before providing an apparently logical answer. This research would indicate that there is a different way to go. If we can find a way to get consumers to tap into their affective/emotional system we might better understand how they arrived at decisions.

...

As most anyone living on the East Coast can attest, the winter of 2013-2014 was, to put it nicely, crappy. Storms, outages, freezing temperatures…. We had a winter the likes of which we haven’t experienced in a while. And it wasn’t limited to the East Coast – much of the US had harsher conditions than normal.

Here in the office we did a lot of complaining. I mean a lot. Every day somebody would remark about how cold it was, how their kids were missing too much school, how potholes were killing their car’s suspension… if there was a problem we could whine about, we did.

Now that it’s spring and we’re celebrating the return of normalcy to our lives, we wonder… just what was it about this past winter that was the absolute worst part of it? Sure, taken as a whole it was pretty awful, but what was the one thing that was the most heinous?

Fortunately for us, we have a cool tool that we could use to answer this question. We enlisted the aid of our consumer panel and our agile and rigorous product Message Test Express™ to find the answer. MTE™ uses our proprietary Bracket™ tool, which takes a tournament approach to prioritizing lists. Our goal: find out which item associated with winter was the most egregious.

Our 200 participants had to live in an area that experiences winter weather conditions, believe that this winter was worse or the same as previous winters, and have hated, disliked or tolerated it (no ski bums allowed).

...

You may have heard about the spat between Apple and Samsung. Apple is suing Samsung for alleged patent infringements relating to features of the iPhone and iPad. The damages claimed by Apple? North of 2 billion dollars. The obvious question is how Apple came up with that number. The non-obvious answer is, partly by using conjoint analysis – the tried and tested approach we often use for product development work at TRC.

Apple hired John Hauser, Professor of Marketing at MIT’s Sloan School of Management, to conduct the research. Prof. Hauser is a well-known expert in the area of product management. He has mentored and coauthored several conjoint-related articles with my colleague Olivier Toubia at Columbia University. For this case, Prof. Hauser conducted two online studies (n=507 for phones and n=459 for tablets) to establish that consumers indeed valued the features that Apple was arguing about. Details about the conjoint studies are hard to get, but it appears that he used Sawtooth Software (which we use at TRC) and the advanced statistical estimation procedure known as Hierarchical Bayes (HB) (which we also use at TRC) to get the best possible results. It also appears that he may have run a conjoint with seven features, incorporating graphical representations to enhance respondent understanding.

There are several lessons to be learnt here for those interested in conducting a conjoint study. First, conjoint sample sizes do not have to be huge; I suspect they are larger than absolutely necessary here because the studies are being used in litigation. Second, he wisely confined the studies to just seven attributes. We repeatedly recommend to clients that conjoint studies not be overloaded with attributes: conjoint tasks can be taxing for survey respondents, and the more difficult they are, the less attention will be paid. Third, he used HB estimation to obtain preferences at the individual level, which is the state-of-the-science approach. Last, he incorporated graphics wherever possible to ensure that respondents clearly understood the features. When designing conjoint studies it is good to take these (and other) lessons into consideration to ensure robust results.

So, what was the outcome?

As a result of the conjoint study, Prof. Hauser was able to determine that consumers would be willing to spend an additional $32 to $102 for features like sliding to unlock, universal search and automatic word correction. Under cross-examination he acknowledged that this was stated preference in a survey and not necessarily what Apple could charge in a competitive marketplace. This is another point that we often make to clients, both in conjoint and other contexts: there is a big difference between answering a survey and actual real-world behavior (where several other factors come into play). While survey results (including conjoint) can be very good comparatively, they may not be especially good absolutely. Apple used the help of another MIT-trained economist to bring in outside information and finally ended up with a damage estimate of slightly more than $2 billion.
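
The generic version of that dollar-metric calculation is straightforward: divide a feature’s utility gain by the utility cost of one dollar, derived from the price attribute’s part-worths. The numbers below are invented, not the litigation figures:

```python
# Dollar-metric willingness to pay from conjoint part-worths.
# All utilities and prices are invented for illustration; this is the
# generic calculation, not Prof. Hauser's actual model.
price_utils = {99: 1.10, 199: 0.20}     # part-worth utility at two prices
feature_gain = {"slide to unlock": 0.45,
                "universal search": 0.30,
                "automatic word correction": 0.25}

# Utility lost per extra dollar, taken from the two price points.
util_per_dollar = (price_utils[99] - price_utils[199]) / (199 - 99)

for feature, gain in feature_gain.items():
    print(f"{feature}: ~${gain / util_per_dollar:.0f} incremental WTP")
```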

...

Market researchers are constantly being asked to do “more with less”. Doing so is both practical (budgets and timelines are tight) and smart (the more we ask respondents to do, the less engaged they will be). At TRC we use a variety of ways to accomplish this, from basic (eliminating redundancies, limiting grids and the use of scales) to advanced (using techniques like Conjoint, Max-Diff and our own Bracket™ to unlock how people make decisions). We are also big believers in using incentives to drive engagement and produce more reliable results. That is why a recent article in the Journal of Market Research caught my eye.

The article was about promotional lotteries. The rules tend to be simple: “send in the proof of purchase and we’ll put your name into a drawing for a brand new car!” The odds of winning are often very remote, which might make some people not bother. In theory, you could increase the chances of participation by offering a bunch of consolation prizes (free or discounted product, for example). In reality, the opposite is true.

One theory would be that the consolation prizes may not interest the person, and thus they are less interested in the contest as a whole. While this might well be true, the authors (Dengfeng Yan and A. V. Muthukrishnan) found that there was more at work. Consolation prizes offer entrants a means to understand the odds of winning that doesn’t exist without them. Seeing, for example, that you have a one in ten million chance of winning may not really register because you are so focused on the car. But if you are told those odds alongside the much better odds of winning the consolation prize, you realize right away that the consolation prize is almost certainly the best you can hope for. Since this prize isn’t likely to be as exciting (for example, an M&M contest might offer a free bag of candy for every 1,000 participants), you have less interest in participating.
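
The anchoring arithmetic is easy to sketch. With the invented odds below, anyone who wins anything almost certainly wins only the candy:

```python
# Invented odds: a 1-in-10,000,000 grand prize next to a 1-in-1,000
# consolation prize. Conditional on winning anything at all, the
# consolation prize is overwhelmingly what you'll get.
p_car = 1 / 10_000_000
p_candy = 1 / 1_000

print(f"P(win the car):       {p_car:.7f}")
print(f"P(win the candy):     {p_candy:.3f}")
print(f"P(candy | any prize): {p_candy / (p_candy + p_car):.2%}")
```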

Since we rely so heavily on incentives to garner participation, it strikes me that these findings are worthy of consideration. A bigger “winner take all” prize drawing might draw in more respondents than paying each respondent a small amount; I can tell you from our own experimentation that this is the case. In some cases we employ a double lottery using our Smart Incentives™ gaming tool (including in our new ideation product Idea Mill™). Here the respondent can win one prize simply by participating and another based on the quality of their answer. Adding the second incentive brings in an additional component of gaming (the first being chance): a competitive element.

Regardless of this paper, we as an industry should be thinking through how we compensate respondents to maximize engagement.

...

We are about to launch a new product called Idea Mill™, which uses a quantitative system to generate ideas and evaluate them all in one step. Our goal was to create a fast and inexpensive means of generating ideas. Since each additional interview we conduct adds cost, we wondered what the ideal number would be.

To determine that we ran a test in which we asked 400 respondents for an idea. Next, we coded the responses into four categories.  

Unique Ideas – Something that no other previous respondent had generated.

Variations on a Theme – An idea that had previously been generated but this time something unique or different was added to it.

Identical – Ideas that didn’t add anything significantly different from what we’d seen before.

...

What Does the Fox Say?

Posted in Market Research

Nate Silver’s much-anticipated (at least by some of us) new venture launched recently. In his manifesto he describes it as a “data journalism” effort, and for those of us who have followed his work over the last five years – from the use of sabermetrics in baseball analysis through the predictions of presidential politics – there is plenty to look forward to. Apart from the above topics, his website focuses on other interesting areas such as science, economics and lifestyle, bringing data-driven rigor and simple explanation to the understanding of all these fields. It follows the template of the blog he ran for the New York Times as well as his bestselling book, The Signal and the Noise: Why So Many Predictions Fail, But Some Don’t. As a market researcher, I found much to like in the basic framework he has laid out for his effort.

In critiquing traditional journalism, Nate describes a quadrant using two axes – Qualitative versus Quantitative, and Rigorous & Empirical versus Anecdotal & Ad-hoc.

[Figure: the quadrant of journalism, Qualitative versus Quantitative on one axis and Rigorous & Empirical versus Anecdotal & Ad-hoc on the other. Source: www.fivethirtyeight.com]

He is looking to occupy the mostly open top left quadrant, while arguing that opinion columnists too often occupy the bottom right quadrant and traditional journalism generally occupies the bottom left. For someone with such a quantitative background, he is not dismissing the qualitative side at all. On the contrary, he argues that it is possible to be qualitative and rigorous and empirical, if one is careful about the observations made (and cites examples of journalists, such as Ezra Klein, who occupy the top right quadrant).

For those of us in market research the qualitative versus quantitative dimension is, of course, very familiar. Somewhat less so is the second dimension – rigorous and empirical versus anecdotal and ad-hoc. But this second dimension is especially important to consider because it directly affects our ability to appropriately generalize the insights we develop. As practicing researchers, we know that qualitative research is excellent for discovery and quantitative is great for generalizations. But we also know that is not always the way things are done in practice.

...

We recently conducted an online survey on behalf of a national food brand in which we displayed various images of a grocery store’s shelf space and asked consumers to select the product they would purchase from among those shown on the shelves. This project was successful at differentiating consumer choice based on how the products were packaged, and gave our client important information on package design direct from their target consumers.

That project got me thinking about how shelf space is a limited resource, and in some cases purchase decisions are influenced as much by what’s not on the shelf as by what’s on it.

For example, my Yoplait Fruplait yogurt has gone missing. And I blame you, Greek yogurt.

Fruplait is a delicious (to me) yogurt-fruit concoction that’s heavy on the fruit. There are four single servings to a pack and there are four fruit flavors from which to choose.

I had a wonderful relationship with Fruplait up until the time Greek yogurt started hitting the shelves. With Greek yogurt muscling in and shelf space at a premium, suddenly, the number of flavors in a given store was reduced. Then some stores stopped carrying Fruplait. Now, none of the four stores at which I typically shop carries it at all (it’s still available at some retailers).

...

I’m happy to work for a research company that embraces the philosophy that the respondent experience should be as close to the consumer experience as possible in order to elicit the most useful and actionable information. To that end, we employ different techniques that allow our survey participants to make choices – similar to what they would do in the real world. In so doing, we can provide results that are informative and actionable.

But enough of the sales pitch. I recently faced a problem that made me think of choice in an entirely new way: what if a consumer has a choice but doesn’t realize it? What are the potential consequences?

In my case, my physician ordered a treatment that required pre-certification by my insurance company. When I called for pre-certification, I inquired about the cost (my doc had warned me that the treatment can be very expensive). I was told it would be covered under a $250 co-pay.

I got the treatment, and several months later the facility that administered it sent me a bill for $1,500. After a lot of phone calls to my doctor, the facility and my insurance company, we finally determined what happened: my treatment can be performed either in a physician’s office (subject to the $250 co-pay) or at an outpatient facility (subject to a $1,500 outpatient deductible). Yet when I initially asked about the cost, the representative only told me about the in-office cost, without informing me that this cost only applied to in-office treatments. I was never told that where I received the treatment had a bearing on what I would pay. So I blindly made my appointment at the treatment facility recommended by my doctor.

We know that decisions should never be made in a vacuum. As researchers, we need to pay attention not only to the choices that we’re putting in front of our survey participants, but also to their awareness of whether these options even exist. For example, we’re about to launch a survey about an add-on to an existing technology. But we need to take into account whether respondents even know that the existing technology is available to them, let alone the add-on. Defining and describing the existing product will help us put participants’ interest in the add-on into context for our client. The more our participants know about their choices, the less likely they are to make a “mistake” in the choice task we put in front of them, and the better the data for our clients.

