Recent blog posts

Market researchers are constantly being asked to do “more with less”. Doing so is both practical (budgets and timelines are tight) and smart (the more we ask respondents to do, the less engaged they will be). At TRC we use a variety of ways to accomplish this, from basic (eliminate redundancies, limit grids and the use of scales) to advanced (use techniques like Conjoint, Max-Diff and our own Bracket™ to unlock how people make decisions). We are also big believers in using incentives to drive engagement and deliver more reliable results. That is why a recent article in the Journal of Market Research caught my eye.

The article was about promotional lotteries. The rules tend to be simple: “send in the proof of purchase and we’ll put your name into a drawing for a brand new car!” The odds of winning are also often very remote, which might make some people not bother. In theory, you could increase the chances of participation by offering a bunch of consolation prizes (free or discounted product, for example). In reality, the opposite is true.

One theory would be that the consolation prizes may not interest the person, making them less interested in the contest as a whole. While this might well be true, the authors (Dengfeng Yan and A.V. Muthukrishnan) found that there was more at work. Consolation prizes offer respondents a way to understand the odds of winning that doesn’t exist without them. Seeing, for example, that you have a one in ten million chance of winning may not really register because you are so focused on the car. But if you are told those odds alongside the much better odds of winning the consolation prize, you realize right away that, at best, chances are you will win the consolation prize. Since this prize isn’t likely to be as exciting (for example, an M&M contest might offer a free bag of candy for every 1,000 participants), you have less interest in participating.
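To see why, it helps to put rough numbers on it. Here is a quick back-of-the-envelope sketch in Python; the odds and prize values are entirely made up for illustration and are not figures from the study.

```python
# Illustrative only: hypothetical odds and prize values, not figures from the study.
grand_prize_odds = 1 / 10_000_000      # chance of winning the car
grand_prize_value = 25_000             # assumed value of the car, in dollars

consolation_odds = 1 / 1_000           # e.g., one free bag of candy per 1,000 entrants
consolation_value = 3                  # assumed value of the candy

# Without a consolation prize, the entrant has only the headline prize to think about.
ev_grand_only = grand_prize_odds * grand_prize_value

# With a consolation prize, the expected value is a touch higher...
ev_with_consolation = ev_grand_only + consolation_odds * consolation_value

# ...but the consolation prize is thousands of times more likely than the car, which is
# the comparison the authors argue makes the long odds on the big prize register.
relative_likelihood = consolation_odds / grand_prize_odds

print(f"EV, grand prize only: ${ev_grand_only:.4f}")
print(f"EV, with consolation: ${ev_with_consolation:.4f}")
print(f"Consolation prize is {relative_likelihood:,.0f}x more likely than the grand prize")
```

The expected value barely moves, but the consolation prize is overwhelmingly the more likely outcome, and once that comparison is salient the big prize suddenly feels out of reach.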

Since we rely so heavily on incentives to garner participation, it strikes me that these findings are worthy of consideration. A bigger “winner take all” prize drawing might draw in more respondents than paying each respondent a small amount; I can tell you from our own experimentation that this is the case. In some cases we employ a double lottery using our Smart Incentives™ gaming tool (including in our new ideation product Idea Mill™). In this case, the respondent can win one prize simply by participating and another based on the quality of their answer. Adding the second incentive brings in another component of gaming (the first being chance): a competitive element.

This paper aside, we as an industry should be thinking through how we compensate respondents to maximize engagement.

...

We are about to launch a new product called Idea Mill™, which uses a quantitative system to generate ideas and evaluate them in a single step. Our goal was to create a fast and inexpensive means of generating ideas. Since each additional interview we conduct adds cost, we wondered what the ideal number would be.

To determine that, we ran a test in which we asked 400 respondents for an idea. Next, we coded the responses into four categories.

Unique Ideas – Something that no other previous respondent had generated.

Variations on a Theme – An idea that had previously been generated but this time something unique or different was added to it.

Identical – Ideas that didn’t add anything significantly different from what we’d seen before.
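For the curious, the diminishing-returns question can be sketched with a quick simulation. Everything below (the 150-idea pool, the Zipf-like popularity weights) is a made-up illustration, not our test data.

```python
import random

random.seed(42)

# Hypothetical setup: 150 distinct ideas exist in the population, a few very common
# and most rare (Zipf-like). These are illustrative numbers, not our test data.
ideas = list(range(150))
weights = [1 / (rank + 1) for rank in range(150)]

def unique_ideas_after(n_respondents: int) -> int:
    """Simulate n respondents each contributing one idea; count distinct ideas seen."""
    draws = random.choices(ideas, weights=weights, k=n_respondents)
    return len(set(draws))

for n in (50, 100, 200, 400):
    print(n, unique_ideas_after(n))
```

The count of new unique ideas flattens quickly as respondents are added, which is exactly the trade-off we were weighing against the cost of each additional interview.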

...

What Does the Fox Say?


Nate Silver’s much-anticipated (at least by some of us) new venture launched recently. In his manifesto he describes it as a “data journalism” effort, and for those of us who have followed his work over the last five years – from the use of sabermetrics in baseball analysis through the predictions of presidential politics – there is plenty to look forward to. Apart from the above topics, his website is focusing on other interesting areas such as science, economics and lifestyle, bringing data-driven rigor and simple explanation to the understanding of all these fields. It follows the template of the blog he ran for the New York Times as well as his bestselling book, The Signal and the Noise: Why So Many Predictions Fail, But Some Don’t. As a market researcher, I found much to like in the basic framework he has laid out for his effort.

In critiquing traditional journalism, Nate describes a quadrant using two axes – Qualitative versus Quantitative, and Rigorous & Empirical versus Anecdotal & Ad-hoc.

Image source: www.fivethirtyeight.com

He is looking to occupy the mostly open top left quadrant, while arguing that opinion columnists too often occupy the bottom right quadrant and traditional journalism generally occupies the bottom left quadrant. For someone with such a quantitative background he is not dismissing the qualitative side at all. On the contrary, he argues that it is possible to be qualitative and rigorous and empirical, if one is careful about the observations made (and cites examples of journalists such as Ezra Klein, who occupy the top right quadrant).

For those of us in market research the qualitative versus quantitative dimension is, of course, very familiar. Somewhat less so is the second dimension – rigorous and empirical versus anecdotal and ad-hoc. But this second dimension is especially important to consider because it directly affects our ability to appropriately generalize the insights we develop. As practicing researchers, we know that qualitative research is excellent for discovery and quantitative is great for generalizations. But we also know that is not always the way things are done in practice.

...

We recently conducted an online survey on behalf of a national food brand in which we displayed various images of a grocery store’s shelf space and asked consumers to select the product they would purchase from among those shown on the shelves. This project was successful at differentiating consumer choice based on how the products were packaged, and gave our client important information on package design direct from their target consumers.

That project got me thinking about how shelf space is a limited resource, and in some cases purchase decisions are influenced as much by what’s not on the shelf as by what’s on it.

For example, my Yoplait Fruplait yogurt has gone missing. And I blame you, Greek yogurt.

Fruplait is a delicious (to me) yogurt-fruit concoction that’s heavy on the fruit. There are four single servings to a pack and there are four fruit flavors from which to choose.

I had a wonderful relationship with Fruplait up until the time Greek yogurt started hitting the shelves. With Greek yogurt muscling in and shelf space at a premium, suddenly, the number of flavors in a given store was reduced. Then some stores stopped carrying Fruplait. Now, none of the four stores at which I typically shop carries it at all (it’s still available at some retailers).

...

I’m happy to work for a research company that embraces the philosophy that the respondent experience should be as close to the consumer experience as possible in order to elicit the most useful and actionable information. To that end, we employ different techniques that allow our survey participants to make choices – similar to what they would do in the real world. In so doing, we can provide results that are informative and actionable.

But enough of the sales pitch. I recently faced a problem that made me think of choice in an entirely new way: what if a consumer has a choice but doesn’t realize it? What are the potential consequences?

In my case, my physician ordered a treatment that required pre-certification by my insurance company. When I called for pre-certification, I inquired about the cost (my doc had warned me that the treatment can be very expensive). I was told it would be covered under a $250 co-pay.

I got the treatment and several months later the facility that administered my treatment sent me a bill for $1,500. After a lot of phone calls to my doctor, the facility and my insurance company, we finally determined what happened: my treatment can be performed either in a physician’s office (subject to the $250 co-pay) or at an outpatient facility (subject to a $1,500 outpatient deductible). Yet when I initially asked about the cost, the representative only told me about the in-office cost – without informing me that this cost only applied to in-office treatments. I was never told that where I received the treatment had a bearing on what I would pay. So I blindly made my appointment at the treatment facility recommended by my doctor.

We know that decisions should never be made in a vacuum. As researchers, we need to pay attention not only to the choices that we’re putting in front of our survey participants, but also to their awareness of whether or not these options even exist. For example, we’re about to launch a survey about an add-on to an existing technology. But we need to take into account whether the respondents even know that the existing technology is available to them – let alone the add-on. Defining and describing the existing product will help us put participants’ interest in the add-on into context for our client. The more our participants know about their choices, the less likely they are to make a “mistake” in the choice task we put in front of them, and the better the data for our clients.


Every year, the fall harvest yields tasty pumpkins used in traditional baking and the carved pumpkin has become a symbol of the autumn holidays.

In the past few years, pumpkins have spread beyond the traditional baked goods of pies, loaves and muffins and can now be tasted in just about any type of cuisine imaginable. From beverages to entrees and salads to candy and ice cream, pumpkin flavoring is enjoying its moment in the sun.

But do people really like pumpkin flavored coffees and Pumpkin Spice M&Ms? And if pumpkin is so desirable, should it be available all year round?

We polled our trusty consumer panel, and here’s what we found:

Pumpkin pie isn’t enjoyed by everybody.

...

I attended IIeX (Insight Innovation Exchange) in June 2013 where the message was all about dramatic change coming and coming fast. A sort of “innovate or die” message. I expected CASRO’s annual conference to take almost the opposite view. After the first day I am pleased to say that while the view from CASRO is more measured, there is little doubt that change is coming.

From the opening remarks the focus has been on change. Not how to avoid it, but how to embrace it. IIeX presented the opportunity to see how new methods are being used, along with lots of sessions on new products and services that offer both opportunity and threat to the status quo. CASRO is less specific and focuses more on how to think differently, how to recognize opportunities and how to innovate to stay relevant. In the end, however, the message is clear…you must innovate.

This should come as no surprise to researchers. Whether you do product development as we do, or virtually any kind of research, we advise our clients on how to change to meet the demands of the market. Why then should we expect to be any different in our own business?

So, while I expected the two conferences to present distinctly different views, I am pleased to say they are presenting complementary views.   I walked away from IIeX with lots of ideas on how to apply some great new tools. Thus far I have grown in confidence that I’m on the right track and I have new ways to look at the innovation process. It has already helped me refine my thinking and caused me to want to accelerate change in my company.

We’ll see what the next two days of CASRO hold in store. Ideally I will be glad to have been at both IIeX and CASRO and have a hard time saying which one was the most valuable. One thing I can say, however, is this: while my friend Lenny Murphy has done an outstanding job leading the call for change in this industry, CASRO still outshines IIeX when it comes to food and drink.  


Critics vs. TV Viewers


In the last episode of my blog, we compared the list of best TV shows of 2012 for two groups: 45 TV critics, as compiled by Metacritic, and 542 average TV viewers who ranked shows using our Bracket™ prioritization tool. The two groups had six shows in common on their Top 20 lists, including two from AMC: “Breaking Bad” and “The Walking Dead.”

We wondered whether access to more content (through having basic cable or premium channels) would correlate with viewers’ opinions of the top shows.

Before we get to that, it’s interesting to note that the TV critics didn’t favor premium channel or basic cable programming. In fact, fully half of their 20 “best” shows of 2012 aired on the “standard” networks. TV viewers only had one more network show in their own top 20 than the critics did.

                                                       Critic Top 20   TV Viewer Top 20
Standard network shows (ABC, CBS, NBC, Fox, PBS)            10               11
Basic cable shows                                            6                6
Premium channel shows (HBO, Cinemax, Starz, Showtime)        4                3
Total                                                       20               20

So, do TV viewers with premium channels choose more premium channel shows than those without? Well, that’s a little complicated.
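Before digging into the details, here is the kind of simple check one might run; the respondent counts below are invented for illustration, not our actual survey figures.

```python
import math

# Invented counts for illustration: of viewers WITH premium channels, how many put a
# premium-channel show at the top of their list, versus viewers WITHOUT premium channels.
with_premium    = {"picked_premium": 120, "n": 300}
without_premium = {"picked_premium": 60,  "n": 242}

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test for a difference in the share picking a premium show."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(with_premium["picked_premium"], with_premium["n"],
                     without_premium["picked_premium"], without_premium["n"])
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 95% level
```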

...

TV critics seem to think so. In 2012 they rated the AMC series the best show on television for that year. Now, they didn’t get together for a voting party, nor did they cast secret ballots in order to determine this. Instead, the website Metacritic analyzed the top 10 lists of 45 TV critics from the year 2012 and assigned point values to each show mentioned (see the list and the ranking criteria here). “Breaking Bad” came out on top and a mix of shows from basic cable, premium channels and the standard networks made up the rest of the list.

2012 TV Critic Top 20 as Reported by Metacritic
Breaking Bad (AMC)
Mad Men (AMC)
Homeland (Showtime)
Louie (FX)
Girls (HBO)
The Walking Dead (AMC)
Parks and Recreation (NBC)
Game of Thrones (HBO)
Downton Abbey (PBS)
Justified (FX)
Parenthood (NBC)
Sherlock (PBS)
The Good Wife (CBS)
Boardwalk Empire (HBO)
Modern Family (ABC)
Community (NBC)
American Horror Story (FX)
New Girl (Fox)
30 Rock (NBC)
Nashville (ABC)

Traditionally we measure what TV viewers think of shows based on what they watch – in other words, the “ratings” or share of audience that are measured and reported. But anybody who’s spent as much time watching made-for-SyFy movies as I have knows that what you watch doesn’t necessarily reflect the quality of what you’re watching. I can attest that those SyFy movies are among the most entertaining shows on TV, but they’re not on anyone’s “best” lists – certainly not mine.

So if we don’t look at ratings, how can we determine what TV viewers think are the best shows on TV? We did some research to find out.

First, we partnered with a reputable national online research panel to recruit our respondents. We invited male and female TV viewers age 18 and older to take the survey on our site. They had to watch at least 2 hours of TV content a day, weekdays or weekends, to qualify.

...

I’ve written before about how much I detest our industry’s aversion to change, but today I’d like to be positive and talk about how we can change while not selling out the principles that should drive market research. Here are five that I’ve used in coming up with new solutions.

  1. Focus on What Is Important and Dump the Rest – I’ve always been a custom researcher, and so I tend to want to cover every nuance of an objective before deciding that I’ve done my job. The trouble with this is that it can lead to higher budgets and longer schedules. Fine if the issue is a long-term strategic goal, but unworkable in a world where clients are making decisions faster than ever before.

  2. Set a Budget – Now this might sound like a cart-before-the-horse issue, but I have found it is easier to be true to point 1 if you start off by establishing your budget. Let’s face it, who has not had a client say that they want to accomplish some set of objectives but only have a very limited amount to spend? When that happens we realize that we’ll have to compromise, and we come up with something that might not get into every nuance but that does help the client make a better decision. In determining your budget you should start by thinking about the cheapest you could imagine doing it for and then going well below that (I’d start with half). You might not be able to achieve it, but the lower you start, the more it will help you avoid the issues discussed in the first point above.
  3. Set a Time Frame – Identical logic to number two. We’ve all had crash projects that had to be achieved in a ridiculously short time frame and we generally figure out a way to accomplish them. Here again, look at the fastest you’ve ever done something in the past and see if you can figure out how to cut that time in half.  
  4. Talk to Clients and Prospects – This is basic. There are unfulfilled needs out there. Some are things the client-side researchers can tell you right off (“I’d really like it if you could…”) and some are things they don’t think about because they assume they can’t be done. So have conversations about both. For the things they can articulate, ask them exactly what they would need to fulfill that. For the things they can’t articulate, ask them how a new service would be applied to their business (if at all). The answers here will help you create new ideas and refine the ones you have. Most important, it will inform points 1 through 3 above.
  5. Never Stop Doing Good Research – Faster and cheaper doesn’t mean bad. Obviously a thoughtful collaborative custom research effort will provide superior market research…but if the time or budget don’t allow for it, then the “superior” research is useless (either too late to help or too expensive to do in the first place). That doesn’t mean you shouldn’t deliver reliable results…just that you need to understand (and make your clients understand) the limitations that result from the compromises you had to make.

At TRC we recently launched Message Test Express™. The product developed out of a phone call with a prospect who complained that he couldn’t do effective quantitative message testing because time did not allow it. From that conversation we set budget and timing criteria and then tried to figure out how we could help him to do effective message testing within those parameters. As we worked through our plan we went back and got feedback from him and other clients to make sure we were on the right track. Finally, we figured out how to include some advanced methods (we used our proprietary Bracket™ prioritization tool to provide individual level utilities for each message) and useful tools (such as a highlighter tool, heat maps and specialized word clouds) that maximized the reliability and usefulness of the results.

Doing all of the above is no guarantee that the product will be a success (too early to know if Message Test Express™ will be), but I believe they are a good foundation for creating one. Of course the alternative (not trying to innovate) will surely lead to failure.


I'm 5'5" tall, and my neck hurts. No, not all the time, just when I fly. And why does my neck hurt? Simple. Most economy-class seats have stationary bump-out head rests. Now, I’m not quite sure why these headrests bump out like this, since travelers of differing heights will surely experience them differently. My particular problem is that I’m just tall enough to get the back of my head to the bump-out...which means that when I place my head against it, it pushes my entire head forward and down. So my neck hurts.

My best solution so far is to buy a neck pillow and wear it backwards. This helps to prop up my chin, but the bump-out and the neck pillow are in a constant battle for supremacy, and usually the bump-out wins.

So what does this have to do with research? Plenty. I wanted to know if other airline passengers would get excited by the prospect of an adjustable headrest to accommodate their needs. Of course, everyone would be happy with something designed just for them, so we tested it alongside other potential cabin improvements to see where it would land.

CABIN IMPROVEMENTS TESTED:
Adjustable headrest to accommodate any size traveler
Denser seat cushions for added comfort
Folding foot rests to elevate your feet
Lumbar support built into the seat backs
More leg room than in a standard exit row seat
Roomier seats - 2 inches wider than most domestic airlines
Seats recline 5 degrees further than other airlines' seats
Tray tables with non-slip surface - better for gripping beverages

We surveyed our intrepid online research panelists, limited the pool to those who fly, and applied our Message Test Express™ technique, which is a tournament-style method of having respondents make choices from a list of items. MTE delivers rank ordering with a numeric value so you can see not just how they ranked, but how close they were in the order.
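Bracket™ itself is proprietary, but a toy version of choice-based ranking gives the flavor. In the sketch below, the list of improvements comes from the test above, while the appeal scores, the noise and the 200 simulated respondents are all made-up assumptions rather than our actual method or data.

```python
import random
from itertools import combinations
from collections import Counter

random.seed(1)

improvements = [
    "Adjustable headrest", "Denser seat cushions", "Folding foot rests",
    "Lumbar support", "More leg room", "Roomier seats",
    "Deeper recline", "Non-slip tray tables",
]

# Hypothetical "true" appeal of each item, used only to simulate respondent choices.
appeal = {item: random.random() for item in improvements}

def simulate_respondent():
    """One simulated respondent picks a winner from every head-to-head pairing."""
    wins = Counter()
    for a, b in combinations(improvements, 2):
        # Noisy choice: the higher-appeal item usually, but not always, wins.
        if appeal[a] + random.gauss(0, 0.3) > appeal[b] + random.gauss(0, 0.3):
            wins[a] += 1
        else:
            wins[b] += 1
    return wins

totals = Counter()
for _ in range(200):            # 200 simulated respondents
    totals.update(simulate_respondent())

# Rank order with a numeric score: each item's share of its head-to-head wins.
appearances = (len(improvements) - 1) * 200
for item, wins in totals.most_common():
    print(f"{item:<22}{wins / appearances:.2f}")
```

The output is the same shape of deliverable described above: a rank order plus a score that shows how close the items are to one another.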

So it turns out, adjustable headrests weren’t the number one desired improvement. Or number two. I was dismayed to find the adjustable headrest idea ranked number seven out of eight.

...

Please don’t judge me for this, but I’ve watched at least half a dozen episodes of America’s Got Talent this summer. It is easy viewing with a variety of acts from daredevils to singing and dancing, and features celebrity judges adding sarcastic asides. But what struck me is how the show’s format points to the essential weakness of rating scales and the strength of choice questions.

In the early “audition” shows, acts come on and perform for a few minutes. The judges then critique them and ultimately vote “yes” or “no”. If two judges vote “no” the act is done. Otherwise the contestants go to Las Vegas for the next round.   Now while “yes” or “no” is in fact a choice, it is really nothing more than a disguised rating. The reason is there is no constraint. They don’t have a limit on how many people go forward. This is like reading a list of features and asking respondents which ones are important to them (anyone who has done market research knows the answer to such questions is generally “everything is important”).  

Once in Vegas the hard work begins. This season about 120 acts made it there, but only 60 are needed for the competition. So the judges had to decide which 60 would get to the next stage. To do this they picked 30 acts that they thought were good enough to go on and 60 that they wanted to see again to pick the other 30. The remaining 30 were called in and summarily told that they were done (so yes, they flew them to Vegas just to tell them this). Frankly I’d been surprised by many of the acts that got to go to Vegas, so I wasn’t surprised by the choices.  

The key here was that unlike the early rounds…they now had a constraint. As with Max Diff (where you have to pick winners and losers) and Conjoint (where you are constrained by the mix of features and levels), they now had to make real choices. In this case, many were not hard (though telling 10 year olds they are done can’t be easy…even if they clearly are not good enough).   The 60 remaining acts were not all great (many were not even good in my opinion), but they were far better than the 60 sent packing.
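For readers who want to see what that constraint does, here is a toy best-worst exercise in the spirit of Max Diff. It uses a simple count-based score rather than full utility estimation, and the feature list, appeal values and task counts are all invented for illustration.

```python
import random
from collections import Counter

random.seed(7)

features = ["Price", "Battery life", "Camera", "Screen size", "Durability", "Brand"]
appeal = {f: random.random() for f in features}    # hypothetical "true" preferences

best_counts, worst_counts, shown_counts = Counter(), Counter(), Counter()

# Each task shows a subset of items; the respondent MUST name one best and one worst.
for _ in range(300):                               # 300 simulated choice tasks
    shown = random.sample(features, 4)
    noisy = {f: appeal[f] + random.gauss(0, 0.2) for f in shown}
    best_counts[max(noisy, key=noisy.get)] += 1
    worst_counts[min(noisy, key=noisy.get)] += 1
    for f in shown:
        shown_counts[f] += 1

# Best-minus-worst score: the constraint forces both winners and losers to emerge.
scores = {f: (best_counts[f] - worst_counts[f]) / shown_counts[f] for f in features}
for f, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{f:<14}{score:+.2f}")
```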

From here the tournament becomes more like our proprietary Bracket™ technique. Performances are compared to each other with some getting to move on (and perform against other winning acts) and some being done. In the end only one act will win…the one that is most popular among the dedicated fans of the show. This is exactly how good market research should work…force hard choices to drive the best product, message, segmentation solution or price using pricing research.

...

John Allen Paulos has written a series of books about how most people have a difficult time understanding the meaning of numbers. Researchers who have relied on numbers to tell a story shouldn’t be surprised by this. Even basic statistics can be hard to grasp, let alone the complex Bayesian math needed for efforts like Conjoint. Even though most of our clients are quite numerate, they often present results to those who are not. If we are to play, and help our clients play, an active role in decision-making we have to overcome this problem.

One of the examples that Paulos uses involves our inability to understand risk. In their new book, The Norm Chronicles: Stories and Numbers About Danger, Michael Blastland and David Spiegelhalter have tried to simplify things by boiling risk down to a simple number…the MicroMort. One MicroMort means you have a one in a million chance of death.  

On the one hand, this does seem to simplify often complicated actuarial calculations: we can see that soldiers in Afghanistan face a danger of 47 MicroMorts daily, which is of course far more dangerous than your chance of death in a car crash (about 1 MicroMort per day) but far less than what WWII bomber crews faced (25,000). The use of one number certainly simplifies things, but if someone is not great with numbers it might not resonate.

A second means they use is to convert numbers to “MicroLife” terms. So for example, a smoker’s life is cut short by five hours for each day they smoke. Or my favorite stat that your first alcoholic drink each day adds 30 minutes to your life…sadly a drink every half hour won’t get you immortality since each additional drink deducts 15 minutes. While still using numbers, these do at least present them in a clear relatable way. Of course I wonder how many smokers realize they are deducting a year of life for every five years they smoke?
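The arithmetic behind that last question is easy to check, taking the book’s figure of roughly five hours lost per day of smoking at face value.

```python
# Rough check of the "a year per five years" claim, using ~5 hours lost per day smoked.
hours_lost_per_day = 5

days_lost_per_year = hours_lost_per_day * 365 / 24           # ~76 days per year smoked
years_lost_per_five_years = days_lost_per_year * 5 / 365     # ~1 year per five years

print(f"~{days_lost_per_year:.0f} days of life lost per year of smoking")
print(f"~{years_lost_per_five_years:.1f} years lost per five years of smoking")
```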

Finding the right balance between numerical precision and understanding can be tricky, and not just for research agencies. The key for us is to pair precise numbers with a clear message. We can’t get too hung up on things like “statistical differences” (as our Quirk’s article pointed out). Instead we need to focus on the decisions that need to be made and pull together a narrative that helps drive them. This certainly doesn’t mean we shouldn’t use numbers…just that we need to put them in the context of recommendations.

...
The Economy of Food at Sporting Events
Image source: www.sports-management-degrees.com

As we learn to distill ever-expanding amounts of data into simple recommendations, we would do well to think about presenting data in a better way. People often make the mistake of describing themselves as either a “numbers person” or a “picture person”, but in reality we all use both sides of the brain – right (images) and left (analytics). I read an article this week which makes the point that the best way to drive understanding is by presenting analytical data in a visual way. This engages both sides of the brain and thus helps us to quickly internalize what we are seeing.

We might be tempted to say that data visualization is easier said than done (but then what isn’t?). We might also be tempted to say that most market research data isn’t that interesting. I tend to disagree.  

Just last week I exchanged some emails with Sophia Barber of Sports-management-degrees.com. She pointed me to a great infographic about spending on food at sporting events. It is colorful and comprehensively covers a lot of data. If you are a “numbers person” you might try paging about halfway down, where all of the underlying data are presented in stark form. My bet would be that even the staunchest numbers person will get more from the combination than from the dull recitation of facts.

Of course, both food and sports are relatively interesting topics, but what if the topic isn’t fun and interesting? I still say that results from even highly analytical studies (things like conjoint, discrete choice, pricing studies and so on) can be made more memorable and more interesting through the simple addition of pictures – and I mean pictures that go beyond simple graphs and charts (which are often as dull as a list of numbers). Doing so drives the point home faster and makes our work more relevant.

My favorite feature of Quirk's Marketing Research e-newsletter is Research War Stories. In one issue this spring, Arnie Fishman reported that he had an unexpectedly high result when he asked research participants whether they eat dog food "all the time." He framed the question by asking how often they ate each of a variety of "exotic foods," including rattlesnake meat and frog kidneys, among others.

This got us thinking that maybe you'd get a different result if you asked just about dog food rather than about dog food amongst other crazy types of foods. So, being the researchers that we are, we set up a monadic design experiment to see what would happen.

Using Arnie's same framework of exotic foods, we asked one group of our online research panelists how frequently they eat dog food. On the next screen we asked the same question about rattlesnake meat. They always saw dog food first, so they had no other stimulus when they answered the dog food question.

We asked another group of panelists about dog food, rattlesnake meat, frog kidneys, gopher brains, and chocolate covered ants all on the same screen. We hypothesized that this group would be more open to admitting to eating dog food when it was grouped with these other items rather than being asked directly about dog food alone.

Well, we were wrong about that – none of the folks asked about dog food alone admitted to eating dog food all the time, and 1% of those asked about dog food amongst the other exotic items did so (not a statistically significant difference). The percent of folks in both groups saying that they "never" ate dog food was the same as well (96%). So in our experiment, the "framing" of the question had no bearing on the response.
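For anyone who wants to replicate the significance check, something like the following would do it. The group sizes below are assumptions (the post doesn't report the bases), and with a zero cell Fisher's exact test is the safer choice over a normal approximation.

```python
from scipy.stats import fisher_exact

# Assumed group sizes for illustration -- the actual bases aren't reported above.
asked_alone   = [0, 200]   # "all the time" vs. everyone else, dog food asked by itself
asked_grouped = [2, 198]   # ~1% "all the time" when asked among other exotic foods

odds_ratio, p_value = fisher_exact([asked_alone, asked_grouped])
print(f"p = {p_value:.2f}")   # well above 0.05: the framing difference isn't significant
```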

But we also twisted things around a bit, and asked a third group how frequently they feel other people in the US eat dog food. The "all the time" category rose slightly, but not significantly, to 2%. But the big change was in the "never" category – only 44% of the panelists in this group said that people in the US never eat dog food. We saw the same pattern for the other foods as well. In this case they're answering for a group, not for a single person, so clearly there has to be an allowance that some people are eating these types of foods some of the time.

...

Really enjoyed the IIeX Greenbook conference. I generally concurred with the opinions expressed and many of the presentations gave me ideas on how we might better serve our clients. Thought I might share some of my reflections here.

In general terms this was a conference that likely scared more than one researcher to jump. For example, Charles Vila, the head of Campbell Soup’s Consumer and Customer Insights for North America, said that within five years he doesn’t expect to use any survey data. Personally, I tend to disagree with such sweeping statements (hopefully this won’t prevent me from working with Campbell’s moving forward), but perhaps they are necessary to shake our often complacent industry into thinking differently.

In that regard, Campbell’s is a good example. Their flagship product is soup, a product that has been around forever and sold by them for 100 years. This doesn’t stop them from innovating not just with new products, but in the way they engage the customer. Their staff is immersed in the latest gadgets that consumers are using so they can better understand how they can be employed in Campbell’s marketing efforts.

So, I’d encourage researchers to do the same. Ultimately it doesn’t matter if surveys go away or simply cease to be the primary form of data collection. If we allow ourselves to be defined by how we acquire data then we deserve to go the way the proverbial buggy whip manufacturers did at the turn of the last century.

The great news is that many of the new technologies being shown off are not really competing with us. Most seek to provide new tools for traditional research companies to use.   Some might replace surveys and others augment them. Some are really just surveys in another form (such as Google’s) and there are new ways to design and implement surveys to better get at the truth (my partner Rajan Sambandam’s presentation on “Behavioral Conjoint” being one self-serving example). The possibility of improving our ability to guide product development, pricing research and marketing is one we should embrace.

At one of the last presentations, Simon Chadwick talked about what investors are looking for, and he noted that research firms that refuse to get on the bandwagon are dead, but investors who don’t understand what we are all about are going to miss the boat too. In other words, we need to define ourselves as providers of insight that allow our clients to make good business decisions AND we should utilize all the tools at our disposal to accomplish that. So come in off the ledge and get to work…


I was treated to a presentation given by Professor Joydeep Srivastava from the University of Maryland at our Frontiers of Research market research conference in May. Joydeep’s discussion focused on pricing research and perceptions of what consumers are willing to pay based on the way the prices are presented to them – whether prices for the components are bundled together or shown apart.

One point he touched on almost as an afterthought is that no one wants to pay for installation. I must agree with him that no one wants to agree to a price only to find out a few moments later that something essential (such as installation) isn’t included. This seems to break the contract, and can lead to feelings of resentment – and, as he pointed out, lost sales. On the other hand, presenting installation costs separately as an option can be enticing to the Do-It-Yourselfers who would want to be able to weigh the pros and cons of tackling that step themselves.

I was reminded of all of this when I ordered a map update to my car’s navigation system. When I received the jewel case in the mail I assumed it contained a CD which I could pop into my car CD player and install the update on my own. Only the jewel case didn’t contain a CD, it contained a memory card, and there were no accompanying instructions – not even a phone number. After popping it in my computer to look for a read-me file, I was still at a loss. So I gave my car dealer a call and they told me to bring it in for installation. When I arrived, the service technician told me I could have saved myself the trip and done it myself by inserting it in the card slot. I told him I didn’t know I had a card slot, and if he told me where to find it, I’d be happy to go do it on my own. A senior technician intervened, and taking pity on me he asked a tech to install the maps and then told me there would be no service charge.

By the way, I finally found the card slot after searching for it for about 15 minutes.

There was no mention of installation in the up-front sales process whatsoever. So my first assumption was the correct one, that I should be able to do it myself. But that wasn’t addressed in the sales process nor in the product packaging. Not addressing installation up-front can lead to very different outcomes:

  • The manufacturer can keep the cost low and potentially sell more updates by not having to create detailed installation instructions which can vary by model and year. But even if professional installation was not required, leaving the consumer confused after a purchase is never a good idea and no doubt leaves consumers with some ill will.
  • In my case, the dealer took the view that my purchase of the vehicle (and the update) gave them the opportunity to help me out in situations like this.... And I could view it either as an extension of their awesome service or that their service was “bundled” into the original price of the vehicle. Either way, the dealer comes out looking good – so much so that perhaps they can charge a premium for this all-inclusive service “bundle” the next time around.

We just wrapped up another of our client conferences and it was another successful day for all concerned. This conference stood out for the level of interaction between the speakers and the audience, a testament to the speakers, their topics, and the keen interest that practitioners have in these topics.

The first speaker was Olivier Toubia from Columbia University. Olivier is a true leader in the area of innovation research and teaches an MBA course called Customer Centric Innovation. He gave a quick round up of four important questions that he has been able to address through his research – how to motivate consumers to generate ideas, how to structure the idea generation process, how to screen and evaluate the ideas and how to find consumers who have good ideas. By taking us through a variety of studies (including surveys and experiments) he was able to answer these questions and provoke a lot of interesting thoughts from the audience.

Next up was Vicki Morwitz from New York University. She uses surveys extensively in her research and is a leader in understanding the impact that survey responses have on subsequent behavior. She was able to present evidence about the unintended effects that surveys have on respondents, something that should be of interest to all marketing research firms and indeed all marketers. In some cases surveys have a positive impact in that they increase future purchasing behavior, but, said Vicki, they should be used with caution, as overt efforts to influence consumers do not seem to work.

Vicki’s presentation was followed by TRC’s own Michael Sosnowski, who discussed the idea of doing more with less in a mobile world. He talked about the increasing number of survey respondents who are attempting to take surveys on their smartphones and why we as researchers should be aware of that. He questioned the conventional wisdom that mobile phone surveys should be short and simple, and showed examples of how more complex choice-based surveys (using TRC’s Bracket™) can be conducted on mobile phones and provide results similar to an online survey. We may not be ready to do conjoint studies on mobile phones, he said, but neither should we artificially constrain ourselves to extremely simple data collection. Using good design and sophisticated analysis it is possible to get good quality information from mobile surveys.

Following Michael was Joydeep Srivastava from the University of Maryland, an old friend of mine from my graduate school days. He is now a leading consumer behavior researcher who has done especially interesting work in the area of pricing. His specific interest is in partitioned pricing (such as charging a separate price for shipping) and he was able to enlighten the audience with the results of his experiments. For example, he was able to counter the myth that charging a separate shipping price and then providing a price discount to offset it would stave off any damage to the company. On the contrary, it actually reduced purchase likelihood compared to not providing a discount. This, he said, was because of people’s unwillingness to pay for shipping in the first place and the explicit reminder of it with the offsetting charge.

...

Pricing Research in Context


My last blog about pricing research was still fresh in my mind when I read an excerpt of Craig LaBan’s recent online chat. LaBan is the Philadelphia Inquirer’s restaurant critic and offers insightful reviews and information for foodies in the region. I was intrigued by the discussion of how Federal Donuts charges different prices at the ballpark than in their stand-alone restaurant locations.

Our clients typically look for answers to how to price their products either alone or bundled. But I personally have yet to have a client ask me how to price a product differently based on the situation or context. There is good information to be had on this topic: in “Contextual Pricing: The Death of List Price and the New Market Reality” the authors point out that the pricing scheme for Coca Cola includes air temperature at the point of sale. But what tools are available to the market researcher for exploring situation-based pricing?

At its simplest level, we can ask consumers what they’d be willing to pay given a certain situation (such as in an airport or on an airplane). By using a monadic design in which similar groups of respondents are asked about a single price point, we can compare across the groups to see what the various “take-rates” would be.
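A bare-bones version of that monadic comparison might look like the sketch below; the prices and counts are made up purely to show the mechanics.

```python
# Made-up monadic cells: each group of respondents sees exactly one price for the same
# item (say, a ballpark donut) and indicates whether they would buy at that price.
cells = {
    2.00: {"shown": 200, "would_buy": 132},
    3.00: {"shown": 200, "would_buy": 96},
    4.00: {"shown": 200, "would_buy": 51},
}

for price, cell in cells.items():
    take_rate = cell["would_buy"] / cell["shown"]
    revenue_index = price * take_rate            # expected revenue per respondent shown
    print(f"${price:.2f}: take rate {take_rate:.0%}, revenue index ${revenue_index:.2f}")

# Because the cells are independent, matched samples, differences in take rate can be
# attributed to the price (or, if the scenario wording also varies, to the context).
```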

Discrete choice could be employed to vary both the context and the pricing – in that way multiple situations could be tested along with multiple price points. (My colleague, Rajan Sambandam will be speaking about Behavioral Conjoint at the Insight Innovation Exchange NA event in Philadelphia in June.)

I’m not sure how Federal Donuts arrived at their pricing decision – it could very well be that the ballpark charges more rent and that factor alone determined their pricing. But when all other factors are equal, determining how much to charge can have important financial consequences.


I was in a meeting last week about pricing research and we talked about how far it's come from the days of simply asking people what they'd pay for something. From laddering to Van Westendorp's Price Sensitivity Meter to Discrete Choice modeling, the research industry has grown in sophistication in addressing this very crucial aspect of product development and marketing.

I started thinking back over some of the pricing research I've been involved with over the years, and I realized that at times our clients come to us without the information they'll need to make the project a success. That's not to say they're not doing their job -- but pricing research does have a few requisites. Here are 3 keys to effective pricing research:

  • Know what it costs to produce. This can be tricky for a start-up service or for a physical product that hasn't been manufactured yet. But we need a basic understanding of what the minimum price should be -- anything below that would be unprofitable, so there's no sense including extremely low price points. The sky's the limit on the maximum, but we need the minimum in order to anchor the study design in reality.
  • Know the competition. Speaking of reality, we can design pricing research with or without factoring in competitive products. But if you're going to include your competitors, we need an understanding of what their products are and how they're priced. We want to construct choices that are as close to reality as possible. Premium-priced brands should reflect premium prices, or your results could skew in a strange direction.  
  • Know your pricing objective. What are you trying to maximize: unit sales? revenue? profit? Of course, everyone wants all of these. But in laying out a pricing strategy, it helps to understand how the trade-offs will impact your bottom line: is it more desirable to sell more units at a lower price or fewer units at a higher price?  

This list is by no means exhaustive -- I welcome your additions!

