


Rich Raquet

President, TRC


Rich brings to his blog entries a passion for quantitative data and the use of choice to understand consumer behavior. His unique perspective has allowed him to muse on subjects as far afield as dinosaurs and advanced technology, with insight into what each can teach us about doing better research.

I recently finished Brian Grazer’s book A Curious Mind and I enjoyed it immensely. I was attracted to the book both because I have enjoyed many of the movies he made with Ron Howard (Apollo 13 being among my favorites) and because of the subject…curiosity.

I have long believed that curiosity is a critical trait for a good researcher. We have to be curious about our clients’ needs, new research methods and, most importantly, the data itself. While a cursory review of cross tabs will produce some useful information, it is digging deeper that allows us to make the connections that tell a coherent story. Without curiosity, analytical techniques like conjoint or max diff don’t help.

The book shows how Mr. Grazer’s insatiable curiosity has brought him into what he calls “curiosity conversations” with a wide array of individuals, from Fidel Castro to Jonas Salk. He had these conversations not because he thought there might be a movie in it, but because he wanted to know more about these individuals. He often came out of the conversations with a new perspective and yes, sometimes even ideas for a movie.

One example involves Apollo 13. He had met Jim Lovell (the commander of that fateful mission) and found his story to be interesting, but he wasn’t sure how to make it into a movie. The technical details were just too complicated.

Later he was introduced by Sting to Veronica de Negri. If you don’t know who she is (I didn’t), she was a political prisoner in Chile for 8 months, during which she was brutally tortured. To survive she had to create for herself an alternate reality. In essence, by focusing on the one thing she still had control of (her mind), she was able to endure the things she could not control. Mr. Grazer used that logic to help craft Apollo 13. Instead of being a movie about technical challenges it became a movie about the human spirit and its ability to overcome even the most difficult circumstances.

...

In new product market research we often discuss the topic of bias, though typically these discussions revolve around issues like sample selection (representativeness, non-response, etc.). But what about methodological or analysis bias? Is it possible that we affect results by choosing the wrong market research methods to collect the data or to analyze the results?


A recent article in the Economist presented an interesting study in which the same data set and the same objective were given to 29 different researchers. The objective was to determine if dark skinned soccer players were more likely to get a red card than light skinned players. Each researcher was free to use whatever methods they thought best to answer the question.


Both the statistical methods (Bayesian clustering, logistic regression, linear modeling...) and the analysis decisions (some, for example, considered that certain positions might be more likely to draw red cards and adjusted the data accordingly) differed from one researcher to the next. No surprise, then, that results varied as well. One found that dark skinned players were only 89% as likely to get a red card as light skinned players, while another found dark skinned players were three times MORE likely to get the red card. So who is right?
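To make that concrete, here is a minimal sketch, in Python and with entirely invented numbers (not the study’s data), of how two defensible analysis choices applied to the same data set can point in opposite directions: a crude comparison of red card rates versus one that adjusts for the position a player plays.

```python
# Hypothetical red card counts and games played, split by skin tone and
# position group. Every number is invented for illustration only.
data = {
    # (skin_tone, position): (red_cards, games)
    ("dark", "defender"):  (72, 8000),
    ("dark", "forward"):   (3, 2000),
    ("light", "defender"): (20, 2000),
    ("light", "forward"):  (16, 8000),
}

def rate(cards, games):
    return cards / games

# Analyst A: crude comparison, ignoring position.
def totals(tone):
    cards = sum(c for (t, _), (c, g) in data.items() if t == tone)
    games = sum(g for (t, _), (c, g) in data.items() if t == tone)
    return cards, games

crude_ratio = rate(*totals("dark")) / rate(*totals("light"))

# Analyst B: compare within each position, then average the two ratios
# (a simple stand-in for "adjusting" for position).
within = [rate(*data[("dark", p)]) / rate(*data[("light", p)])
          for p in ("defender", "forward")]
adjusted_ratio = sum(within) / len(within)

print(f"Crude rate ratio:             {crude_ratio:.2f}")    # ~2.1, i.e. more likely
print(f"Position-adjusted rate ratio: {adjusted_ratio:.2f}")  # ~0.8, i.e. less likely
```

Both analysts worked from identical data and neither did anything obviously wrong; the answer simply depends on the modeling choices made along the way.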


There is no easy way to answer that question. I'm sure some of the analyses can easily be dismissed as too superficial, but in other cases the "correct" method is not obvious. The article suggests that when important decisions regarding public policy are being considered the government should contract with multiple researchers and then compare and contrast their results to gain a fuller understanding of what policy should be adopted. I'm not convinced this is such a great idea for public policy (it seems like it would only lead to more polarization as groups pick the results they most agreed with going in), but the more important question is, what can we as researchers learn from this?


In custom new product market research the potential for different results is even greater. We are not limited to existing data. Sure we might use that data (customer purchase behavior for example), but we can and will supplement it with data that we collect. These data can be gathered using a variety of techniques and question types. Once the data are collected we have the same potential to come up with different results as the study above.

...

Very few clients will go to market with a new concept without some form of market research to test it first. Others will use real-world substitutes (such as A/B mail tests) to accomplish the same end. No one would argue against the effectiveness of approaches like these...they provide a scientific basis for making the right decision. Why is it, then, that in early-stage decision-making science is so often replaced with gut instinct?

Consider this...an innovation department cooks up a dozen or more ideas for new or improved products and services. At this point they are nothing more than ideas with perhaps some crude mock-ups to go along with them. Doing full-out concept testing would be costly for this number of ideas and a real world test is certainly not in the cards. Instead, a "team" which might include product managers, marketing folks, researchers and even some of the innovation people who came up with the concepts is brought together to winnow the ideas down to a more manageable level.

The team carefully evaluates each concept, perhaps ranks them and provides their thinking on why they liked certain ones. These "independent" evaluations are tallied and the dozen concepts are reduced to two or three. These two or three are then developed further and put through a more rigorous and costly process - in-market testing. The concept or concepts that score best in this process are then launched to the entire market.

This process produces a result, but also some level of doubt. Perhaps the concept that the team thought was best scored badly in the more rigorous research, or the winning concept just didn't perform as well as the team thought it would. Does anyone wonder if perhaps some of the ideas that the team winnowed out might have performed even better than the "winners" they picked? What opportunities might have been lost if the best ideas were left on the drawing board?

The initial winnowing process is susceptible to various forms of error, including group think. The less rigorous process is used not because it is seen as best, but because the rigorous methods normally used are too costly to employ on a large list of items. Does that mean going with your gut is the only option?

...

Last week we held an event in New York in which Mark Broadie from Columbia University talked about his book “Every Shot Counts”. The talk and the book detail his analysis of a very large and complex data set…specifically the “ShotLink” data collected for over a decade by the PGA. It details every shot taken by every pro at every PGA tournament. He was able to use it to challenge some long held assumptions about golf…such as “Do you drive for show and putt for dough?”

On the surface the data set was not easy to work with. Sure, it had numbers like how long the hole was, how far the shot went, how far it was from the hole and so on. It also had data like whether it ended up in the fairway, on the green, in the rough, in a trap or the dreaded out of bounds. Every pro has a different set of skills and there was a surprising range of abilities even in this set, but he added the same data on tens of thousands of amateur golfers of various skill levels. So how can anyone make sense of such a wide range of data and do it in a way that the amateur who scores 100 can be compared to the pro who frequently scores in the 60s?

You might be tempted to say that he would use a regression analysis, but he did not. You might assume he used Hierarchical Bayesian estimation as it has become more commonplace (it drives discrete choice conjoint, Max Diff and our own Bracket™), but he didn’t use that here either.

Instead, he used simple arithmetic. No HB, no calculus, no Greek letters, just simple addition, subtraction, multiplication and division. At the base level, he simply averaged similar scores. Specifically, he determined how many strokes it took on average for players to go from where they were to the hole. These averages were further broken down to account for where the ball started (not just distance, but rough, sand, fairway, etc.) and how good the golfer was.

These simple averages allow him to answer any number of “what if” questions. For example, he can see on average how many strokes are saved by going an extra 50 yards off the tee (which turns out to be more than a comparable improvement in putting saves). He can also show that in fact neither driving nor putting is as important as the approach shot (the last full swing before putting the ball on the green). The ability to put the ball close to the hole on this shot is the biggest factor in scoring low.
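For readers who like to see the mechanics, here is a minimal sketch of that averaging logic in Python. The baseline values below are invented for illustration; the real baselines come from averaging the millions of shots in the ShotLink data.

```python
# Average strokes needed to hole out from a given situation, keyed by
# (lie, distance). These baseline numbers are made up for illustration.
baseline = {
    ("tee",     450): 4.10,   # standing on the tee of a 450-yard hole
    ("fairway", 180): 3.00,   # 180 yards out, in the fairway
    ("green",    20): 1.90,   # 20 feet from the hole, on the green
}

def strokes_gained(start, end):
    """One shot's strokes gained = baseline(before) - baseline(after) - 1."""
    remaining = 0.0 if end is None else baseline[end]   # None means holed out
    return baseline[start] - remaining - 1

print(f"{strokes_gained(('tee', 450), ('fairway', 180)):+.2f}")   # drive:    +0.10
print(f"{strokes_gained(('fairway', 180), ('green', 20)):+.2f}")  # approach: +0.10
print(f"{strokes_gained(('green', 20), None):+.2f}")              # putt:     +0.90
```

Compare any golfer’s actual shots against the relevant baseline and the averages do all the work; no regression or Bayesian machinery required.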

...

An issue that comes up quite a bit when doing research is the proper way to frame questions. In my last blog I reported on our Super Bowl ad test in which we surveyed viewers to rank 36 ads based on their “entertainment value”. We did a second survey that framed the question differently to see if we could determine which ads were most effective at driving consideration of the product…in other words, the ads that did what ads are supposed to do!

As with life, framing, or context, is critical in research. First off, the nature of the questions is important. Where possible, choice questions will work better than, say, rating scales. The reason is that consumers are used to making choices...ratings are more abstract. Techniques like Max-Diff, Conjoint (typically Discrete Choice these days) or our own proprietary new product research technique Bracket™ get at what is important in a way that ratings can’t.

Second, the environment you create when asking the question must seek to put consumers in the same mindset they would be in when they make decisions in real life. For example, if you are testing slogans for the outside of a direct mail envelope, you should show the slogans on an envelope rather than just in text form.

Finally, you need to frame the question in a way that matches the real world result you want. In the case of a direct mail piece, you should frame the question along the lines of “which of these would you most likely open?” rather than “which of these slogans is most important?”. In the case of a Super Bowl ad (or any ad for that matter), asking about entertainment value is less important than asking about things like “consideration” or even “likelihood to tell others about it”.  

So, we polled a second group of people and asked them “which one made you most interested in considering the product as advertised?” The results were quite interesting.

...

Well, it is the time of year when America’s greatest sporting event takes place. I speak of course about the race to determine which Super Bowl ad is the best. Over the years there have been many ways to accomplish this, but as so often happens in research today, the methods are flawed.

First there is the “party consensus method”. Here, people gathered to watch the big game call out their approval or disapproval of various ads. Beyond the fact that the “sample” is clearly not representative, this method has other flaws. At the party I attended we had a Nationwide agent, so criticism of the “dead kid” ad was muted. This is just one example of how people in the group can influence each other (anyone who has watched a focus group has seen this in action). The most popular ad was the Fiat ad with the Viagra pill…not necessarily because it was the favorite, but because parties are noisy and this ad was largely a silent picture.

Second, there is the “opinion leaders” method. The folks who have a platform to spout their opinion (be it TV, YouTube, Twitter or Facebook) tell us what to think. While certainly this will influence opinions, I don’t think tallying up their opinions really gets at the truth. They might be right some of the time, but listening to them is like going with your gut…likely you are missing something.

Third, there is the “focus group” approach. In this method a group of typical people is shuffled off to a room to watch the game and turn dials to rate the commercials they see. So, like any focus group, these “typical” people are of course atypical. In exchange for some money they were willing to spend four hours watching the game with perfect strangers. Further, are focus groups really the way to measure something like which ad is best? Focus groups can be outstanding at drawing out ideas, providing rich understandings of products and so on, but they are not (nor are they intended to be) quantitative measures.

The use of imperfect means to measure quantitative problems is not unique to Super Bowl ads. I’ve been told by many clients that budget and timing concerns require that they answer some quantitative questions with the opinions of their internal team, or their own gut or qualitative research. That is why we developed our agile and rigorous tools, including Message Test Express™ (MTE™).

...

Here in Philly we are recovering from the blizzard that wasn’t. For days we’d been warned of snow falling multiple inches per hour, winds causing massive drifts and the likelihood of it taking days to clear out. The warnings continued right up until we were just hours away from this weather Armageddon. In the end, only New England really got the brunt of the storm. We ended up with a few inches. So how could the weather forecasters have been this wrong?

The simple answer is of course that weather forecasting is complicated. There are so many factors that impact the weather…in this case an “inverted trough” caused the storm to develop differently than expected. So even with the massive historical data available and the variety of data points at their disposal, the weather forecasters can be surprised.

At TRC we do an awful lot of conjoint research…a sort of product forecast, if you will. It got me thinking about some keys to avoiding the same kinds of mistakes the weather forecasters made on this storm:

  1. Understand the limitations of your data. A conjoint or discrete choice conjoint can obviously only inform on things included in the model. It should be obvious that you can’t model features or levels you didn’t test (such as, say, a price that falls outside the range tested). Beyond that, however, you might be tempted to infer things that are not true. For example, if you were using the conjoint to test a CPG package and one feature was “health benefits” with levels such as “Low in Fat”, “Low in Carbs” and so on, you might be tempted to assume that the two levels with the highest utilities should both be included on the package, since logically both benefits are positive. The trouble is that you don’t know if some respondents prefer high fat and low carbs and others the complete opposite. You can only determine the impact of combinations of a single level of each feature, so you must make sure that anything you want to combine is in a separate feature (a small simulation sketch follows this list). This might lead to a lot of “present/not present” features, which might overcomplicate the respondent’s choices. In the end you may have to compromise, but it is best to make those compromises in a thoughtful and informed way.
  2. Understand that the data were collected in an artificial framework. The respondents are fully versed on the features and product choices…in the market that may or may not be the case. The store I go to may not offer one or more of the products modeled, or I may not be aware of the unique benefits one product offers because advertising and promotion failed to get the message to me. Conjoint can tell you what will succeed and why, but the hard work of actually delivering on those recommendations still has to be done. Failing to recognize that is no better than the forecasters failing to account for the possibility of an inverted trough.
  3. Understand that you don’t have all the information. Consumer decisions are complex. In a conjoint analysis you might test 7 or 8 product features, but in reality there are dozens more that consumers will take into account in their decision making. As noted in number 1, the model can’t account for what is not tested. I may choose a car based on it having adaptive cruise control, but if you didn’t test that feature my choices will only reflect other factors in my decision. Often we test a holdout card (a choice respondents made that is not used in calculating the utilities, but rather to see how well our predictions do) and in a good result we find we are right about 60% of the time (this is good because if a respondent has four choices, random chance would dictate being right just 25% of the time). The weather forecasters, for their part, probably should have explained their level of certainty about the storm (specifically, that they knew there was a decent chance they would be wrong).
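Here is the simulation sketch referenced in point 1: a toy share-of-preference calculator built from part-worth utilities. The features, levels and utility values are all hypothetical (a real model would estimate them, typically per respondent, with hierarchical Bayes), but it shows why a product can carry only one level per feature and how utilities translate into predicted shares.

```python
import math

# Hypothetical part-worth utilities from a conjoint model.
utilities = {
    "health claim": {"Low in Fat": 0.40, "Low in Carbs": 0.55, "No claim": 0.00},
    "price":        {"$2.99": 0.80, "$3.49": 0.30, "$3.99": 0.00},
}

def total_utility(product):
    # A product specifies exactly one level per feature; the model has no way
    # to score "Low in Fat" and "Low in Carbs" on the same package because
    # they are alternative levels of the same feature.
    return sum(utilities[feature][level] for feature, level in product.items())

def shares(products):
    """Logit share of preference across a set of competing products."""
    exps = [math.exp(total_utility(p)) for p in products]
    total = sum(exps)
    return [e / total for e in exps]

scenario = [
    {"health claim": "Low in Carbs", "price": "$3.49"},
    {"health claim": "Low in Fat",   "price": "$2.99"},
    {"health claim": "No claim",     "price": "$2.99"},
]
for product, share in zip(scenario, shares(scenario)):
    print(f"{share:5.1%}  {product}")
```

If you truly need “Low in Fat” and “Low in Carbs” to appear together, they have to be set up as separate present/not-present features, which is exactly the trade-off described above.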

So, with all these limitations is conjoint worth it? Well, I would suggest that even though the weather forecasters can be spectacularly wrong, I doubt many of us ignore them. Who sets out for work when snow is falling without checking to see if things will improve? Who heads off on a winter business trip without checking to see what clothes to pack? The same is true for conjoint. With all the limitations it has, a well-executed model (and executing well takes knowledge, experience and skill) will provide clear guidance on marketing decisions.


Last year Time Magazine featured a cover story about fat…specifically that fat has been unfairly vilified and that in fact carbs and sugars are the real danger. They were not the first with the story nor will they be the last. The question is, how will this impact the food products on the market?

The idea that carbs and sugar were the worst things you could eat would not have surprised a dieter in, say, 1970. It was in the 1980s that conventional wisdom moved toward the notion that fat caused weight gain, and with that heart disease, and thus should be avoided. Over time the public came to accept this wisdom (after all, the idea that fat causes fat isn’t hard to accept) and the market responded with a bunch of low fat products. Unfortunately those products were higher in sugar and carbs and the net result is that Americans have grown heavier.

If the public buys into this new thinking we should expect the market to respond. To see how well the message has gotten out, we conducted a national survey with two goals in mind:

  • Determine awareness of the “sugar and carbs are worse than fat” thinking.
  • Determine if it would change behavior.

About a third of respondents said they were aware of the new dietary thinking. While still a minority, a third is nothing to be sneezed at, especially when you consider that the vast majority of advertising still focuses on the low fat message and food nutrition labels still highlight fat calories at the top. It took time for the “low fat” message to take hold and clearly it will take time for this to take hold as well.

Already there is evidence of change. Those aware of the message prior to the survey were far more likely to recommend changes to people’s diets (38%) than those who were not aware prior to the survey (11%). Clearly it takes more than being informed in a survey to change 30 years of conventional wisdom, but once the message takes hold, expect changes. In fact, two thirds of those aware of the message before doing the survey have already made changes to their behavior:

...

Truth or Research


I read an interesting story about a survey done to determine if people are honest with pollsters. Of course such a study is flawed by definition (how can we be sure those who say they always tell the truth are not lying?). Still, the results do back up what I’ve long suspected…getting at the truth in a survey is hard.

The study indicates that most people claim to be honest, even about very personal things (like finances). Younger people, however, are less likely to be honest with survey takers than others. As noted above, I suspect that if anything, the results understate the potential problem.

To be clear, I don’t think that people are being dishonest just for the sake of being dishonest…I think it flows from a few factors.

First, some questions are too personal to answer, even on a web survey. With all the stories of personal financial data being stolen or compromising pictures being hacked, it shouldn’t surprise us that some people might not want to answer some kinds of questions. We should really think about that as we design questions. For example, while it might be easy to ask for a lot of detail, we might not always need it (income ranges, for example). To the extent we do need it, finding ways to build credibility with the respondent is critical.

Second, some questions might create a conflict between what people want to believe about themselves and the truth. People might want to think of themselves as being “outgoing” and so if you ask them they might say they are. But their behavior might not line up with reality. The simple solution is to ask questions related to behavior without ascribing a term like “outgoing”. Of course, it is always worth asking directly as well (knowing the self-image AND behavior could make for interesting segmentation variables, for example).

...

My daughter was performing in The Music Man this summer and after seeing the show a number of times, I realized it speaks to the perils of poor planning…in forming a boys’ band and in conducting complex research.

For those of you who have not seen it, the show is about a con artist who gets a town to buy instruments and uniforms for a boys’ band, in exchange for which he promises he’ll teach them all how to play. When they discover he is a fraud they threaten to tar and feather him, but (spoiler alert) his girlfriend gets the boys together to march into town and play. Despite the fact that they are awful, the parents can’t help but be proud and everyone lives happily ever after.

It is to some extent another example of how good we are at rationalizing. The parents wanted the band to be good and so they convinced themselves that they were. The same thing can happen with research…everyone wants to believe the results so they do…even when perhaps they should not.

I’ve spent my career talking about how important it is to know where your data have been. Bias introduced by poor interviewers, poorly written scripts, unrepresentative samples and so on will impact results AND yet these flawed data will still produce cross tabs and analytics. Rarely will they be so far off that the results can be dismissed out of hand.

The problem only gets worse when using advanced methods. A poorly designed conjoint will still produce results. Again, more often than not these results will be such that the great rationalization ability of humans will make them seem reasonable.

...

While there is so much bad news in the world of late, here in Philly we’ve been captivated by the success of the Taney Dragons in the Little League World Series. While the team was sadly eliminated, they continue to dominate the local news. It got me thinking about what it is that makes a story like theirs so compelling and of course, how we could employ research to sort it out.

There are any number of reasons why the story is so engrossing (especially here in Philly). Is it the star player Mo’ne Davis, the most successful girl ever to compete in the Little League World Series? Is it that the Phillies are doing so poorly this year? Or do we just like seeing a team of players from various ethnicities and socio-economic levels working together and achieving success? Of course it might also be that we are tired of bad news and enjoy having something positive to focus on (even in defeat the team fought hard and exhibited tremendous sportsmanship).

The easiest thing to do is to simply ask people why they find the story compelling. This might get at the truth, but it is also possible that people will not be totally honest (for example, the disgruntled Phillies fan might not want to admit that part of the appeal) or that they don’t really know what it is that has drawn them in. It might also identify the most important factor but not make note of other critical factors.

We could employ a technique like Max-Diff and ask them to choose which features of the story they find most compelling. This would provide a fuller picture, but is still open to the kinds of biases noted above.

Perhaps the best method would be to use a discrete choice approach. We take all the features of the story and either include them or don’t include them in a “story description”, then ask people which story they would be most likely to read. We can then use analytics on the back end to sort out what really drove the decision.
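As a rough illustration of what that back-end analysis could look like, here is a hedged sketch in Python. The story features, their “true” appeal weights and the simulated respondents are all hypothetical, and the simple chosen-when-shown counts stand in for the proper discrete choice (logit) model a real study would estimate.

```python
import random

random.seed(1)
FEATURES = ["star girl pitcher", "hometown team", "diverse roster", "feel-good angle"]
# Hidden "true" appeal of each feature, used only to simulate choices.
TRUE_WEIGHT = {"star girl pitcher": 1.2, "hometown team": 0.8,
               "diverse roster": 0.3, "feel-good angle": 0.6}

def random_story():
    # Build a story description by randomly including or excluding each feature.
    return {f: random.random() < 0.5 for f in FEATURES}

def appeal(story):
    return sum(TRUE_WEIGHT[f] for f, present in story.items() if present)

shown = {f: 0 for f in FEATURES}
chosen = {f: 0 for f in FEATURES}
for _ in range(5000):            # 5,000 simulated "which would you read?" tasks
    a, b = random_story(), random_story()
    pick = a if appeal(a) >= appeal(b) else b
    for story, counter in ((a, shown), (b, shown), (pick, chosen)):
        for f, present in story.items():
            counter[f] += present

for f in FEATURES:
    print(f"{f:18s} picked when shown: {chosen[f] / shown[f]:.0%}")
```

Features that barely move the needle hover near 50% (picked about as often as not), while the features that really drive the choice pull well above it.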

...

I read a blurb in The Economist about UFO sightings. They charted some 90,000 reports and found that UFOs are, as they put it, "considerate". They tend not to interrupt the work day or sleep. Rather, they tend to be seen far more often in the evening (peaking around 10PM) and more on Friday nights than other nights.
The Economist dubbed the hours of maximum UFO activity "drinking hours" and implied that drinking was in fact the cause of all those sightings.
As researchers, we know that correlation does not mean causation. Of course their analysis is interesting and possibly correct, but it is superficial. One could argue (and I'm sure certain "experts" on the History Channel would) that it is in fact the UFO activity that causes people to want to drink, but by limiting their analysis to two factors (time of day/number of sightings), The Economist ignores other explanations.
For example, the low number of sightings during sleeping hours would make perfect sense (most of us sleep indoors with our eyes closed). The same might be true for the lower number during work hours (many people don't have ready access to a window and those who do are often focused on their computer screen and not the little green men taking soil samples out the window).
As researchers, we need to consider all the possibilities. Questionnaires should be constructed to include questions that help us understand all the factors that drive decision making. Analysis should, where possible, use multivariate techniques so that we can truly measure the impact of one factor over another. Of course, constructing questions that allow respondents to express their thinking is also key...while a long attribute rating battery might seem "comprehensive", it is more likely mind-numbing for the respondent. We of course prefer to use techniques like Max-Diff, Bracket™ or Discrete Choice to figure out what drives behavior.
Hopefully I've given you something to think about tonight when you are sitting on the porch, having a drink and watching the skies.


Market researchers are constantly being asked to do “more with less”. Doing so is both practical (budgets and timelines are tight) and smart (the more we ask respondents to do, the less engaged they will be). At TRC we use a variety of ways to accomplish this, from basic (eliminate redundancies, limit grids and the use of scales) to advanced (use techniques like Conjoint, Max-Diff and our own Bracket™ to unlock how people make decisions). We are also big believers in using incentives to drive engagement and, with it, more reliable results. That is why a recent article in the Journal of Market Research caught my eye.

The article was about promotional lotteries. The rules tend to be simple: “send in the proof of purchase and we’ll put your name into a drawing for a brand new car!” The odds of winning are also often very remote, which might make some people not bother. In theory, you could increase the chances of participation by offering a bunch of consolation prizes (free or discounted product, for example). In reality, the opposite is true.

One theory would be that the consolation prizes may not interest the person and thus they are less interested in the contest as a whole. While this might well be true, the authors (Dengfeng Yan and A.V. Muthukrishnan) found that there was more at work. Consolation prizes offer respondents a means to understand the odds of winning that doesn’t exist without them. Seeing, for example, that you have a one in ten million chance of winning may not really register because you are so focused on the car. But if you are told those odds and also the much better odds of winning the consolation prize, you realize right away that, at best, you are likely to win only the consolation prize. Since this prize isn’t likely to be as exciting (for example, an M&M contest might offer a free bag of candy for every 1000 participants), you have less interest in participating.
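A little arithmetic makes the point. Using the hypothetical odds above (a one-in-ten-million shot at the car and a consolation prize for every thousand entrants):

```python
# Hypothetical odds, matching the examples in the text above.
p_car   = 1 / 10_000_000    # grand prize
p_candy = 1 / 1_000         # consolation prize

# If you do win something, how likely is it to be the candy rather than the car?
p_candy_given_win = p_candy / (p_car + p_candy)
print(f"Chance your prize is the candy, not the car: {p_candy_given_win:.2%}")  # ~99.99%
```

Once the consolation prize makes the odds concrete, the realistic outcome is the unexciting one, which is exactly the effect the authors describe.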

Since we rely so heavily on incentives to garner participation, it strikes me that these findings are worthy of consideration. A bigger “winner take all” prize drawing might draw in more respondents than paying each respondent a small amount. I can tell you from our own experimentation that this is the case. In some cases we employ a double lottery using our Smart Incentives™ gamification tool (including in our new ideation product Idea Mill™). In this case, the respondent can win one prize simply by participating and another based on the quality of their answer. Adding the second incentive brings in an additional component of gaming (the first being “chance”) by adding a competitive element.

Regardless of this paper, we as an industry should be thinking through how we compensate respondents to maximize engagement.

...

We are about to launch a new product called Idea Mill™ which uses a quantitative system to generate and evaluate ideas all in one step. Our goal was to create a fast and inexpensive means to generate ideas. Since each additional interview we conduct adds cost, we wondered what the ideal number would be.

To determine that we ran a test in which we asked 400 respondents for an idea. Next, we coded the responses into four categories.  

Unique Ideas – Something that no other previous respondent had generated.

Variations on a Theme – An idea that had previously been generated but this time something unique or different was added to it.

Identical – Ideas that didn’t add anything significantly different from what we’d seen before.

...

I attended IIeX (Insight Innovation Exchange) in June 2013 where the message was all about dramatic change coming and coming fast. A sort of “innovate or die” message. I expected CASRO’s annual conference to take almost the opposite view. After the first day I am pleased to say that while the view from CASRO is more measured, there is little doubt that change is coming.

From the opening remarks the focus has been on change. Not how to avoid it, but how to embrace it. IIeX presented the opportunity to see how new methods are being used, along with lots of sessions on new products and services that offer both opportunity and threat to the status quo. CASRO is less specific and focuses more on how to think differently, how to recognize opportunities and how to innovate to stay relevant. In the end, however, the message is clear…you must innovate.

This should come as no surprise to researchers. Whether you do product development as we do, or virtually any kind of research, we advise our clients on how to change to meet the demands of the market. Why then should we expect to be any different in our own business?

So, while I expected the two conferences to present distinctly different views, I am pleased to say they are presenting complementary views. I walked away from IIeX with lots of ideas on how to apply some great new tools. Thus far at CASRO I have grown in confidence that I’m on the right track and gained new ways to look at the innovation process. It has already helped me refine my thinking and caused me to want to accelerate change in my company.

We’ll see what the next two days of CASRO hold in store. Ideally I will be glad to have been at both IIeX and CASRO and have a hard time saying which one was the most valuable. One thing I can say, however, is this: while my friend Lenny Murphy has done an outstanding job leading the call for change in this industry, CASRO still outshines IIeX when it comes to food and drink.  


I’ve written before about how much I detest our industry’s aversion to change, but today I’d like to be positive and talk about how we can change while not selling out the principles that should drive market research. Here are five principles I’ve used in coming up with new solutions.

  1. Focus on What Is Important and Dump the Rest – I’ve always been a custom researcher and so I tend to want to cover every nuance of an objective before deciding that I’ve done my job. The trouble with this is it can lead to higher budgets and longer schedules. Fine if the issue is a long term strategic goal, but unworkable in a world where clients are making decisions faster than ever before.

  2. Set a Budget – Now this might sound like a cart before the horse issue, but I have found it is easier to be true to point 1 if you start off by establishing your budget. Let’s face it, who has not had a client say that they want to accomplish some set of objectives but only have a very limited amount to spend? When that happens we realize that we’ll have to compromise and we come up with something that might not get into every nuance, but that does help the client make a better decision. In determining your budget you should start by thinking about the cheapest you could imagine doing it for and then go well below that (I’d start with half). You might not be able to achieve it, but the lower you start the more it will help you avoid the issues discussed in the first point above.
  3. Set a Time Frame – Identical logic to number two. We’ve all had crash projects that had to be achieved in a ridiculously short time frame and we generally figure out a way to accomplish them. Here again, look at the fastest you’ve ever done something in the past and see if you can figure out how to cut that time in half.  
  4. Talk to Clients and Prospects – This is basic. There are unfulfilled needs out there. Some are things the client side researchers can tell you right off (“I’d really like it if you could…”) and some are things they don’t think about because they assume they can’t be done. So have conversations about both. For the things they can articulate, ask them exactly what they would need to fulfill that. For the things they can’t articulate, ask them how a new service would be applied to their business (if at all). The answers here will help you create new ideas and refine the ones you have. Most important, it will inform the points above.
  5. Never Stop Doing Good Research – Faster and cheaper doesn’t mean bad. Obviously a thoughtful collaborative custom research effort will provide superior market research…but if the time or budget don’t allow for it, then the “superior” research is useless (either too late to help or too expensive to do in the first place). That doesn’t mean you shouldn’t deliver reliable results…just that you need to understand (and make your clients understand) the limitations that result from the compromises you had to make.

At TRC we recently launched Message Test Express™. The product developed out of a phone call with a prospect who complained that he couldn’t do effective quantitative message testing because time did not allow it. From that conversation we set budget and timing criteria and then tried to figure out how we could help him to do effective message testing within those parameters. As we worked through our plan we went back and got feedback from him and other clients to make sure we were on the right track. Finally, we figured out how to include some advanced methods (we used our proprietary Bracket™ prioritization tool to provide individual level utilities for each message) and useful tools (such as a highlighter tool, heat maps and specialized word clouds) that maximized the reliability and usefulness of the results.

Doing all of the above is no guarantee that the product will be a success (too early to know if Message Test Express™ will be), but I believe these steps are a good foundation for creating one. Of course the alternative (not trying to innovate) will surely lead to failure.


Please don’t judge me for this, but I’ve watched at least half a dozen episodes of America’s Got Talent this summer. It is easy viewing with a variety of acts from daredevils to singing and dancing, and features celebrity judges adding sarcastic asides. But what struck me is how the show’s format points to the essential weakness of rating scales and the strength of choice questions.

In the early “audition” shows, acts come on and perform for a few minutes. The judges then critique them and ultimately vote “yes” or “no”. If two judges vote “no” the act is done. Otherwise the contestants go to Las Vegas for the next round.   Now while “yes” or “no” is in fact a choice, it is really nothing more than a disguised rating. The reason is there is no constraint. They don’t have a limit on how many people go forward. This is like reading a list of features and asking respondents which ones are important to them (anyone who has done market research knows the answer to such questions is generally “everything is important”).  

Once in Vegas the hard work begins. This season about 120 acts made it there, but only 60 are needed for the competition. So the judges had to decide which 60 would get to the next stage. To do this they picked 30 acts that they thought were good enough to go on and 60 that they wanted to see again to pick the other 30. The remaining 30 were called in and summarily told that they were done (so yes, they flew them to Vegas just to tell them this). Frankly I’d been surprised by many of the acts that got to go to Vegas, so I wasn’t surprised by the choices.  

The key here was that unlike the early rounds…they now had a constraint. As with Max Diff (where you have to pick winners and losers) and Conjoint (where you are constrained by the mix of features and levels), they now had to make real choices. In this case, many were not hard (though telling 10 year olds they are done can’t be easy…even if they clearly are not good enough).   The 60 remaining acts were not all great (many were not even good in my opinion), but they were far better than the 60 sent packing.

From here the tournament becomes more like our proprietary Bracket™ technique. Performances are compared to each other with some getting to move on (and perform against other winning acts) and some being done. In the end only one act will win…the one that is most popular among the dedicated fans of the show. This is exactly how good market research should work…force hard choices to drive the best product, message, segmentation solution or price (using pricing research).

...

John Allen Paulos has written a series of books about how most people have a difficult time understanding the meaning of numbers. Researchers who have relied on numbers to tell a story shouldn’t be surprised by this. Even basic statistics can be hard to grasp, let alone the Bayesian math needed for complex efforts like conjoint. Even though most of our clients are quite numerate, they often present results to those who are not. If we are to play, and help our clients play, an active role in decision-making we have to overcome this problem.

One of the examples that Paulos uses involves our inability to understand risk. In their new book, The Norm Chronicles: Stories and Numbers About Danger, Michael Blastland and David Spiegelhalter have tried to simplify things by boiling risk down to a simple number…the MicroMort. One MicroMort means you have a one in a million chance of death.  

On the one hand, this does seem to simplify often complicated actuarial calculations, such that we can see that soldiers in Afghanistan face a danger of 47 MicroMorts daily, which is of course far more dangerous than your chances of death in a car crash (about 1 MicroMort per day) but far less than WWII bomber crews, who were exposed to 25,000. The use of one number certainly simplifies things, but if someone is not great with numbers it might not resonate.

A second means they use is to convert numbers to “MicroLife” terms. So for example, a smoker’s life is cut short by five hours for each day they smoke. Or my favorite stat that your first alcoholic drink each day adds 30 minutes to your life…sadly a drink every half hour won’t get you immortality since each additional drink deducts 15 minutes. While still using numbers, these do at least present them in a clear relatable way. Of course I wonder how many smokers realize they are deducting a year of life for every five years they smoke?
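That smoking figure is easy to check with simple arithmetic, using only the numbers quoted above:

```python
# MicroLife arithmetic: roughly five hours of life expectancy lost per day of smoking.
HOURS_LOST_PER_SMOKING_DAY = 5
years_smoked = 5

hours_lost = HOURS_LOST_PER_SMOKING_DAY * 365 * years_smoked
print(f"Lost: {hours_lost:,} hours = {hours_lost / 24:.0f} days "
      f"= {hours_lost / 24 / 365:.2f} years")
# 5 * 365 * 5 = 9,125 hours, or about 380 days -- roughly the year of life per
# five years of smoking mentioned above.
```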

Finding the right mix between numerical precision and understanding can be tricky, and not just for research agencies. The key for us is to pair that precision with a clear message. We can’t get hung up too much on things like “statistical differences” (as our Quirk’s Article pointed out). Instead we need to focus on the decisions that need to be made and pull together a narrative that helps drive them. This certainly doesn’t mean we shouldn’t use numbers…just that we need to put them in the context of recommendations.

...
The Economy of Food at Sporting Events (image source: www.sports-management-degrees.com)

As we learn to distill ever-expanding amounts of data into simple recommendations, we would do well to think about presenting data in a better way. People often make the mistake of describing themselves as either a “numbers person” or a “picture person”, but in reality we all use both sides of the brain…right (images) and left (analytics). I read an article this week which makes the point that the best way to drive understanding is by presenting analytical data in a visual way. This engages both sides of the brain and thus helps us to quickly internalize what we are seeing.

We might be tempted to say that data visualization is easier said than done (but then what isn’t?). We might also be tempted to say that most market research data isn’t that interesting. I tend to disagree.  

Just last week I exchanged some emails with Sophia Barber of Sports-management-degrees.com. She pointed me to a great infographic about spending on food at sporting events. It is colorful and comprehensively covers a lot of data. If you are a “numbers person” you might try paging about halfway down, where all of the underlying data are presented in stark form. My bet would be that even the staunchest numbers person will get more from the combination than from the dull recitation of facts.

Of course, both food and sports are relatively interesting topics, but what if the topic isn’t fun and interesting? I still say that results from even highly analytical studies (things like conjoint, discrete choice, pricing studies and so on) can be made more memorable and more interesting through the simple addition of pictures, and I mean pictures that go beyond simple graphs and charts (which are often as dull as a list of numbers). Doing so drives the point home faster and with that makes our work more relevant.


Really enjoyed the IIeX Greenbook conference. I generally concurred with the opinions expressed and many of the presentations gave me ideas on how we might better serve our clients. Thought I might share some of my reflections here.

In general terms this was a conference that likely scared more than one researcher. For example, Charles Vila, the head of Campbell Soup’s Consumer and Customer Insights for North America, said that within five years he doesn’t expect to use any survey data. Personally, I tend to disagree with such sweeping statements (hopefully this won’t prevent me from working with Campbell’s moving forward), but perhaps they are necessary to shake our often complacent industry into thinking differently.

In that regard, Campbell’s is a good example. Their flagship product is soup, a product that has been around forever and sold by them for 100 years. This doesn’t stop them from innovating not just with new products, but in the way they engage the customer. Their staff is immersed in the latest gadgets that consumers are using so they can better understand how they can be employed in Campbell’s marketing efforts.

So, I’d encourage researchers to do the same. Ultimately it doesn’t matter if surveys go away or simply cease to be the primary form of data collection. If we allow ourselves to be defined by how we acquire data then we deserve to go the way of the proverbial buggy whip manufacturers at the turn of the last century.

The great news is that many of the new technologies being shown off are not really competing with us. Most seek to provide new tools for traditional research companies to use.   Some might replace surveys and others augment them. Some are really just surveys in another form (such as Google’s) and there are new ways to design and implement surveys to better get at the truth (my partner Rajan Sambandam’s presentation on “Behavioral Conjoint” being one self-serving example). The possibility of improving our ability to guide product development, pricing research and marketing is one we should embrace.

...
