My friend and I don’t share the same definition of what it means to be on-time. I don’t necessarily subscribe to the “early is on-time, on-time is late, late is unacceptable” theory, but I do try to arrive at or before an agreed-upon time. She thinks there is wiggle room surrounding any appointment time – 5 or 10 minutes – and doesn’t seem concerned that I’ve been waiting for her to arrive. The good news is that if I’m running behind schedule, it doesn’t bother her that I arrive late. But if I’m going to be 5 to 10 minutes late, I’ll notify her. She would never think to do the same – because in her mind she’s on-time.
Perhaps I have too strict a definition of what it means to be on-time. Is 5 minutes considered late to everyone or just to me? We surveyed TRC’s online consumer panel to get an answer.
We used 5 minutes as our test case. If an appointment time is at 9:00 and actual arrival is 9:05, do you consider yourself on-time or late (or early)? To make things interesting, we asked about a variety of scenarios, since it’s possible that definitions may change based on the social situation.
If your boss calls an urgent meeting and you arrive 5 minutes past the start time, 2/3 of our participants consider that to be “late”. When I saw that, at first I felt vindicated. But then I realized that if 2/3 say that’s late, 1/3 say it’s okay – 5 minutes is on-time or even early. Then I looked at the rest of the scenarios: 2/3 consider 5 minutes “late” for babysitting or for a weekly religious service. If you show up 5 minutes after your reservation time at a restaurant, only 57% consider that to be late. And if you’re meeting a friend for a casual dinner (no reservations), only 47% – less than half of the adults we surveyed – believe that 5 minutes off-schedule is actually “late”. What’s this world coming to?
Check out the infographic below to see how others learned another language.
Presenting data in an infographic format is like speaking another language. People who didn't understand you before now can. All of a sudden, they can clearly see the data points you had been trying to communicate. And just like learning a new language, converting data into infographics can be daunting – yet the benefits are endless. Mainly, they open up new perspectives. At TRC we can help you overcome this hurdle: we produce infographics as part of our project deliverables.
If you open your mailbox today, chances are that there will be a catalog in it. Even with the explosion in online purchasing, paper catalogs continue to be an important part of the retail marketing mix. Whether they spur traditional mail- or telephone-ordering or, more often now, online purchasing and even foot traffic in brick and mortar stores, catalogs remain critical for retailers. They not only show consumers what is available, but they also serve as an important branding tool.
Even if the recipient does not open or thoroughly review a catalog, its cover, its size and the kind of paper it is printed on can all telegraph meaning about the sender's brand.
But isn't there much more to be gained if the consumer does open the catalog?
Based on an online survey among a panel of consumers nationwide, TRC estimates that the average household receives 3.7 catalogs per week. That is nearly 200 in the course of a year!
So how can catalog marketers break through the mailbox clutter and inspire consumers to look at what is actually inside their materials? We asked our national panel about some factors that influence their decisions to open (or not open) a catalog they receive. A key learning is something catalog marketers would certainly confirm: targeting is critical. Product interest and perceived need account for a large share of the decision to open a catalog, so getting the catalog to the right person is of course essential.
But once the catalog is in the right mailbox, it is clear that what the recipient sees on its cover will be important in whether or not the catalog is opened. First and foremost is the specific offer (sale, percent off, etc.) highlighted on that cover. Cover imagery also plays a role, particularly if the brand is familiar to the recipient.
Take a look at the accompanying chart, and note that we asked some respondents to think about catalogs they might receive from familiar companies, while others considered catalogs from companies they had not heard of before. All of those answering had indicated earlier in the survey that they receive and open/look through catalogs in a typical week.
Knowing that the cover can be so important in whether a catalog is opened, TRC believes it is well worth devoting resources to ensuring that the right cover is used. While some catalog marketers will test multiple covers prior to full mail launches, it is impractical to test more than just a few. Those few are typically selected from among a broader set – based on “gut feel” or simple preferences on the part of the design team.
But what if there were an efficient, consumer-data-driven method to select a “winning” cover from among a broad set of candidates? TRC has developed just that: our approach leverages our proprietary Bracket™ survey technology to submit a large number of cover designs to a tournament-style evaluation that yields rankings and relative distances across the entire set of designs. An even more streamlined approach, Message Test Express™ (MTE™), can provide similar insights for up to 16 cover designs – in around a week and for a cost of approximately $10,000.
Considering the volume that any catalog must compete against in the typical recipient’s mailbox, isn’t it practical to maximize the likelihood that the catalog will be opened? In our experience, concise, consumer-driven metrics on likely success are superior to “gut feel” evaluations and are certainly more affordable than in-market testing of even a small number of options. Why risk missing a great opportunity by overlooking an optimal cover execution?
Fitness and health have always been important to me, but as I’ve gotten older I’ve become even more aware of what I eat and where my food comes from. A key turning point came a year and a half ago when I watched the documentary “Food, Inc.” by filmmaker Robert Kenner. After watching it I was on the fence for a month, contemplating becoming vegan. But alas, my love for a good piece of steak won out. It did, however, leave an imprint on where and what type of food I buy. My fiancé is of the same mind, so when he moved in we started searching out ways to buy locally sourced food and meat from animals that are treated humanely. Many of our friends, especially those with kids, tend to be food-aware as well. My parents, on the other hand, though health and wellness are important to them, think “organic” is a big grocery money scheme. This got me thinking…who are the most food-aware? Is there an age difference?
Using our online panel of consumers, I asked a series of questions to find out. When looking at health and wellness attitudes, eating well is important to both young and old. Where we do see differences: those 44 or younger are more motivated to improve their health and wellness and are more likely to enjoy dining at restaurants that specialize in farm-to-table. Bob and I are huge fans of farm-to-table restaurants and have been excited by the recent addition of a few establishments near us.
| Top-2-Box: Strongly agree | 44 or younger | 45 or older |
|---|---|---|
| Improve health and wellness | 70%↑ | 46% |
| Dine at restaurants that specialize in farm-to-table | 46%↑ | 26% |

Up arrow indicates significantly higher value at 95% confidence level.
Across the board, younger consumers are more likely to buy organic products. I think the only time my parents buy organic is when my brother comes to town with his little ones as he and my sister-in-law insist on organic only.
| Buy Organic Always / Usually | 44 or younger | 45 or older |
|---|---|---|
| Vegetables and fruit | 69%↑ | 32% |
| Bath and body care | 58%↑ | 20% |

Up arrow indicates significantly higher value at 95% confidence level.
Now, when asking about participation in various “green” activities (e.g., recycling, composting and gardening), we see no difference by age. However, younger consumers are more likely to participate in farm co-ops and raise chickens.
| Yes % | 44 or younger | 45 or older |
|---|---|---|
| Participate in farm co-op | 19%↑ | 2% |

Up arrow indicates significantly higher value at 95% confidence level.
From our research, it appears that younger consumers are more engaged in wellness activities related to food than older consumers, even though both groups believe health and wellness to be important. Buying organic can be expensive – so the question becomes how much people are willing to pay for organic products or for meat from animals that are treated humanely. This might be a good topic for a conjoint study, which would pit various product options against one another to see how price comes into play when grocery shopping.
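As a rough sketch of how such a study could quantify that willingness to pay: a conjoint model estimates part-worth utilities for price levels and for an organic label, and the utility gap between labels can be converted into an implied dollar premium. All the numbers below are hypothetical, purely for illustration – not results from our research.

```python
# A minimal sketch of deriving willingness to pay from conjoint part-worths.
# All part-worth values are made up for illustration.
u_price = {3.00: 0.60, 5.00: -0.60}               # utility at two tested prices
u_label = {"organic": 0.45, "conventional": 0.00}  # utility of each label

# Assuming roughly linear price utility, one dollar is "worth" this many utiles:
utiles_per_dollar = (u_price[3.00] - u_price[5.00]) / (5.00 - 3.00)

# The organic label's utility edge, expressed in dollars:
premium = (u_label["organic"] - u_label["conventional"]) / utiles_per_dollar
print(f"Implied organic premium: ${premium:.2f}")  # $0.75 with these numbers
```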
So I certainly do not follow politics closely, even during a presidential election year, which I suppose could also be read as: I don’t know very much about politics. But that small disclaimer aside, watching the news coverage of the recently completed Iowa caucuses and the upcoming New Hampshire primary, something struck me as peculiar about this process. These events happen in succession, not simultaneously. First comes the Iowa caucus, then the New Hampshire primary, followed by the Nevada and South Carolina primaries, and so on with the other states. And after each event is held, the results are (almost) immediately known. So the folks in New Hampshire know the outcome from Iowa. The folks in Nevada and South Carolina know the outcomes from Iowa and New Hampshire.
Doesn’t this lead to inherent and obvious bias? That’s the market researcher in me talking. In implementing questionnaires, we wouldn’t typically reveal the results from previous respondents to those taking the survey later. Doing so would surely have some unwanted influence on their answers. We need as clean and pure a read as a survey can give us on consumer opinions and attitudes. Any deviation from this would surely compromise our data.
But then again, is this always the case? Could there be situations in which some purposeful, predisposed informational bias is beneficial? I say yes! Granted, one needs to be cautious and thoughtful when exposing respondents to prior information, but sometimes, in order to get the specific type of response we want, a little bias is helpful. If asking about a particular product or product function, we may provide an example or guide so respondents can fully understand the product – e.g., “10 GB of storage is good for X number of movies and X number of songs.”
But circling back to the notion of letting respondents see the answers from previous respondents, even within the same survey: this can be quite helpful in priming folks to start thinking creatively. If we wish to gather creative ideas from consumers, it’s easy enough to ask them outright to jot something down. But it’s difficult to come up with new and creative ideas on the fly without much help, and the responses we get from such tasks validate that point – many are nonsense or short, dull answers. So instead, we can show a respondent several ideas that have come up previously, either internally or from previous respondents, to jumpstart the thinking process; they can then edit or add onto an existing idea, or be stimulated enough to come up with their own unique idea. And the truth is, it works! We at TRC implement this exact new product research technique, with great success, in our Idea Mill™ solution, and end up with many creative and unique ideas that our client companies use to move forward.
So while the presidential process strikes me as odd since any votes cast in other states following the Iowa Caucus may be inherently biased, there are opportunities where this sort of predisposition to information can work in our favor.
December and January are full of articles that tell us what to expect in the New Year. There is certainly nothing wrong with thinking about the future (far from it), but it is important that we do so with a few things in mind. Predictions are easy to make, but hard to get right, at least consistently.
First, to some extent we all suffer from the “past results predict the future” model. We do so because quite often they do, but there is no way to know when they no longer will. As such, be wary of predictions that say something like “last year neuro research was used by 5% of Fortune 500 companies…web panels hit the 5% mark and then exploded to more than 50% within three years.” It might be right to assume the two will have similar outcomes, or it might be that the two situations (both in terms of the technique and in terms of the market at the time) are quite different.
Second, we all bring a bias to our thinking. We have made business decisions based on where we think the market is going, and so it is only natural that our predictions might line up with that. At TRC we’ve invested in agile products to aid in the early stage product development process. We did so because I believe the market is looking for rigorous, fast and inexpensive ways to solve problems like ideation, prioritization and concept evaluation. Quite naturally, if I’m asked to predict the future I’ll tend to see these as having great potential.
Third, some people will be completely self-serving in their predictions. So, for example, we do a tremendous amount of discrete choice conjoint work. I certainly would like to think that this area will grow in the next year, so I might be tempted to make that prediction in the hope that readers will suddenly start thinking about doing a conjoint study.
Fourth, an expert isn’t always right. Hearing predictions is useful, but ultimately you have to consider the reasoning behind them, seek out your own sources of information and consider things you already know. Just because someone has a prediction published doesn’t mean they know the future any better than you do.
I recently finished Brian Grazer’s book A Curious Mind and I enjoyed it immensely. I was attracted to the book both because I have enjoyed many of the movies he made with Ron Howard (Apollo 13 being among my favorites) and because of the subject…curiosity.
I have long believed that curiosity is a critical trait for a good researcher. We have to be curious about our clients’ needs, new research methods and, most important, the data itself. While a cursory review of cross tabs will produce some useful information, it is digging deeper that allows us to make the connections that tell a coherent story. Without curiosity, analytical techniques like conjoint or max diff don’t help.
The book shows how Mr. Grazer’s insatiable curiosity has brought him into what he calls “curiosity conversations” with a wide array of individuals, from Fidel Castro to Jonas Salk. He had these conversations not because he thought there might be a movie in them, but because he wanted to know more about these individuals. He often came out of the conversations with a new perspective and yes, sometimes even ideas for a movie.
One example concerns Apollo 13. He had met Jim Lovell (the commander of that fateful mission) and found his story interesting, but he wasn’t sure how to make it into a movie. The technical details were just too complicated.
Later he was introduced by Sting to Veronica de Negri. If you don’t know who she is (I didn’t), she was a political prisoner in Chile for 8 months, during which she was brutally tortured. To survive, she had to create an alternate reality for herself. In essence, by focusing on the one thing she still had control of (her mind), she was able to endure the things she could not control. Mr. Grazer used that logic to help craft Apollo 13. Instead of being a movie about technical challenges, it became a movie about the human spirit and its ability to overcome even the most difficult circumstances....
In new product market research we often discuss the topic of bias. Typically these discussions revolve around issues like sample selection (representativeness, non-response, etc.), but what about methodological or analysis bias? Is it possible that we impact results by choosing the wrong market research methods to collect the data or to analyze the results?
A recent article in the Economist presented an interesting study in which the same data set and the same objective were given to 29 different researchers. The objective was to determine whether dark-skinned soccer players were more likely to get a red card than light-skinned players. Each researcher was free to use whatever methods they thought best to answer the question.
Both statistical methods (Bayesian clustering, logistic regression, linear modeling...) and analysis techniques differed from one researcher to the next (some, for example, considered that certain positions might be more likely to draw red cards, and adjusted the data accordingly). No surprise, then, that results varied as well. One researcher found that dark-skinned players were only 89% as likely to get a red card as light-skinned players, while another found dark-skinned players were three times MORE likely to get the red card. So who is right?
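To see how this can happen, here is a small simulation – entirely hypothetical data, not the study's – in which two defensible analyses of the very same records disagree: a naive, unadjusted risk ratio versus one that adjusts for playing position.

```python
# A minimal sketch: the same simulated data, two reasonable analyses,
# two different answers. All numbers are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 20_000

# Hypothetical confound: defenders draw more red cards, and in this toy
# data dark-skinned players are over-represented among defenders.
position = rng.choice(["defender", "forward"], size=n)
dark = rng.random(n) < np.where(position == "defender", 0.45, 0.25)
red = rng.random(n) < np.where(position == "defender", 0.08, 0.02)

df = pd.DataFrame({"position": position, "dark": dark, "red": red})

# Analysis 1: naive (unadjusted) risk ratio
naive = df.loc[df.dark, "red"].mean() / df.loc[~df.dark, "red"].mean()

# Analysis 2: risk ratio computed within position, then averaged
by_pos = df.groupby(["position", "dark"])["red"].mean().unstack()
adjusted = (by_pos[True] / by_pos[False]).mean()

print(f"Unadjusted risk ratio:      {naive:.2f}")     # inflated by the confound
print(f"Position-adjusted estimate: {adjusted:.2f}")  # close to 1.0 here
```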
There is no easy way to answer that question. I'm sure some of the analyses can easily be dismissed as too superficial, but in other cases the "correct" method is not obvious. The article suggests that when important decisions regarding public policy are being considered, the government should contract with multiple researchers and then compare and contrast their results to gain a fuller understanding of what policy should be adopted. I'm not convinced this is such a great idea for public policy (it seems like it would only lead to more polarization as groups pick the results they most agreed with going in), but the more important question is: what can we as researchers learn from this?
In custom new product market research the potential for different results is even greater. We are not limited to existing data. Sure, we might use such data (customer purchase behavior, for example), but we can and will supplement it with data that we collect. These data can be gathered using a variety of techniques and question types. Once the data are collected, we have the same potential to come up with different results as the study above.
About a decade ago, if someone had mentioned the words "mobile app", most people would have given them a puzzled look. Nowadays, we hear about these apps everywhere. There are commercials for them on television, ads in magazines, billboard posts, etc. It's truly amazing to see how advanced technology has become and what can be accomplished by using it.
In this technology-based era, the smartphone is becoming increasingly popular across a wide variety of ages. In my opinion, the biggest perk of smartphones is that we almost always have access to the Internet. Given that the Internet is one of the most efficient tools that retailers and businesses use to create, retain, and obtain business, why wouldn't they capitalize on the popularity and functionality of smartphones to do even more creating, obtaining and refining of their business? One of the best ways for a company to remain competitive in this smartphone era is to create a mobile app specific to the company.
Take Wawa, for example. For those who are not on the East Coast and may be unfamiliar with Wawa, it is a wonderful place that offers gasoline, freshly prepared foods, snacks, coffee and more. Okay, yes, ultimately it's a convenience store/gas station. However, to many of us on the East Coast, it's much more. Anyway, if you download the Wawa app, you can link it to your credit card or a Wawa gift card, which means you don't even have to bring your wallet into the store. The app includes a rewards system in which you receive points for your purchases, which can be redeemed for a free coffee or tea, or something of similar value. While Wawa offers many benefits to its customers through its mobile app, such as locating a nearby Wawa, checking gasoline prices or having easy access to nutrition info, it also gives app users the chance to provide feedback by means of an open-ended suggestion form. It would benefit the company to implement a survey within the app instead of an open-ended feedback form to gain insights about customers' transactions, experiences, and their overall opinions.
Fielding surveys within mobile apps provides a quick and easy way to reach customers and gain useful feedback. So, how do you get app users to actually participate in the survey? Simple. When the app is first opened or closed, add a pop-up message with a link to the survey that encourages the user to take it. Also, add the survey as an item on the app's navigation menu. While it's not ideal to conduct surveys containing something as intricate as conjoint analysis on mobile devices, companies can still create a simple survey that can be used to gain valuable insights about current products, potential products, customer satisfaction and an abundance of other consumer-related topics.
In order to create the best experience for the app user and get the most out of the data that is collected, companies should consider these five tips when developing a mobile survey:...
During my recent first time home buying experience I learned there are many, often competing, factors to consider. My last blog discussed how I used Bracket™, a tournament-based analytic approach, to determine what homebuyers find most important when considering a home. My list of 13 items did not include standard house stats like # of bedrooms, # of baths, etc. To measure preference for those items I used a conjoint design.
I framed up the conjoint exercise by asking homebuyers to imagine they were shopping for a home and to assume it was located in their ideal location. Using our online panel of consumers, we showed recent or soon-to-be homebuyers 2 house listings side by side, plus an “I wouldn’t choose either of these” option. Each listing included the following:
- # of bedrooms
- # of baths
- Home type
- Price
- House condition
I felt a conjoint was best suited here because, in addition to importance, I wanted to see what trade-offs homebuyers were willing to make among these 5 items, all of which are highly important in home buying. Are homebuyers willing to give up a bedroom to get the right price? Are they willing to do some sweat equity to get the number of bedrooms and/or bathrooms they want?
We found the top three most important factors are # of bedrooms, price and house condition. This made perfect sense to me, as I would not consider any house with fewer than 3 bedrooms. Price and house condition were the next two key pieces. Was the house in my price range? How much work was needed? Did the price give me enough wiggle room for repairs? I was curious to see the interplay between price and house condition among the recent and soon-to-be homebuyers we interviewed.
Using the simulator, I selected a 3 bedroom, 2 full bath, Single Family home. I picked 3 price points ($150,000, $300,000, $450,000) and then varied the house condition. Overall, homebuyers are less interested in a "gut job" compared to "move-in-ready". However, at the $150,000 price point, share of preference drops more drastically going from "move-in-ready/some work required" to "gut job" than it does at the higher price points....
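For those curious what is happening under the hood of such a simulator, here is a stripped-down sketch. The part-worth utilities below are invented for illustration (they are not our study's estimates), and shares of preference come from a standard multinomial-logit rule with a "choose neither" option.

```python
# A minimal share-of-preference simulator sketch with hypothetical part-worths.
import numpy as np

partworths = {
    "bedrooms":  {2: -0.8, 3: 0.4, 4: 0.6},
    "baths":     {1: -0.5, 2: 0.5},
    "price":     {150_000: 0.7, 300_000: 0.1, 450_000: -0.8},
    "condition": {"move-in-ready": 0.9, "some work required": 0.2, "gut job": -1.1},
}

def utility(profile):
    """Total utility of a listing = sum of its attribute-level part-worths."""
    return sum(partworths[attr][level] for attr, level in profile.items())

def shares(profiles, none_utility=0.0):
    """Multinomial-logit shares, with a 'choose neither' option at the end."""
    u = np.array([utility(p) for p in profiles] + [none_utility])
    expu = np.exp(u)
    return expu / expu.sum()

# Hold bedrooms/baths/price fixed and vary condition, as in the post.
base = {"bedrooms": 3, "baths": 2, "price": 150_000}
listings = [dict(base, condition=c) for c in ("move-in-ready", "gut job")]
print(shares(listings))  # watch the share collapse for the "gut job"
```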
The weather is starting to warm up and more of us are venturing outside, myself included. Walking my dog around the neighborhood I’ve noticed a number of for-sale signs and it reminds me of my own recent home buying experience. It was exciting and at the same time stressful. Once I made the decision to buy I started watching all the home buying shows and attending open houses to figure out my list of must-haves and nice to haves. I wondered how my list stacked up against others who went through or are going through the home buying process.
Using our online panel of consumers, I employed TRC’s proprietary Bracket™ exercise to find out what homebuyers find most important when considering buying a home. Bracket™ is a tournament-based analytic approach to understanding priorities. For each participant, Bracket™ randomly assigns the items being evaluated into pairs. Participants choose the winning item from each pair; that item moves on to the next round. Rounds continue until there is one “winner” per participant. Bracket™ uses this information to prioritize the remaining items and calculate the relative distances between them.
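For the programmatically inclined, the tournament logic is easy to picture in code. The toy sketch below simulates a single participant's bracket; the preference scores are random stand-ins for a real respondent's choices (Bracket™'s actual scoring across participants is proprietary).

```python
# A toy single-elimination sketch of a Bracket(TM)-style exercise.
import random

def run_bracket(items, prefers):
    """prefers(a, b) -> the item this participant picks from the pair."""
    random.shuffle(items)           # random initial pairings
    round_items = list(items)
    while len(round_items) > 1:
        winners = []
        # Pair items off; an odd item out gets a bye to the next round.
        for a, b in zip(round_items[::2], round_items[1::2]):
            winners.append(prefers(a, b))
        if len(round_items) % 2:
            winners.append(round_items[-1])
        round_items = winners
    return round_items[0]

items = ["Proximity to work", "Proximity to family", "School district",
         "Yard size", "Commute options"]
scores = {item: random.random() for item in items}  # stand-in preferences
winner = run_bracket(items, prefers=lambda a, b: max(a, b, key=scores.get))
print(winner)
```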
I created a list of 13 things to consider. I didn’t include standard house stats (# of bedrooms, # of baths, etc.), as I tested those separately using a conjoint analysis (my next blog will dive into what I did there).
Proximity to work
Proximity to family...
Very few clients will go to market with a new concept without some form of market research to test it first. Others will use real-world substitutes (such as A/B mail tests) to accomplish the same end. No one would argue against the effectiveness of approaches like these...they provide a scientific basis for making the right decision. Why is it, then, that in early-stage decision-making science is so often replaced with gut feel?
Consider this...an innovation department cooks up a dozen or more ideas for new or improved products and services. At this point they are nothing more than ideas, with perhaps some crude mock-ups to go along with them. Full-out concept testing would be costly for this number of ideas, and a real-world test is certainly not in the cards. Instead, a "team" - which might include product managers, marketing folks, researchers and even some of the innovation people who came up with the concepts - is brought together to winnow the ideas down to a more manageable number.
The team carefully evaluates each concept, perhaps ranks them, and provides their thinking on why they liked certain ones. These "independent" evaluations are tallied and the dozen concepts are reduced to two or three. Those two or three are then developed further and put through a more rigorous and costly process - in-market testing. The concept or concepts that score best in this process are then launched to the entire market.
This process produces a result, but also some level of doubt. Perhaps the concept that the team thought was best scored badly in the more rigorous research, or the winning concept just didn't perform as well as the team thought it would. Does anyone wonder whether some of the ideas that the team winnowed out might have performed even better than the "winners" they picked? What opportunities might have been lost if the best ideas were left on the drawing board?
The initial winnowing process is susceptible to various forms of error, including groupthink. The less rigorous process is used not because it is seen as best, but because the rigorous methods normally used are too costly to employ on a large list of items. Does that mean going with your gut is the only option?...
Last week we held an event in New York at which Mark Broadie from Columbia University talked about his book “Every Shot Counts”. The talk and the book detail his analysis of a very large and complex data set…specifically the “ShotLink” data collected for over a decade by the PGA Tour. It details every shot taken by every pro at every PGA Tour tournament. He was able to use it to challenge some long-held assumptions about golf…such as whether you really “drive for show and putt for dough”.
On the surface, the data set was not easy to work with. Sure, it had numbers like how long the hole was, how far the shot went, how far it ended up from the hole and so on. It also noted whether the ball ended up in the fairway, on the green, in the rough, in a trap or the dreaded out of bounds. Every pro has a different set of skills, and there was a surprising range of abilities even in this set; he then added the same data on tens of thousands of amateur golfers of various skill levels. So how can anyone make sense of such a wide range of data, and do it in a way that lets the amateur who scores 100 be compared to the pro who frequently scores in the 60s?
You might be tempted to say that he used a regression analysis, but he did not. You might assume he used hierarchical Bayesian estimation, as it has become more commonplace (it drives discrete choice conjoint, Max Diff and our own Bracket™), but he didn’t use that here either.
Instead, he used simple arithmetic. No HB, no calculus, no Greek letters, just simple addition, subtraction, multiplication and division. At the base level, he simply averaged similar scores. Specifically, he determined how many strokes it took, on average, for players to get from where they were to the hole. These averages were further broken down to account for where the ball started (not just distance, but rough, sand, fairway, etc.) and how good the golfer was.
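Here is a tiny sketch of that arithmetic. The baseline averages below are invented placeholders (Broadie's come from millions of real shots), but the strokes-gained calculation itself is just the subtraction described above: the baseline from where you started, minus the baseline from where you ended, minus the one stroke you took.

```python
# A minimal strokes-gained sketch with made-up baseline averages.
# Baseline table: average strokes to hole out from (lie, distance in yards).
baseline = {
    ("tee", 400):     4.0,
    ("fairway", 150): 3.0,
    ("rough", 150):   3.3,
    ("green", 20):    1.9,
}

def strokes_gained(before, after):
    """Strokes gained on one shot = baseline(before) - baseline(after) - 1."""
    return baseline[before] - baseline[after] - 1

# A 400-yard hole: a drive to the fairway, 150 yards out, is exactly average.
print(strokes_gained(("tee", 400), ("fairway", 150)))  # 0.0
# The same-length drive into the rough loses strokes to the field.
print(strokes_gained(("tee", 400), ("rough", 150)))    # -0.3
```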
These simple averages allow him to answer any number of “what if” questions. For example, he can see on average how many strokes are saved by driving an extra 50 yards off the tee (which turns out to be more than the gain from better putting). He can also show that, in fact, neither driving nor putting is as important as the approach shot (the last full swing before putting the ball on the green). The ability to put the ball close to the hole on this shot is the biggest factor in scoring low....
Lansdale Farmers Market is a nice little market in the Philadelphia outskirts, but is it truly the best in the entire county? Possibly, but you can’t tell from this poll. Lansdale Farmers Market solicited my participation by directing me to a site that would register my vote for them (Heaven only knows how much personal information “The Happening List” gains access to). I’m sure that the other farmers markets solicited their voters in the same or similar ways. This amounts to little more than a popularity contest. Therefore, the only “best” that my market can claim is that it is the best in the county at getting its patrons to vote for it.
But if you have more patrons voting for you, shouldn’t that mean that you truly are the best? Not necessarily. It’s possible that the truly “best” market serves a smaller geographic area, doesn’t maintain a customer list, or isn’t as good at using social media, to name a few possibilities.
The market opens for the summer season in a few weeks, and you can bet that I’ll be there. But I won’t stop to admire the inevitable banner touting their victory.
We marketing research types like to think of the purchase funnel in terms of brand purchase. A consumer wants to purchase a new tablet. What brands is he aware of? Which ones would he consider? Which would he ultimately purchase? And would he repeat that purchase the next time?
Some products have a more complex purchase funnel, one in which the consumer must first determine whether the purchase itself – regardless of brand – is a “fit” for him. One such case is solar home energy.
Solar is a really great idea, at least according to our intrepid research panelists. Two-thirds of them say they would be interested in installing solar panels on their home to help offset energy costs. There are a lot of different ways that consumers can make solar work for them – and conjoint analysis would be a terrific way to design optimal products for the marketplace.
But getting from “interest” to “consideration” to “purchase” in the solar arena isn’t as easy as just deciding to buy. Anyone in the solar business will tell you there are significant hurdles, not the least of which is that a consumer needs to be free and clear to make the purchase – renters, condo owners, and people with homeowners associations or strict local ordinances may be prohibited from installing panels.
Even if you’re a homeowner with no limitations on how you can manage your property, there are physical factors that determine whether your home is an “ideal” candidate for solar. They vary by region and different installers have different requirements, but here’s a short list:...
An issue that comes up quite a bit when doing research is the proper way to frame questions. In my last blog I reported on our Super Bowl ad test in which we surveyed viewers to rank 36 ads based on their “entertainment value”. We did a second survey that framed the question differently to see if we could determine which ads were most effective at driving consideration of the product…in other words, the ads that did what ads are supposed to do!
As with life, framing, or context, is critical in research. First off, the nature of the questions is important. Where possible, choice questions will work better than, say, rating scales. The reason is that consumers are used to making choices...ratings are more abstract. Techniques like Max-Diff, conjoint (typically discrete choice these days) or our own proprietary new product research technique, Bracket™, get at what is important in a way that ratings can’t.
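As a quick illustration of how a choice exercise turns picks into importance scores, here is a toy Max-Diff "counts" analysis on made-up tasks. (Production Max-Diff work typically uses hierarchical Bayes estimation; simple best-minus-worst counts are only a rough approximation.)

```python
# A minimal Max-Diff counts sketch on simulated choice tasks.
from collections import defaultdict

# Each task: (items shown, item picked as best, item picked as worst)
tasks = [
    (["price", "speed", "design", "support"], "price", "support"),
    (["speed", "design", "warranty", "price"], "price", "warranty"),
    (["design", "support", "warranty", "speed"], "speed", "support"),
]

shown = defaultdict(int)
score = defaultdict(int)
for items, best, worst in tasks:
    for item in items:
        shown[item] += 1   # how often each item appeared
    score[best] += 1       # +1 each time chosen best
    score[worst] -= 1      # -1 each time chosen worst

# Normalized best-minus-worst score per item, highest first.
for item in sorted(shown, key=lambda i: score[i] / shown[i], reverse=True):
    print(f"{item:<9} {score[item] / shown[item]:+.2f}")
```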
Second, the environment you create when asking the question must seek to put consumers in the same mindset they would be in when they make decisions in real life. For example, if you are testing slogans for the outside of a direct mail envelope, you should show the slogans on an envelope rather than just in text form.
Finally, you need to frame the question in a way that matches the real world result you want. In the case of a direct mail piece, you should frame the question along the lines of “which of these would you most likely open?” rather than “which of these slogans is most important?”. In the case of a Super Bowl ad (or any ad for that matter), asking about entertainment value is less important than asking about things like “consideration” or even “likelihood to tell others about it”.
So, we polled a second group of people and asked them “which one made you most interested in considering the product as advertised?” The results were quite interesting....
Well, it is the time of year when America’s greatest sporting event takes place. I speak, of course, about the race to determine which Super Bowl ad is the best. Over the years there have been many ways to accomplish this, but as so often happens in research today, the methods are flawed.
First there is the “party consensus method”. Here people gathered to watch the big game call out their approval or disapproval of various ads. Beyond the fact that the “sample” is clearly not representative, this method has other flaws. At the party I was at we had a Nationwide agent, so criticism of the “dead kid” ad was muted. This is just one example of how people in the group can influence each other (anyone who has watched a focus group has seen this in action). The most popular ad was the Fiat ad with the Viagra pill…not because it was perhaps the favorite, but because parties are noisy and this ad was largely a silent picture.
Second, there is the “opinion leaders” method. The folks who have a platform to spout their opinion (be it TV, YouTube, Twitter or Facebook) tell us what to think. While certainly this will influence opinions, I don’t think tallying up their opinions really gets at the truth. They might be right some of the time, but listening to them is like going with your gut…likely you are missing something.
Third, there is the “focus group” approach. In this method a group of typical people is shuffled off to a room to watch the game and turn dials to rate the commercials they see. So, like any focus group, these “typical” people are of course atypical. In exchange for some money they were willing to spend four hours watching the game with perfect strangers. Further, are focus groups really the way to measure something like which is best? Focus groups can be outstanding at drawing out ideas, providing rich understandings of products and so on, but they are not (nor are they intended to be) quantitative measures.
The use of imperfect means to measure quantitative problems is not unique to Super Bowl ads. I’ve been told by many clients that budget and timing concerns require that they answer some quantitative questions with the opinions of their internal team, or their own gut or qualitative research. That is why we developed our agile and rigorous tools, including Message Test Express™ (MTE™)....
Here in Philly we are recovering from the blizzard that wasn’t. For days we’d been warned of snow falling multiple inches per hour, winds causing massive drifts and the likelihood of it taking days to clear out. The warnings continued right up until we were just hours away from this weather Armageddon. In the end, only New England really got the brunt of the storm. We ended up with a few inches. So how could the weather forecasters have been this wrong?
The simple answer is, of course, that weather forecasting is complicated. There are so many factors that impact the weather…in this case, an “inverted trough” caused the storm to develop differently than expected. So even with massive historical data available and a variety of data points at their disposal, the weather forecasters can be surprised.
At TRC we do an awful lot of conjoint research…a sort of product forecast, if you will. It got me thinking about some keys to avoiding the same kinds of mistakes the weather forecasters made on this storm:
So, with all these limitations, is conjoint worth it? Well, I would suggest that even though the weather forecasters can be spectacularly wrong, I doubt many of us ignore them. Who sets out for work when snow is falling without checking to see if things will improve? Who heads off on a winter business trip without checking to see what clothes to pack? The same is true for conjoint. With all its limitations, a well-executed model (and executing well takes knowledge, experience and skill) will provide clear guidance on marketing decisions.
As anyone with experience with pets will tell you, no two are alike. If you tune in to Animal Planet’s series “Too Cute,” about the first few months in the lives of litters of puppies and kittens, you’ll find evidence of behavioral differences among siblings. One is reticent, another is always hungry, one sleeps a lot while his sibling is bouncing off the walls.
In my household, our two cats are no exception. They are very different from one another. You can categorize them by saying one is alpha, the other omega (or dominant/submissive, leader/follower). Alpha cat is a bully. He struts around like he owns the place and pushes Omega cat off her perch in the sun so he can claim her spot. She allows him to do this with little to no resistance.
The moment the doorbell rings, Alpha hides under the bed while Omega rushes to the door. Alpha is afraid of the vacuum cleaner, strangers and loud noises, and he rolls over in a submissive pose when the neighbor’s dogs are around. Omega, on the other hand, is food-obsessed and gives Alpha the evil eye when he approaches “her” food dishes. And she’s fearless when encountering new people and strange objects.
So we have a dominant cat and a submissive cat, but those labels don’t really tell the whole story....