Recent blog posts

I’ve become a huge fan of podcasts, downloading dozens every week and listening to them on the drive to and from work. The quantity and quality of material available is incredible. This week another podcast turned me on to eBay’s podcast “Open for Business”. Specifically, the title of episode three, “Price is Right”, caught my ear.
While the episode was of more use to someone selling a consumer product than to someone selling professional services, I got a lot out of it.
First off, they highlighted their “Terapeak” product which offers free information culled from the massive data set of eBay buyers and sellers. For this episode they featured how you can use this to figure out how the market values products like yours. They used this to demonstrate the idea that you should not be pricing on a “cost plus” basis but rather on a “value” basis.
From there they talked about how positioning matters and gave a glimpse of a couple of market research techniques for pricing. In one case, it seemed they were using the Van Westendorp Price Sensitivity Meter. The results indicated a range of prices that was far below where they wanted to price things. This led to a discussion of positioning (in this case, the product was an electronic picture frame which they hoped would be positioned not as a consumer electronics product but as home décor). The researchers didn’t do anything to position the product, so consumers compared it to an iPad, which led to the unfavorable view of pricing.
Finally, they talked to another researcher who indicated that she uses a simple “yes/no” technique…essentially “would you buy it for $XYZ?” She said that this matched the marketplace better than asking people to “name their price”.  
Of the two methods cited I tend to go with the latter. Any reader of this blog knows that I favor questions that mimic the marketplace over strange questions that you wouldn’t consider in real life (“What’s the most you would pay for this?”). Of course, there are a ton of choices that were not covered, including conjoint analysis, which I think is often the most effective means to set prices (see our White Paper, How to Conduct Pricing Research, for more).
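For readers who want to see the mechanics behind both techniques, here is a minimal sketch in Python. Every price and response below is invented for illustration; real studies use much larger samples and more carefully defined curves.

```python
import numpy as np

# Invented answers from five respondents (all prices in dollars).
too_cheap     = np.array([20, 35, 25, 45, 30])   # "so cheap you'd doubt the quality"
too_expensive = np.array([50, 40, 60, 55, 45])   # "too expensive to consider"

# Van Westendorp-style crossing point: the price where the share calling
# it too cheap equals the share calling it too expensive.
grid = np.linspace(too_cheap.min(), too_expensive.max(), 200)
pct_cheap = np.array([(too_cheap >= p).mean() for p in grid])
pct_exp = np.array([(too_expensive <= p).mean() for p in grid])
print("crossing point:", round(grid[np.abs(pct_cheap - pct_exp).argmin()], 2))
# ~$40 on these invented answers

# The "would you buy it for $X?" approach instead builds a demand curve
# from yes/no answers, each respondent seeing one randomized price.
prices = np.array([30, 40, 50, 60, 70])
n_asked = np.array([40, 40, 40, 40, 40])   # respondents per price cell
n_yes = np.array([34, 29, 21, 12, 5])      # invented counts
for p, share in zip(prices, n_yes / n_asked):
    print(f"${p}: {share:.0%} would buy")
```

The second half is the part that “mimics the marketplace”: each person answers a single realistic yes/no question, and the demand curve emerges across people rather than from any one respondent naming a price.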
Still, there was much that we as researchers can take from this. As noted, it is important to frame things properly. If the product will be sold in the home décor department, it is important to set the table along those lines and not allow the respondent to see it as something else. I have little doubt that if the Van Westendorp questions had been preceded by proper framing and messaging, the results would have been different.
I also think big data tools like Terapeak and Google Analytics are something we should make more use of. Secondary research has never been easier! In the case of pricing research, knowing the range of prices being paid now provides a good guide on what range of prices to include in, say, a discrete choice exercise. This is true even if the product has a new feature not currently available. Terapeak lets you view prices over time, so you can see the impact of the last big innovation, for example.
Overall, I commend eBay for their podcast. It is quite entertaining and provides a lot of useful information…especially for someone starting a new business.


Many researchers are by nature math geeks. We are comfortable with numbers and statistical methods like regression or max-diff. Some see fancy graphics as just a distraction...wasted space on the page that could be used to show more numbers! I've even heard infographics defined as "information lite". Surely top academics think differently!
No doubt if you asked top academics they might well tell you that they prefer to see the formulas and the numbers and not graphics. This is no different than respondents who tend to tell us that things like celebrity endorsements don't matter until we use an advanced method like discrete choice conjoint to prove otherwise.
Bill Howe and his colleagues at the University of Washington in Seattle figured out a way to test the power of graphics without asking. They built an algorithm that could distinguish, with a high degree of success, among diagrams, equations, photographs, plots (bar charts, for example) and tables. They then exposed the algorithm to 650,000 papers containing over 10 million figures.
For each paper they also calculated an Eigenfactor score (similar to the algorithm Google uses for search) to rate its importance, based on how often the paper is cited.
On average, papers had one diagram for every three pages and 1.67 citations. Papers with more diagrams per page tended to get two extra citations for every additional diagram per page. So clearly, even among academics, diagrams seem to increase the chances that a paper is read and its information used.
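That “two extra citations per additional diagram per page” figure is a regression-style estimate. A toy sketch of how such a slope is fit, on invented numbers rather than the study’s actual data:

```python
import numpy as np

# Invented data: diagrams per page and citation counts for ten papers.
diagrams_per_page = np.array([0.1, 0.2, 0.3, 0.33, 0.4, 0.5, 0.6, 0.7, 0.8, 1.0])
citations = np.array([1, 1, 2, 2, 2, 2, 2, 3, 3, 3])

# Ordinary least squares: the slope estimates the extra citations
# associated with one more diagram per page.
slope, intercept = np.polyfit(diagrams_per_page, citations, 1)
print(round(slope, 2))  # ~2.4 on these made-up numbers
```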
Now we can of course say that this is "correlation" and not "causation" and that would be correct. It will take further research to truly validate the notion that graphics increase interest AND comprehension.
I'm not waiting for more research. These findings validate where the industry has been going. Clients are busy and their stakeholders are not as engaged as they might have been in the past. They don't care about the numbers or the formulas (by the way, formulas in academic papers reduced the frequency with which they were cited)...they care about what the data are telling them. If we can deliver those results in a clear graphical manner, it saves them time, helps them internalize the results and, because of that, increases the likelihood that the results will be used.

So while graphics might not make us feel smart...they actually should.


We at TRC conduct a lot of marketing research projects using Conjoint Analysis. Conjoint is a very powerful tool for determining preferences for the various components that make up a product or service. The power of Conjoint comes from having consumers make mental trade-offs in evaluating products against each other. Do they prefer a lower-cost product that contains few features, or a higher-priced product that provides many benefits? How willing are they to choose a product that meets two or three of their criteria, but not all? Conjoint forces consumers to make these decisions, and the results can then be simulated to determine purchase preferences in a variety of scenarios.
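As a rough illustration of that simulation step, here is a minimal share-of-preference sketch in Python. The part-worth utilities and products are invented; real conjoint simulators work from respondent-level utilities, typically estimated via hierarchical Bayes.

```python
import numpy as np

# Hypothetical aggregate part-worth utilities; in a real study these
# are estimated per respondent from the choice data.
utilities = {
    "brand": {"A": 0.6, "B": 0.1},
    "feature": {"basic": 0.0, "premium": 0.8},
    "price": {"$10": 0.9, "$15": 0.3, "$20": -0.5},
}

def total_utility(product):
    # Sum the part-worths of the levels this product is built from.
    return sum(utilities[attr][level] for attr, level in product.items())

def share_of_preference(products):
    # Multinomial-logit rule: exponentiate utilities and normalize.
    u = np.array([total_utility(p) for p in products])
    e = np.exp(u)
    return e / e.sum()

scenario = [
    {"brand": "A", "feature": "basic", "price": "$15"},
    {"brand": "B", "feature": "premium", "price": "$20"},
]
print(share_of_preference(scenario))  # roughly [0.62, 0.38]
```

Changing a level in the scenario and re-running is exactly the kind of “what if we priced it at $15 instead?” question a simulator is built to answer.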
But not all product development problems can be solved with Conjoint. Conjoint requires certain steps in the development cycle to have already been taken (defined features, some idea of pricing – see my previous blog on the topic.) In some cases, though, you may be at a stage in which Conjoint is feasible, but a different approach may be more appropriate, such as a Configurator. In a Configurator, otherwise known as a "Build-Your-Own" approach, you would use the same product features as in a Conjoint, but instead of pitting potential products against one another, the consumer "builds their own" ideal product.
So why choose one technique over the other? There are many reasons, but here are a few:
1. If determining overall product price sensitivity is the goal – Choose Conjoint. Conjoint will produce scores that assess both the importance of price overall as well as price tolerance for the product as features are included or excluded.
2. If you just want to know which features are the most popular, or which ones are selected when choosing or not choosing other features – Choose Configurator. In an a la carte scenario, respondents can choose which items to throw in their shopping cart and which ones to leave on the shelf. Getting simple counts on which features are popular and which ones are not – and in what combinations – can be very useful information, and it's an easier task for respondents. Keep in mind though that the Configurator works best if each feature is pre-assigned a price (to keep respondents from piling on).
3. If understanding competitive advantage/disadvantage is paramount – Choose Conjoint. Conjoint allows you to include "Brand" as a feature, and the results will link brand to the product price to see if respondents are willing to pay more (or less) for your product vs. that of a key competitor. You can also simulate competitive market scenarios. While you can include Brand in a Configurator, modeling the trade-off between brand and product price is far less robust.
4. If you have a lot of features, or complex relationships between the features - Choose Configurator. It's much easier for a respondent to sift through a long list of features and build their ideal product just once than to choose between products with a gigantic feature list multiple times. Conjoint works best when the features are not dependent on one another; a long list of restrictions on the features can disqualify Conjoint as a viable solution from a design perspective.
There are plenty of times when a technique may present itself as an obvious choice, and other times when the choice may be more subtle. And in those cases, we turn to our senior analysts who use their expertise and understanding of the research objectives to make sound recommendations.


At the beginning of my research career I grew accustomed to clients asking us for proposals using a methodology that they had pre-selected. In many cases, the client would send us the specs of the entire job (this many completes, that length of survey) and just ask us for pricing. While this is certainly an efficient way for a client to compare bids across vendors, it didn’t allow for any discussion of the appropriateness of the method being proposed.
Today most research clients are looking for their research suppliers to be more actively involved in formulating the research plan. That said, we are often asked to bid on a “conjoint study.”  Our clients who’ve commissioned conjoint work in the past are usually knowledgeable about when a conjoint is appropriate, but sometimes there is a better method out there. And sometimes the product simply isn’t at the right place in the development “chain” to warrant conjoint.
Conjoint, for the uninitiated, is a useful research tool in product development. It is a choice-based method that allows participants to make choices between different products based on the product’s make-up. Each product comprises various features and levels within those features. What keeps respondents from choosing only products made up of the “best” features and levels is some type of constraint – usually price.   
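To make the task concrete, here is a toy sketch of how choice screens are composed from features and levels. The feature set is invented, and real studies use balanced experimental designs rather than the pure random draws shown here.

```python
import random

# Invented feature set; price is the constraint that forces trade-offs.
features = {
    "screen size": ["10-inch", "12-inch"],
    "storage": ["64 GB", "128 GB", "256 GB"],
    "price": ["$299", "$399", "$499"],
}

def random_product():
    # One product profile: a level drawn for each feature.
    return {f: random.choice(levels) for f, levels in features.items()}

def choice_task(n_alternatives=3):
    # One screen of the exercise: competing profiles to choose among.
    return [random_product() for _ in range(n_alternatives)]

for task in (choice_task() for _ in range(2)):
    print(task)
```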
We look to conjoint to help determine an optimal or ideal product scenario, to help price a product given its features, or to suggest whether a client could charge a premium or require a discount.  It has a wide range of uses, but it isn’t always a good fit:  

  1.  When the features haven’t been defined yet. One problem product developers face is having to “operationalize” something that the market hasn’t seen yet. You need to be able to describe a feature, what its benefits are, and its associated levels in layman’s terms. We can’t recommend conjoint if the features are still amorphous.   
  2. When there are a multitude of features with many levels or complex relationships between the features. The respondent needs to be able to absorb and understand the make-up of the products in order to choose between them. If the product is so complex that it requires varying levels of a lot of different features, it’s probably too taxing for the respondents (and may tax the design and resulting analysis as well). Conjoint could be the answer – but the task may need to be broken up into pieces.   
  3. When there are a limited number of features with few levels. In this case, Conjoint may be overkill. A simple monadic concept test or price laddering exercise may suffice.   
  4. When pricing is important, but you have absolutely no idea what the price will be. Conjoint works best when the product’s price levels range from slightly below how you want to price it to slightly above how you want to price it.  If your range is huge, respondents will gravitate toward the lower priced product scenarios and you won’t get much data on the higher end. It may also confuse respondents that similar products would be available at such large price differences.

When working with clients on parameters for a conjoint design, there is often an assumption that the design includes a current product configuration, or base case. This base case provides a benchmark against which new configurations can be compared.  
Having a benchmark can be both useful and comforting when analyzing the conjoint results. Replicating a base case allows us to reference important metrics that are known for that product (for example, market share, CPU, revenue, etc.). As we configure new products and compare their appeal to our base case, we can gain insight into how these key metrics might be impacted.
Aside from establishing a benchmark, having a base case is also critical if there is concern about cannibalization. If the new product is expected to compete in the market with a current configuration, it is important to understand what impact the new product will have on the current landscape.
However, allowing for a base case in the conjoint design is not always warranted. As products become more dissimilar from current offerings, it can become difficult to include a base case. Trying to integrate the components of a current and a new product that don’t share many characteristics can lead to conjoint parameters that are too complex to administer, or create apples-to-oranges comparisons. It is not wrong to leave out a base case as long as it is understood there will be no benchmark comparison.
One hybrid solution to consider is to allow for a set choice that reads something like “None of these, prefer the PRODUCTS currently available”. This is similar to a typical “none” option in the conjoint but provides a bit more information; specifically, that they would not leave the category but are not interested in the new, very different product configuration. Of course this solution would not be appropriate in all instances but does provide a good compromise.
Ultimately, the extent to which “real products” are modeled with a conjoint study’s parameters is a function of the specific information needs and the complexity of the design. Most of the time we want to include that dose of “reality” in our design but don’t be afraid to leave it behind if warranted.



TRC is proud to announce that it was voted one of the 50 most innovative firms on the market research supplier side. We’re big believers in trying to advance the business of research, and we’re excited to see that the GRIT study recognized that.

Our philosophy is to engage respondents using a combination of advanced techniques and better interfaces. Asking respondents what they want or why without context leads to results that overstate real preferences (consumers, after all, want “everything”) and often miss what is driving those decisions (Behavioral Economics tells us that we often don’t know why we buy what we buy).

Through the use of off-the-shelf tools like Max-Diff or the entire family of conjoint methods, we can better engage respondents AND gather much more actionable data. Through these tools and some of our own innovations like Bracket™, we can efficiently understand real preferences and use analytics to tell us what is driving them.

Our ongoing long-term partnerships with top academics at universities throughout the country also help us stay innovative. By collaborating with them, we are able to develop new techniques that better unlock what drives consumers.

The GRIT study tracks which supplier firms are perceived as most innovative within the global market research industry. It’s a brand tracker using the attribute of ‘innovation’ as the key metric, with answers gathered on an unaided basis. The survey asks respondents to list the top three research companies they consider innovative, then to rank those companies from least to most innovative, and finally to explain why they consider them innovative. Given the unaided nature of the study, it is quite an achievement for a firm like TRC to make the same list as firms hundreds of times our size.

...

My friend and I don’t share the same definition of what it means to be on-time. I don’t necessarily subscribe to the “early is on-time, on-time is late, late is unacceptable” theory, but I do try to arrive at or before an agreed upon time. She thinks there is wiggle room surrounding any appointment time – 5 or 10 minutes – and doesn’t seem concerned that I’ve been waiting for her to arrive. The good news is, if I’m running behind schedule, it doesn’t bother her that I arrive late. But if I’m going to be 5 to 10 minutes late, I’ll notify her. She would never think to do the same – because in her mind she’s on-time.

Perhaps I have too strict a definition of what it means to be on-time. Is 5 minutes considered late to everyone or just to me? We surveyed TRC’s online consumer panel to get an answer.

We used 5 minutes as our test case. If an appointment time is at 9:00 and actual arrival is 9:05, do you consider yourself on-time or late (or early)? To make things interesting, we asked about a variety of scenarios, since it’s possible that definitions may change based on the social situation.

If your boss calls an urgent meeting and you arrive 5 minutes past the start time, 2/3 of our participants consider that to be “late”.  When I saw that, at first I felt vindicated. But then I realized that if 2/3 are saying they’re late, that means 1/3 say it’s okay – 5 minutes is on-time or even early. Then I looked at the rest of the scenarios: 2/3 consider 5 minutes as “late” for babysitting or for a weekly religious service. If you show up 5 minutes after your reservation time at a restaurant, only 57% consider that to be late. And if you’re meeting a friend for casual dinner (no reservations), only 47% -- less than half of the adults we surveyed -- believe that 5 minutes off-schedule is actually “late”. What’s this world coming to?

[Infographic: how people define being on-time and late across social situations]

...

Check out the infographic below to see how others learned another language.

Presenting data in an infographic format is like speaking another language. People who didn't understand you before now can. All of a sudden, they can clearly see the data points you had been trying to communicate. And just like learning a new language, converting data into infographics can be daunting - yet the benefits are endless. Mainly, they open up new perspectives. At TRC we can help you overcome this hurdle. We produce infographics as part of our project deliverables.

[Infographic: how people learned another language]


If you open your mailbox today, chances are that there will be a catalog in it. Even with the explosion in online purchasing, paper catalogs continue to be an important part of the retail marketing mix. Whether they spur traditional mail- or telephone-ordering or, more often now, online purchasing and even foot traffic in brick-and-mortar stores, catalogs remain critical for retailers. They not only show consumers what is available, but they also serve as an important branding tool.
Even if the recipient does not open or thoroughly review a catalog, its cover, its size and the kind of paper it is printed on can all telegraph meaning about the sender's brand.
But isn't there much more to be gained if the consumer does open the catalog?

How Can Marketers Maximize the Likelihood that a Catalog Is Opened?

Based on an online survey among a panel of consumers nationwide, TRC estimates that the average household receives 3.7 catalogs per week. That is nearly 200 (3.7 × 52 ≈ 192) in the course of a year!
So how can catalog marketers break through the mailbox clutter and inspire consumers to look at what is actually inside their materials? We asked our national panel about some factors that influence their decisions to open (or not open) a catalog they receive. A key learning is something catalog marketers would certainly confirm: targeting is critical. Product interest and perceived need account for a large share of the decision to open a catalog, so getting the catalog to the right person is of course essential.
But once the catalog is in the right mailbox, it is clear that what the recipient sees on its cover will be important in whether or not the catalog is opened. First and foremost is the specific offer (sale, percent off, etc.) highlighted on that cover. Cover imagery also plays a role, particularly if the brand is familiar to the recipient.  
Take a look at the accompanying chart, and note that we asked some respondents to think about catalogs they might receive from familiar companies, while others considered catalogs from companies they had not heard of before. All of those answering had indicated earlier in the survey that they receive and open/look through catalogs in a typical week.

[Chart: cover factors influencing whether a catalog is opened, familiar vs. unfamiliar companies]

Leveraging Consumer Research in Catalog Cover Selection

Knowing that the cover can be so important in whether a catalog is opened, TRC believes it is well worth it to devote resources to ensure that the right cover is used. While some catalog marketers will test multiple covers prior to full mail launches, it is impractical to test more than just a few. Those few are typically selected from among a broader set – based on “gut feel” or simple preferences on the part of the design team.
But what if there were an efficient, consumer-data-driven method to select a “winning” cover from among a broad set of candidates? TRC has developed just that method: our approach leverages our proprietary Bracket™ survey technology to submit a large number of cover designs to a tournament-type evaluation that yields rankings and relative distances across the entire set of designs. An even more streamlined approach, Message Test Express™ or MTE™, can provide similar insights for up to 16 cover designs – in around a week and for a cost of approximately $10,000.
Considering the volume that any catalog must compete against in the typical recipient’s mailbox, isn’t it practical to maximize the likelihood that the catalog will be opened? Concise, consumer-driven metrics on likely success have been shown in our experience to be superior to “gut feel” evaluations and are certainly more affordable than in-market testing of even a small number of options. Why risk missing a great opportunity by overlooking an optimal cover execution?


Fitness and health have always been important to me, but as I’ve gotten older I’ve become even more aware of what I eat and where my food comes from. A key turning point came a year and a half ago when I watched the documentary “Food, Inc.” by filmmaker Robert Kenner. After watching it I was on the fence for a month, contemplating becoming vegan. But alas, my love for a good piece of steak won out. However, it did leave an imprint on where and what type of food I buy. My fiancé is of the same mind, so when he moved in we started searching out ways to buy locally sourced food and meat from animals that are treated humanely. Many of our friends, especially those with kids, tend to be food aware as well. My parents, on the other hand, though health and wellness is important to them, think “organic” is a big grocery money scheme. This got me thinking...who are the most food aware? Is there an age difference?
Using our online panel of consumers, I asked a series of questions to find out. When looking at health and wellness attitudes, eating well is important to both young and old. Where we do see differences is that those 44 or younger are more motivated to improve their health and wellness and like dining at restaurants that specialize in farm-to-table. Bob and I are huge fans of farm-to-table restaurants and have been excited by the recent addition of a few establishments near us.

Top-2-Box: Strongly agree                              44 or younger   45 or older
Improve health and wellness                            70%↑            46%
Dine at restaurants that specialize in farm-to-table   46%↑            26%
Up arrow indicates significantly higher value at 95% confidence level.

Across the board, younger consumers are more likely to buy organic products.  I think the only time my parents buy organic is when my brother comes to town with his little ones as he and my sister-in-law insist on organic only.

Buy Organic Always / Usually   44 or younger   45 or older
Vegetables and fruit           69%↑            32%
Meat                           58%↑            22%
Bath and Body Care             58%↑            20%
Cleaning Products              53%↑            19%
Up arrow indicates significantly higher value at 95% confidence level.


Now, when asking about participation in various “green” activities (i.e., recycling, composting, and gardening) we see no difference by age.  However, younger consumers are more likely to participate in farm co-ops and raise chickens.

Yes %                        44 or younger   45 or older
Participate in Farm Co-op    19%↑            2%
Raise Chickens               16%↑            3%
Up arrow indicates significantly higher value at 95% confidence level.


From our research, it appears that younger consumers are more engaged in wellness activities related to food than older consumers, even though both groups believe health and wellness to be important. Buying organic can be expensive – so the question becomes how much people are willing to pay for organic products or meat from animals that are treated humanely. This might be a good topic for a conjoint study, which would pit various product options against one another to see how price comes into play when grocery shopping.


So I certainly do not follow politics closely, even during a presidential election year, which I guess could also be read as: I don’t know very much about politics. But that small disclaimer aside, watching the news coverage of the recently completed Iowa caucuses and the upcoming New Hampshire primary, something struck me as peculiar about this process. These events happen in succession, not simultaneously. First is the Iowa caucus, then the New Hampshire primary, followed by the Nevada and South Carolina primaries, and so on with the other states. And after each event is held, the results are (almost) immediately known. So the folks in New Hampshire know the outcome from Iowa. The folks in Nevada and South Carolina know the outcomes from Iowa and New Hampshire.

Doesn’t this lead to inherent and obvious bias? That’s the market researcher side talking. In implementing questionnaires we wouldn’t typically make known the results from previous respondents to those taking the survey later. This would surely have some influence on their answers that we wouldn’t want. We need a clean, pure read (as best as we can with surveys) as to consumer opinions and attitudes. Any deviation from this would surely compromise our data. 

But then again, is this always the case? Could there be situations in which some purposely predisposed informational bias is beneficial? I say yes! Granted, one needs to be cautious and thoughtful when exposing respondents to prior information, but sometimes, in order to get the specific type of response we want, a little bias is helpful. If asking about a particular product or product function, we may provide an example or guide so respondents can fully understand the product – e.g., 10 GB of storage is good for X number of movies and X number of songs.

But circling back to the notion of letting respondents see the answers from previous respondents, even within the same survey: this can be quite helpful in priming folks to start thinking creatively. If we wish to gather creative ideas from consumers, it’s easy enough to ask them outright to jot something down. But it’s difficult to come up with new and creative ideas on the fly without much help, and the responses we get from such tasks validate that point: many are nonsense, or short, dull answers. So instead, we can show a respondent several ideas that have come up previously, either internally or from previous respondents, to jumpstart the thinking process; the respondent can then edit or add onto an existing idea, or be stimulated enough to come up with their own unique idea. And truth is, it works! We at TRC implement this exact new product research technique with great success in our Idea Mill™ solution, and end up with many creative and unique ideas that our client companies use to move forward.

So while the presidential process strikes me as odd since any votes cast in other states following the Iowa Caucus may be inherently biased, there are opportunities where this sort of predisposition to information can work in our favor.


December and January are full of articles that tell us what to expect in the New Year. There is certainly nothing wrong with thinking about the future (far from it), but it is important that we do so with a few things in mind. Predictions are easy to make, but hard to get right, at least consistently.


First, to some extent we all suffer from the “past results predict the future” model. We do so because quite often they do, but there is no way to know when they no longer will. As such, be wary of predictions that say something like “last year neuro research was used by 5% of Fortune 500 companies…web panels hit the 5% mark and then exploded to more than 50% within three years.” It might be right to assume the two will have similar outcomes, or it might be that the two situations (both in terms of the technique and in terms of the market at the time) are quite different.


Second, we all bring a bias to our thinking. We have made business decisions based on where we think the market is going and so it is only natural that our predictions might line up with that. At TRC we’ve invested in agile products to aid in the early stage product development process. I did so because I believe the market is looking for rigorous, fast and inexpensive ways to solve problems like ideation, prioritization and concept evaluation. Quite naturally if I’m asked to predict the future I’ll tend to see these as having great potential.


Third, some people will be completely self-serving in their predictions. So, for example, we do a tremendous amount of discrete choice conjoint work. I certainly would like to think that this area will grow in the next year so I might be tempted to make the prediction in the hopes that readers will suddenly start thinking about doing a conjoint study.   


Fourth, an expert isn’t always right. Hearing predictions is useful, but ultimately you have to consider the reasoning behind them, seek out your own sources of information and consider things that you already know. Just because someone has a prediction published, doesn’t mean they know the future any better than you do. 

...

I recently finished Brian Grazer’s book A Curious Mind and I enjoyed it immensely. I was attracted to the book both because I have enjoyed many of the movies he made with Ron Howard (Apollo 13 being among my favorites) and because of the subject…curiosity.

I have long believed that curiosity is a critical trait for a good researcher. We have to be curious about our clients’ needs, new research methods and, most important, the data itself. While a cursory review of cross tabs will produce some useful information, it is digging deeper that allows us to make the connections that tell a coherent story. Without curiosity, analytical techniques like conjoint or max-diff don’t help.

The book shows how Mr. Grazer’s insatiable curiosity has brought him into what he calls “curiosity conversations” with a wide array of individuals, from Fidel Castro to Jonas Salk. He had these conversations not because he thought there might be a movie in them, but because he wanted to know more about these individuals. He often came out of the conversations with a new perspective and, yes, sometimes even ideas for a movie.

One example concerns Apollo 13. He had met Jim Lovell (the commander of that fateful mission) and found his story interesting, but he wasn’t sure how to make it into a movie. The technical details were just too complicated.

Later he was introduced by Sting to Veronica de Negri. If you don’t know who she is (I didn’t), she was a political prisoner in Chile for eight months, during which she was brutally tortured. To survive she had to create for herself an alternate reality. In essence, by focusing on the one thing she still had control of (her mind) she was able to endure the things she could not control. Mr. Grazer used that logic to help craft Apollo 13. Instead of being a movie about technical challenges it became a movie about the human spirit and its ability to overcome even the most difficult circumstances.

...

In new product market research we often discuss the topic of bias, though typically these discussions revolve around issues like sample selection (representativeness, non-response, etc.). But what about methodological or analysis bias? Is it possible that we impact results by choosing the wrong market research methods to collect the data or to analyze the results?


A recent article in the Economist presented an interesting study in which the same data set and the same objective were given to 29 different researchers. The objective was to determine whether dark-skinned soccer players were more likely to get a red card than light-skinned players. Each researcher was free to use whatever methods they thought best to answer the question.


Both the statistical methods (Bayesian clustering, logistic regression, linear modeling...) and the analysis decisions (some, for example, considered that certain positions might be more likely to get red cards and adjusted the data for that) differed from one researcher to the next. No surprise, then, that results varied as well. One found that dark-skinned players were only 89% as likely to get a red card as light-skinned players, while another found dark-skinned players were three times MORE likely to get the red card. So who is right?
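The Economist study's data aren't reproduced here, but a toy example with entirely synthetic numbers shows how the analysis choice alone can move the answer: a raw rate ratio says one thing, a position-adjusted comparison another.

```python
import random

random.seed(1)

# Synthetic, invented data: in this toy league defenders draw more red
# cards, and dark-skinned players are over-represented at defender, so
# skin tone and red cards are linked only through position.
players = []
for _ in range(2000):
    dark = random.random() < 0.3
    defender = random.random() < (0.6 if dark else 0.3)
    red_card = random.random() < (0.08 if defender else 0.03)
    players.append((dark, defender, red_card))

def red_rate(rows):
    return sum(red for _, _, red in rows) / len(rows)

dark_rows = [p for p in players if p[0]]
light_rows = [p for p in players if not p[0]]

# A naive analysis finds dark-skinned players carded more often...
print("raw ratio:", red_rate(dark_rows) / red_rate(light_rows))

# ...while comparing within position largely erases the gap.
for pos in (True, False):
    d = [p for p in dark_rows if p[1] == pos]
    l = [p for p in light_rows if p[1] == pos]
    print("defenders" if pos else "non-defenders",
          red_rate(d) / red_rate(l))
```

Both analyses are arithmetically correct; they simply answer subtly different questions, which is exactly the dilemma the 29 researchers faced.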


There is no easy way to answer that question. I'm sure some of the analyses can easily be dismissed as too superficial, but in other cases the "correct" method is not obvious. The article suggests that when important decisions regarding public policy are being considered, the government should contract with multiple researchers and then compare and contrast their results to gain a fuller understanding of what policy should be adopted. I'm not convinced this is such a great idea for public policy (it seems like it would only lead to more polarization as groups pick the results they most agreed with going in), but the more important question is: what can we as researchers learn from this?


In custom new product market research the potential for different results is even greater. We are not limited to existing data. Sure we might use that data (customer purchase behavior for example), but we can and will supplement it with data that we collect. These data can be gathered using a variety of techniques and question types. Once the data are collected we have the same potential to come up with different results as the study above.

...

About a decade ago, if someone had mentioned the words "mobile app", anyone would have looked at them with a very puzzled expression. Nowadays, we hear about these apps everywhere. There are commercials for them on television, ads in magazines, billboard posts, etc. It's truly amazing to see how advanced technology has become and what can be accomplished by using it.

In this technology-based era, the smartphone is becoming increasingly popular across a wide variety of ages. In my opinion, the biggest perk of smartphones is that we almost always have access to the Internet. Given that the Internet is one of the most efficient tools retailers and businesses use to create, retain and obtain business, why wouldn't they capitalize on the popularity and functionality of smartphones to do even more creating, obtaining and refining of their business? One of the best ways for a company to remain competitive in this smartphone era is to create a mobile app specific to the company.

Take Wawa for example. For those who are not on the East coast and may be unfamiliar with Wawa, it is a wonderful place that offers gasoline, freshly prepared foods, snacks, coffee and more. Okay, yes, ultimately it's a convenience store/gas station. However, to many of us on the East coast, it's much more. Anyway, if you download the Wawa app, you can link it up with your credit card or a Wawa gift card, which means you don't even have to bring your wallet into the store. The app includes a rewards system, in which you receive points for your purchases, which can be used to receive a free coffee or tea, or something of similar value. While Wawa offers many benefits to its customers through its mobile app, such as locating a nearby Wawa, checking gasoline prices or having easy access to nutrition info, it also gives app users the chance to provide feedback by means of an open-end suggestion form. It would benefit the company to implement a survey within the app instead of an open-end feedback form to gain insights about customers' transactions, experiences, and their overall opinions.

Fielding surveys within mobile apps provides a quick and easy way to reach customers and gain useful feedback. So, how do you get app users to actually participate in the survey? Simple. When the app is first opened or closed, add a pop-up message with a link to the survey that encourages the user to take the survey. Also, go ahead and add the survey as an item on the app's navigation menu. While it's not ideal to conduct surveys on mobile devices that contain something as intricate as conjoint analysis, companies can still create a simple survey that can be used to gain valuable insights about current products, potential products, customer satisfaction and an abundance of other consumer-related topics.

In order to create the best experience for the app user and get the most out of the data that is collected, companies should consider these five tips when developing a mobile survey:

...

During my recent first-time home-buying experience I learned there are many, often competing, factors to consider. My last blog discussed how I used Bracket™, a tournament-based analytic approach, to determine what homebuyers find most important when considering a home. My list of 13 items did not include standard house stats like # of bedrooms, # of baths, etc. To measure preference for those items I used a conjoint design.

I framed up the conjoint exercise by asking homebuyers to imagine they were shopping for a home and to assume it was located in their ideal location. Using our online panel of consumers, we showed recent or soon-to-be homebuyers two house listings side by side, plus an “I wouldn’t choose either of these” option. Each listing included the following:

        • Number of bedrooms: 1, 2, 3 or 4
        • Number of bathrooms: 1 full, 1 full/1 half, 2 full, 2 full/1 half or 3 full
        • House style: Single Family, Townhouse, Condominium, or Multi-Family
        • House condition: Move-in ready, Some work required or Gut job
        • Price: $150,000, $200,000, $250,000, $350,000 or $450,000

I felt a conjoint was best suited here, because in addition to importance, I wanted to see what trade-offs homebuyers were willing to make between these 5 items that are highly important in home buying. Are homebuyers willing to give up a bedroom to get the right price? Are they willing to do some sweat equity to get the number of bedrooms and/or bathrooms they want?

We found the top three most important factors are # of bedrooms, price and house condition. This made perfect sense to me, as I would not consider any house with fewer than three bedrooms. Price and house condition were the next two key pieces. Was the house in my price range? How much work was needed? Did the price give me enough wiggle room for repairs? I was curious to see the interplay between price and house condition among the recent and soon-to-be homebuyers we interviewed.

Using the simulator, I selected a 3-bedroom, 2-full-bath, Single Family home. I picked three price points ($150,000, $300,000, $450,000) and then varied the house condition. Overall, homebuyers are less interested in a "gut job" compared to "move-in ready". However, at the $150,000 price point, share of preference drops more drastically going from "move-in ready/some work required" to "gut job" than it does at the higher price points.

...

The weather is starting to warm up and more of us are venturing outside, myself included. Walking my dog around the neighborhood, I’ve noticed a number of for-sale signs, and it reminds me of my own recent home-buying experience. It was exciting and at the same time stressful. Once I made the decision to buy, I started watching all the home buying shows and attending open houses to figure out my list of must-haves and nice-to-haves. I wondered how my list stacked up against others who went through or are going through the home buying process.

Using our online panel of consumers, I employed TRC’s proprietary Bracket™ exercise to find out what homebuyers find most important when considering buying a home. Bracket™ is a tournament-based analytic approach to understanding priorities. For each participant, Bracket™ randomly assigns the items being evaluated into pairs. Participants choose the winning item from each pair; that item moves on to the next round. Rounds continue until there is one “winner” per participant. Bracket™ uses this information to prioritize the remaining items, and calculate the relative distance between them.
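Bracket™ itself is proprietary, but the tournament mechanics described above can be sketched generically. In this Python sketch the choice function is a random stand-in for a participant's actual picks, and the follow-on steps (prioritizing the remaining items and computing relative distances) are omitted.

```python
import random

def run_bracket(items, prefer):
    """Generic single-elimination tournament: pair items at random,
    keep each pairwise winner, repeat until one item remains.
    `prefer(a, b)` stands in for the participant's choice."""
    survivors = items[:]
    random.shuffle(survivors)
    while len(survivors) > 1:
        nxt = []
        # With an odd count, the last item gets a bye this round.
        if len(survivors) % 2:
            nxt.append(survivors.pop())
        for a, b in zip(survivors[::2], survivors[1::2]):
            nxt.append(prefer(a, b))
        survivors = nxt
    return survivors[0]

items = ["price", "location", "schools", "commute", "yard"]
winner = run_bracket(items, prefer=lambda a, b: random.choice([a, b]))
print(winner)
```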

I created a list of 13 things to consider. I didn’t include standard house stats (# of bedrooms, # of baths, etc.), as I tested those separately using a conjoint analysis (my next blog will dive into what I did there).

Proximity to work

Proximity to family

...

Very few clients will go to market with a new concept without some form of market research to test it first. Others will use real-world substitutes (such as A/B mail tests) to accomplish the same end. No one would argue against the effectiveness of approaches like these...they provide a scientific basis for making the right decision. Why is it, then, that in early-stage decision-making, science is so often replaced with gut feel?

Consider this...an innovation department cooks up a dozen or more ideas for new or improved products and services. At this point they are nothing more than ideas, with perhaps some crude mock-ups to go along with them. Doing full-out concept testing would be costly for this number of ideas, and a real-world test is certainly not in the cards. Instead, a "team" that might include product managers, marketing folks, researchers and even some of the innovation people who came up with the concepts is brought together to winnow the ideas down to a more manageable level.

The team carefully evaluates each concept, perhaps ranks them, and provides its thinking on why it liked certain ones. These "independent" evaluations are tallied and the dozen concepts are reduced to two or three. These two or three are then developed further and put through a more rigorous and costly process – in-market testing. The concept or concepts that score best in this process are then launched to the entire market.

This process produces a result, but also some level of doubt. Perhaps the concept that the team thought was best scored badly in the more rigorous research, or the winning concept just didn't perform as well as the team thought it would. Does anyone wonder whether some of the ideas that the team winnowed out might have performed even better than the "winners" they picked? What opportunities might have been lost if the best ideas were left on the drawing board?

The initial winnowing process is susceptible to various forms of error, including groupthink. The less rigorous process is used not because it is seen as best, but because the rigorous methods normally used are too costly to employ on a large list of items. Does that mean going with your gut is the only option?

...

Last week we held an event in New York at which Mark Broadie from Columbia University talked about his book “Every Shot Counts”. The talk and the book detail his analysis of a very large and complex data set…specifically the “ShotLink” data collected for over a decade by the PGA Tour. It details every shot taken by every pro at every PGA Tour tournament. He was able to use it to challenge some long-held assumptions about golf…such as “Do you drive for show and putt for dough?”

On the surface the data set was not easy to work with. Sure, it had numbers like how long the hole was, how far the shot went, how far it was from the hole and so on. It also had data like whether the ball ended up in the fairway, on the green, in the rough, in a trap or the dreaded out of bounds. Every pro has a different set of skills, and there was a surprising range of abilities even in this set, but he added the same data on tens of thousands of amateur golfers of various skill levels. So how can anyone make sense of such a wide range of data, and do it in a way that the amateur who scores 100 can be compared to the pro who frequently scores in the 60s?

You might be tempted to say that he used a regression analysis, but he did not. You might assume he used Hierarchical Bayesian estimation, as it has become more commonplace (it drives discrete choice conjoint, Max-Diff and our own Bracket™), but he didn’t use it here either.

Instead, he used simple arithmetic. No HB, no calculus, no Greek letters, just simple addition, subtraction, multiplication and division. At the base level, he simply averaged similar scores. Specifically, he determined how many strokes it took on average for players to go from where they were to the hole. These averages were further broken down to account for where the ball started (not just distance, but rough, sand, fairway, etc.) and how good the golfer was.
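A minimal sketch of that averaging logic, with a handful of invented shots standing in for the millions of ShotLink records:

```python
from collections import defaultdict

# Invented shots: (lie, distance, strokes it took to hole out from here).
shots = [
    ("fairway", 150, 3), ("fairway", 150, 3), ("fairway", 150, 3),
    ("fairway", 150, 2), ("rough", 150, 3), ("rough", 150, 4),
    ("green", 20, 2), ("green", 20, 2), ("green", 20, 1),
]

# Baseline: average strokes to hole out from each (lie, distance) cell.
cells = defaultdict(list)
for lie, dist, strokes in shots:
    cells[(lie, dist)].append(strokes)
baseline = {cell: sum(v) / len(v) for cell, v in cells.items()}

def strokes_gained(before, after):
    # Value of a single shot: expected strokes before it, minus expected
    # strokes after it, minus the one stroke actually spent.
    return baseline[before] - baseline[after] - 1

# A 150-yard fairway approach that finishes 20 feet from the hole:
print(round(strokes_gained(("fairway", 150), ("green", 20)), 2))
```

Nothing fancier than averages and subtraction, yet it puts a tee shot, an approach and a putt on the same scale.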

These simple averages allow him to answer any number of “what if” questions. For example, he can see on average how many strokes are saved by driving an extra 50 yards off the tee (which turns out to be worth more than a comparable improvement in putting). He can also show that in fact neither driving nor putting is as important as the approach shot (the last full swing before putting the ball on the green). The ability to put the ball close to the hole on this shot is the biggest factor in scoring low.

...
A recent post on my Facebook timeline boasted that Lansdale Farmers Market was voted the Best of Montgomery County, PA two years in a row. That’s the market I patronize, and I’d like to feel a bit of pride for it. But I’m a researcher and I know better.

Lansdale Farmers Market is a nice little market in the Philadelphia outskirts, but is it truly the best in the entire county? Possibly, but you can’t tell from this poll. Lansdale Farmers Market solicited my participation by directing me to a site that would register my vote for them (Heaven only knows how much personal information “The Happening List” gains access to).  I’m sure that the other farmers markets solicited their voters in the same or similar ways. This amounts to little more than a popularity contest. Therefore, the only “best” that my market can claim is that it is the best in the county at getting its patrons to vote for it.

But if you have more patrons voting for you, shouldn’t that mean that you truly are the best? Not necessarily. It’s possible that the “best” market serves a smaller geographic area, doesn’t maintain a customer list, or isn’t as good at using social media, to name a few.

A legitimate research poll would seek to overcome these biases. So what are the markers of a legitimate research poll? Here are a few:
  1. You’re solicited by a neutral third party. Sometimes the survey sponsors identify themselves up front and that’s okay. But usually if a competitive assessment is being conducted, the sponsor remains anonymous so as not to bias the results.
  2. You’re given competitive choices, not just a plea to “vote for me”.  
  3. You may not be able to tell this, but there should be some attempt to uphold scientific sampling rigor. For example, if the only people included in the farmers market survey were residents of Lansdale, you could see how the sampling method would introduce an insurmountable bias.

The market opens for the summer season in a few weeks, and you can bet that I’ll be there. But I won’t stop to admire the inevitable banner touting their victory.

