Recent blog posts
Statistics in Market Research
Last week I was watching the CBS morning news and they had a story about a new study indicating that your attitude toward gym class as a child shapes your attitude toward exercise for the rest of your life. After watching the story, I am convinced it is another case of correlation being confused with causation.
 
The basics were that kids who reported loving gym class were far more active decades later than kids who reported finding it stressful (“I was always picked last”). I don’t doubt this correlation. My problem is that they spoke of a need to make gym class more inclusive so that all kids grow up to exercise more. In other words, if we can take the stress out of gym for kids who are not good at sports, we can get them to love exercise more. I’m not sure the first part is achievable, and I’m even more certain that even if it is, it will not alter future behavior.
 
I don’t see how you can eliminate the anxiety about gym class without eliminating the physical activity. Sure, you could eliminate picking teams and spare kids that humiliation, but once the games begin the kids who are poor at sports will continue to feel anxiety. Even if you simply make it an exercise class, the kids who are out of shape will stand out. Short of one-on-one classes, I don’t see how you can fix the problem.
 
Doing so is also not likely to make us more active as adults. The kids who were not good at gym were not good for a variety of reasons, but most likely they lacked either the natural talent or, more likely, the interest in sports that the athletes had. Gym class wasn’t the cause of this! If there had been no gym class, I would bet that the kids who didn’t like gym class would still be less active than those who did (those who loved it and/or were good at it). I’d point the “causation arrow” backwards…if you love sports as an adult, you probably liked gym class.
 
It is easy to forget that we have to play a role in explaining statistical principles in our reporting and not just when doing sophisticated work like Discrete Choice Conjoint, Max-Diff, Segmentations and Regressions. Our direct clients likely understand causation and correlation issues, but it is important to know that their internal clients may not. Clear justification for pointing the “causation arrow” must be provided in reports and presentations. Just as important is knocking down attempts to point the arrow based solely on correlation. Otherwise they may walk away with a completely false assumption and not double back with researchers to validate it.  
 
This study is also useful in highlighting another common mistake made by internal clients. I was telling an old friend about the study and he said “that can’t be right, you hated gym class and you are far more active now than when we were kids”. Imagine me in a focus group telling my story of how much I hated waiting to be picked for a team and how my memory of that humiliation caused me to exercise more and more as an adult. The internal client stands up and says “That’s how we make people healthier…more humiliation in gym class!” In that case, someone will be in the room to point out that one person’s story is not projectable to the population…but that’s another blog. 
 
Call Me By My Name

It’s well known that humans respond to personalization. But as consumers, do we respond more when our name is used in a sales pitch, and if so, why? Specifically, are we more likely to react positively to marketing emails that include our name? It turns out that we do indeed, as revealed by an interesting new study to be published in the INFORMS journal Marketing Science, authored by Navdeep Sahni and Christian Wheeler (both of Stanford) and Pradeep Chintagunta (University of Chicago).
 
The researchers were specifically interested in understanding whether including a consumer’s name in the subject line of an email had a positive effect – in terms of the number of emails opened as well as subsequent conversion into sales leads. They ran a classic A/B test where everything was controlled to be the same except the inclusion of the consumer’s name in the subject line. This one tweak was sufficient to increase the probability of opening the email by 20%, which then translated into a 31% increase in sales leads and a 17% reduction in unsubscribe requests.
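The study’s figures come from a large field experiment; as a rough illustration of how this kind of A/B subject-line test is typically evaluated, here is a minimal sketch of a two-proportion z-test. The send volumes and open counts below are invented for illustration and are not the paper’s data.

```python
# Minimal sketch of evaluating an A/B subject-line test with a
# two-proportion z-test. All counts below are hypothetical.
from math import sqrt, erf

def two_proportion_ztest(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: the two open rates are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control: generic subject line; Test: subject line with the recipient's name.
opens_control, sent_control = 1_000, 10_000   # 10% open rate
opens_test,    sent_test    = 1_200, 10_000   # 12% open rate (~20% lift)

z, p = two_proportion_ztest(opens_test, sent_test, opens_control, sent_control)
print(f"lift = {opens_test/sent_test / (opens_control/sent_control) - 1:.0%}, "
      f"z = {z:.2f}, p = {p:.4f}")
```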
 
What is interesting here is the nature of the manipulated content. It is non-informative about the product and its benefits, yet still has a significant impact on the consumer’s behavior. This would seem to imply that the effect should be generalizable to other products and contexts as well. To test this they ran two more studies where the products differed as well as the relationship of the consumers to the particular companies. The results were consistent with the first study, establishing the generalizability of the results. “Aspects of the advertising message that are seemingly unrelated to the product can affect how consumers process the message, and significantly change outcomes,” said lead author Navdeep Sahni. 
 
There is then the question of why this occurs. While there are competing theories, the best-supported one (message elaboration) holds that once their attention is drawn by their own name, consumers process the information more carefully. This, of course, has a potential downside: if the message is not relevant to the consumer, then the more careful processing could translate into fewer sales leads and more people unsubscribing.
 
A rather clever 2x2 design was used to tease out this effect – the recipient’s name was included in the body of the email (or not) and a relevant piece of information, in the form of a product discount, was included in the email (or not). Including the name in the body of the email increases the chance that the recipient processes the message; including the discount makes the message itself more relevant. So if the psychological mechanism at play is message elaboration, then the condition where attention is drawn and a relevant message is presented should produce the most leads – and that is precisely what they find.
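To make the 2x2 read concrete, here is a minimal sketch of how the interaction logic works. The four cell conversion rates below are invented for illustration and are not the paper’s results; the point is only that under the elaboration account, personalization should help more when a relevant discount is present, which shows up as a positive interaction.

```python
# Sketch of reading a 2x2 test (name in body: yes/no x discount: yes/no).
# Lead rates below are invented for illustration only.
lead_rate = {
    ("no_name", "no_discount"): 0.020,
    ("name",    "no_discount"): 0.024,
    ("no_name", "discount"):    0.030,
    ("name",    "discount"):    0.042,
}

# Effect of personalization within each discount condition.
lift_without_discount = lead_rate[("name", "no_discount")] - lead_rate[("no_name", "no_discount")]
lift_with_discount    = lead_rate[("name", "discount")]    - lead_rate[("no_name", "discount")]

# A positive interaction (personalization helps more when the message is
# relevant) is what the elaboration account predicts.
interaction = lift_with_discount - lift_without_discount
print(f"lift without discount: {lift_without_discount:.3f}")
print(f"lift with discount:    {lift_with_discount:.3f}")
print(f"interaction:           {interaction:+.3f}")
```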
 
Additional (regression) analysis showed how the pieces fit together. Seeing the name increases the likelihood of the message being read and processed, and increases the chance of a positive outcome – if the message is compelling. By itself, the personalization still has an effect but not as much as it otherwise could with a relevant message.      
 
This research does not tell us what happens when more and more marketers start using email personalization. Will consumers get desensitized to the effect? What if the domain is sensitive? Would consumers get offended resulting in a backlash? The answers are not available in this research as the datasets examined here do not fall into these categories. 
 
But, for now, we can say that email marketers could benefit from including the recipient’s name, and can enhance the effect by having a relevant message in the body of the email.     
 
Tagged in: Consumer Behavior
Advanced Market Research Methods and Candy

At TRC, the most popular spot in the office is our snack shelf. It features an array of sugary, salty and carb-heavy treats. The contents vary and are determined by one person (Ruth, who stocks the shelf) with influence from the rest of us (based on past usage and suggestions). Sometimes the shelf has exactly what you’re looking for. Other times, not so much. But what if, instead of relying on Ruth’s powers of deduction, we were to use research to figure out the optimal shelf configuration? We’re researchers, after all.
 
We would start out by using our Idea Mill™ product to generate ideas about which snacks people want. It uses incentive alignment and gamification to bring out the most creative ideas and provide direction on the favorites. It is likely that this will create too long a list of ideas (the candy shelf is only so large), and while we can toss out ideas that are not feasible, we believe it is best not to toss out ideas just because you personally don’t like them (I’m looking at you, Mr. Goodbar). Far better to get more consumer input…this time to narrow the list.
 
We could ask our folks to rate all the suggested snacks and then use that to figure out which ones should make the cut. Ratings might be good enough to eliminate some things (my guess is that despite what people claim, healthy snacks would bite the dust), but among popular snacks (like different types of pretzels) we are not likely to see clear differentiation.
 
A choice method like Max-Diff could help, but if the list were long it would require a lot of work on the part of our employee respondents. A method like our proprietary Bracket™ would do the job in a faster and more engaging fashion while still finding clear winners and losers.
 
Stocking the winners would therefore make the most sense…but would it please the most people?
Currently the shelf features five types of M&M’s (original, almond, caramel, dark and strawberry nut). If dark chocolate were the least preferred, it might get cut. But what if those who like almond, caramel and strawberry nut also like original, while those who like dark like only dark? For situations like this we can take the results of the Bracket™ (or Max-Diff) and use TURF to find the combination that would please the most people.
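As a concrete picture of the TURF idea, here is a minimal sketch that brute-forces the two-slot shelf with the highest reach. The respondent preference sets below are invented for illustration; in practice they would be derived from Bracket™ or Max-Diff scores.

```python
# Minimal TURF sketch: find the k-snack shelf that "reaches" the most people,
# where reach = share of respondents satisfied by at least one stocked item.
# The preference data below is invented for illustration.
from itertools import combinations

# Each respondent's acceptable snacks (e.g., derived from prioritization scores).
respondents = [
    {"original", "almond"},
    {"original", "caramel"},
    {"dark"},
    {"strawberry nut", "original"},
    {"dark"},
    {"caramel", "almond", "original"},
]
items = sorted(set().union(*respondents))

def reach(shelf):
    """Fraction of respondents satisfied by at least one stocked item."""
    return sum(1 for prefs in respondents if prefs & set(shelf)) / len(respondents)

k = 2  # shelf slots available
best = max(combinations(items, k), key=reach)
print(f"best {k}-item shelf: {best}, reach = {reach(best):.0%}")
```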
 
Of course, another factor is positioning. The shelf is only so large. M&M’s can be dispensed from any size canister (in fact, Ruth has one that spins so that it can dispense three types), while pretzels tend to come in large bins that take up a lot of room. In addition, not all of the snacks cost the same. In an effort to keep our expenses and waistlines under control, we follow a strict budget. Might I trade off having a greater quantity of a lesser snack in exchange for an expensive favorite?
 
For these kinds of questions a discrete choice conjoint is the answer. We can include a variety of candy types and constraints related to the room they take up as well as cost. Simulations can then optimize how to spend our candy budget.  
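Here is a toy sketch of what such a simulator could look like: brute-force the snack assortment that delivers the most preference without exceeding the budget. The utilities and costs are invented placeholders; in a real study the utilities would come from the estimated conjoint model.

```python
# Sketch of a conjoint-style "shelf simulator": brute-force the snack set that
# maximizes delivered preference subject to a budget. Utilities and costs are
# invented for illustration; real utilities would come from the conjoint model.
from itertools import combinations

costs = {"original": 8, "almond": 10, "caramel": 10, "dark": 9, "pretzels": 14}
budget = 30

# Per-respondent utilities for each snack (e.g., individual-level estimates).
utilities = [
    {"original": 1.2, "almond": 0.4, "caramel": 0.1, "dark": -0.8, "pretzels": 0.6},
    {"original": 0.3, "almond": 0.2, "caramel": 0.9, "dark": -0.5, "pretzels": 1.1},
    {"original": -0.2, "almond": 0.1, "caramel": 0.0, "dark": 1.4, "pretzels": -0.3},
]

def delivered_preference(shelf):
    """Sum over respondents of the best utility they can get from the shelf."""
    return sum(max(u[item] for item in shelf) for u in utilities)

feasible = [
    combo
    for r in range(1, len(costs) + 1)
    for combo in combinations(costs, r)
    if sum(costs[i] for i in combo) <= budget
]
best = max(feasible, key=delivered_preference)
print(f"best shelf within budget: {best}")
```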
Despite our love of research and our wide array of tools, though, I think in this case they would be overkill (we have a very small population of around 40 employees). So I think we’ll stick with Ruth’s instincts. I never go wanting…
 

Half-Life and Market Research

I heard a great episode of the “You Are Not So Smart” podcast in which Sam Arbesman talked about his book “The Half-Life of Facts”. This book has nothing to do with “truthiness”, “fake news” or any accusation that someone is or is not a liar, but it does provide some context for the world we live in.
 
The book’s title is taken from a scientific term (the time it takes an isotope to lose half of its radioactivity) and the notion that as we learn more, some things we took as “fact” will turn out to be wrong. Newton’s laws, for example, were supplanted by Einstein’s. The point of the book is not that we shouldn’t bother learning facts, but rather that we should be open to the possibility that they might be wrong. Modern medicine acknowledges that doctors don’t know everything and that some things they “know” will prove to be false. At the same time, they must treat patients based on what is known, or thought to be known.
 
It got me thinking about our business. What is the half-life of facts here? You might be tempted to take comfort in the fact that things like margin of error have not changed. While technically true, this ignores that academia is facing a crisis of confidence over statistically significant findings that don’t hold up in subsequent studies. One cause is that researchers run lots of cuts of the data, look for anything statistically significant, and then build a rationale for that finding, ignoring that with so many cuts they are bound to find some statistical noise. Don’t we run the same risk with each additional banner we run?
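A quick simulation makes the point: when the same question is cut many ways and the true rates are identical, some comparisons will clear the significance bar by chance alone. The cut count and cell sizes below are arbitrary, chosen only for illustration.

```python
# Sketch of why many banner cuts produce spurious "significant" differences:
# simulate 40 subgroup comparisons where the true agreement rate is identical,
# and count how many clear the p < 0.05 bar by chance alone.
import random
from math import sqrt, erf

random.seed(7)

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value from a normal-approximation two-proportion test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se if se else 0.0
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

true_rate, n_per_cell, n_cuts = 0.30, 200, 40
false_positives = 0
for _ in range(n_cuts):
    a = sum(random.random() < true_rate for _ in range(n_per_cell))
    b = sum(random.random() < true_rate for _ in range(n_per_cell))
    if two_proportion_p(a, n_per_cell, b, n_per_cell) < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_cuts} noise-only comparisons look 'significant'")
```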
 
There is a known problem with Discrete Choice Conjoint that is often ignored. If you have a product made up of, say, eight features with three levels each and one feature with 150 levels, the importance of the feature with 150 levels will be overstated by the model. Still, the model will run, utilities will be calculated and a simulator can be constructed…all of which provide a sense of precision that is not warranted. A researcher who knows about this will guide the client, either by changing the design or by putting the results into proper perspective. There are many other ways that a complex model like this can produce skewed results, and I have little doubt more will be found in the future.
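Part of the reason the many-level feature dominates is mechanical: importance is conventionally summarized as an attribute’s part-worth range divided by the sum of all attributes’ ranges, and more levels tend to stretch that range. A minimal sketch of the conventional calculation, with invented utilities:

```python
# Conventional attribute-importance calculation from part-worth utilities:
# importance = range of an attribute's utilities / sum of all attributes' ranges.
# With many levels, the min-to-max spread (and hence "importance") tends to grow.
# Utilities below are invented for illustration.
part_worths = {
    "price": {"$10": 0.9, "$15": 0.1, "$20": -1.0},
    "brand": {"A": 0.4, "B": 0.0, "C": -0.4},
    "color": {f"color_{i}": u for i, u in enumerate(
        [-1.3, -0.9, -0.6, -0.2, 0.0, 0.1, 0.3, 0.5, 0.8, 1.3])},  # many levels
}

ranges = {attr: max(levels.values()) - min(levels.values())
          for attr, levels in part_worths.items()}
total = sum(ranges.values())
for attr, rng in ranges.items():
    print(f"{attr:>6}: importance = {rng / total:.0%}")
```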
 
This is not to say that we can’t trust results. Doctors have to treat patients based on what is known today and we must do the same for our clients. The important thing is that we have to acknowledge we have things to learn. As researchers that should be easy for us…
 
HQ Trivia: Pricing Research and Monetizing
In my previous blog about HQ Trivia I pondered how the creators of HQ were planning to make money.  Right now there is no advertising; venture capital funds the app and the jackpots. Apart from occasional sponsorships, there appears to be no immediate source of additional funding.
 
HQ could do many different things to achieve financial success – content sponsorships, jackpot sponsorships, advertising, product placement, buying ‘lives’ by watching a 15-second spot  – even sponsor logos on host apparel. In fact, there are probably different ways to monetize HQ Trivia that we haven’t even thought of yet – making this a perfect research case for TRC’s Idea Mill™.
 
Idea Mill™ is our method that employs Smart Incentives™, harnessing the principles of crowdsourcing: respondents are asked for their best idea, and the ideas are then voted on by other respondents within the same research survey. The respondents with the best ideas, as judged by their peers, are rewarded with prizes. This is a great technique to use when you’re in the idea generation phase of product development.
 
Once we get a list of potential ways to monetize HQ, we could then winnow the list to the ones that would be feasible to implement, and narrow the list using a prioritization-based research method such as Idea Magnet™. Results can be generated quickly.  
 
Before implementing the winning ideas, we could further explore options by building various scenarios of the sponsored game, and asking HQers to weigh in on which one would be most acceptable to them. Through a choice-based research tool such as discrete choice conjoint, we could vary HQ’s potential features, such as:
 
      •  Number of ads or sponsorships per game
      •  Where the ads appear (between rounds, upon game entry)
      •  Prize pool
      •  Having sponsor-related questions
      •  Getting bonus ‘lives’ for watching sponsor videos
 
All of these techniques employ strategies we use in pricing and product development research to include the consumer in the decision-making process. HQ’s creators are good at asking questions – I hope they do the same in further developing their product.
 

GRIT Top 50 Report

I appreciate that we are once again in the GRIT 50 Most Innovative Research Agencies. Innovation has always been important to me and so I am quite gratified when I see our efforts being recognized. What I don't know is how people are defining innovation.

I think as an industry we sometimes label things as innovative that are not while failing to recognize some things that are genuinely innovative. In my view, innovation requires that we provide something of value that wasn't available before. Anything short of that may be 'interesting' but not 'innovative'.

I would put things like neuroscience or most AI into the "interesting" category. There is a lot of potential but so far little to show in terms of tangible benefits. Over the years at TRC we've had many ideas that showed promise but ultimately didn't prove out (my favorite being "Conjoint Poker"). It is the nature of innovation that some things will never leave the drawing board or 'laboratory', but without them there would be no innovation.

On the other side, I think ideas that save time and money are often not viewed as innovative unless they involve something totally new. I disagree. If I can figure out a way to do the same process faster and/or cheaper then I'm innovating. It may not look flashy, but if it allows clients to do something they couldn't otherwise do it is innovation.

...
Tagged in: Market Research
HQ Pricing Research
A bunch of us here at TRC enjoy trivia, so we’ve been playing HQ Trivia using their online app for the past few months. HQ is a 12-question multiple-choice quiz that requires a correct answer to move on to the next question. As a group, we have yet to get through all 12 questions and win our share of the prize pool. But it’s a nice team-building exercise and we like learning new things (who knew that two US Presidents were born in Vermont?).
 
Given the fun we have playing it, I can understand HQ’s success from the player perspective. Where I am a bit confused is the value proposition for its creators. Venture capital funding provides the prize money. But there are no ads, so I’m not sure how anybody’s actually making money. There are occasional tie-in partnerships (the awesome Dwayne Johnson hosted one of the gaming sessions to promote his newest movie release, “Rampage”). But I suppose the biggest question is, will interest in HQ still be there when they’ve finally signed on enough sponsors to be profitable?
 
We do a lot of pricing research at TRC, and can model on a variety of variables. But predicting the direction of demand is nearly impossible for certain products. For consumables and many services, product demand is predictable. How your product fares compared to the competition may have its ups and downs, but you can assume that people who bought toilet paper 2 weeks ago will be in the market for toilet paper again soon.
 
But with something like HQ Trivia, product demand is much more difficult to determine in advance, especially more than a few weeks from now. Right now it’s still hot – routinely attracting 700,000 – 1,000,000+ players (HQers) in a given game. How do the creators – and investors and potential sponsors – know whether it’s a good investment?  What if interest suddenly declines, either because the novelty has worn off or because something better comes along?  
 
One way to find out is through longitudinal research. Routinely check in with HQers over time to determine their likelihood to play the next week, their likelihood to recommend to their friends, and their attitudes toward the game itself. This information can be overlaid with the raw data HQ collects through game play every day – number of players, number of referrals, and number of first-time players. This information can not only help shed light on player interest, but players could also weigh in on changes the creators are considering to keep the game fresh.
 
HQers are engaging in a free activity which gives them the opportunity to win cash prizes.  But just because it’s free to play doesn’t mean the HQ powers-that-be couldn’t do pricing research (more on that in a future blog).  
 
For now, I’ll keep on playing HQ hoping I can answer all the questions, not the least of which is: when will I – and the other million HQers – no longer care? 
 
 
Tagged in: Pricing Research

Nouns vs. Verbs in Market Research

I’ve written many times about the importance of “knowing where your data has been”. The most advanced discrete choice conjoint, segmentation or regression is only as good as the data it relies on.  In the past I’ve written about many ways that we can bias respondents from question ordering to badly worded questions and even to push polling techniques. A new study published in Psychological Science would seem to indicate that bias can be created much more subtly than that.
 
Dr. Michael Reifen-Tagar and Dr. Orly Idan determined that you can reduce tension by relying on nouns rather than verbs. They are from Israel, so they were not lacking in “high-tension” things to ask. For example, half of respondents were asked their level of agreement (on a six-point scale) with the “noun-focused” statement “I support the division of Jerusalem” and the other half with the “verb-focused” statement “I support dividing Jerusalem”.
 
Consistent and statistically significant differences were found, with the verb form garnering less support than the noun form. Follow-up questions also indicated that those who saw the verb form were angrier and showed less support for concessions toward the Palestinians.
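For readers curious what the underlying comparison looks like, here is a minimal sketch of a two-sample test of mean agreement between the two wording cells. The six-point responses are simulated for illustration and are not the study’s data.

```python
# Sketch of the kind of comparison behind the noun-vs-verb finding: mean
# agreement (1-6 scale) in the "noun" wording cell vs. the "verb" wording cell.
# The responses below are simulated for illustration, not the study's data.
import random
from scipy import stats

random.seed(1)
noun_group = [random.choice([2, 3, 3, 4, 4, 5]) for _ in range(200)]
verb_group = [random.choice([1, 2, 2, 3, 3, 4]) for _ in range(200)]

t, p = stats.ttest_ind(noun_group, verb_group, equal_var=False)  # Welch's t-test
print(f"noun mean = {sum(noun_group)/len(noun_group):.2f}, "
      f"verb mean = {sum(verb_group)/len(verb_group):.2f}, t = {t:.2f}, p = {p:.4f}")
```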
 
Is this a potential problem for researchers? My answer would be “potentially”. 
 
The obvious example might be in published opinion polls. One can imagine a crafty person creating a questionnaire in which issues they agree with are presented in noun form (thus garnering higher agreement from the general public) and ones they disagree with in verb forms (thus garnering lower agreement). It is unlikely that anyone would challenge those results (except for those of you clever enough to read my blog).   
It might also be the case in more consumer-oriented studies, though it is unclear whether the same effect would be felt in situations where tension levels are not so high. In our clients’ best interest, however, it makes sense to be consistent in our wording and thereby eliminate another potential source of bias.
 
Tagged in: Consumer Behavior

Market Research Prioritization: Email Violations

I work in a business that depends heavily on email. We use it to ask and answer questions, share work product, and engage our clients, vendors, co-workers and peers on a daily basis. When email goes down – and thankfully it doesn't happen that often – we feel anything from mild annoyance to downright panic.

So business email is ubiquitous. But not everyone follows the same rules of engagement – which can make for some very frustrating exchanges.

We assembled a list of 21 "violations" we experienced (or committed) and set out to find out which ones are considered the most bothersome.

Research panelists who say they use email for business purposes were administered our Bracket™ prioritization exercise to determine which email scenario is the "most irritating".

...

Should Hotels Respond to Online Reviews?

Posted in Consumer Behavior

You are planning to take a trip to the City of Brotherly Love to visit the world-famous Philadelphia Flower Show, and would like to book a hotel near the Convention Center venue. If you’re like most people, you go online, perhaps to TripAdvisor or Expedia, and look for a hotel. In a few clicks you find a list of hotels with star ratings, prices, amenities, distance to destination – everything you need to make a decision. Quickly you narrow your choice down to two hotels within walking distance of the Flower Show, and conveniently located near the historic Reading Terminal Market.

But how to choose between the two that seem so evenly matched? Perhaps you can take a look at some review comments that might provide more depth? There are hundreds of comments, more than you have time for, but you quickly read a few on the first page. You are about to close the browser when you notice something. One of the hotels has responses to some of the negative comments. Hmmm…interesting. You decide to read the responses, and see some apologies, a few explanations and general earnestness. No such response for the other hotel, which now begins to seem colder and more distant. What do you do?

In effect, that’s the question Davide Proserpio and Georgios Zervas seek to answer in a recent article in the INFORMS journal Marketing Science. And it’s not hard to see why it’s an important question. Online reviews can have a significant impact on a business, and unlike word of mouth they tend to stick around for years (just take a look at the dates on some reviews). Companies can’t do much to stop reviews (especially negative ones), so they often try to co-opt them by providing responses to selected reviews. It is a manual task, but the idea seems sound. By responding, perhaps they can take the sting out of negative reviews, appear contrite, promise to do better, or just thank the reviewer for the time they took to write the feedback – all with the objective of getting prospective customers to give them a fair chance. The question then is whether such efforts are useful or just more online clutter.

It turns out that’s not an easy question to answer, and as Proserpio and Zervas document in the article, there are several factors that first need to be controlled. But their basic approach is easy enough to understand – they examine whether TripAdvisor ratings for hotels tend to go up after management responds to online reviews. An immediate problem to overcome, ironically enough, is management reaction. That is, in reaction to bad reviews a hotel may actually make changes that then increase future ratings. That’s great for the hotel, but not so much for the researcher, who is trying to study whether the response to the online review had an impact, not whether the hotel is willing to make changes in response to the review. So that’s an important factor that needs to be controlled. How to do that?

Enter Expedia. As it happens, hotels frequently respond to TripAdvisor reviews while they almost never do so on Expedia. So the authors use Expedia as a control cell and compare the before-after difference in ratings on TripAdvisor and Expedia (the difference-in-differences approach). Hence they are able to tease out whether the improvement in ratings was because of responding to reviews or because of real changes. Another check they use is to compare the ratings of guests who left a review shortly before a hotel began responding with those who did so shortly after the hotel began responding. Much of the article is actually devoted to several more clever and increasingly complex maneuvers they use to finally tease out just the impact of management responses. What do they find?
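Setting the findings aside, the core difference-in-differences arithmetic is simple even though the paper’s full analysis is not. Here is a minimal sketch with invented average ratings (not the authors’ numbers).

```python
# Minimal difference-in-differences sketch in the spirit of the paper's design:
# TripAdvisor (where responding starts) vs. Expedia (control), before vs. after.
# The average ratings below are invented for illustration.
mean_rating = {
    ("tripadvisor", "before"): 3.80,
    ("tripadvisor", "after"):  3.95,
    ("expedia",     "before"): 3.78,
    ("expedia",     "after"):  3.83,
}

change_treated = mean_rating[("tripadvisor", "after")] - mean_rating[("tripadvisor", "before")]
change_control = mean_rating[("expedia", "after")]     - mean_rating[("expedia", "before")]

# The control-platform change absorbs real quality improvements the hotel made;
# what's left over is attributed to the act of responding to reviews.
did_estimate = change_treated - change_control
print(f"difference-in-differences estimate: {did_estimate:+.2f} stars")
```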

...
Tagged in: Consumer Behavior
Conjoint and Modern Market Research

In my last blog I referenced an article about design elements that no longer serve a purpose, and I argued that techniques like Max-Diff and conjoint can help determine whether these elements are really necessary. Today I’d like to ask: what do we as researchers still use that no longer serves a purpose?
 
For many years the answer would have been telephone interviewing. We continued to use telephone interviewing long after it became clear that web was a better answer. The common defense was that web “is not representative”, which was true, but telephone data collection was no longer representative either. I’m not saying that we should abandon telephone interviewing…there are certainly times when it is a better option (for example, when talking to your client’s customers and you don’t have email addresses). I’m just saying that the notion that we need a phone sample to make a study representative is unfounded.
 
I think, though, we need to go further. We still routinely use cross tabs to ferret out interesting information. The fact that these interesting tidbits might be nothing more than noise doesn’t stop us from doing so. Further, the many “significant differences” we uncover are often not significant at all…they are statistically discernible, but not significant from a business decision-making standpoint. Still, the automatic sig testing makes us pause to think about them.
 
Wouldn’t it be better to dig into the data and see what it tells us about our starting hypothesis? Good design means we thought about the hypothesis and the direction we needed during the questionnaire development process, so we know which questions to start with and then we can follow the data wherever it leads. While in the past this was impractical, we now live in a world where analysis packages are easy to use. So why are we wasting time looking through decks of tables?
 
There are of course times when having a deck of tables could be a time saver, but like telephone interviewing, I would argue we should limit their use to those times and not simply produce tables because “that’s the way we have always done it”.  
New Product Research and the Car Grille

I read an interesting article about design elements that no longer serve a purpose, but continue to exist. One of the most interesting is the presence of a grille on electric cars.
 
Conventional internal combustion engine cars need a grille because the engine needs air to flow over the radiator that cools the engine. No grille would mean the car would eventually overheat and stop working. Electric cars, however, don’t have a conventional radiator and don’t need the air flow. The grille is there because designers fear that the car would look too weird without it. It is not clear from the article whether that is just a hunch or whether it has been tested.
   
It would be easy enough to test this out. We could simply show some pictures of cars and ask people which design they like best. A Max-Diff approach or an agile product like Idea Magnet™ (which uses our proprietary Bracket™ prioritization tool) could handle such a task. If the top choices were all pictures that did not include a grille, we might conclude that this is the design we should use (a quick sketch of that kind of read is below). There is a risk in this conclusion, though.
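A minimal counting-style Max-Diff sketch, where each design’s score is the number of times it was picked best, minus the number of times it was picked worst, divided by the number of times it was shown. The designs and tallies are invented for illustration.

```python
# Minimal Max-Diff counting sketch: score = (best picks - worst picks) / times shown.
# The choice tallies below are invented for illustration.
tallies = {                        # design: (best picks, worst picks, times shown)
    "no grille":          (620, 110, 1500),
    "slim grille":        (480, 150, 1500),
    "traditional grille": (260, 420, 1500),
    "oversized grille":   (140, 820, 1500),
}

scores = {design: (best - worst) / shown
          for design, (best, worst, shown) in tallies.items()}
for design, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{design:>20}: {score:+.2f}")
```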
 
To really understand preference, we need to use a discrete choice conjoint. The exercise I envision would combine the pictures with other key features of the car (price, gas mileage, color…). We might include several pictures taken from different angles that highlight other design features (being careful to not have pictures that contradict each other…for example, one showing a spoiler on the back and another not). By mixing up these features we can determine how important each is to the purchase decision.  
It is possible that the results of the conjoint would indicate that people prefer not having a grille AND that the most popular models always include a grille. How?
 
Imagine a situation in which 80% of people prefer “no grille” and 20% prefer “grille”. The “no grille” people prefer it, but it is not the most important thing in their decision. They are more interested in gas mileage and car color than anything else. The “grille” folks, however, are very strong in their belief. They simply won’t buy a car if it doesn’t have one. As such, cars without a grille start with 20% of the market off limits. Cars with a grille, however, attract a good number of “no grille” consumers as well as those for whom it is non-negotiable.
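The arithmetic can be made explicit with a toy share-of-choice calculation. The segment sizes follow the 80/20 thought experiment above; everything else is an invented, deliberately extreme simplification in which the weak preference does not sway the majority’s choice at all.

```python
# Worked version of the 80/20 thought experiment: 80% mildly prefer "no grille"
# (it barely affects their choice, so they spread across all models), while 20%
# insist on a grille. With four otherwise comparable models, grille models win.
# All numbers are illustrative.
models = {"G1": True, "G2": True, "N1": False, "N2": False}  # True = has a grille

weak_no_grille = 0.80    # spread evenly across all four models
grille_required = 0.20   # split only among the models that have a grille

grille_models = [m for m, has_grille in models.items() if has_grille]
shares = {}
for m, has_grille in models.items():
    share = weak_no_grille / len(models)
    if has_grille:
        share += grille_required / len(grille_models)
    shares[m] = share

for m, s in sorted(shares.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{m} ({'grille' if models[m] else 'no grille'}): {s:.0%}")
```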
 
Conjoint might also find that the size of the grille, or alternatives to it, can win over even hard-core “grille”-loving consumers. It is also worth considering that preferences will change over time. For example, it isn’t hard to imagine that early automobiles (horseless carriages, as they were originally called) had a place to hold a buggy whip (common on horse-drawn carriages), but over time consumers determined they were not necessary (or perhaps that is how the cup holder was born :)).
 
In short, conjoint is a critical tool to ensure that new technologies have a chance to take hold.
 

Market Research Without Bias

The Economist did an analysis of political book sales on Amazon to see if there were any patterns. Anyone who uses social media will not be surprised that readers tended to buy books from either the left or the right...not both. This follows an increasing pattern of people looking for validation rather than education, and of course it adds to the growing divide in our country. A few books managed a good mix of readers from both sides, though often these were books where the author found fault with his or her own side (meaning a conservative trashing conservatives or a liberal trashing liberals).

I love this use of big data and hopefully it will lead some to seek out facts and opinions that differ from their own. These facts and opinions need not completely change an individual's own thinking, but at the very least they should give one a deeper understanding of the issue, including an understanding of what drives others' thinking.

In other words, hopefully the public will start thinking more like effective market researchers.

We could easily design research that validates the conventional wisdom of our clients.

• We can frame opinions by the way we ask questions or by the questions we asked before.
• We can omit ideas from a max-diff exercise simply because our "gut" tells us they are not viable.
• We can design a discrete choice study with features and levels that play to our client's strengths.
• We can focus exclusively on results that validate our hypothesis.

...

How To: Green Marketing

Do people buy green products? Yes, of course. The real question for green marketers is whether they buy enough. In other words, are green sales in line with pro-green attitudes? Not really, as huge majorities of consumers show at least some green tendencies while purchases lag far behind. Why is that? Economics tells us that consumers buy based on value (trading off cost and benefits). Since eco-friendly products are seen as being more expensive, higher prices can lower the value of a green product enough to make a conventional alternative more attractive.

While the cost trade-off is clear, it is not the only one. The benefit side has at least two major components. One is the environmental benefit, which may or may not seem tangible enough to make a difference. For instance, a dozen eggs at Acme goes for less than a dollar, while some cage-free varieties can run north of $4 at Whole Foods. So, an environmentally conscious consumer has to make a trade-off at the time of purchase – is the product worth the additional cost? For items like food, the benefits may seem small enough, and far enough out, that many may decide the value proposition does not work for them. In other product categories (say, green laundry detergent), the benefits may seem both long term and impersonal, making the trade-off even harder.

The second major component is the effectiveness of the product in performing its basic function. If consumers perceive green products as inherently inferior (in terms of conventional attributes like performance), they are less likely to buy them. So a green laundry detergent (that uses less harsh chemicals) could be seen as more expensive and less effective in cleaning clothes, further dropping its overall value. (A complicating issue is that the lack of effectiveness itself could be a perceptual rather than real problem). Unless the company is able to offset these disadvantages, the product is unlikely to succeed.

A direct way to increase demand is to offer higher performance on a compensatory attribute. In the case of LED TVs, for example, newer technology consumes less power and provides better picture quality. (Paradoxically, this can sometimes lead to the Rebound Effect, whereby greener technologies encourage higher use, thus clawing back some of the benefits). But in reality, most products are not in a position where green attributes offer performance boosts.

And of course, as it is with every other market, there are segments in this market as well. Consumers who are highly committed (dark green) are willing to buy, as the value they place on the longer term environmental benefits is high enough. And, often they are affluent enough to afford the price. But a product looking for mainstream success cannot succeed only with dark green consumers (who rarely account for more than 20% of the market). Other shades of green will also need to buy. Short of government subsidies and mandates, green marketers have to find ways to balance out the components of the value proposition for the bulk of the market.

...
3 Mistakes: Conjoint in New Product Research and Pricing
Discrete Choice Conjoint is a powerful tool for, among other things, pricing and product development research. It is flexible and can handle even the most complex of products. With that said, it requires thoughtful design and an understanding of how design choices will impact results. Here are three mistakes that often lead to flawed designs:

 

1. Making the exercise too complex

The flexibility of conjoint means you can include large numbers of features and levels. The argument for doing so is a strong one…including everything will ensure the choices being made are as accurate as possible. In reality, however, respondents are consumers, and consumers don’t like complexity. Walk down the aisle of any store and note that the front of the package doesn’t tell you everything about a product…just the most important things. Retailers know that too much complexity actually lowers sales. Our own research shows that as you add complexity, the importance of the easiest-to-evaluate feature (normally price) rises…in other words, respondents ignore the wealth of information and focus more on price.
 

What to do:

Limit the conjoint to the most critical features needed to meet the objectives of the research. If you can’t predict those in advance, then do research to figure it out. A custom Max-Diff to prioritize features or a product like our Idea Magnet (which uses Bracket) will tell you what to include. Other features can be asked about outside the conjoint.  

 

2. Having unbalanced numbers of levels

Some features only have two levels (for example, on a car conjoint we might have a feature for “Cruise Control” that is either present or not present). Others, however, have many levels (again, on a car conjoint we might offer 15 different color choices). Not only can including too many levels increase complexity (see point 1), but it can actually skew results. If one feature has many more levels than the rest, the importance of that feature will almost certainly be overstated.
 

What to do:

As with point one, try to limit the levels to those most critical to the research.  For example, if you are using conjoint to determine brand value you don’t need to include 15 colors…five or six will do the job.  If you can’t limit things, then at least understand that the importance of the feature is being overstated and consider that as you make decisions.  
 

3. Not focusing on what the respondent sees

Conjoint requires a level of engagement that most questions do not. The respondent has to consider multiple products, each with multiple features and make a reasoned choice. Ultimately they will make choices, but without engagement we can’t be sure those choices represent anything more than random button pushing. Limiting complexity (point 1 again) helps, but it isn’t always enough.   
 

What to do:

Bring out your creative side…make the exercise look attractive. Include graphics (logos for example). If you can make the choice exercise look more like the real world then do so. For example, if the conjoint is about apparel, present the choices on simulated “hang tags”, so consumers see something like they would see in a store. As long as your presentation is not biasing results (for example, making one product look nicer than another) then anything goes. 
 
These are three of the most common design errors, but there are of course many more. I’m tempted to offer a fourth, “Not working with an experienced conjoint firm”, but that of course would be too self-serving!
 

3 Tips for 30 Years in New Product Research

TRC is celebrating 30 years in business…a milestone to be sure.  

Being a numbers guy, I did a quick search to see how likely it is for a business to survive 30 years. Only about 1 in 5 make it to 15 years, and there isn’t much data beyond that. Extrapolation beyond the available data range is dangerous, but it seems likely that fewer than 10% of businesses ever get to where we are. To what, then, do I owe this success?

It goes without saying that building strong client relationships and having great employees are critical. But I think there are three things that are key to having both those things:

Remaining Curious

I’ve always felt that researchers need to be curious, and I’d say the same for entrepreneurs. Obviously being curious about your industry will bring value, but even curiosity about subjects that have no obvious tie-in can lead to innovation. For example, by learning more about telemarketing I discovered digital recording technology and applied it to our business to improve quality.

...

New Product Research and Conjoint

So much has been written about conducting research for new product development. Not surprisingly, since this is an area of research almost every organization, new or old, has to face day in and day out. As market research consultants, we deal with it all the time and thought it would be beneficial to provide our audience with our own recommendations for some useful sources that explain conjoint analysis – a method most often used when researching new products and conducting pricing research.

Recommendation #1: In 15 Minutes

Understanding Conjoint Analysis in 15 Minutes

This is a relatively brief article from Sawtooth Software, the makers of software used for conjoint, that provides an explanation of the basics of conjoint. The paper uses a specific example of golf balls to make it easy to understand.

Recommendation #2: For Managers

Managerial Overview of Conjoint Analysis 

...
New Product Research and the Inventor

A few times a week I get the privilege of talking to an inventor/entrepreneur. The products they call about range from pet toys to sophisticated electronic devices, but they all have one thing in common…they want a proof of concept for their invention. In most cases they want it in order to attract investors or to sell their invention to corporate entities.
 
Of course, unlike our Fortune 500 clients, they also have limited budgets. They’ve often tapped their savings testing prototypes and trying to get a patent, so they are wary of spending a lot on consumer research. Even though only about a third of these conversations end up in our doing work for them, I enjoy them all.
 
First off, it is fun educating people on the various tools available for studying concepts. I typically start off telling them about the range of techniques from simple concept evaluations (like our Idea Audit) to more complex conjoint studies. I succinctly outline the additional learning you get as the budget increases. These little five to ten minute symposiums help me become better at talking about what we do.
 
Second, talking to someone as committed to a product as an inventor is infectious. They can articulate exactly how they intend to use the results in a way that some corporate researchers can’t (because they are not always told). While some of their needs are pretty typical (pricing research, for example), others are quite unusual. I enjoy trying to find a range of solutions for them (from various new product research methods) that will answer the question at a budget they can afford.
 
In many cases, I even steer them away from research. For many inventions something like Kickstarter is all they need.  In essence the market decides if the concept has merit. If that is all they need then why waste money on primary research? My hope is that they succeed and return to us when they have more sophisticated needs down the road.
 
Of course, I particularly enjoy it when the inventor engages us for research. Often the product is different than anything else we’ve researched and there is just something special about helping out a budding entrepreneur. The fact that these engagements make us better researchers for our corporate research clients is just a bonus.   
 

New Product Research and the Floating Grill

I recently heard an old John Oliver comedy routine in which he talked about a product he'd stumbled upon...a floating barbecue grill. He hilariously makes the case that it is nearly impossible to find a rationale for such a product, and I have to agree with him. Things like that can make one wonder if in fact we've pretty well invented everything that can be invented.

A famous quote attributed to Charles Holland Duell makes the same case: "Everything that can be invented has been invented". He headed up the Patent Office from 1898 to 1901, so it's not hard to see why he might have felt that way. It was an era of incredible invention, one that took a world largely driven by human and animal power and turned it into one in which engines and motors completely changed everything.

It is easy for us to laugh at such stupidity, but I suspect marketers of the future might laugh at the notion that we live in a particularly hard era for new product innovation. In fact, we have many advantages over our ancestors 100+ years ago. First, the range of possibilities is far broader. Not only do we have fields that didn't exist then (such as information technology), but we also have new challenges that they couldn't anticipate. For example, coming up with greener ways to deliver the same or better standard of living.

Second, we have tools at our disposal that they didn't have. Vast data streams provide insight into the consumer mind that Edison couldn't dream of. Of course, I'd selfishly point out that tools like conjoint analysis and consumer-driven innovation (using tools like our own Idea Mill) make innovation easier still.

The key is to use these tools to drive true innovation. Don't just settle for slight improvements to what already exists...great ideas are out there.

...
In today’s fast-paced, high-stakes business environment, where budgets are tighter than ever, finding the time and dedicating the resources needed to generate breakthrough product ideas can be very challenging. Many companies we’ve worked with either have no process in place or rely on internal brainstorming to come up with their next product ideas.

As we know, brainstorming sessions often include a variety of stakeholders. Our academic colleague, Jacob Goldenberg, points out in his book “Inside the Box” that a better approach to coming up with new ideas is to brainstorm independently on your own, and then bring ideas to the drawing board anonymously for further discussion and feedback.
 
In addition, while ideation brainstorming sessions often draw upon a wealth of information and trends accumulated via various types of past research, it is difficult to come up with product ideas that are truly new. Thus, most efforts result in close-in modifications or adaptations to existing offers.
 
However, there are some key advantages to such a process as well. Likely, the most important of those is early stakeholder engagement. Having the team onboard early and throughout the process certainly increases your odds of success, and those odds increase even more when consumers are also included early in the process.
