Recent blog posts

During my recent first-time home buying experience, I learned there are many, often competing, factors to consider. My last blog discussed how I used Bracket™, a tournament-based analytic approach, to determine what homebuyers find most important when considering a home. My list of 13 items did not include standard house stats like # of bedrooms, # of baths, etc. To measure preference for those items, I used a conjoint design.

I framed the conjoint exercise by asking homebuyers to imagine they were shopping for a home and to assume it was located in their ideal location. Using our online panel of consumers, we showed recent or soon-to-be homebuyers two house listings side by side, plus an “I wouldn’t choose either of these” option. Each listing included the following:

        • Number of bedrooms: 1, 2, 3 or 4
        • Number of bathrooms: 1 full, 1 full/1 half, 2 full, 2 full/1 half or 3 full
        • House style: Single Family, Townhouse, Condominium, or Multi-Family
        • House condition: Move-in ready, Some work required or Gut job
        • Price: $150,000, $200,000, $250,000, $350,000 or $450,000

I felt a conjoint was best suited here because, in addition to importance, I wanted to see what trade-offs homebuyers were willing to make among these five items, all of which weigh heavily in home buying. Are homebuyers willing to give up a bedroom to get the right price? Are they willing to put in some sweat equity to get the number of bedrooms and/or bathrooms they want?

We found the top three most important factors are # of bedrooms, price and house condition. This made perfect sense to me, as I would not consider any house with fewer than 3 bedrooms. Price and house condition were the next two key pieces. Was the house in my price range? How much work was needed? Did the price give me enough wiggle room for repairs? I was curious to see the interplay between price and house condition among the recent and soon-to-be homebuyers we interviewed.

Using the simulator, I selected a 3-bedroom, 2-full-bath Single Family home. I picked three price points ($150,000, $300,000, $450,000) and then varied the house condition. Overall, homebuyers are less interested in a "gut job" than in a "move-in ready" home. However, at the $150,000 price point, share of preference drops more drastically going from "move-in ready/some work required" to "gut job" than it does at the higher price points.
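
For readers curious about the mechanics, here is a minimal sketch of how a conjoint simulator can turn part-worth utilities into share of preference using a common multinomial logit share rule. The utilities and the exact share rule below are illustrative assumptions, not the actual model or results from this study.

```python
import numpy as np

# Hypothetical average part-worth utilities (illustrative numbers only,
# not the utilities estimated in this study).
utilities = {
    "bedrooms":  {"1": -1.2, "2": -0.3, "3": 0.6, "4": 0.9},
    "bathrooms": {"1 full": -0.8, "2 full": 0.5, "2 full/1 half": 0.7},
    "style":     {"Single Family": 0.8, "Townhouse": 0.1, "Condominium": -0.4},
    "condition": {"Move-in ready": 0.9, "Some work required": 0.1, "Gut job": -1.3},
    "price":     {"$150,000": 1.0, "$300,000": 0.0, "$450,000": -1.1},
}

def total_utility(profile):
    """Sum the part-worths for one house listing."""
    return sum(utilities[attr][level] for attr, level in profile.items())

def share_of_preference(listings):
    """Multinomial logit share rule: exp(utility), normalized across the scenario."""
    exp_u = np.array([np.exp(total_utility(p)) for p in listings])
    return exp_u / exp_u.sum()

# Scenario: a 3-bedroom, 2-full-bath Single Family home at $150,000,
# varying only the house condition.
scenario = [
    {"bedrooms": "3", "bathrooms": "2 full", "style": "Single Family",
     "condition": cond, "price": "$150,000"}
    for cond in ("Move-in ready", "Some work required", "Gut job")
]
for listing, share in zip(scenario, share_of_preference(scenario)):
    print(f"{listing['condition']}: {share:.1%}")
```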

...

The weather is starting to warm up and more of us are venturing outside, myself included. Walking my dog around the neighborhood, I’ve noticed a number of for-sale signs, and it reminds me of my own recent home buying experience. It was exciting and at the same time stressful. Once I made the decision to buy, I started watching all the home buying shows and attending open houses to figure out my list of must-haves and nice-to-haves. I wondered how my list stacked up against others who went through or are going through the home buying process.

Using our online panel of consumers, I employed TRC’s proprietary Bracket™ exercise to find out what homebuyers find most important when considering buying a home. Bracket™ is a tournament-based analytic approach to understanding priorities. For each participant, Bracket™ randomly assigns the items being evaluated into pairs. Participants choose the winning item from each pair; that item moves on to the next round. Rounds continue until there is one “winner” per participant. Bracket™ uses this information to prioritize the remaining items and calculate the relative distance between them.
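
To make the tournament mechanics concrete, here is a minimal sketch of the logic described above. It is my own illustration, not TRC's production code, and the handling of an odd item out (a bye to the next round) is an assumption on my part.

```python
import random

def run_bracket(items, choose_winner):
    """One respondent's tournament: random pairs, winners advance round by round."""
    remaining = list(items)
    random.shuffle(remaining)
    while len(remaining) > 1:
        next_round = []
        if len(remaining) % 2 == 1:            # odd item out gets a bye (assumption)
            next_round.append(remaining.pop())
        for a, b in zip(remaining[::2], remaining[1::2]):
            next_round.append(choose_winner(a, b))   # respondent's pick moves on
        remaining = next_round
    return remaining[0]                        # this respondent's overall winner

# Toy usage with 13 placeholder items and a random "respondent".
items = [f"Item {i}" for i in range(1, 14)]
print(run_bracket(items, lambda a, b: random.choice((a, b))))
```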

I created a list of 13 things to consider. I didn’t include standard house stats (# of bedrooms, # of baths, etc.), as I tested those separately using a conjoint analysis (my next blog will dive into what I did there).

Proximity to work

Proximity to family

...

Very few clients will go to market with a new concept without some form of market research to test it first. Others will use real-world substitutes (such as A/B mail tests) to accomplish the same end. No one would argue against the effectiveness of approaches like these...they provide a scientific basis for making the right decision. Why is it, then, that in early-stage decision-making science is so often replaced with gut instinct?

Consider this...an innovation department cooks up a dozen or more ideas for new or improved products and services. At this point they are nothing more than ideas, with perhaps some crude mock-ups to go along with them. Full-scale concept testing would be costly for this number of ideas, and a real-world test is certainly not in the cards. Instead, a "team" which might include product managers, marketing folks, researchers and even some of the innovation people who came up with the concepts is brought together to winnow the ideas down to a more manageable number.

The team carefully evaluates each concept, perhaps ranks them, and provides its thinking on why it liked certain ones. These "independent" evaluations are tallied and the dozen concepts are reduced to two or three. These two or three are then developed further and put through a more rigorous and costly process: in-market testing. The concept or concepts that score best in this process are then launched to the entire market.

This process produces a result, but also some level of doubt. Perhaps the concept that the team thought was best scored badly in the more rigorous research, or the winning concept just didn't perform as well as the team thought it would. Does anyone wonder whether some of the ideas that the team winnowed out might have performed even better than the "winners" they picked? What opportunities might have been lost if the best ideas were left on the drawing board?

The initial winnowing process is susceptible to various forms of error, including groupthink. The less rigorous process is used not because it is seen as best, but because the rigorous methods normally used are too costly to employ on a large list of items. Does that mean going with your gut is the only option?

...

Last week we held an event in New York in which Mark Broadie from Columbia University talked about his book “Every Shot Counts”. The talk and the book detail his analysis of a very large and complex data set…specifically the “ShotLink” data collected for over a decade by the PGA Tour. It records every shot taken by every pro at every PGA Tour tournament. He was able to use it to challenge some long-held assumptions about golf…such as “Do you drive for show and putt for dough?”

On the surface the data set was not easy to work with. Sure, it had numbers like how long the hole was, how far the shot went, how far it was from the hole and so on. It also had data like whether the ball ended up in the fairway, on the green, in the rough, in a trap or the dreaded out of bounds. Every pro has a different set of skills, and there was a surprisingly wide range of abilities even in this group, but he also added the same data on tens of thousands of amateur golfers of various skill levels. So how can anyone make sense of such a wide range of data, and do it in a way that the amateur who scores 100 can be compared to the pro who frequently scores in the 60s?

You might be tempted to say that he would use regression analysis, but he did not. You might assume he used Hierarchical Bayesian estimation, as it has become more commonplace (it drives discrete choice conjoint, Max-Diff and our own Bracket™), but he didn’t use that here either.

Instead, he used simple arithmetic. No HB, no calculus, no Greek letters, just simple addition, subtraction, multiplication and division. At the base level, he simply averaged similar scores. Specifically, he determined how many strokes it took on average for players to get from where they were to the hole. These averages were further broken down to account for where the ball started (not just distance, but rough, sand, fairway, etc.) and how good the golfer was.
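
As a rough illustration of that arithmetic, here is a minimal sketch with made-up benchmark numbers (not Broadie's actual tables, which are broken down far more finely by distance, lie and skill). The per-shot scoring formula below is the standard "strokes gained" calculation as I understand it: expected strokes before the shot, minus expected strokes after it, minus the one stroke just taken.

```python
# Illustrative averages only; the real benchmarks come from the ShotLink data.
avg_strokes_to_hole = {
    ("tee", 400):     4.0,   # 400-yard hole, from the tee
    ("fairway", 150): 3.0,   # 150 yards out, in the fairway
    ("rough", 150):   3.2,   # 150 yards out, in the rough
    ("green", 20):    1.9,   # 20-foot putt
}

def strokes_gained(before, after):
    """Expected strokes before the shot, minus expected strokes after it,
    minus the one stroke just taken."""
    return avg_strokes_to_hole[before] - avg_strokes_to_hole[after] - 1

# A drive that leaves 150 yards in the fairway is exactly average (0.0);
# an approach to 20 feet beats the average by about a tenth of a stroke.
print(round(strokes_gained(("tee", 400), ("fairway", 150)), 2))    # 0.0
print(round(strokes_gained(("fairway", 150), ("green", 20)), 2))   # 0.1
```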

These simple averages allow him to answer any number of “what if” questions. For example, he can see on average how many strokes are saved by going an extra 50 yards off the tee (which turns out to be more than the savings from better putting). He can also show that in fact neither driving nor putting is as important as the approach shot (the last full swing before putting the ball on the green). The ability to put the ball close to the hole on this shot is the biggest factor in scoring low.

...
A recent post on my Facebook timeline boasted that Lansdale Farmers Market was voted the Best of Montgomery County, PA two years in a row. That’s the market I patronize, and I’d like to feel a bit of pride for it. But I’m a researcher and I know better.

Lansdale Farmers Market is a nice little market in the Philadelphia outskirts, but is it truly the best in the entire county? Possibly, but you can’t tell from this poll. Lansdale Farmers Market solicited my participation by directing me to a site that would register my vote for them (Heaven only knows how much personal information “The Happening List” gains access to).  I’m sure that the other farmers markets solicited their voters in the same or similar ways. This amounts to little more than a popularity contest. Therefore, the only “best” that my market can claim is that it is the best in the county at getting its patrons to vote for it.

But if you have more patrons voting for you, shouldn’t that mean that you truly are the best? Not necessarily. It’s possible that the “best” market serves a smaller geographic area, doesn’t maintain a customer list, or isn’t as good at using social media, to name a few.

A legitimate research poll would seek to overcome these biases. So what are the markers of a legitimate research poll? Here are a few:
  1. You’re solicited by a neutral third party. Sometimes the survey sponsors identify themselves up front and that’s okay. But usually if a competitive assessment is being conducted, the sponsor remains anonymous so as not to bias the results.
  2. You’re given competitive choices, not just a plea to “vote for me”.  
  3. You may not be able to tell this, but there should be some attempt to uphold scientific sampling rigor. For example, if the only people included in the farmers market survey were residents of Lansdale, you could see how the sampling method would introduce an insurmountable bias.

The market opens for the summer season in a few weeks, and you can bet that I’ll be there. But I won’t stop to admire the inevitable banner touting their victory.


We marketing research types like to think of the purchase funnel in terms of brand purchase. A consumer wants to purchase a new tablet. What brands is he aware of? Which ones would he consider? Which would he ultimately purchase? And would he repeat that purchase the next time?

Some products have a more complex purchase funnel, one in which the consumer must first determine whether the purchase itself – regardless of brand – is a “fit” for him. One such case is solar home energy.

Solar is a really great idea, at least according to our intrepid research panelists. Two-thirds of them say they would be interested in installing solar panels on their home to help offset energy costs. There are a lot of different ways that consumers can make solar work for them – and conjoint analysis would be a terrific way to design optimal products for the marketplace.

But getting from “interest” to “consideration” to “purchase” in the solar arena isn’t as easy as just deciding to purchase. Anyone in the solar business will tell you there are significant hurdles, not the least of which is that a consumer needs to be free and clear to make the purchase – renters, condo owners, people with homeowners associations or strict local ordinances may be prohibited from installing them.

Even if you’re a homeowner with no limitations on how you can manage your property, there are physical factors that determine whether your home is an “ideal” candidate for solar. They vary by region and different installers have different requirements, but here’s a short list:

...

An issue that comes up quite a bit when doing research is the proper way to frame questions. In my last blog I reported on our Super Bowl ad test in which we surveyed viewers to rank 36 ads based on their “entertainment value”. We did a second survey that framed the question differently to see if we could determine which ads were most effective at driving consideration of the product…in other words, the ads that did what ads are supposed to do!

As with life, framing, or context, is critical in research. First off, the nature of the questions is important. Where possible, choice questions will work better than, say, rating scales. The reason is that consumers are used to making choices...ratings are more abstract. Techniques like Max-Diff, Conjoint (typically Discrete Choice these days) or our own proprietary new product research technique Bracket™ get at what is important in a way that ratings can’t.

Second, the environment you create when asking the question must seek to put consumers in the same mindset they would be in when they make decisions in real life. For example, if you are testing slogans for the outside of a direct mail envelope, you should show the slogans on an envelope rather than just in text form.

Finally, you need to frame the question in a way that matches the real world result you want. In the case of a direct mail piece, you should frame the question along the lines of “which of these would you most likely open?” rather than “which of these slogans is most important?”. In the case of a Super Bowl ad (or any ad for that matter), asking about entertainment value is less important than asking about things like “consideration” or even “likelihood to tell others about it”.  

So, we polled a second group of people and asked them “which one made you most interested in considering the product as advertised?” The results were quite interesting.

...

Well, it is the time of year when America’s greatest sporting event takes place. I speak of course about the race to determine which Super Bowl ad is the best. Over the years there have been many ways to accomplish this, but as so often happens in research today, the methods are flawed.

First there is the “party consensus method”. Here people gathered to watch the big game call out their approval or disapproval of various ads. Beyond the fact that the “sample” is clearly not representative, this method has other flaws. At the party I was at we had a Nationwide agent, so criticism of the “dead kid” ad was muted. This is just one example of how people in the group can influence each other (anyone who has watched a focus group has seen this in action). The most popular ad was the Fiat ad with the Viagra pill…not because it was perhaps the favorite, but because parties are noisy and this ad was largely a silent picture.

Second, there is the “opinion leaders” method. The folks who have a platform to spout their opinion (be it TV, YouTube, Twitter or Facebook) tell us what to think. While certainly this will influence opinions, I don’t think tallying up their opinions really gets at the truth. They might be right some of the time, but listening to them is like going with your gut…likely you are missing something.

Third, there is the “focus group” approach. In this method a group of typical people is shuffled off to a room to watch the game and turn dials to rate the commercials they see.   So, like any focus group, these “typical” people are of course atypical.   In exchange for some money they were willing to spend four hours watching the game with perfect strangers.   Further, are focus groups really the way to measure something like which is best? Focus groups can be outstanding at drawing out ideas, providing rich understandings of products and so on, but they are not (nor are they intended to be) quantitative measures.

The use of imperfect means to measure quantitative problems is not unique to Super Bowl ads. I’ve been told by many clients that budget and timing concerns require that they answer some quantitative questions with the opinions of their internal team, or their own gut or qualitative research. That is why we developed our agile and rigorous tools, including Message Test Express™ (MTE™).

...

Here in Philly we are recovering from the blizzard that wasn’t. For days we’d been warned of snow falling multiple inches per hour, winds causing massive drifts and the likelihood of it taking days to clear out. The warnings continued right up until we were just hours away from this weather Armageddon. In the end, only New England really got the brunt of the storm. We ended up with a few inches. So how could the weather forecasters have been this wrong?

The simple answer is of course that weather forecasting is complicated. There are so many factors that impact the weather…in this case an “inverted trough” caused the storm to develop differently than expected. So even with the massive historical data available and the variety of data points at their disposal the weather forecasters can be surprised.  

At TRC we do an awful lot of conjoint research…a sort of product forecast, if you will. It got me thinking about some keys to avoiding the kinds of mistakes the weather forecasters made on this storm:

  1. Understand the limitations of your data. A conjoint or discrete choice conjoint can only inform on things included in the model. It should be obvious that you can’t model features or levels you didn’t test (such as, say, a price that falls outside the range tested). Beyond that, however, you might be tempted to infer things that are not true. For example, if you were using the conjoint to test a CPG package and one feature was “health benefits” with levels such as “Low in fat”, “Low in carbs” and so on, you might be tempted to assume that the two levels with the highest utilities should both be included on the package, since logically both benefits are positive. The trouble is that you don’t know whether some respondents prefer high fat and low carbs and others the complete opposite. You can only determine the impact of combinations of a single level of each feature, so you must make sure that anything you want to combine is in separate features. This might lead to a lot of “present/not present” features, which might overcomplicate the respondent’s choices. In the end you may have to compromise, but it is best to make those compromises in a thoughtful and informed way.
  2. Understand that the data were collected in an artificial framework. The respondents are fully versed on the features and product choices…in the market that may or may not be the case. The store I go to may not offer one or more of the products modeled, or I may not be aware of the unique benefits one product offers because advertising and promotion failed to get the message to me. Conjoint can tell you what will succeed and why, but the hard work of actually delivering on those recommendations still has to be done. Failing to recognize that is no better than the forecasters failing to account for the possibility of an inverted trough.
  3. Understand that you don’t have all the information. Consumer decisions are complex. In a conjoint analysis you might test 7 or 8 product features, but in reality there are dozens more that consumers will take into account in their decision making. As noted in number 1, the model can’t account for what is not tested. I may choose a car based on it having adaptive cruise control, but if you didn’t test that feature my choices will only reflect other factors in my decision. Often we test a holdout card (a choice respondents made that is not used in calculating the utilities, but rather to see how well our predictions do), and in a good result we find we are right about 60% of the time. (This is good because if a respondent has four choices, random chance would dictate being right just 25% of the time; see the sketch after this list.) The weather forecasters, in the same spirit, probably should have explained their level of certainty about the storm up front (specifically, that they knew there was a decent chance they would be wrong).
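
Here is a minimal sketch of that holdout check, using simulated respondents, made-up utilities and a made-up holdout task (including the assumed 60% agreement), purely to show the mechanics: predict each respondent's holdout choice from their utilities, compare it with what they actually chose, and report the hit rate against the 25% expected by chance with four alternatives.

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_alternatives, n_features = 200, 4, 6

# Made-up individual-level utilities and a made-up holdout task design
# (each alternative described by which features it includes).
utilities = rng.normal(size=(n_respondents, n_features))
holdout_design = rng.integers(0, 2, size=(n_alternatives, n_features))

# Predicted choice: the alternative with the highest total utility per respondent.
scores = utilities @ holdout_design.T          # respondents x alternatives
predicted = scores.argmax(axis=1)

# Simulated "actual" holdout choices: usually the predicted one, sometimes not.
actual = np.where(rng.random(n_respondents) < 0.6,
                  predicted,
                  rng.integers(0, n_alternatives, size=n_respondents))

hit_rate = (predicted == actual).mean()
print(f"Holdout hit rate: {hit_rate:.0%} (vs. {1 / n_alternatives:.0%} by chance)")
```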

So, with all these limitations, is conjoint worth it? Well, I would suggest that even though the weather forecasters can be spectacularly wrong, I doubt many of us ignore them. Who sets out for work when snow is falling without checking to see if things will improve? Who heads off on a winter business trip without checking to see what clothes to pack? The same is true for conjoint. With all the limitations it has, a well-executed model (and executing well takes knowledge, experience and skill) will provide clear guidance on marketing decisions.


As anyone with experience with pets will tell you, no two are alike. If you tune in to Animal Planet’s series “Too Cute,” about the first few months of the lives of litters of puppies and kittens, you’ll find evidence of siblings’ behavior differences. One is reticent, another is always hungry, one sleeps a lot while his sibling is bouncing off the walls.

In my household, our two cats are no exception. They are very different from one another. You can categorize them by saying one is alpha, the other omega (or dominant/submissive, leader/follower). Alpha cat is a bully. He struts around like he owns the place and pushes Omega cat off her perch in the sun so he can claim her spot. She allows him to do this with little to no resistance.

And yet…

The moment the doorbell rings, Alpha hides under the bed while Omega rushes to the door. Alpha is afraid of the vacuum cleaner, strangers and loud noises, and he rolls over in a submissive pose when the neighbor’s dogs are around. Omega, on the other hand, is food-obsessed and gives Alpha the evil eye when he approaches “her” food dishes. And she’s fearless when encountering new people and strange objects.

So we have a dominant cat and a submissive cat, but those labels don’t really tell the whole story.

...

Last year Time Magazine featured a cover story about fat…specifically that fat has been unfairly vilified and that in fact carbs and sugars are the real danger. They were not the first with the story nor will they be the last. The question is, how will this impact the food products on the market?

The idea that carbs and sugar were the worst things you could eat would not have surprised a dieter in, say, 1970. It was in the 1980s that conventional wisdom moved toward the notion that fat caused weight gain, and with it heart disease, and thus should be avoided. Over time the public came to accept this wisdom (after all, the idea that fat causes fat isn’t hard to accept) and the market responded with a raft of low-fat products. Unfortunately, those products were higher in sugar and carbs, and the net result is that Americans have grown heavier.

If the public buys into this new thinking we should expect the market to respond. To see how well the message has gotten out, we conducted a national survey with two goals in mind:

  • Determine awareness of the “sugar and carbs are worse than fat” thinking.
  • Determine if it would change behavior.

About a third of respondents said they were aware of the new dietary thinking. While still a minority, a third is nothing to be sneezed at, especially when you consider that the vast majority of advertising still focuses on the low-fat message and food nutrition labels still highlight fat calories at the top. It took time for the “low fat” message to take hold, and clearly it will take time for this to take hold as well.

Already there is evidence of change. Those aware of the message prior to the survey were far more likely to recommend changes to people’s diets (38%) than those who were not aware prior to the survey (11%). Clearly it takes more than being informed in a survey to change 30 years of conventional wisdom, but once the message takes hold, expect changes. In fact, two-thirds of those aware of the message before taking the survey have already made changes to their behavior:

...

This past spring we surveyed our consumer panel about the winter of 2013 – 2014. We used our proprietary message prioritization tool, Bracket™, to determine that high heating bills were the worst part of enduring a challenging winter.

Energy utilities dedicate resources toward educating consumers about ways to conserve, which both increases sustainability and also keeps money in consumers’ wallets. One conservation method is to use programmable thermostats – homes and businesses can be kept cooler at night or when no one is around, and warmer when people are home (and the opposite is true in the summer). The set-it-and-forget-it nature of the program means the consumer doesn’t need to fiddle and adjust; once you decide what temperature you want on which day and at which time, the system takes over.

But we like to fiddle and adjust, and thermostats can also be controlled through apps on your PC or mobile device. This allows you to override the program if, for example, you forget to reset it while you’re on vacation.

We were interested to understand consumer interest in these technologies, so we polled our panel once again and asked them what type of thermostat they used, if any, and how interested they’d be in installing a fancier type than they have now.

Nearly all of our survey participants use some type of thermostat. Half use a standard thermostat (not programmable). This number is higher among non-homeowners (61% vs. 48%). Landlords take note: consider upgrading to a programmable thermostat in your rental units.

...

This past summer, much of my TV viewing was dedicated to watching the series “Downton Abbey” and “Breaking Bad” in their entirety. “Downton Abbey” continues this January, but “Breaking Bad” concluded its five-season run before I started watching it. I was concerned that I would accidentally learn Walter’s and Jesse’s fates before seeing the final episodes. My friends who had seen the series were quite accommodating. But it’s tough keeping secrets in the digital age, and unfortunately, I did learn what happened in advance of watching. Jesse’s outcome was revealed by Seth Meyers during this year’s Emmy broadcast. And Walter? Well, I guess I’m to blame, since I was stupid enough to read a New York Times article in which the first paragraph states “Warning: Contains spoilers about the new age of television.”

The article, by Emily Steel, discusses the social ramifications of revealing dramatic plot twists. She cites a study by Grant McCracken, which Netflix plans to use as the basis for a digital promotion: a flow chart that classifies people by their propensity for spoiling. At the root of all this is an attempt to understand how people view television content in the age of time-shifting and streaming, which has critical impacts on TV’s business model.

As viewing patterns change, so does water cooler conversation. You can’t simply blurt out, “How crazy was it when Danny took out Joey last night?” You need to first establish that the episode was watched by the people in the room. But the burden seems to fall more so on the one who isn’t caught up; I fell a few episodes behind my friends watching “Sons of Anarchy” this fall – so I had to make sure that they weren’t talking about it when I was around.

But with all of this conversational jockeying going on, I needed to ask a pretty basic question: how much time must elapse before what happened in a popular TV show becomes “fair game” – no longer subject to Spoiler Alerts?

To find out, we surveyed TV viewers from our online consumer panel. We know from conducting new product development market research studies (using conjoint and Bracket) that the way a question is framed influences how respondents answer. We wanted to look at the issue from both sides, so we randomly split our sample into two groups, and posed essentially the same question to both: how many days need to pass before people can communicate freely about a show and not face criticism for spoiling it for someone who hasn’t watched it yet?  We asked each group to assume a different role: one group was told to assume that they had just watched the episode, and the other group was told to assume the episode had aired, but they hadn’t watched it yet. We upped the stakes by describing the show as the final episode of a series that they liked a lot.  
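
For those interested in the mechanics, here is a minimal sketch of that split-sample setup, with hypothetical respondent IDs and answers purely for illustration: randomly divide the sample between the two framings, then compare the answers by group.

```python
import random
import statistics

def split_sample(respondent_ids):
    """Randomly split the sample in half between the two framing conditions."""
    ids = list(respondent_ids)
    random.shuffle(ids)
    half = len(ids) // 2
    return {"just watched it": ids[:half], "haven't watched it yet": ids[half:]}

# Hypothetical answers to "how many days before the episode is fair game?"
answers = {"R001": 3, "R002": 7, "R003": 1, "R004": 14, "R005": 2, "R006": 5}

for framing, ids in split_sample(answers).items():
    days = [answers[rid] for rid in ids]
    print(f"{framing}: median {statistics.median(days)} days")
```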

...

"Become the known." My parents have given me plenty of great advice over the years, but this is my dad's favorite. If a new restaurant opens in town, he's on a first name basis with the owner within a week; at a large social gathering, he'll make a new friend in no time. While in these situations I usually prefer to remain just a face in the crowd, he encourages me to step forward and make myself known.

Recently, someone sent me a list of 50 pieces of advice being shared on social media (although "become the known" didn't make the cut!). This got me thinking – what are the best pieces of advice out there?

I set out to answer this question using TRC's online panel and our Bracket™ message testing technique. Through this tournament-style approach, we asked 500 respondents ages 25+ to choose the best (and worst) pieces of advice from a list of 50 items. Our results were calculated at the respondent level, then aggregated and normalized on a 100-point scale.

So, what advice did our participants like best overall? The top 10 pieces of advice, in order of relative performance, were:

1. Show respect for everyone who works for a living, regardless of how trivial their job.
2. Remember, no one makes it alone. Have a grateful heart and be quick to acknowledge those who helped you.
3. Never waste an opportunity to tell someone you love them.
4. Never deprive someone of hope; it might be all they have.
5. Take charge of your attitude. Don't let someone else choose it for you.
6. Don't burn bridges. You'll be surprised how many times you have to cross the same river.
7. Count your blessings.
8. Choose your life's mate carefully. From this one decision will come 90 percent of all your happiness or misery.
9. Never give up on anybody. Miracles happen every day.
10. Loosen up. Relax. Except for rare life-and-death matters, nothing is as important as it first seems.

...

We at TRC conduct a lot of choice-based research, with the goal of aligning our studies with real-world decision-making. Lately, though, I’ve been involved in a number of projects in which the primary objective is not to determine choice, but rather awareness. Awareness is the first, and arguably the most critical, part of the purchase funnel. After all, you can’t very well buy or use something if you don’t know it exists. So getting the word out about your brand, a new product or a product enhancement matters.

Awareness research presents several challenges that aren’t necessarily faced in other types of research. Here’s a list of a few items to keep in mind as you embark on an awareness study:

Don’t tip your hand. If you’re measuring awareness of your brand, your ad campaign or one of your products, do not announce at the start of the survey that your company is the sponsor. Otherwise you’ve influenced the very thing you’re trying to measure. You may be required to reveal your identity (if you’re using customer emails to recruit, for example), but you can let participants know up front that you’ll reveal the sponsor at the conclusion of the survey. And do so.

The more surveys the better. Much of awareness research focuses on measuring what happens before and after a specific event or series of events. The most prevalent use of this technique is in ad campaign research. A critical decision factor is how many surveys you should do in each phase. And the answer is, as many as you can afford. The goal is to minimize the margin of error around the results: if your pre-campaign awareness score is 45% and your post-campaign score is 52%, is that a real difference? You can be reasonably assured that it is if you surveyed 500 in each wave, but not if you only surveyed 100. The more participants you survey, the more secure you’ll be that the results are based on real market shifts.
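
To put numbers on that example, here is a minimal sketch of one standard way to check the difference, a two-proportion z-test, using the illustrative figures from the paragraph above. It shows why 500 interviews per wave lets you call the 45%-to-52% shift real while 100 per wave does not.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for the difference between pre- and post-wave proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

for n in (500, 100):
    z = two_proportion_z(0.45, n, 0.52, n)
    verdict = "significant" if abs(z) > 1.96 else "not significant"
    print(f"n = {n} per wave: z = {z:.2f} ({verdict} at the 95% confidence level)")
```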

Match your samples. Regardless of how many surveys you do each wave, it’s important that the samples are matched. By that we mean that the make-up of the participants should be as consistent as possible from one measurement to the next. Once again, we want to make certain that results are “real” and aren’t due to methodological choices. You can do this ahead of time by setting quotas, after the fact through weighting, or both. Of course, you can’t control for every single variable. At the very least, you want the key demographics to align.
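
A minimal illustration of the after-the-fact weighting option, one common approach, with made-up demographic distributions: each group's weight is simply its target share divided by its share of the completed sample.

```python
# Hypothetical target (market) and achieved (sample) distributions for one demographic.
target_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share = {"18-34": 0.20, "35-54": 0.40, "55+": 0.40}

# Post-stratification weight = target share / sample share.
weights = {group: target_share[group] / sample_share[group] for group in target_share}
print(weights)   # {'18-34': 1.5, '35-54': 0.875, '55+': 0.875}
```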

...

As a market researcher who has studied the health insurance industry for over two decades, I have seen a dramatic shift in consumer thinking this year: consumers are being forced to think harder than ever before to determine which health insurance plan is best for their family.

While I have the benefit of working directly with health insurance companies on product development, numerous articles are published each week illustrating some of the challenges consumers are facing with their new plans. These experiences make it clear why conjoint (Discrete Choice) is such a strong tool to understand consumer preferences for different health insurance plan components.

As I digest all of this information, a number of themes continue to surface:

High deductibles – consumers need to know what is subject to the deductible and what isn’t (preventive, Rx etc.). It’s unlikely that insurers want consumers to avoid preventive care as a way to manage their costs, and yet that is exactly what some consumers are doing.

Limited Network – consumers are learning the hard way that you get what you pay for. There are many stories of consumers having to drive ridiculous distances to get treatment from an in-network provider, or who confirmed that their physician accepts their carrier only to learn later that the physician doesn’t accept every plan the carrier offers. Many are also having difficulty getting an appointment within a reasonable period of time, and some have even visited an in-network hospital and received a bill from an out-of-network physician who treated them there.

...

I begin every weekday by driving through a toll plaza on the Pennsylvania Turnpike to get to work. By this time, I haven’t usually had my morning cup of coffee yet; therefore, my mathematical skills are probably not always up to par. So, I take the easy way out and use my E-ZPass, which saves me the daily burden of counting out change to make my way through the toll booth.

Overall, the E-ZPass system seems relatively straightforward. You use a credit card to open an account and you receive an electronic tag, or transponder, that has your billing and vehicle information embedded in it. You mount the transponder on the dashboard or windshield of your vehicle; as you drive through the toll booth, the transponder sends a signal to a receiver that detects your tag, registers your information and charges your account accordingly. When all is said and done, you see the polite green light that says “Thank You” (unless you have a low balance, of course) and you are on your merry way. Quick and simple, right?

Before I began working in market research, I wouldn’t have thought much more about the E-ZPass system other than it gets me to where I need to go quickly. Now that I’m almost a year into my market research career with more of a research-oriented point of view, I got to wondering a little more in depth about the E-ZPass system and how the company conducted its research within the toll-user market to find out if its new toll system would prosper. After a little research, I found that the company used the ever-reliable conjoint analysis method of research.

The scholarly article “Thirty Years of Conjoint Analysis: Reflections and Prospects” by Paul Green, Abba Krieger and Yoram Wind discusses the use of conjoint analysis in an abundance of studies over the past 30 years. One of the studies the article focuses on is the research done prior to the development and implementation of the E-ZPass system. E-ZPass has been in the works for about 12 years now; the market research began in 1992. Two states, New Jersey and New York, conducted conjoint analysis research with a sample size of about 3,000 to gauge the potential of the system. Seven attributes were used in the study, including number of lanes available, tag acquisition, cost, toll prices, invoicing and other potential uses of the transponder. Once the respondents’ data was collected, it was analyzed in total and by region and facility. The study yielded an estimated 49% usage rate, while the actual usage rate seven years later was a close 44%. While neither percentage was extremely high, the researchers expected the usage rate to continue to increase.

Green, Krieger and Wind make a fair point in their article when they say that conjoint analysis has the ability “to lead to actionable findings that provide customer-driven design features and consumer-usage or sales forecasts”. This study serves as a great example in support of that statement, given how close the projected usage rate ended up being to the actual usage rate. An abundance of the studies we execute here at TRC use conjoint analysis because of its dependable predictive nature. Whether clients are looking to bring a new product or service to market, or to improve an existing product or service, conjoint analysis provides them with direction for a successful plan.

...

Truth or Research


I read an interesting story about a survey done to determine whether people are honest with pollsters. Of course, such a study is flawed by definition (how can we be sure that those who say they always tell the truth are not lying?). Still, the results do back up what I’ve long suspected…getting at the truth in a survey is hard.

The study indicates that most people claim to be honest, even about very personal things (like finances). Younger people, however, are less likely to be honest with survey takers than others are. As noted above, I suspect that if anything, the results understate the potential problem.

To be clear, I don’t think that people are just being dishonest for the sake of being dishonest…I think it flows from a few factors.

First, some questions are too personal to answer, even on a web survey. With all the stories of personal financial data being stolen or compromising pictures being hacked, it shouldn’t surprise us that some people might not want to answer some kinds of questions. We should really think about that as we design questions. For example, while it might be easy to ask for a lot of detail, we might not always need it (income ranges, for example). To the extent we do need it, finding ways to build credibility with the respondent is critical.

Second, some questions might create a conflict between what people want to believe about themselves and the truth. People might want to think of themselves as “outgoing,” and so if you ask them they might say they are. But their behavior might not line up with that self-image. The simple solution is to ask questions related to behavior without ascribing a term like “outgoing”. Of course, it is always worth asking directly as well (knowing the self-image AND the behavior could make for interesting segmentation variables, for example).

...

My daughter was performing in The Music Man this summer, and after seeing the show a number of times, I realized it speaks to the perils of poor planning…in forming a boys’ band and in conducting complex research.

For those of you who have not seen it, the show is about a con artist who gets a town to buy instruments and uniforms for a boys’ band, promising that he’ll teach them all how to play. When the townspeople discover he is a fraud, they threaten to tar and feather him, but (spoiler alert) his girlfriend gets the boys together to march into town and play. Despite the fact that they are awful, the parents can’t help but be proud, and everyone lives happily ever after.

It is to some extent another example of how good we are at rationalizing. The parents wanted the band to be good, and so they convinced themselves that it was. The same thing can happen with research…everyone wants to believe the results, so they do…even when perhaps they should not.

I’ve spent my career talking about how important it is to know where your data have been. Bias introduced by poor interviewers, poorly written scripts, unrepresentative samples and so on will impact results, AND yet these flawed data will still produce cross tabs and analytics. Rarely will they be so far off that the results can be dismissed out of hand.

The problem only gets worse when using advanced methods. A poorly designed conjoint will still produce results. Again, more often than not these results will be such that the great rationalization ability of humans will make them seem reasonable.

...

While there is so much bad news in the world of late, here in Philly we’ve been captivated by the success of the Taney Dragons in the Little League World Series. While the team was sadly eliminated, they continue to dominate the local news. It got me thinking about what it is that makes a story like theirs so compelling and of course, how we could employ research to sort it out.

There are any number of reasons why the story is so engrossing (especially here in Philly). Is it the star player Mo’ne Davis, the most successful girl ever to compete in the Little League World Series? Is it the fact that the Phillies are doing so poorly this year? Or do we just like seeing a team of kids from various ethnicities and socio-economic levels working together and achieving success? Of course, it might also be that we are tired of bad news and enjoy having something positive to focus on (even in defeat the team fought hard and exhibited tremendous sportsmanship).

The easiest thing to do is to simply ask people why they find the story compelling. This might get at the truth, but it is also possible that people will not be totally honest (for example, a disgruntled Phillies fan might not want to admit that the Phillies’ poor season is part of the appeal), or that they don’t really know what has drawn them in. It might also identify the most important factor while overlooking other critical factors.

We could employ a technique like Max-Diff and ask them to choose which features of the story they find most compelling. This would provide a fuller picture, but is still open to the kinds of biases noted above.

Perhaps the best method would be to use a discrete choice approach. We take all the features of the story and either include them or don’t include them in a “story description” then ask people which story they would most likely read. We can then use analytics on the back end to sort out what really drove the decision.  

...
