Conjoint Analysis Primer - Why, What and How
By Rajan Sambandam, PhD, TRC's Chief Research Officer
Let’s say your company wants to launch a new product and your job is to understand how it should be designed – that is, you want to know what consumers will value. The simplest approach is just asking consumers what they want (direct elicitation). Let’s first look at why this may not be the best approach, then consider what would be better, and how we can achieve that.
In the most straightforward approach, respondents could be asked how much they value the components of a product – that is, how much they like the features (benefits) and how much they are willing to pay (costs). But focusing on individual features is not only tedious but also uninformative, as the way such evaluations are obtained (say, through importance rating scales) is unlikely to provide adequate discrimination.
What about asking people about their intention to purchase a product and inferring from that how much they like it? This gets more directly at the buying decision (rather than focusing on feature appeal), but does not provide any information on which specific features are valued. And we can only know the reaction to the presented product, not to any other variation that may interest consumers.
A common way to overcome this problem is what is known as a monadic design or A/B testing (more formally, a between-subjects design in experimental design terminology). Two groups of respondents (similar in every way) can be shown a product that varies in only one respect. Their differing responses can inform us about the impact of that feature along with the attractiveness of the product overall. But more groups will be needed to test other variations, and this can quickly rise to impractical levels.
There is also the issue of what we are asking the respondents to do (i.e. asking for purchase intentions as a scaled response). This is inherently less realistic than what consumers actually do in a market (i.e. choose, rather than express product preference on a scale).
So, what we need is an approach that is practical, effective and realistic. Enter conjoint analysis.
Let’s say you have a product (chocolate) that can be described with two features: intensity (dark or milk) and filling (almonds or plain). Since there are four combinations in total, a monadic design will need four cells. As more features are included to better describe the product, the number of cells needed keeps increasing, making a monadic approach impractical.
But what if we could ask more than one question of each respondent? In the simple two-feature example, each respondent could be asked about their preference for each of the four combinations and we could simply tally up the combination that scores best. Let’s say the dark chocolate with almonds is the most preferred. We still don’t know if it is love of dark chocolate or almonds that is driving this preference. By looking at the ranking of all four combinations, we might be able to make some deductions (say, the top two are dark chocolate). Now we have something that is practical (uses only one cell of respondents), gives us good information on product preference and some reasonable information on feature importance. In experimental design language this would be called a repeated-measures design. By appropriately asking each person to respond to multiple product offerings, we could derive what they value.
Now consider what happens if we wanted to test five features, each with two variations. The total number of combinations increases significantly (to 32), making it a much harder task for each respondent. If we increased the number of features, or the variations in each one, this can become completely impractical. Wouldn’t it be better if we could ask just a subset of all possible combinations and still derive the information we want?
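The combinatorial growth described above is easy to see in a few lines of Python. The sketch below uses hypothetical feature names (the chocolate attributes are illustrative, not from any real study) and simply enumerates every full-profile product:

```python
from itertools import product

# Hypothetical features, each with two levels (names are illustrative).
features = {
    "intensity": ["dark", "milk"],
    "filling": ["almonds", "plain"],
    "brand": ["Brand A", "Brand B"],
    "size": ["small bar", "large bar"],
    "price": ["$2", "$3"],
}

# Every possible product profile: the Cartesian product of all level lists.
combinations = list(product(*features.values()))
print(len(combinations))  # 2**5 = 32 distinct products
```

Adding one more two-level feature doubles the count to 64; a feature with three levels multiplies it by three. This is why asking each respondent about every combination quickly becomes infeasible.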
That is essentially what conjoint analysis does.
The easiest way to understand how conjoint works is to think in terms of frequencies. Let’s say dark chocolate was shown ten times and picked eight times, while milk chocolate was picked two out of ten times. All else being equal, we can say that dark chocolate is preferred about four times as much as milk chocolate. But is all else equal? Yes – if we set up the design properly. Conjoint uses an experimental framework to develop product combinations such that every level of every feature appears roughly an equal number of times. Therefore, though dark chocolate would sometimes appear in otherwise attractive products, other times it would appear in unattractive ones – and similarly for milk chocolate. The variations cancel each other out such that the all-else-is-equal standard can be met.
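The frequency logic above can be sketched directly. The tallies here are the hypothetical ones from the example (each level shown ten times in a balanced design):

```python
# Hypothetical tallies from a balanced design: each level shown 10 times.
shows = {"dark": 10, "milk": 10}
picks = {"dark": 8, "milk": 2}

# Pick rate per level: how often a level was chosen when it appeared.
pick_rate = {level: picks[level] / shows[level] for level in shows}

# Relative preference: because the design balances everything else,
# the ratio of pick rates is a fair comparison of the two levels.
ratio = pick_rate["dark"] / pick_rate["milk"]
print(pick_rate)  # {'dark': 0.8, 'milk': 0.2}
print(ratio)      # 4.0 -> dark preferred about four times as much
```

Real conjoint estimation uses regression-based models rather than raw counts, but the balanced design is what makes even this simple tally meaningful.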
In a typical conjoint task a respondent is exposed to products with varying levels of features – say, flavors, fillings, brand and price. Not every level may be attractive to a given respondent. For example, a respondent may like the chocolate flavor and the nut filling, could be indifferent about the brand, while considering the price to be too high. By forcing the respondents to make trade-offs between the different features, we are able to understand what really matters to a given respondent. Across a group of respondents the same logic helps identify the features that are attractive for the market as a whole.
This approach is more realistic and quite different from the direct approaches described earlier. But there is room for further realism. What if instead of asking for the evaluation of a single product, we provide multiple products and ask respondents to choose the one they are most likely to buy? By framing the question as a choice (i.e. purchase decision), we can get more realistic feedback, while also reducing the focus on individual features.
This is the approach used in a version of conjoint called discrete choice, currently the most popular way to implement this technique. Discrete choice conjoint also has another special feature that makes it even better – the ability to include a “None” option. That is, a respondent is asked to choose the product that she is most likely to buy, but if none make the cut, then she can choose a no-buy option.
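One way to picture the “None” option is as a no-buy alternative with its own utility threshold. The sketch below is a hypothetical first-choice rule (the utility values are illustrative, not estimated from data): the respondent picks the highest-utility product, but only if it beats the utility of not buying at all.

```python
# Illustrative utility of the no-buy ("None") option.
NONE_UTILITY = 1.0

def choose(product_utilities):
    """First-choice rule with a None option.

    Returns the index of the best product, or None if even the best
    product falls short of the no-buy threshold.
    """
    best = max(range(len(product_utilities)), key=lambda i: product_utilities[i])
    return best if product_utilities[best] > NONE_UTILITY else None

print(choose([0.8, 0.6]))  # None -> neither product makes the cut
print(choose([1.6, 0.6]))  # 0 -> the first product is bought
```

Because respondents can walk away, the shares that come out of such a model reflect not just which product wins, but whether any product is bought at all – which is what makes the None option useful for gauging market expansion.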
This is, of course, what happens in a real (non-monopolistic) market and can provide more accurate share information. It is also very useful for providing information on market expansion potential when features that are new to a market are tested.
The usefulness of conjoint analysis does not end with the collection of accurate preference information. Once the preferences (known as utility scores or part-worth values) of individual respondents are known, we can predict what will happen through a process of simulation. Let’s think about it this way.
Say we asked respondents directly about their willingness to purchase a product described by certain features and price. Their response is static in the sense that we will know (given the usual qualifiers about accuracy of survey research) their reaction to that specific product and only that product. How would they react if a feature was changed, or price increased? There is no further information available because the features of the product have not been varied. Instead, let’s say that we used two groups of respondents, and both saw the same product but with different prices. Now we know how the market values the product at different prices (i.e., price sensitivity). This was made possible by varying the price. Similarly, varying other features can tell us about the preferences for those features.
In a conjoint design, every feature has variations (called levels). Since we know the preference of every respondent for every feature level (i.e., utility score or part-worth), we can create any product we desire and get a good understanding of how it will perform in that market. So, we could create an almond-filled dark chocolate product (priced higher) and a plain milk chocolate product (priced lower) and easily calculate how their preference shares will fall out. If the share of the milk chocolate product was low, we can drop the price and see how much more attractive it becomes. All of these scenarios can be run in a market simulator that uses the individual feature-level preferences as its input. In that sense, conjoint results are dynamic. No other research approach provides this type of simulation capability, which partly explains the popularity of conjoint analysis.
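The simulation step can be sketched in a few lines. The part-worths below are invented for illustration (real values would come from the estimation step), and the share rule is a simple share-of-preference (logit) calculation – one of several rules simulators use:

```python
from math import exp

# Hypothetical part-worth utilities (illustrative values, not real estimates).
partworths = {
    "intensity": {"dark": 0.9, "milk": 0.1},
    "filling": {"almonds": 0.6, "plain": 0.2},
    "price": {"$2": 0.5, "$3": 0.1},
}

def utility(product):
    """Total utility of a product = sum of the part-worths of its levels."""
    return sum(partworths[attr][level] for attr, level in product.items())

def shares(products):
    """Share of preference via a logit rule: exp(u) / sum of exp(u)."""
    utils = [utility(p) for p in products]
    total = sum(exp(u) for u in utils)
    return [exp(u) / total for u in utils]

# Scenario: premium dark/almond bar vs. cheaper plain milk bar.
scenario = [
    {"intensity": "dark", "filling": "almonds", "price": "$3"},
    {"intensity": "milk", "filling": "plain", "price": "$2"},
]
for prod, share in zip(scenario, shares(scenario)):
    print(prod, round(share, 2))
```

Changing the milk chocolate bar’s price level and recomputing the shares is exactly the kind of what-if question the simulator answers – no new data collection required.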
So, there you have it – a primer on conjoint analysis.