Conjoint Analysis vs Self-Explicated Method: A Comparison

Preference measurement can be approached in two ways: compositionally or de-compositionally. A compositional approach is “bottom-up”: feature importance is ascertained first and then used to build product attractiveness scores. A de-compositional approach is “top-down”: overall evaluations of a product are decomposed to infer feature importance. Conjoint Analysis (CA) is generally a de-compositional approach, whereas the Self-Explicated Method (SEM) is an example of a compositional approach.

While CA has received considerable attention in the literature and is used often by practitioners, SEM is seldom used. This is in spite of academic studies showing that SEM can be as good as conjoint in some cases and may even be preferable in specific situations. The objective of this paper is to conduct a split-sample study comparing the results of the two methods, in order to demonstrate the application of SEM and assess its effectiveness relative to CA.

SEM is much easier than CA to design and analyze. As with CA, we start by defining the features and levels we are interested in studying. But unlike in CA, product profiles are not constructed. Instead, survey respondents are presented with the features individually and asked for their evaluations. Specifically, the levels of each feature are presented first and respondents evaluate their desirability. For example, if gas mileage were a feature in an auto study, with 20mpg, 25mpg, and 30mpg as its three levels, respondents would be asked to evaluate the desirability of those three levels on a scale. There are at least two ways of doing this. One is to ask for a straight desirability rating on a scale of, say, 0-10. Another is to ask for the most desirable level and assign it a value of 10, ask for the least desirable level and assign it a value of 0, and then have the remaining levels assigned appropriate values in between.
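As a sketch of how the second, anchored variant relates to raw ratings, the snippet below linearly rescales a respondent's 0-10 desirability ratings so the best level sits at 10 and the worst at 0. The feature, levels, and numbers are hypothetical, and the linear interpolation is only one plausible way to place the in-between levels:

```python
def anchor_ratings(raw):
    """Rescale raw 0-10 desirability ratings so the most desirable
    level maps to 10, the least desirable to 0, and the remaining
    levels fall proportionally in between."""
    lo, hi = min(raw.values()), max(raw.values())
    if hi == lo:  # respondent rated every level the same
        return {level: 5.0 for level in raw}
    return {level: 10 * (r - lo) / (hi - lo) for level, r in raw.items()}

# Hypothetical raw ratings for the gas-mileage feature above
raw = {"20mpg": 3, "25mpg": 6, "30mpg": 9}
print(anchor_ratings(raw))  # {'20mpg': 0.0, '25mpg': 5.0, '30mpg': 10.0}
```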

Once the desirability scores are assigned to the various levels, respondents are asked to evaluate the importance of the features. This can again be done in at least two different ways. Respondents could rate features on regular importance scales (say, 0-10). Alternatively, they could use a constant sum scale, allocating 100 points across the features in accordance with their importance. Since the constant sum scale has a built-in trade-off, the importance scores it produces are likely to be more accurate. Once the level desirability and feature importance scores are obtained, simple multiplication of the two produces a utility score for every level of every feature. Thus, levels that are desirable and occur in important features will have higher utility scores, while those that occur in less important features will have appropriately lower scores.
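The scoring step itself reduces to one multiplication per level. A minimal sketch, assuming 0-10 desirabilities and 100-point constant-sum importances; the features, levels, and numbers are hypothetical:

```python
# SEM scoring: utility = level desirability x feature importance
desirability = {                      # 0-10 desirability per level
    "gas_mileage": {"20mpg": 0, "25mpg": 5, "30mpg": 10},
    "price":       {"$30k": 10, "$35k": 4, "$40k": 0},
}
importance = {"gas_mileage": 40, "price": 60}  # constant sum: 100 points

utilities = {
    feature: {level: d * importance[feature] / 100
              for level, d in levels.items()}
    for feature, levels in desirability.items()
}
# Desirable levels of important features score highest:
# {'gas_mileage': {'20mpg': 0.0, '25mpg': 2.0, '30mpg': 4.0},
#  'price':       {'$30k': 6.0, '$35k': 2.4, '$40k': 0.0}}
```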

In practice, the utility scores obtained using SEM are similar to those obtained through CA, even though the latter are derived using a much more complicated process. SEM utilities are available at the individual respondent level and can therefore be used for simulations or follow-up segmentation. If SEM is so simple and straightforward to use, why is it not more popular?
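As an illustration of the simulation point, individual-level utilities make a first-choice market simulator only a few lines long. The sketch below assumes utilities shaped like those computed above; the products, respondents, and numbers are all hypothetical:

```python
from collections import Counter

def product_utility(utils, profile):
    """Total utility of a product = sum of the utilities of its levels."""
    return sum(utils[feature][level] for feature, level in profile.items())

def first_choice_shares(respondent_utils, products):
    """Each respondent 'buys' the product with the highest total utility."""
    wins = Counter(
        max(products, key=lambda name: product_utility(utils, products[name]))
        for utils in respondent_utils
    )
    return {name: wins[name] / len(respondent_utils) for name in products}

products = {
    "A": {"gas_mileage": "30mpg", "price": "$40k"},
    "B": {"gas_mileage": "20mpg", "price": "$30k"},
}
respondent_utils = [  # per-respondent SEM utilities (hypothetical)
    {"gas_mileage": {"20mpg": 0, "25mpg": 2.0, "30mpg": 4.0},
     "price":       {"$30k": 6.0, "$35k": 2.4, "$40k": 0}},
    {"gas_mileage": {"20mpg": 0, "25mpg": 3.0, "30mpg": 7.0},
     "price":       {"$30k": 3.0, "$35k": 1.0, "$40k": 0}},
]
print(first_choice_shares(respondent_utils, products))  # {'A': 0.5, 'B': 0.5}
```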

There are at least a few issues that may affect the results of SEM and therefore need to be considered before implementation. Respondents approach the task feature by feature, so the whole-product perspective that CA provides is missing. The whole product is what a consumer sees in the marketplace, and hence it could be argued that CA is more realistic. On the other hand, an advantage of SEM is that a large number of features can be included in the study.

Evaluation by feature also means that respondents do not know what features are coming up and hence may give higher (or lower) ratings to levels seen earlier. To avoid this, respondents need to be familiarized with all features and levels before they begin their evaluations. Evaluation by feature may also result in distributions of utilities that are “flatter” than the distribution of utilities from CA. That is, the scores of very important features may be underestimated and those of unimportant features may be overestimated. This happens because each feature is considered and rated in isolation, something that may not actually happen in the marketplace. Such a phenomenon has also been observed in variations of CA where respondents are not asked to evaluate the entire product.

An Example of a Study Comparing Self-Explicated Method vs. Conjoint Analysis

A split-sample design was fielded on a web sample to study the choice of checking account providers. SEM respondents rated level desirability on a 0-10 scale and feature importance on a 100-point constant sum scale. CA respondents completed a discrete choice exercise. The study included eight features, with two to four levels per feature.

As can be seen from the table below, the feature importance scores from SEM are not very different from those obtained using CA. There is some underestimation at the upper end but, by and large, the SEM results are comparable. At the utility score level, too, there is considerable correspondence between the two methods.

A third cell, which also used SEM, had been included in the design. The main difference was that feature importance in the third cell was measured using a standard 0-10 scale instead of a constant sum scale. What is interesting is that the results of the third cell mirror the SEM results reported here almost exactly. This has implications for data collection. CA is almost impossible to administer over the phone, so how can one get CA-like results from a phone study? Based on the results from the third cell, it would seem that an SEM with importance scales could be done over the phone with good results. Of course, it would make sense not to have too many features, too many levels per feature, or lengthy level descriptions.

Features                      Levels                                        Self-Explicated         Conjoint
                                                                            Utilities  Importance   Utilities  Importance
Type of Bank                  National                                          51          8%          -7         11%
                              Regional                                          51                       7
                              Local Community                                   52                       7
                              Credit Union                                      51                      -7
Balance/Fees                  No minimum balance and no monthly fees           168         24%         138         34%
                              No minimum balance and $5-10 monthly fees         34                     -92
                              Minimum balance of $300 and no monthly fees       55                     -47
Online Banking/Bill Pay       No online banking                                 23         21%         -99         24%
                              Free online banking                               45                      26
                              Free online banking and bill pay                 140                      72
Nearest Branch                Close to home                                    152         12%          11          7%
                              Close to work                                    127                      -4
                              Supermarket where you shop                       107                      -7
Branch Hours                  Weekdays 9am-3pm                                  41         14%         -29          9%
                              Weekdays 9am-7pm                                 101                       7
                              Weekdays 9am-3pm/Open Saturday                    81                       6
                              Weekdays 9am-3pm/Open Saturday and Sunday         80                      16
ATM Network                   ATMs at branches only                             45         10%          -7          4%
                              ATMs at branches and other places                 93                       7
Customer Service Reputation   Excellent                                         93          8%          15          7%
                              Good                                              72                       3
                              Average                                           62                     -19
Prior Relationship            Had prior relationship with the bank              54          4%           0          5%
                              No prior relationship                             35                     -10
                              Someone recommends it                             52                      10