Looking Back Vs. Forward: When You Should Consider Conjoint Over Key Driver Analysis
By Rajan Sambandam, PhD, TRC's Chief Research Officer
Collecting and analyzing customer satisfaction data is a widespread practice among companies. What purpose does it serve? Generally, companies use it to understand how satisfied their customers are and which areas need improvement. But in practice we sometimes see companies extrapolating the findings of customer satisfaction studies into areas of decision-making where it is not appropriate to do so. In such situations, sub-optimal decisions can result because the wrong framework was used to determine what type of data to collect. In this article we will look at a specific situation and examine why choosing the correct framework matters.
First, let’s take a look at what happens in a typical customer satisfaction study. The survey collects information on a variety of attributes along with a few overall questions. For example, a health insurance company may be interested in how doctors perceive it on a variety of attributes (such as Speed of Reimbursement, Clinical Autonomy, etc.), and so it measures the doctors’ satisfaction scores on all of these factors. Along with this, the doctors’ overall satisfaction with the company is also measured. A fairly standard way to analyze the data is to run a key driver analysis.
Usually this takes the form of a multiple regression model in which the attributes are treated as independent variables and overall satisfaction is the dependent variable. Of course, other analytical approaches are also possible. One useful approach is path analysis, where chains of causal models are developed to more clearly isolate the actionable attributes. Whatever the analysis, the importance scores (also called beta weights) indicate the strength of each attribute’s impact on overall satisfaction. Thus it may be possible to say that Speed of Reimbursement is the biggest driver of doctor satisfaction with the health insurance company, Clinical Autonomy is the second most important factor, and so on. This analysis is industry-agnostic and can be applied pretty much everywhere.
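As a concrete sketch of what such a key driver model looks like, here is a minimal, self-contained Python illustration using simulated survey data. The attribute names, sample size, and "true" weights are hypothetical inventions for this example, and the ordinary least squares fit is coded by hand purely for transparency; in practice one would use standard statistical software.

```python
import random

random.seed(42)

# Hypothetical data: 200 doctors rate two attributes (1-10 scales).
# In this simulation, Speed of Reimbursement is deliberately made the
# stronger driver of overall satisfaction than Clinical Autonomy.
n = 200
speed = [random.uniform(1, 10) for _ in range(n)]
autonomy = [random.uniform(1, 10) for _ in range(n)]
overall = [0.6 * s + 0.3 * a + random.gauss(0, 1)
           for s, a in zip(speed, autonomy)]

def ols(xs_cols, y):
    """Fit y = b0 + b1*x1 + ... by ordinary least squares,
    solving the normal equations (X'X)b = X'y by Gaussian elimination."""
    X = [[1.0] + [col[i] for col in xs_cols] for i in range(len(y))]
    k = len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(len(y)))
            for j in range(k)] for i in range(k)]
    Xty = [sum(X[r][i] * y[r] for r in range(len(y))) for i in range(k)]
    A = [XtX[i][:] + [Xty[i]] for i in range(k)]       # augmented matrix
    for col in range(k):                               # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    b = [0.0] * k                                      # back substitution
    for i in reversed(range(k)):
        b[i] = (A[i][k] - sum(A[i][j] * b[j]
                              for j in range(i + 1, k))) / A[i][i]
    return b

b0, b_speed, b_autonomy = ols([speed, autonomy], overall)
print(f"Speed of Reimbursement weight: {b_speed:.2f}")
print(f"Clinical Autonomy weight:      {b_autonomy:.2f}")
```

The recovered weights rank the attributes by their impact on overall satisfaction, which is exactly the interpretation a key driver analysis provides.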
By itself there is nothing wrong with the analysis or the interpretation. Regression analysis is a long-used and robust approach, and the interpretation of the importance weights is quite standard. The problem arises when companies try to extrapolate beyond the results. Consider a situation where the health insurance company is interested in taking action that would encourage more doctors to use it. Looking at the key driver analysis results seems like a logical way to approach this problem. If the satisfaction of current doctors is driven by certain factors, does it not stand to reason that those same factors are instrumental in getting more doctors to use the company?
Yes, to some extent, but there are flaws in this argument.
It is a well-established finding in research that there is a gap between attitude and behavior. The model has identified only the impact of the factors on the doctors’ overall satisfaction, not how their future behavior will be affected. Generally, satisfaction has a positive relationship with future behavior, but that relationship is far from perfect, or even completely reliable. This is especially the case if the satisfaction ratings were somewhat suspect to begin with – and several factors could have made them so.
As is often the case, the survey may have been long and tedious, and the doctors may not have been engaged. Or the doctors could be exhibiting scale response tendencies that produce patterns of answers that do not truly reflect their satisfaction. In any case, it is hard to state categorically that the satisfaction scale is an unbiased measure of their true satisfaction, given the flaws inherent in today’s surveys. So, if the satisfaction measure itself has some uncertainty surrounding it, and it is not very well correlated with future behavior, how useful are the key drivers in helping the company understand what it should be doing? The problem is not really mitigated even if an alternative metric such as Likelihood to Recommend is used (as in NPS studies) instead of satisfaction.
A further problem is that satisfaction studies, by definition, are confined to current customers and therefore do not include prospects. This is an important distinction. If there is a systematic difference between doctors who are currently using the health insurance company and those who are not, then the key driver analysis would have nothing at all to say about it, since prospect satisfaction cannot be measured.
Finally, a more fundamental problem is that, by its nature, key driver analysis of satisfaction data is a backward-looking approach. Respondents are asked to look back and evaluate their satisfaction with something they have experienced in the past. The analysis tries to understand what factors may have affected their satisfaction based on their evaluations of specific factors – which are also backward-looking. In contrast, what the company wants to do in this situation is to predict or change future behavior. We know from research that the factors that are important in choosing a brand are not necessarily the same as those that drive satisfaction with it.
If a company is interested in influencing future behavior, then the more appropriate approach is one that is forward-looking rather than backward-looking. That is, what if the company asked the doctors how they would make future decisions, rather than trying to glean their intentions from backward-looking analyses? Conjoint analysis can be very useful here, as it is perfectly set up for such a situation. In conjoint analysis (specifically discrete choice conjoint), respondents evaluate products or scenarios and make choices. The choices are not easy – they are set up so that the respondent has to make trade-offs. For example, would she choose a higher quality product at a higher price, or a lower quality product at a lower price? The answer is not obvious – but the choices the respondent makes tell us whether she values quality or price more.
In the health insurance example, we would construct scenarios for the doctors to choose from. Would they prefer a health insurance company that reimburses quickly but provides less clinical autonomy than others? Or would they prefer the opposite? Perhaps for smaller practices cash flow is paramount, and they would take faster payment over more autonomy. Either way, the conjoint analysis does exactly what we need. It places doctors in a realistic situation (this is, after all, how they make these decisions in real life) and forces them to think and make trade-offs that reveal their true preferences. This is quite different from the typical satisfaction survey, where respondent attitudes are measured with stand-alone rating scales.
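To make the mechanics concrete, here is a minimal sketch of how such choice data could be simulated and analyzed with a simple binary logit model. The attribute names, preference weights, and number of choice tasks are all hypothetical, and the hand-coded gradient ascent stands in for the specialized experimental designs and estimation software (e.g., hierarchical Bayes) used in real conjoint studies.

```python
import math
import random

random.seed(7)

# Hypothetical "true" preference weights driving doctors' choices:
# fast reimbursement matters more than high clinical autonomy.
TRUE_W = {"fast_reimbursement": 1.2, "high_autonomy": 0.6}
ATTRS = list(TRUE_W)

def utility(profile, w):
    return sum(w[k] * profile[k] for k in ATTRS)

# Simulate 1,000 choice tasks: each pairs two random insurer profiles
# (attributes present = 1, absent = 0) and records which was chosen,
# using the logit rule P(choose a) = sigmoid(U(a) - U(b)).
tasks = []
for _ in range(1000):
    a = {k: random.randint(0, 1) for k in ATTRS}
    b = {k: random.randint(0, 1) for k in ATTRS}
    p_a = 1 / (1 + math.exp(-(utility(a, TRUE_W) - utility(b, TRUE_W))))
    tasks.append((a, b, 1 if random.random() < p_a else 0))

# Recover the weights by maximum likelihood: gradient ascent on the
# binary-logit log-likelihood of the observed choices.
w = {k: 0.0 for k in ATTRS}
lr = 1.0
for _ in range(200):
    grad = {k: 0.0 for k in ATTRS}
    for a, b, choice in tasks:
        diff = {k: a[k] - b[k] for k in ATTRS}
        p = 1 / (1 + math.exp(-utility(diff, w)))
        for k in ATTRS:
            grad[k] += (choice - p) * diff[k]
    for k in ATTRS:
        w[k] += lr * grad[k] / len(tasks)

print({k: round(v, 2) for k, v in w.items()})
```

The estimated weights are forward-looking in exactly the sense the article describes: they are inferred from trade-offs respondents actually make, not from retrospective ratings.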
Of course, conjoint is still conducted through a survey, and hence some of the critiques of satisfaction research apply here too. If the survey is too long, respondents may not be engaged and may provide rote answers. If the conjoint has too many features and levels (especially prices), it may be confusing, and respondents may resort to shortcuts they otherwise would not use. We have to be cognizant of these and other pitfalls to ensure that we develop a good conjoint survey that gets us the information we need.
But assuming one does this well, a conjoint-style forward-looking approach is a better way to understand future choices than a key driver-style backward-looking approach.