In his opus Thinking, Fast and Slow, Nobel laureate Daniel Kahneman relates a story from early in his career, when he was leading a team to develop a curriculum and write a textbook on judgment and decision-making for high schools. He had assembled a group of experts, and after working diligently for a year they had completed an outline of the syllabus and written two chapters. One fine day, while discussing procedures for estimating uncertain quantities, it occurred to him that he should get an estimate from everyone on how long the whole project would take. Being the clever psychologist that he was, rather than ask the group to guess publicly, he asked each person to make a confidential prediction. The mean was about two years, with a range of about half a year on either side. In other words, the group was very consistent in its prediction.
Then Kahneman had the idea of asking the curriculum expert in the group, Seymour Fox, for his specific opinion. Only this time he asked Seymour to think about other teams like theirs and estimate how long it had taken them to finish. After a long silence, the astonishing answer came out: nearly half the groups never finished the project at all. Among those that did, the average time taken was about seven years! Seymour also estimated that this group was slightly below average in the skills it possessed compared with those other groups. The killer, of course, was how long it actually took Kahneman's group to complete its project. Eight years!
Effectively, a group of experts in judgment and decision-making had fooled themselves into thinking far too optimistically about the future and had made their predictions accordingly. This included the expert who, despite having the best information available, ignored it in favor of an optimism bias. As Kahneman graciously adds, it also included a leader who did not pull the plug on a project that would likely take another six years and was a coin toss as to whether it would be completed at all.
The biggest lesson Kahneman draws from this episode is that there are two approaches to forecasting, which he labels the inside view and the outside view. The inside view is when we focus on the specifics of our own situation, try to form a coherent story, and somehow convince ourselves that, given the "special" nature of our situation, success is just around the corner. In some ways this probably explains the enormously high failure rates of new products and the only slightly lower failure rates of new small businesses. The outside view is one that takes into account the general success and failure rates of the reference class of comparable cases. Assuming the reference class is properly chosen, the outside view should provide a good ballpark for the estimate. In practice it is better to start there and then adjust using the special knowledge of the inside view, thus avoiding embarrassing predictions. Not following this kind of procedure is why we routinely read about, say, large transportation projects running years late and costing several times the original projection. It is also why kitchen renovations routinely cost the average household twice the initial estimate.
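The procedure above — anchor on the reference class, then adjust modestly for inside knowledge — can be sketched in a few lines. This is a minimal illustration, not anything from Kahneman; all the numbers below are hypothetical.

```python
# A minimal sketch of outside-view ("reference class") forecasting.
# All figures are hypothetical, for illustration only.

def outside_view_estimate(reference_durations, inside_adjustment=0.0):
    """Start from the reference class, then adjust with inside knowledge.

    reference_durations: completion times (years) of comparable past projects
    inside_adjustment: small correction (years) justified by project specifics
    """
    baseline = sum(reference_durations) / len(reference_durations)
    return baseline + inside_adjustment

# Comparable projects (hypothetical) took about seven years on average.
reference = [6.0, 7.5, 8.0, 6.5, 7.0]

# The inside view alone said "two years"; anchoring on the reference
# class and adjusting modestly gives a far more sober forecast.
print(outside_view_estimate(reference, inside_adjustment=-0.5))  # → 6.5
```

The key design point is the order of operations: the base rate comes first and the adjustment is deliberately small, so a compelling inside-view story cannot drag the forecast back down to two years.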
So are there specific lessons for market researchers? Of course. One concerns the likelihood of success of any new technological advance (mobile, neuro, text analytics, social media monitoring, whatever). Without understanding the reference information on how such new technologies ultimately fare, we can too easily get caught up in the fanciful nature of a specific technology and make prognostications not just about success but also about the time frames within which such things can come true. On the flip side, the death of older technologies can be too gleefully forecast ("Surveys will die in a year!") because of the glamour of newer techniques, if the reference cases are not carefully analyzed...