We love Max-Diff! It is the industry gold standard for feature prioritization, and with good reason: journals, articles and white papers have documented countless times how it outperforms typical Likert rating scales. The nature of the task forces respondents to make trade-offs among subsets of items, choosing the “best” and “worst” item within each group. After some modeling, the items are typically scored on a relative scale from 0 to 100, where both the rank order and the distances between items are observed. And unlike rating scales, where scores tend to cluster at the high end, Max-Diff produces a nice spread of scores that clearly indicates which items are relatively superior.
But how do we know that the winning items are actually appealing to respondents, and not just the best of a set of bad options? Max-Diff scores are relative, meaning they only compare the items to each other; they tell us nothing about an item’s absolute appeal.
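To make the relative 0-to-100 scaling concrete, here is a minimal sketch of count-based Max-Diff scoring. This is an illustrative simplification, not how production studies are typically analyzed (those usually fit a hierarchical Bayes logit model); the function name, tallies, and item labels are all hypothetical.

```python
# Minimal sketch of count-based Max-Diff scoring (an assumption for
# illustration; real studies usually estimate utilities with a
# hierarchical Bayes logit model rather than raw counts).

def maxdiff_scores(best_counts, worst_counts, times_shown):
    """Return {item: score} rescaled so the minimum is 0 and the maximum 100."""
    # Raw score: net "best" picks as a share of times the item appeared.
    raw = {
        item: (best_counts[item] - worst_counts[item]) / times_shown[item]
        for item in best_counts
    }
    lo, hi = min(raw.values()), max(raw.values())
    # Rescale to a 0-100 range. Note the scores are purely relative:
    # the bottom item always lands at 0 and the top item at 100,
    # regardless of whether any of the items is absolutely appealing.
    return {item: 100 * (v - lo) / (hi - lo) for item, v in raw.items()}

# Hypothetical tallies for three chip flavors, each shown in 50 tasks:
best = {"A": 30, "B": 22, "C": 5}
worst = {"A": 3, "B": 6, "C": 28}
shown = {"A": 50, "B": 50, "C": 50}

scores = maxdiff_scores(best, worst, shown)
```

The rescaling step is exactly why the question above matters: even if respondents dislike every flavor tested, one of them will still score 100.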
Luckily, we have a couple options.
Suppose a potato chip manufacturer wants to test 10 new flavors, and we run a Max-Diff exercise to get the order of preference. From the figure below, we see flavor A leading the pack, with flavors B and C not far behind and the rest further down.
We’ve been on the GRIT list of most innovative research companies for five years now. I’m proud of that achievement and of the fact that we’ve moved up 10 places in those five years (many much larger firms rank lower or not at all). I think the key point for me to share though is that we don’t innovate to make the GRIT list, but rather GRIT simply recognizes what is a way of life at TRC.
TRC was founded in 1987, at a time when more than half of all phone interviews were done with hard-copy paper-and-pencil forms and almost no one had a PC on their desk. From the start, every TRC employee had a PC on their desk, from interviewers through our top executives. To do this, we installed what was, at the time, the largest PC network in the world (PC World Magazine wrote an article on us). From there we adopted digital recording technology so we could quantify quality, and then became very early adopters of internet surveys.
Beyond data collection, we innovated in techniques. Over the years we created methods like asymmetrical key driver analysis (which doesn’t assume that every feature has the same positive and negative impact) and Bracket (a more efficient way of doing ranking exercises). We also applied things we learned from our many academic partners, such as Smart Incentives (a gamified, incentive-aligned method for ideation within quantitative surveys).
We continue to come up with new ways of driving insights. Some are improvements on existing methods (such as better ways to do Discrete Choice), and some apply new tools to better understand what drives consumer behavior (such as text analytics)....
Folks isolating at home during the COVID-19 pandemic are looking for inexpensive family-friendly ways to entertain themselves. Jigsaw puzzles seem to be fitting that bill, and my family has been doing them since before the shut-down began.
At my house, as we’ve gotten better at doing them, we’ve also gotten more particular about which puzzles to buy. Subject matter, the size and number of the pieces, the construction material, border type and repetitiveness of the patterns all factor into our decision about which puzzles to tackle. We won’t attempt something all in one color palette or one with rounded edges (that grayscale Moon puzzle circulating social media is a definite NO). But we also don’t want to waste our time on something that is too easy or has juvenile subject matter.
As I’m dreaming of the perfect puzzle, I can easily see how a manufacturer could use conjoint to help determine which types of puzzles to design. Puzzle-buying consumers could trade off puzzle features and price, perhaps even bundling some puzzles together. Suggestions for puzzle subject matter could be generated through a crowdsourcing-style research exercise, such as our Idea Mill™ agile product. The 6 to 36 designs with the most promise could then be winnowed down in an Idea Magnet™ feature prioritization exercise.
So now that I have the entire research program laid out, I just need a jigsaw puzzle company to embrace my research plan and quickly – before I run out of puzzles!