

Consumer Insights. Market Innovation.

After the 2016 election I felt the need to both defend the polls and point out how they differ from what we do at TRC…market research. To paraphrase Ronald Reagan…”Here we go again!”
With the surprise victory by Donald Trump in 2016 there was much soul searching in the polling world. For this cycle we were assured that pollsters had learned from the last one. What they determined was that they had undercounted working-class whites, so they fixed that. In most cases, though, they were still off and often outside the margin of error. I think generals call this "fighting the last war."
One issue is that politics can get ugly fast. People may prefer Coke or Pepsi but that preference is unlikely to cause a family rift at Thanksgiving. While it is impossible to prove, it stands to reason that some people might be afraid to tell an interviewer (or even a web survey) the truth about their voting intention. How much this impacts things is hard to say.  
While people are less likely to fear giving their opinion in a market research survey, we still need to be on guard. For example, we know that price laddering studies create bias that doesn’t exist in a monadic design (and is much reduced by the use of things like conjoint analysis). That’s why careful design in research is critical.
In the end though, my argument last time still holds…polls can only take us so far. The country is very close to evenly divided. In an election with the biggest turnout in history and the highest percentage of eligible voters in over 100 years (going back to a time when a much smaller share of the population was eligible, since women were not allowed to vote), it shouldn't surprise us that polls are often "wrong." Many are within the margin of error, mind you (and therefore not wrong), and other differences come down to turnout issues (such as the surprising turnout of white working-class men last time).
When we do market research we face our own "turnout" issues. Even established markets require us to determine things like "likely car buyers in the next six months." We know that intent and reality don't always line up, but they get close enough for our needs. After all, market research is not all or nothing the way voting is. Only one candidate will be the next President, whereas, win or lose, Coke and Pepsi will still sell a lot of soda.
Our "turnout" issues become really challenging in new or rapidly changing markets. For example, if you needed to understand the electronic payments market you would have to figure out who the customers are (new ones start up every day) and who the competition is (not just direct competitors, but also traditional players like credit cards and banks AND new players who might not have handled payments in the past). We've helped clients figure out (and size) markets like these…in essence, made sure that they don't ignore the equivalent of working-class voters in their market (if you'd like to understand how we've successfully done this, let's have a chat… rraquet@trchome.com).
Defining markets like this requires a carefully planned customized approach…and I am speaking from experience as we’ve done this in multiple markets (happy to have a call to discuss how it might help in your marketplace).  
Another difference between market research and polling is we don’t have to face weeks of counting, recounts, court battles and so on. For now, I’m going to focus on helping my clients navigate the weeks ahead. I suggest you all do the same…the politics will take care of itself.
I suppose, given the nature of this blog, that you might want to know where I stand.  Honestly I am really split. I like the slightly sweeter taste of Pepsi, but there is something special about drinking Coke from one of those small bottles right from the fridge. 

We love Max-Diff! It is the industry gold standard for feature prioritization, and with good reason. It has been documented countless times in journals, articles, and white papers how it outperforms typical Likert rating scales. The nature of the task forces respondents to make trade-offs among subsets of items, choosing the "best" and "worst" item within each group. After some modeling, the items are typically scored on a relative scale from 0-100, where both the rank order and the distance from one item to another are observed. And unlike rating scales, whose scores tend to cluster at the high end, Max-Diff produces a nice spread of scores, clearly indicating which items are relatively superior.
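Under the hood, the simplest version of that 0-100 scaling is a best-minus-worst count. (Real studies typically estimate scores with logit or hierarchical Bayes models instead; the function name and toy data below are my own, purely for illustration.)

```python
from collections import Counter

def maxdiff_scores(tasks):
    """Count-based Max-Diff scoring: (times chosen best - times chosen worst)
    divided by times shown, then rescaled so the lowest item is 0 and the
    highest is 100. tasks: list of (items_shown, best_pick, worst_pick)."""
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in tasks:
        shown.update(items)
        best[b] += 1
        worst[w] += 1
    raw = {i: (best[i] - worst[i]) / shown[i] for i in shown}
    lo, hi = min(raw.values()), max(raw.values())
    return {i: round(100 * (r - lo) / (hi - lo), 1) for i, r in raw.items()}

# Toy data: three best-worst tasks over four items
tasks = [
    (["A", "B", "C"], "A", "C"),
    (["A", "B", "D"], "A", "D"),
    (["B", "C", "D"], "B", "D"),
]
print(maxdiff_scores(tasks))  # A tops the scale at 100, D anchors it at 0
```

Note that the output illustrates exactly the point above: the scores are relative, so the top item always lands at 100 no matter how appealing it is in absolute terms.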

But how do we know that the winning items actually appeal to respondents, and aren't just the best of a set of bad options? Max-Diff scores are relative, meaning they only compare the items to each other; they tell us nothing about an item's absolute appeal.

Luckily, we have a couple options.

Two Ways to Control the Relativity of Feature Prioritization

Suppose a potato chip manufacturer wants to test out 10 new flavors and we run a Max-Diff exercise to get the order of preference. From the figure below, we see flavor A is leading the pack, with flavors B & C not far behind, and the rest further down.




We’ve been on the GRIT list of most innovative research companies for five years now. I’m proud of that achievement and of the fact that we’ve moved up 10 places in those five years (many much larger firms rank lower or not at all). I think the key point for me to share though is that we don’t innovate to make the GRIT list, but rather GRIT simply recognizes what is a way of life at TRC. 

TRC was founded in 1987, at a time when more than half of all phone interviews were done with hard-copy paper-and-pencil forms and almost no one had a PC on their desk. From the start, every TRC employee had a PC, from interviewers through our top executives. To do this we installed what was, at the time, the largest PC network in the world (PC World Magazine wrote an article on us). From there we adopted digital recording technology so we could quantify quality, and then became very early adopters of using the internet to do surveys.

Beyond data collection, we innovated in techniques. Over the years we created techniques like asymmetrical key driver analysis (which doesn’t assume that all features will have the same positive and negative impact) and Bracket (a more efficient way of doing ranking exercises). We also applied things that we learned from our many academic partners such as Smart Incentives (a gamified incentive aligned method for ideation within quantitative surveys).

We continue to come up with new ways of driving insights. Some are improvements on existing methods (such as better ways to do Discrete Choice) and some are applying new tools to better understand what drives consumer behavior (such as text analytics).



Folks isolating at home during the COVID-19 pandemic are looking for inexpensive family-friendly ways to entertain themselves. Jigsaw puzzles seem to be fitting that bill, and my family has been doing them since before the shut-down began.

At my house, as we've gotten better at doing them, we've also gotten more particular about which puzzles to buy. Subject matter, the size and number of the pieces, the construction material, the border type and the repetitiveness of the patterns all factor into our decision about which puzzles to tackle. We won't attempt something all in one color palette, nor one with rounded edges (that grayscale Moon puzzle circulating on social media is a definite NO). But we also don't want to waste our time on something that is too easy or has juvenile subject matter.

As I'm dreaming of the perfect puzzle, I can easily see how a manufacturer could use conjoint analysis to help determine the types of puzzles to design. Puzzle-buying consumers could trade off puzzle features and price, perhaps even bundling some puzzles together. Suggestions for puzzle subject matter could be generated through a crowdsourcing-style research exercise, such as our Idea Mill™ agile product. The 6 to 36 designs with the most promise could then be winnowed down in an Idea Magnet™ feature prioritization exercise.
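To make the trade-off idea concrete, here is a toy sketch of a ratings-based conjoint. Everything in it (the attributes, levels, and ratings) is invented for illustration; a real study would use a choice-based design with many more profiles and respondents.

```python
from itertools import product
from statistics import mean

# Hypothetical puzzle attributes and levels (invented for illustration)
attrs = {
    "pieces": ["500", "1000"],
    "price": ["$10", "$20"],
}

# Full-factorial design: every combination of levels becomes one profile
profiles = [dict(zip(attrs, combo)) for combo in product(*attrs.values())]

def part_worths(profiles, ratings):
    """In an orthogonal ratings-based conjoint, a level's part-worth is the
    mean rating of profiles containing that level minus the grand mean."""
    grand = mean(ratings)
    return {
        (attr, lvl): round(mean(r for p, r in zip(profiles, ratings)
                                if p[attr] == lvl) - grand, 2)
        for attr, lvls in attrs.items() for lvl in lvls
    }

# One made-up respondent's 0-10 ratings for the four profiles
ratings = [8, 4, 6, 2]
print(part_worths(profiles, ratings))
```

The sign and size of each part-worth show which levels pull a puzzle's appeal up or down, which is exactly the trade-off information a manufacturer would want before committing to designs.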

So now that I have the entire research program laid out, I just need a jigsaw puzzle company to embrace my research plan and quickly – before I run out of puzzles!

Every corner of our world, and every type of business, has been impacted by COVID-19.  
In market research, we have seen in-person qualitative projects (focus groups, one-on-one interviews, etc.) move to online platforms. Like business meetings, school classrooms, and social events, research conversations have been able to take place virtually thanks to the Internet. There is a proliferation of tools at our disposal to continue our work under new parameters. Research technology providers have shared many success stories under this “new normal,” and I have experienced them first-hand as a moderator.  


How qualitative research conversations have had to adjust

The past few weeks have reminded me that these interactions truly are “conversations” and not just information gathering sessions. While quantitative surveys depend on consistency and objectivity, qualitative engagements require active listening and adaptation to individual participants. Truly hearing someone involves understanding their state-of-mind.  
Regardless of the research topic, it feels appropriate to touch on respondents’ experiences – medical and social – before asking for their input. Jumping right into moderator guide content feels like ignoring “the elephant in the room.” 

Specific Examples

I have interviewed both physicians and consumers during this crisis and unless a respondent leads the conversation in another direction, I have generally started with an informal check-in to acknowledge the unusual situation we’re all experiencing.
 - I’m starting consumer interviews with questions like “First of all, how are you feeling, and is everyone around you okay?” or “Before we get too far, how have you been doing with everything going on in the world right now?”
 - With physicians, I might begin with something like “I’m sure things have been very different for you lately. Is there anything you need me to know about what’s happening with you or your practice at this time, before we get into our questions?”
Most participants and their families are healthy; many are concerned about jobs, and all are facing the disruptions caused by stay-at-home orders. By addressing the challenges they articulate (“I understand, and I’m hearing that from others as well” or “I am sure that’s difficult, and I appreciate your taking the time to talk with me at such an unusual time”), I can both establish the important context around interviews during this crisis and also then move beyond it to the actual interview content.  

People long for interaction

Shelter-in-place requirements surely play a role in maintaining pre-Coronavirus research response rates. Interview targets have time and flexibility, and an honorarium is as valued as ever, if not more. But there is another facet for many: isolation is lonely. Participating in a market research discussion is an opportunity to "meet" someone new. Many of us miss that type of interaction, but for someone living alone or without much social engagement, being a respondent holds additional appeal.
Of course the objective for any moderator is to explore a research question and provide answers to our clients. We are not in this business to become friends with respondents. But in this unprecedented time, we should be sensitive to participants’ mindset and motivations. As always, we are asking them to open a window into their world by answering our questions. For now, we might also be opening a kind of window for them to look out from.
