
Rich Raquet

President, TRC


Rich brings to his blog entries a passion for quantitative data and the use of choice to understand consumer behavior. His unique perspective has allowed him to muse on subjects as far afield as dinosaurs and advanced technology, with insight into what each can teach us about doing better research.

Numbers That Don't Add Up

Posted by Rich Raquet in Better Graphics

In my last blog I talked about a simple chart on Morning Joe, which was presented by Steven Rattner. I submitted that when we see data presented in the media or especially by politicians, we should judge it in terms of how a researcher would have presented the same data (because of course researchers are free of bias...well let's leave that for another blog). I gave Mr. Rattner a pass last time, but his presentation of a chart on infrastructure was misleading and would only have pleased a client who wanted misleading data to prove a point.

In this case he presented a chart showing infrastructure spending as a percentage of GDP. It showed a massive drop from the high in the 1950s to the low of today. The chart had a y-axis that went from 0% to 1.5%, which made the drop easier to see. Nothing wrong with that (assuming those viewing the chart understood that it was not based on a 0-100% scale).

A few blogs back I talked about how the political season would bring on a rash of misuse and abuse of numbers. I've had my ears open for examples, and a couple that came up recently made me realize that a more nuanced view is necessary here. The real rule should be that pundits and politicians should be held to the same standards our clients hold us to. Namely, the numbers should help in the decision making process...not mislead or confuse the facts.

In the next two blogs I'll use some charts presented by Steven Rattner on the Morning Joe television program. For those of you who don't know, Mr. Rattner was the President's Car Czar. While this probably means he comes with his own bias, I have generally found that when he presents data he does so in a pretty fair way.

Last time I talked about how we as an industry worry about response rates and respondent engagement either too much or for the wrong reasons. This time, I'd like to expand on that point by picking up on a comment made by Joan M. Lewis of Procter & Gamble.

The second day of the ESOMAR Congress featured a panel of big research-buying clients. They talked about the things they wanted and were not getting. Two big areas were boiling data down to as few charts as possible and helping them drive innovation and change. Both are related. In essence: don't give me a 100-page report or a chart with 100 numbers on it. Boil it all down and tell me what to do!

Tagged in: Reporting

Two other topics that came up a lot at ESOMAR were respondent engagement and representativeness. Personally, I think discussions of the former are often misguided and discussions of the latter are a waste of time. Not that I oppose engaging respondents or high response rates, just that I'm practical enough to recognize that neither will happen without a good business reason.

With regard to response rate, that boat has clearly sailed. Surely this is clear now that huge research buyers like P&G suggest moving beyond a focus on response rate. I suspect they, like me, would love higher response rates, but they have come to realize that it isn't going to happen. The massive increase in the number of surveys being done (I get one every time I take my car in, and I was just handed one here on my plane trip back from ESOMAR) has caused the public to tire of doing them. Add in that improving response rates involves greater cost (more attempts, mixed modes, higher incentives) and more time.

Time For More Game Playing in Market Research

Posted by Rich Raquet in Conferences

I'm on the plane heading back from ESOMAR. I found the diversity of opinions and ideas shared there both interesting and thought provoking. Over the next couple of blogs I'll share my thoughts on what I got from the event.

First off, gaming: no subject divides researchers more. Several presentations showed tests that used game elements to engage respondents. One effort by MSI created a sort of fantasy backdrop in which players answered questions to get things they would need on their game quest. The idea was to engage respondents and, with that, get better data. Sadly, the results didn't back that up. Results did not vary much (specifics are available on the ESOMAR site), but respondents who played were more engaged. At the same time, response rates were lower (loading time put some people off and some had no interest in the game). It's easy enough to theorize that the mistake here was that the game was a sort of reward for doing the survey, not related to it. As such, it does little to engage the respondent.

I Look at Data From Both Sides Now

Posted by Rich Raquet in A Day in a (MR) Life

I was watching the final round of the Bridgestone Invitational when my 14-year-old son came into the room. I told him the established narrative: after a difficult two years, Tiger Woods had returned to golf, but not before firing his longtime and very loyal caddie. Most saw this as just plain nasty on Tiger's part.

I then told him how another golfer, Adam Scott, hired the caddie and was now on the verge of winning the tournament. I summed it up by saying that justice had prevailed.

He didn't even miss a beat before asking me, "Did Adam Scott fire his caddie so that he could hire the caddie Tiger fired?"

I don't follow competitive golf closely enough to know the answer. Worse, I had not even considered that the narrative "Tiger mean/Adam good" might be a bit off.  

A good lesson for any analyst to learn.


Sometimes as researchers we get too hung up on knowing everything. We get frustrated by interesting findings that can't be explained with the available data, and this can cause us to miss important insights. I suspect that the proliferation of available data will do little to help fill in the blanks...in fact, it might make the problem worse. A simple exercise in text analytics highlights this point.

There is now an array of tools available to help quantify and understand massive amounts of text. For example, at one of our conferences last year, Oded Netzer of Columbia University presented an amazing tool that analyzes message boards and other online forums to learn about specific markets (slides can be found at: http://www.trchome.com/research-knowledge/conferences/437). Tools like these provide a rich and valuable source of data, but insight can also be gleaned from far simpler approaches.

Tagged in: Statistics, Text Mining

The recent New MR Virtual Festival on presenting data had a number of really useful and interesting presentations. Mike Sherman’s presentation, “Less is More: Getting Value (Not Just Reams of Data) From Your Research,” led to an interesting exchange that I think highlights the change in thinking that Market Research must make.

Mike reiterated the point that many have been making…we need to focus our reporting on the key things we learned and not waste executives’ time with a lot of superfluous information. In addition, the report should not just summarize the data, but rather it should synthesize it. He gave an example of a data set with these facts:

  • Jim broke his knee.
  • A burglar broke Jim’s car window.
  • Jim got a speeding ticket.

A summary of these data might be “Jim’s knee and car window were damaged and he got a speeding ticket”.

A synthesis of that data would be “Jim has been living dangerously”.

The 2012 Presidential Election season is upon us. I don't know about you, but other than the barrage of commercials, the thing I like least about political campaigns is the terrible abuse of numbers. Combine that with the current debate on the debt limit and we have the makings of a tsunami of misleading or outright incorrect statistics.

A few weeks ago, Megan Holstine started a discussion about a Senator using a totally made-up statistic. Sadly for him, he quoted a number that was not only far from accurate but also easily verified. His defense was that he didn't intend the statistic to be taken "literally".

Makes me wonder if perhaps we've got it wrong.  Think of the possibilities for us if we stopped taking numbers literally!

A new book attempts to make behavioral economics interesting and approachable by couching it in the world of sports. Personally I try to avoid books on economics, but I did find a review quite interesting. Not only did it help to explain why the Philadelphia Flyers lost the 1980 Stanley Cup, but it also helps to illustrate the limitations of crowdsourcing and the reality of asymmetry in key driver analysis.

Behavioral economics studies the role of emotion in economic decision making (something marketers need to master). In application it can help to explain the illogical decision making of shoppers. A classic example is when someone spends $1,000 on a product they don't need thanks to a price cut of, say, $200. They will often focus on what they saved ("I saved $200!") and not on what they spent or the actual need.


Don't panic. CASRO's government affairs committee isn't warning this will happen and I don't have any evidence that it will. The point of the question is along the lines of "necessity is the mother of invention".

For example, over the past 15 years we have seen a move away from phone data collection and toward the web. Initially the focus was on cutting costs and ensuring the quality of the data was the same. As the industry embraced the web, however, we began to use all kinds of innovative techniques that we simply could not use on the phone (or by mail, for that matter). So, as we face a future with more and more access to data, I thought it would be interesting to think about what we would do if our traditional tools were simply taken away and we had to go cold turkey.

A good place to focus is satisfaction research, which is already showing signs of decline. According to Inside Research (February, 2011), spending as a percentage of all MR has dropped in Europe (from 18% in '06 to 13% last year) and is stagnant at best in the States (11% last year, in line with 12% in '09 and 10% in '08). I suspect this decline is not an indication that firms no longer care about satisfaction. More likely it reflects cheaper data collection methods and a realization that it need not be measured as intensively as in the past.

So in a world with no traditional MR, how will firms measure and impact satisfaction?

 

I come to you, as is the tradition, with glad tidings for the New Year.

I do this in the midst of a lot of doom and gloom talk about the industry and our future. At CASRO's annual meeting, Simon Chadwick talked about "Do it Yourself" (DIY) research continuing to grow with no end in sight. A recent LinkedIn thread asked if there was a better word for 'survey' that wouldn't carry the negative connotations. The MR Heretic calls his site "Market Research Deathwatch," with constant warnings that we must engage respondents better or destroy our industry. Add in the worst couple of years the industry has ever faced and purchasing departments increasingly viewing us as commodities, and you have the makings of glad tidings indeed!

 

For the past few weeks, two big debates have been raging in our office:

  • Will the configurator eventually replace conjoint in all its forms?
  • Was it the right call to trade Donovan McNabb to the Redskins?

On the surface the only thing connecting them is that we are a choice-focused market research company located just outside of Philly, but in reality they are the same debate...namely, when is it time for the superstar to move on and allow a new star to take charge? In both cases, the answer will depend on your needs and your perspective...in other words, there is no answer that everyone will agree with.

Some variation of conjoint (discrete choice, adaptive, etc.) has been with us now for nearly 40 years. It has proven to be a very effective means of understanding the consumer's thinking process...especially when it comes to developing new products. At the same time, it is not without its flaws.

A few months back I wrote about the dangers of tying results from satisfaction surveys to compensation. The feedback I got was mixed, so I decided to do a quick survey to see what the public thinks.

Of the 72% who were asked to do a follow-up survey after some type of transaction, about 1 in 6 (16.1%) were told by their sales rep what rating to give. While 1 in 6 is alarming, the reality is probably worse, because those who do try to influence responses do so repeatedly. My personal guess is that the more compensation is impacted, the more likely it is that customers will be asked to answer in a certain way.
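It's worth making the arithmetic behind those two figures explicit. A quick back-of-the-envelope sketch, using the 72% and 16.1% quoted above, shows what share of all transacting customers (not just those surveyed) were coached:

```python
# Back-of-the-envelope: what share of ALL customers were told what
# rating to give, using the two figures quoted above.
asked_followup = 0.72        # were asked to do a follow-up survey
coached_given_asked = 0.161  # of those asked, told what rating to give

coached_overall = asked_followup * coached_given_asked
print(f"{coached_overall:.1%}")  # → 11.6%
```

So even as a share of every customer walking out the door, roughly one in nine had a rep lean on their answers.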

I'm a regular reader of the Market Research Heretic blog. The banner above his blog posts reads "Market Research Death Watch". Many great points are made about how we take respondents for granted and how many survey instruments simultaneously gather useless data and reduce the chances of that respondent ever doing another survey again. Most important, the point is made that the market research industry is resistant to change and that ultimately this will lead to its demise.

The arrival today of the latest Honomichl 50 list certainly supports the notion that the industry is in trouble. The numbers are the most brutal I've ever seen. Revenue has declined, and when you focus only on straight research firms (those doing primary qualitative and quantitative research) the decline is even larger. Employment has dropped even faster (and this measures research-firm employment; I suspect client-side researchers were hit even harder). Jack Honomichl is certainly dour in his column, but I think if anything he is understating how bad a hit research took this year.

The question is, were the results of this year and last (2008 also showed declines) just related to the recession or do they reflect a trend that will continue long after the recession is officially over? My guess is, we will see some recovery with the better economy this year, but the heretic's warnings should not be ignored.

The research industry has for at least a decade now been facing two conflicting challenges. At the same time the representativeness and quality of data collected is being called into question, our clients are asking us to make our results tie to and be predictive of the real world. I believe that even with the limitations of response rate and respondent behavior, we can achieve good results by asking questions in the right manner. We need to mirror the way people make decisions in the real world…namely by making choices.

What got me thinking about this is the fact that my car lease is ending and I'm shopping. The last three times I've leased a new car, the process of picking it up has been identical. I go in, pay some money, sign a bunch of forms I don't understand, get a tour of the car's features, and then I'm told that I'll be getting a survey and that I should give the highest marks on everything. Sometimes the salesman says, "If there is something you can't give the highest mark on, tell me what I have to do to earn it," but they always say, "If I don't get the highest mark it will hurt my commission."

I recognize that the car company might not view or use this as they would pure market research. In many respects these surveys are like response cards (like hotels or restaurants use) or invitations to do a survey found on receipts. Even without all the controls pure market research puts in place, the data generated by these efforts can have tremendous value. My firm, for example, has used them to help establish the bottom-line impact of various attributes.

Are Researchers Too Ethical?

Posted by Rich Raquet in R Squared

Got an interesting question in my LinkedIn morning update about the ethics of market researchers doing market intelligence work. While the question was vague enough to be unanswerable (what sort of market intelligence are you talking about?), it got me thinking about ethics. Specifically, I've been thinking that researchers are too often focused on strict ethical rules rather than on doing the right thing.

So, right off, let me state that I totally believe in obeying laws, regulations and, yes, ethics. This extends to our dealings with clients, vendors and, most importantly, respondents. I wouldn't want to work in an industry that doesn't take ethical responsibility seriously. I'm concerned, however, that we don't apply ethical standards intelligently. This, in turn, works counter to the principles our ethics claim to protect and harms our effectiveness as an industry.

Telemarketers: Our Nemesis?

A Dinosaur's Weight

Posted by Rich Raquet in R Squared

Ever wondered how paleontologists know what a dinosaur weighs? OK, me neither, but an article I read in The Economist points to mistakes in past methods and I believe understanding these mistakes can teach us a lot about how to be better researchers.

A dinosaur’s weight is estimated by taking the bone structure and weight of existing animals and then, through linear regression, predicting the weight of dinosaurs from their bone structure alone. For example, a Brontosaurus (technically called an Apatosaurus, but I learned my dinosaur names watching The Flintstones) is estimated to weigh about as much as seven African elephants.

Dr. Gary Packard of Colorado State University wondered how well these equations would do at predicting the weight of living animals. In essence, he pretended we don’t know how much an elephant weighs. He took the weight and bone structure of smaller animals and then used a linear regression to predict an elephant’s weight using only its bone structure. The result was 50% more than an elephant weighs.
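The method Packard tested can be sketched in a few lines. The bone measurements and weights below are made up purely for illustration (real studies use measured bone circumferences and body masses), but the mechanics are the ones described above: fit a straight line to log-transformed data from smaller animals, then back-transform and extrapolate to a much bigger bone.

```python
# A sketch of the estimation method described above. The data are
# invented for illustration; only the mechanics match the article.
import math

# Hypothetical (bone circumference in cm, body mass in kg) pairs
# for smaller animals used to fit the line.
bone_cm = [3.0, 4.5, 6.0, 8.0, 11.0, 15.0]
mass_kg = [2.0, 7.0, 18.0, 45.0, 120.0, 320.0]

# Classic approach: ordinary least squares on log-transformed data.
xs = [math.log(b) for b in bone_cm]
ys = [math.log(m) for m in mass_kg]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict_mass(circumference_cm):
    """Back-transform the log-space line to predict mass in kg."""
    return math.exp(intercept + slope * math.log(circumference_cm))

# Predicting from a 60 cm bone extrapolates far beyond the 3-15 cm
# range the line was fit on. Any error in the fitted slope is
# magnified exponentially by the back-transform -- Packard's point.
print(round(predict_mass(60.0)))
```

Inside the fitted range the predictions look fine; it's the exponential back-transform combined with extrapolation that lets a small error in the slope balloon into a 50% miss on an elephant.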
