The Economy of Food at Sporting Events
Image source: www.sports-management-degrees.com

As we learn to distill ever-expanding amounts of data into simple recommendations, we would do well to think about presenting data in a better way. People often describe themselves as either a “numbers person” or a “picture person,” but in reality we all use both sides of the brain: the right (images) and the left (analytics). I read an article this week that makes the point that the best way to drive understanding is to present analytical data in a visual way. This engages both sides of the brain and thus helps us quickly internalize what we are seeing.

We might be tempted to say that data visualization is easier said than done (but then what isn’t?). We might also be tempted to say that most market research data isn’t that interesting. I tend to disagree.  

Just last week I exchanged some emails with Sophia Barber of Sports-management-degrees.com. She pointed me to a great infographic about spending on food at sporting events. It is colorful and covers a lot of data comprehensively. If you are a “numbers person” you might try scrolling about halfway down, where all of the underlying data are presented in stark form. My bet is that even the staunchest numbers person will get more from the combination than from a dull recitation of facts.

Of course, both food and sports are relatively interesting topics, but what if the topic isn’t fun and interesting? I still say that results from even highly analytical studies (things like conjoint, discrete choice, pricing studies and so on) can be made more memorable and more interesting through the simple addition of pictures, and I mean pictures that go beyond simple graphs and charts (which are often as dull as a list of numbers). Doing so drives the point home faster and makes our work more relevant.


My favorite feature of Quirk's Marketing Research e-newsletter is Research War Stories. In one issue this spring, Arnie Fishman reported that he had an unexpectedly high result when he asked research participants whether they eat dog food "all the time." He framed the question by asking how often they ate each of a variety of "exotic foods," including rattlesnake meat and frog kidneys, among others.

This got us thinking that maybe you'd get a different result if you asked about dog food on its own rather than amongst other unusual foods. So, being the researchers that we are, we designed a monadic experiment to see what would happen.

Using Arnie's same framework of exotic foods, we asked one group of our online research panelists how frequently they eat dog food. On the next screen we asked the same question about rattlesnake meat. They always saw dog food first, so they had no other stimulus when they answered the dog food question.

We asked another group of panelists about dog food, rattlesnake meat, frog kidneys, gopher brains, and chocolate-covered ants all on the same screen. We hypothesized that this group would be more open to admitting to eating dog food when it was grouped with the other exotic items than when asked about it directly.

Well, we were wrong about that – none of the folks asked about dog food alone admitted to eating dog food all the time, and 1% of those asked about dog food amongst the other exotic items did so (not a statistically significant difference). The percent of folks in both groups saying that they "never" ate dog food was the same as well (96%). So in our experiment, the "framing" of the question had no bearing on the response.
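For readers who want to check that "not statistically significant" claim, here is a minimal sketch of a two-proportion z-test. The cell sizes are assumptions (the post doesn't report them); with 200 respondents per group, 0% versus 1% is indeed nowhere near significance:

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical groups of 200 each: 0% vs. 1% admitting to eating dog food
z, p = two_prop_ztest(0, 200, 2, 200)
print(f"z = {z:.2f}, p = {p:.2f}")  # p is well above the 0.05 threshold
```

With samples of this assumed size, the gap between the two framings would have to be far larger before the difference could be called real.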

...

Really enjoyed the IIeX Greenbook conference. I generally concurred with the opinions expressed and many of the presentations gave me ideas on how we might better serve our clients. Thought I might share some of my reflections here.

In general terms, this was a conference that likely scared more than one researcher. For example, Charles Vila, head of Campbell Soup’s Consumer and Customer Insights for North America, said that within five years he doesn’t expect to use any survey data. Personally, I tend to disagree with such sweeping statements (hopefully this won’t prevent me from working with Campbell’s moving forward), but perhaps they are necessary to shake our often complacent industry into thinking differently.

In that regard, Campbell’s is a good example. Their flagship product is soup, a product that has been around forever and that they have sold for 100 years. This doesn’t stop them from innovating, not just with new products but in the way they engage the customer. Their staff is immersed in the latest gadgets that consumers are using so they can better understand how those gadgets can be employed in Campbell’s marketing efforts.

So, I’d encourage researchers to do the same. Ultimately it doesn’t matter if surveys go away or simply cease to be the primary form of data collection. If we allow ourselves to be defined by how we acquire data, then we deserve to go the way the proverbial buggy whip manufacturers did at the turn of the last century.

The great news is that many of the new technologies being shown off are not really competing with us. Most seek to provide new tools for traditional research companies to use. Some might replace surveys and others augment them. Some are really just surveys in another form (such as Google’s), and there are new ways to design and implement surveys to better get at the truth (my partner Rajan Sambandam’s presentation on “Behavioral Conjoint” being one self-serving example). The possibility of improving our ability to guide product development, pricing research and marketing is one we should embrace.

...

I was treated to a presentation given by Professor Joydeep Srivastava from the University of Maryland at our Frontiers of Research market research conference in May. Joydeep’s discussion focused on pricing research and perceptions of what consumers are willing to pay based on the way the prices are presented to them – whether prices for the components are bundled together or shown apart.

One point he touched on almost as an afterthought is that no one wants to pay for installation. I must agree with him that no one wants to agree to a price only to find out a few moments later that something essential (such as installation) isn’t included. This seems to break the contract, and can lead to feelings of resentment – and, as he pointed out, lost sales. On the other hand, presenting installation costs separately as an option can be enticing to the Do-It-Yourselfers who would want to be able to weigh the pros and cons of tackling that step themselves.

I was reminded of all of this when I ordered a map update for my car’s navigation system. When I received the jewel case in the mail, I assumed it contained a CD which I could pop into my car CD player to install the update on my own. Only the jewel case didn’t contain a CD; it contained a memory card, and there were no accompanying instructions, not even a phone number. After popping it in my computer to look for a read-me file, I was still at a loss. So I gave my car dealer a call and they told me to bring it in for installation. When I arrived, the service technician told me I could have saved myself the trip and done it myself by inserting it in the card slot. I told him I didn’t know I had a card slot, and that if he told me where to find it, I’d be happy to go do it on my own. A senior technician intervened and, taking pity on me, asked a tech to install the maps, then told me there would be no service charge.

By the way, I finally found the card slot after searching for it for about 15 minutes.

There was no mention of installation in the up-front sales process whatsoever. So my first assumption was the correct one, that I should be able to do it myself. But that wasn’t addressed in the sales process nor in the product packaging. Not addressing installation up-front can lead to very different outcomes:

  • The manufacturer can keep the cost low and potentially sell more updates by not having to create detailed installation instructions, which can vary by model and year. But even if professional installation were not required, leaving the consumer confused after a purchase is never a good idea and no doubt leaves consumers with some ill will.
  • In my case, the dealer took the view that my purchase of the vehicle (and the update) gave them the opportunity to help me out in situations like this.... And I could view it either as an extension of their awesome service or that their service was “bundled” into the original price of the vehicle. Either way, the dealer comes out looking good – so much so that perhaps they can charge a premium for this all-inclusive service “bundle” the next time around.

We just wrapped up another of our client conferences, and it was another successful day for all concerned. This conference stood out for the level of interaction between the speakers and the audience, a testament to the speakers, their topics, and the keen interest that practitioners have in these topics.

The first speaker was Olivier Toubia from Columbia University. Olivier is a true leader in the area of innovation research and teaches an MBA course called Customer Centric Innovation. He gave a quick roundup of four important questions that he has been able to address through his research: how to motivate consumers to generate ideas, how to structure the idea generation process, how to screen and evaluate the ideas, and how to find consumers who have good ideas. By taking us through a variety of studies (including surveys and experiments), he was able to answer these questions and provoke a lot of interesting thoughts from the audience.

Next up was Vicki Morwitz from New York University. She uses surveys extensively in her research and is a leader in understanding the impact that survey responses have on subsequent behavior. She presented evidence about the unintended effects that surveys have on respondents, something that should be of interest to all marketing research firms and indeed all marketers. In some cases surveys have a positive impact in that they increase future purchasing behavior, but, Vicki said, they should be used with caution, as overt efforts to influence consumers do not seem to work.

Vicki’s presentation was followed by TRC’s own Michael Sosnowski, who discussed the idea of doing more with less in a mobile world. He talked about the increasing number of respondents who attempt to take surveys on their smartphones and why we as researchers should be aware of that. He questioned the conventional wisdom that mobile phone surveys should be short and simple, showing examples of more complex choice-based surveys (using TRC’s Bracket) conducted on mobile phones and how they produce results similar to an online survey. We may not be ready to do conjoint studies on mobile phones, he said, but neither should we artificially constrain ourselves to extremely simple data collection. With good design and sophisticated analysis, it is possible to get good quality information from mobile surveys.

Following Michael was Joydeep Srivastava from the University of Maryland, an old friend of mine from my graduate school days. He is now a leading consumer behavior researcher who has done especially interesting work in the area of pricing. His specific interest is in partitioned pricing (such as charging a separate price for shipping), and he enlightened the audience with the results of his experiments. For example, he countered the myth that charging a separate shipping price and then providing a price discount to offset it would stave off any damage to the company. On the contrary, it actually reduced purchase likelihood compared to not providing a discount. This, he said, was because of people’s unwillingness to pay for shipping in the first place and the explicit reminder of it in the offsetting charge.

...

Pricing Research in Context

Posted in New Product Research

My last blog about pricing research was still fresh in my mind when I read an excerpt of Craig LaBan’s recent online chat. LaBan is the Philadelphia Inquirer’s restaurant critic and offers insightful reviews and information for foodies in the region. I was intrigued by the discussion of how Federal Donuts charges different prices at the ballpark than in their stand-alone restaurant locations.

Our clients typically look for answers on how to price their products, either alone or bundled. But I personally have yet to have a client ask me how to price a product differently based on the situation or context. There is good information to be had on this topic: in “Contextual Pricing: The Death of List Price and the New Market Reality,” the authors point out that Coca-Cola’s pricing scheme includes air temperature at the point of sale. But what tools are available to the market researcher for exploring situation-based pricing?

At its simplest level, we can ask consumers what they’d be willing to pay given a certain situation (such as in an airport or on an airplane). By using a monadic design in which similar groups of respondents are asked about a single price point, we can compare across the groups to see what the various “take-rates” would be.
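As a sketch of how those monadic "take-rates" might be compared, here is a toy calculation. The price points and rates below are invented purely for illustration; the idea is that once each cell yields a take-rate at its single price, you can line the cells up and compare expected revenue per respondent:

```python
# Hypothetical monadic results: each respondent group saw exactly ONE
# price; the take-rate is the share who said they would buy at that price.
take_rates = {4.00: 0.62, 5.50: 0.48, 7.00: 0.31, 8.50: 0.18}

# Expected revenue per respondent at each tested price point
revenue = {price: price * rate for price, rate in take_rates.items()}

best = max(revenue, key=revenue.get)
print(f"Revenue-maximizing price: ${best:.2f} "
      f"(${revenue[best]:.2f} per respondent)")
```

In a real study you would also want a confidence interval around each take-rate, since each one comes from a different sample of respondents.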

Discrete choice could be employed to vary both the context and the pricing – in that way multiple situations could be tested along with multiple price points. (My colleague, Rajan Sambandam, will be speaking about Behavioral Conjoint at the Insight Innovation Exchange NA event in Philadelphia in June.)

I’m not sure how Federal Donuts arrived at their pricing decision – it could very well be that the ballpark charges more rent and that factor alone determined their pricing. But when all other factors are equal, determining how much to charge can have important financial consequences.


I was in a meeting last week about pricing research and we talked about how far it's come from the days of simply asking people what they'd pay for something. From laddering to Van Westendorp's Price Sensitivity Meter to Discrete Choice modeling, the research industry has grown in sophistication in addressing this very crucial aspect of product development and marketing.

I started thinking back over some of the pricing research I've been involved with over the years, and I realized that at times our clients come to us without the information they'll need to make the project a success. That's not to say they're not doing their job -- but pricing research does have a few requisites. Here are 3 keys to effective pricing research:

  • Know what it costs to produce. This can be tricky for a start-up service or for a physical product that hasn't been manufactured yet. But we need a basic understanding of what the minimum price should be -- anything below that would be unprofitable, so there's no sense including extremely low price points. The sky's the limit on the maximum, but we need the minimum in order to anchor the study design in reality.
  • Know the competition. Speaking of reality, we can design pricing research with or without factoring in competitive products. But if you're going to include your competitors, we need an understanding of what their products are and how they're priced. We want to construct choices that are as close to reality as possible. Premium-priced brands should reflect premium prices, or your results could skew in a strange direction.  
  • Know your pricing objective. What are you trying to maximize: unit sales? revenue? profit? Of course, everyone wants all of these. But in laying out a pricing strategy, it helps to understand how the trade-offs will impact your bottom line: is it more desirable to sell more units at a lower price or fewer units at a higher price?  

This list is by no means exhaustive -- I welcome your additions!


Big news today as Ron Johnson, CEO of J.C. Penney “resigned”. He did so following a series of decisions designed to make the staid Penney’s brand more hip. Sadly, thus far these changes have chased away existing customers without attracting enough new ones to turn around the store chain’s long decline.

Johnson’s decisions, like those of his mentor Steve Jobs, were made from the gut…no need for any market research. Jobs of course had an almost magic touch, carefully choosing which markets to enter and when, and producing products that were seen as cutting edge. His decisions were often seen as counterintuitive (such as opening retail stores in the Internet age), but time and time again he was proven right. So, why didn’t it work for Mr. Johnson?

First, Mr. Jobs was producing products for Apple, not J.C. Penney. Apple was known as a producer of fine computers that were easy to use (intuitive is a word often used). Some of the luster came off that reputation when Jobs left the company, but when he returned there was little doubt what Apple stood for and what types of products to expect. From the moment he returned he looked for places where that reputation (intuitive electronics) might find a market. Mr. Johnson, by contrast, saw the J.C. Penney reputation as a problem and looked to change it…a far tougher task.

Second, Jobs often had success by leaping into relatively new markets and then using the power of Apple design and engineering to dominate them. He didn’t create the first digital music player, but he created one that was intuitive and backed it up with a legal way to buy digital music. He could do this without giving up the existing Apple business (computers). Johnson needed to focus resources on shoring up the flailing store chain…perhaps if he’d had the luxury of creating small J.C. Penney boutiques it would have worked.

Third, I am reminded of something I once heard the legendary Warren Mitofsky say with regard to flawed sampling: “Results will be right until they aren’t.” Mr. Jobs was not always right. He left Apple the first time a failure (one could argue that others were at least as much to blame), started a new company that was largely a failure, and then started his run, first at Pixar and then with his triumphant return to Apple. He was clearly brilliant and had incredible vision, but he was not always right. Of course, it goes without saying that not everyone has the same skills (me included).

...

Market Research in the Toilet

Posted in New Research Methods

I read an astounding fact this week: “More Indians have used a mobile phone than a toilet.” It seemed absurd to me that a relatively new technology would outpace an old (and very useful) one. I came to realize that the absurdity was mainly due to the fact that I couldn’t imagine a world without either device, and it struck me that this is an example of what ails the market research world.

The fact is that Indians have not chosen the cell phone over indoor plumbing. The former is widely available (because cell phone infrastructure is relatively easy to build) and the latter is not. So it wasn’t a choice of toilets over telecom; it was a choice of having a cell phone or not having one. Those who got phones have begun to find uses for them that go far beyond the obvious. For example, fishermen call in while at sea to find out which port is offering the best price for their catch, thus maximizing their profits.

In Market Research we are often blinded by our experience. Instead of viewing new market research technology for its potential, we view it through the lens of what we know. When web data collection arrived, many didn’t see the opportunities it offered and instead defensively dismissed it as inferior to existing methods, offering only the benefits of being “cheap and fast.” After more than a decade, it amazes me how many still hold this belief.

For years, my colleague Jessica solicited donations to the American Cancer Society through its annual Daffodil Days® campaign. Each year I'd give Jessica my donation and a few weeks later I'd receive 10 daffodil buds. I'd arrange them in a vase in my office and watch as they opened into beautiful blooms over the course of a few days. And in doing so I'd be reminded that my donation was being used to find ways to eradicate cancer and help people in need.

It was announced that this year would be the final year for Daffodil Days®.


I have to admit, my first thought was not, "how will I donate to ACS now?" My first thought was that something was being taken away from me! Which, of course, irritated me. My second thought was that I'll have to look for another way to get daffodil buds next spring. And then it dawned on me that by cancelling the daffodils promotion, the ACS could be losing a long-time supporter.

Businesses are faced with product optimization decisions all the time – what will happen if I remove a product, service or distribution channel from the market? Will customers be lost? What will the short- and long-term effects be?

In my last blog I talked about the value of market research even if all it does is validate what you thought you already knew. A further question might be, "Should we encourage our clients to hypothesize?". My answer would be a definitive "YES!".

My answer is likely biased by the fact that we work with Hierarchical Bayesian (HB) Analytics so frequently (mainly using choice data such as that created by conjoint). After all, HB requires a starting hypothesis. But the reality is that even if we don't use HB, a hypothesis is a useful thing.

First, understanding what our clients EXPECT to find is a great way to understand what they NEED to find. They need to validate or reject their prior thinking so the more we understand their thought processes the more we know where to focus. In addition, this understanding often leads to insight into their firm's business decision making. This helps us to present results that tell a story that resonates with them. This is true even if the findings contradict their thinking.

Second, by presenting results in this way we help our clients not only meet the objectives of the current study but also walk away with a better understanding of what to expect in the future. Seeing flaws in their logic will help them avoid those flaws when similar issues come up.

Of course purists will point to the risk that starting with a hypothesis may bias our results. We might be inclined to design our research and reporting to match the narrative we expected to find. We might also be tempted to avoid the "kill the messenger" problem by sugar coating the truth.

These are fair points and well worth guarding against. They do not, however, undercut the premise that having a starting hypothesis makes for better market research and likely better use of results.

I read an article about the discovery of the Higgs boson at CERN. This is the so-called "god particle," which explains why matter has mass. While the science is generally beyond me, I was intrigued by something one of the physicists said:

"Scientists always want to be wrong in their theories. They always want to be surprised."

He went on to explain that surprise is what leads to new discoveries whereas simply confirming a theory does not. I can certainly understand the sentiment, but it is not unusual for Market Research to confirm what a client already guessed at. Should the client be disappointed in such results?

I think not for several reasons.

First, certainty allows for bolder action. Sure there are examples of confident business people going all out with their gut and succeeding spectacularly, but I suspect there are far more examples of people failing to take bold action due to lingering uncertainty. I also suspect that far too often overconfident entrepreneurs make rash decisions that lead to failure.

Second, while we might confirm the big question (for example in product development pricing research we might confirm the price that will drive success) we always gather other data that help us understand the issue in a more nuanced way. For example, we might find that the expected price point is driven by a different feature than we thought (in research speak, that one feature in the discrete choice conjoint had a much higher utility score than the one we thought was most critical).

...

Okay, so it wasn’t really just the two of us – there were a few hundred others involved. Still it was a very memorable evening that I think is worth sharing.

The day started innocently enough. I was heading out to Yale for a guest lecture in the MBA Marketing Research class taught by Jiwoong Shin as I have done for several Spring semesters now. I like this trip a lot as it allows me to catch up with many of my friends in the Yale Marketing Department. One of those is Shane Frederick and I had emailed him to see if he was around. He replied asking if I was attending Kahneman’s lecture. I had no idea that Daniel Kahneman, Nobel Prize winner and godfather of behavioral economics was giving a lecture there. The day was already getting better! I quickly changed my Amtrak ticket to a later time and told Shane I would come by his office so we could walk over.

My guest lecture went off very well with the students asking plenty of interesting questions. Then I had lunch with Zoe Chance who is doing some very interesting work with leading companies, applying ideas from behavioral economics. After a couple more meetings, I went to see Shane and we walked over early knowing there would be a big crowd. And we were glad we did, as the auditorium was overflowing by the time the lecture started.

Daniel Kahneman (Danny to his friends) was introduced by another notable person from Yale, Professor Robert Shiller (yes, he of the Case-Shiller Index you may have heard about during the housing crisis). Shiller talked about the widespread impact of Kahneman’s work, especially after the publication of his best seller Thinking, Fast and Slow. Trying to find Kahneman’s connections to Yale, Shiller pointed out that two of his coauthors (Shane Frederick and Nathan Novemsky, both in the marketing department) were at Yale.

And then it was time for Kahneman to speak. His humility, thoughtfulness and eloquence came through pretty much from the first few words. He started by saying that he doesn’t do university speeches anymore since he is not actively doing any research (he is retired), but could not say no to Bob Shiller. Most of his recent speeches have been about his book, and there had been so many that as a consequence he seems to have forgotten everything else he ever did (laughter!). And that, he said, makes sense because as he points out in the book, we like things that are familiar (more laughter!).

...

Market Research Data, A Love Story

Posted in Market Research

If you have read my blog, you know that I love digging through data to find new insights and that I’m a believer that choice questions (such as those used in Discrete Choice Conjoint or MaxDiff) are the best way to engage respondents and unlock what they are thinking. Given that, a book called “Data, a Love Story” should be a natural fit for me, because it is about the ultimate choice…choosing the right person to marry. Ultimately I decided against buying the book (I wasn’t sure my wife would see it as purely a curiosity). At the same time, the review I read made me realize that some of the issues the author faced are the same ones we face as researchers.

The premise is that dating websites can be gamed to find the right mate. Having never used one (my marriage pre-dates them), I assumed that these sites use complex algorithms to match compatible people. The trouble is that while this is true, these algorithms can break down.

First off, many people are not honest in their profile. They might be looking for someone to sit around with and watch television but admitting that is tantamount to saying “I’m really lazy” so they fudge a bit. Some go beyond this and tell whoppers like “I’m not married”. Obviously any bad data will lead to bad matches.

Second, aligning profiles is only a first step…it determines which profiles an individual sees. At that point the individuals are free to contact each other or not. Thus, how that profile reads is more important than the questions that determine the “match”.  

The author, Amy Webb, decided to gather her own data. After crunching the numbers she was able to both better attract invitations from the right men AND figure out which of them she should be talking to.  

...

Our utilities clients have raised the issue of infrastructure improvements on more than one occasion. These improvements are often expensive to implement, but the average customer sees no tangible benefit -- the water ran yesterday and it’s running again today.

Yet maintaining the pipes, lines and wires is critical to keeping water and power flowing to our homes and businesses. When it comes time to invest in these improvements, it’s hard to rally support when communities face other issues that can produce more visible outcomes when addressed.

Just how far apart are community leaders and residents about the importance of improving their communities’ infrastructure?

We decided to find out.

Asymmetry and the Lottery

Posted in Market Research

If the lottery can accurately be called a “tax on the stupid,” does my playing it make me stupid? To understand (or perhaps rationalize) the answer, you need to understand the principles of Asymmetry.

As usually happens when the Powerball jackpot goes into the stratosphere (in this case it reached nearly $600 million), someone here at TRC started a collection to play as a group. A pretty high percentage of our staff decided to play, even those with the most advanced degrees in statistics. So, given that the chances of winning are something like 1 in 175 million per ticket, why did we do it?

It certainly wasn’t that by buying so many tickets (nearly 50), the odds became anything like a slam dunk. In fact, they were easy enough to calculate (1 in 3,650,489.79), so there was no doubt in my mind that I wouldn’t win, and yet I still played.
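The arithmetic behind that pool figure can be checked in a couple of lines. The per-ticket odds below are an assumption (the widely reported Powerball odds of roughly 1 in 175.2 million at the time), and 48 tickets is inferred from "nearly 50" plus the quoted result:

```python
# Assumed per-ticket Powerball odds of the era (1 in PER_TICKET)
PER_TICKET = 175_223_510
TICKETS = 48  # "nearly 50" tickets in the office pool

# With k tickets bearing distinct number combinations, the pool's odds
# improve to roughly 1 in PER_TICKET / k
pool_odds = PER_TICKET / TICKETS
print(f"1 in {pool_odds:,.2f}")  # ≈ 1 in 3,650,489.79
```

Even improved by a factor of 48, the pool's chances remain vanishingly small, which is exactly the author's point.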

The reason was simple. I had to choose to play or not to play and consider the likely outcome if we won or didn’t win:

  • I play and lose (A small $6 loss and an outcome that my brain expected all along)
  • I play and win (A massive win with my share being $10 million…despite expecting to lose, my brain is now elated)
  • I don’t play and they lose (I have some very minor bragging rights, but ultimately I missed out on the fun and only saved $6)
  • I don’t play and they win (Even as I console myself that the odds were with me, I feel like a complete idiot)

In other words, playing offered only upside and not playing only downside. That is exactly why we consider Asymmetric effects whenever we do analysis. Otherwise we may miss what really drives consumer decision making.


Was the election outcome a surprise for you? It wasn’t for me.

In some ways election night was quite boring. And I blame Nate Silver, Sam Wang and others who predicted the outcome with such stunning accuracy that (at least for me) the drama was completely missing. While conventional pundits and partisans were making all kinds of predictions ranging from “Toss-up” to “Romney landslide”, a group of analysts (nerds, if you choose) were quietly predicting that Obama had a small but consistent and predictable lead. Turns out they were spot-on in their predictions (and were predictably smeared by vested interests).

In my last post I talked about Nate Silver and the approach he uses. This time I want to draw your attention to another analyst, Sam Wang of the Princeton Election Consortium. He is a neuroscientist who has been forecasting for the last three presidential election cycles and has been doing a remarkably good job of it. He nailed the Electoral College vote in 2004 and missed by just one in 2008. How did he do this time? Well, he had two predictions. One of them (based on his median estimator) was 303 for Obama, which is where the tally currently stands, subject to Florida being officially called. The second one (based on his modal estimator) was 332 for Obama which is where the tally is likely to end up if/when Obama wins Florida. Excellent calls whichever way you look at it, given the extremely close race in Florida.
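Wang's median and modal estimators are two summaries of a full probability distribution over electoral-vote totals. A toy Monte Carlo sketch (emphatically not his actual model; the safe total and state win probabilities below are hypothetical) shows how the median and the mode of such a distribution can point to different totals when a big state like Florida is essentially a coin flip:

```python
import random
from collections import Counter

random.seed(7)

# Toy illustration: each swing state is an independent weighted coin;
# everything else is treated as decided. All numbers are hypothetical.
SAFE_EVS = 259  # electoral votes treated as near-certain in this toy model
swing = {"FL": (29, 0.50), "OH": (18, 0.80), "VA": (13, 0.75),
         "CO": (9, 0.70), "NH": (4, 0.75)}  # (EVs, hypothetical P(win))

def simulate_once():
    """One simulated election: sum safe EVs plus each swing state won."""
    return SAFE_EVS + sum(ev for ev, p in swing.values() if random.random() < p)

totals = sorted(simulate_once() for _ in range(20_000))
median = totals[len(totals) // 2]                 # middle of the distribution
mode = Counter(totals).most_common(1)[0][0]       # most likely single total
print(f"median={median}, mode={mode}")
```

Because the resulting distribution is lumpy rather than bell-shaped, the middle value and the single most likely value need not agree, which is exactly how one forecaster can honestly publish both 303 and 332.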

A friend of mine posted on Facebook that she’d taken a web quiz to tell her which presidential candidate best lined up with her stand on the issues. She was outraged at the candidate the web site matched her with. I’m not surprised (by the outrage, that is, not by the match)…it is a case of a badly applied choice technique.

Basically, the quiz worked by asking a series of questions to see where she stood on the issues, then aligning her answers with the positions taken by each candidate (if you want to try one, here is one from this year’s GOP primaries). In essence it is a Configurator: instead of building the perfect product (as you would with a Configurator), you build the perfect candidate. There are a couple of problems with this application.

First, Configurators allow you to build the ideal but generally don’t give a clear idea of what choices you might make if that ideal were not available (our proprietary Texo™ helps overcome that issue). In politics it is not unusual for voting decisions to hinge on a single issue and unlike products you can’t decide to add or subtract an important feature.  

I’ll give you the simple answer. Surveys!

No, I don’t mean looking at whatever survey happens to catch your eye or tickles your (or your favorite network or blog’s) ideological fancy. I mean using a system that is powered by old-fashioned surveys and produces very, very good explanations and predictions based on them. There is someone who has been doing exactly that for several years now, and it makes sense for anyone interested in surveys to understand how he does it. I’m talking, of course, about Nate Silver at fivethirtyeight.com.

Interestingly, Silver does not actually do a single survey himself. Instead, he has built a database containing thousands of surveys and uses some simple, clear rules to analyze them. Based on these rules and the statistical models he has built, he is able to provide the best, unbiased view of the race. All this from survey data. How does he do it? Let’s take a look at some (and by no means all) of his rules.
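One common ingredient in poll-aggregation systems of this kind is an average that weights each survey by its sample size and its recency. This is only a sketch of that general idea, not Silver's actual model, and every number below is invented:

```python
# Toy poll average: weight each poll by sqrt(sample size) -- precision
# grows roughly with the square root of n -- and decay older polls
# exponentially. All figures are hypothetical.
polls = [
    # (candidate share %, sample size, days since fielded)
    (51.0, 1200, 2),
    (49.5,  800, 5),
    (52.5,  600, 12),
]

HALF_LIFE = 7.0  # days for a poll's weight to halve (assumed)

def weight(n, age_days):
    return (n ** 0.5) * 0.5 ** (age_days / HALF_LIFE)

total_w = sum(weight(n, d) for _, n, d in polls)
average = sum(share * weight(n, d) for share, n, d in polls) / total_w
print(f"weighted average: {average:.2f}%")
```

The point is not the particular weighting function but the discipline: every poll goes into the pot under the same transparent rules, rather than cherry-picking the one that flatters your side.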

When we dropped my daughter off for her first year of college a few weeks back my parting words were “Be true to yourself”. I thought this reflected both my accepting that my influence on her was now very limited and my hope that whatever good I’ve done should be put into practice. It strikes me that researchers too should heed the advice.

Our industry has changed and continues to change. Many of the old rules either no longer work or can’t be easily applied to the new tools at our disposal. So how can we apply what we know? A philosophy like “be true to yourself” allows us to do just that.

Personally, it has allowed me to accept that representative sampling is no longer the most critical rule (it can’t be in a world where truly representative sampling is too slow and costly). That doesn’t mean I take any respondents I can get…care in getting as representative a sample as we can remains important. It just isn’t an iron-clad requirement of quantitative research.
