Is the Mini Cooper seen as an environmentally friendly car? What about Tesla as a luxury car? The traditional approach to understanding these questions is to conduct a survey among Mini and Tesla buyers (and perhaps non-buyers too, if budget allows). Such studies have been conducted for decades and often involve ratings of multiple attributes and brands. While certainly feasible, they can be expensive and time-consuming, and their results grow stale over time. Is there a better way to get at attribute perceptions of brands that is fast, economical and automated?
Aron Culotta and Jennifer Cutler describe such an approach in a recent issue of the INFORMS journal Marketing Science, and it involves the use of social media data – Twitter, in this case. Their method is novel because it does not use conventional (if one can use that term here) approaches to mining textual data, such as sentiment analysis or associative analysis. Sentiment analysis (social media monitoring) provides reports on positive and negative sentiments expressed online about a brand. In associative analysis, clustering and semantic networks are used to discover how product features or brands are perceptually clustered by consumers, often using data from online forums.
Breaking away from these approaches, the authors use an innovative method to understand brand perceptions from online data. The key insight (drawn from well-established social science findings) is that proximity in a social network can be indicative of similarity. That is, by measuring how closely a brand is connected to organizations that exemplify a certain attribute, it is possible to devise an affinity score that shows how highly the brand scores on that attribute. For example, when a Twitter user follows both Smart Car and Greenpeace, it likely indicates that Smart Car is seen as eco-friendly by that person. This does not have to be true for every such user, but at “big data” levels there is likely to be a strong enough association to extract signal from the noise.
What is unique about this approach to using social media data is that it does not really depend on what people say online (as other approaches do). It relies only on who is following a brand while also following another (exemplar) organization. The strength of the social connection becomes a signal of the brand’s strength on a specific attribute. “Using social connections rather than text allows marketers to capture information from the silent majority of brand fans, who consume rather than create content,” says Jennifer Cutler, who teaches marketing at the Kellogg School of Management at Northwestern University.
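To make the co-following idea concrete, here is a minimal sketch. This is not the authors’ exact estimator, just an illustration with invented follower sets: score a brand on an attribute by the average share of its followers who also follow “exemplar” accounts for that attribute.

```python
# Toy illustration of a co-follower affinity score. In practice the
# follower sets would come from the Twitter API; these are invented.

def affinity_score(brand_followers, exemplar_follower_sets):
    """Mean proportion of the brand's followers who also follow each exemplar."""
    if not brand_followers:
        return 0.0
    overlaps = [
        len(brand_followers & exemplars) / len(brand_followers)
        for exemplars in exemplar_follower_sets
    ]
    return sum(overlaps) / len(overlaps)

# Hypothetical eco-friendliness exemplars (e.g., environmental nonprofits)
greenpeace = {"u1", "u2", "u3", "u4"}
sierra_club = {"u2", "u3", "u5"}
eco = [greenpeace, sierra_club]

smart_car = {"u1", "u2", "u3", "u9"}     # heavy overlap with eco accounts
luxury_suv = {"u7", "u8", "u9", "u10"}   # no overlap

print(affinity_score(smart_car, eco))    # higher score
print(affinity_score(luxury_suv, eco))   # lower score
```

At real scale the same computation runs over millions of followers, which is where the noise-averaging the authors rely on kicks in.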
Sounds great in theory, right? But how can we be sure that it produces meaningful results? By validating it with the trusted survey data that has been used for decades. When tested across 200+ brands in four sectors (Apparel, Cars, Food & Beverage, Personal Care) and three perceptual attributes (Eco-friendliness, Luxury, Nutrition), an average correlation of 0.72 shows that social connections can provide very good information on how brands are perceived. Unlike survey data, this approach can be run continuously and at low cost, with results delivered in near real time. And there is another advantage. “The use of social networks rather than text opens the door to measuring dimensions of brand image that are rarely discussed by consumers in online spaces,” says Professor Cutler.
I’ve become a huge fan of podcasts, downloading dozens every week and listening to them on the drive to and from work. The quantity and quality of material available is incredible. This week another podcast turned me on to eBay’s podcast “Open for Business”. Specifically the title of episode three “Price is Right” caught my ear.
While the episode was of more use to someone selling a consumer product than to someone selling professional services, I got a lot out of it.
First off, they highlighted their “Terapeak” product which offers free information culled from the massive data set of eBay buyers and sellers. For this episode they featured how you can use this to figure out how the market values products like yours. They used this to demonstrate the idea that you should not be pricing on a “cost plus” basis but rather on a “value” basis.
From there they talked about how positioning matters and gave a glimpse of a couple of market research techniques for pricing. In one case, it seemed like they were using the Van Westendorp Price Sensitivity Meter. The results indicated a range of prices that was far below where they wanted to price things. This led to a discussion of positioning (in this case, the product was an electronic picture frame which they hoped to position not as a consumer electronics product but as home décor). The researchers here didn’t do anything to position the product, so consumers compared it to an iPad, which led to the unfavorably low price range.
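For readers unfamiliar with it, the Van Westendorp technique asks each respondent four price questions (too cheap, cheap/bargain, expensive, too expensive) and reads price points off the crossings of the resulting cumulative curves. Here is a toy sketch with invented responses; note that exact definitions of the crossing points vary somewhat across practitioners, so this shows just two of the commonly cited ones.

```python
# Toy Van Westendorp sketch. Each tuple is one respondent's four price
# thresholds in dollars: (too_cheap, cheap, expensive, too_expensive).
responses = [
    (30, 50, 90, 120),
    (20, 40, 60, 80),
    (50, 70, 100, 140),
    (70, 80, 110, 150),
    (20, 60, 70, 130),
]
prices = range(10, 201, 10)  # price grid to evaluate
n = len(responses)

def frac(pred):
    """Cumulative share of respondents for whom pred holds at price p."""
    return lambda p: sum(pred(r, p) for r in responses) / n

too_cheap = frac(lambda r, p: p <= r[0])  # falls as price rises
cheap     = frac(lambda r, p: p <= r[1])  # falls as price rises
expensive = frac(lambda r, p: p >= r[2])  # rises with price
too_exp   = frac(lambda r, p: p >= r[3])  # rises with price

def crossing(falling, rising):
    """First grid price where the rising curve overtakes the falling one."""
    return next(p for p in prices if rising(p) >= falling(p))

opp = crossing(too_cheap, too_exp)   # "optimal price point"
ipp = crossing(cheap, expensive)     # "indifference price point"
print("Optimal price point:", opp)
print("Indifference point:", ipp)
```

With real data you would interpolate between grid points rather than snap to the grid, but the mechanics are the same.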
Finally, they talked to another researcher who indicated that she uses a simple “yes/no” technique…essentially “would you buy it for $XYZ?” She said that this matched the marketplace better than asking people to “name their price”.
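The yes/no approach can also be sketched in a few lines. This is a toy version with invented answers: tally the share of yeses at each tested price to trace a demand curve, then pick the tested price that maximizes price times purchase likelihood.

```python
# Toy sketch of the "would you buy it for $X?" technique: each
# respondent sees one price and answers yes or no. All data invented.
from collections import defaultdict

answers = [  # (price shown, bought?)
    (40, True), (40, True), (40, False),
    (60, True), (60, False), (60, True),
    (80, True), (80, False), (80, False),
    (100, False), (100, False), (100, True),
]

counts = defaultdict(lambda: [0, 0])  # price -> [yes count, total asked]
for price, yes in answers:
    counts[price][0] += yes
    counts[price][1] += 1

# Share saying yes at each price approximates the demand curve.
demand = {p: y / total for p, (y, total) in counts.items()}

# Expected revenue per shopper is price x purchase likelihood.
best = max(demand, key=lambda p: p * demand[p])
print("Demand curve:", demand)
print("Revenue-maximizing test price:", best)
```

Real studies use far more respondents per price point, of course, but the logic of matching the marketplace (a purchase decision at a stated price) is exactly what the researcher described.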
Of the two methods cited, I tend to go with the latter. Any reader of this blog knows that I favor questions that mimic the marketplace over strange questions that you wouldn’t consider in real life (“What’s the most you would pay for this?”). Of course, there are a ton of options that were not covered, including conjoint analysis, which I think is often the most effective means to set prices (see our White Paper - How to Conduct Pricing Research for more).
Still, there was much that we as researchers can take from this. As noted, it is important to frame things properly. If the product will be sold in the home décor department, it is important to set the table along those lines and not allow the respondent to see it as something else. I have little doubt that if the Van Westendorp questions had been preceded by proper framing and messaging, the results would have been different.
I also think big data tools like Terapeak and Google Analytics are something we should make more use of. Secondary research has never been easier! In the case of pricing research, knowing the range of prices being paid now can provide a good guide on what range of prices to include in, say, a Discrete Choice exercise. This is true even if the product has a new feature not currently available. Terapeak allows you to view prices over time, so you can see the impact of the last big innovation, for example.
Overall, I commend eBay for their podcast. It is quite entertaining and provides a lot of useful information…especially for someone starting a new business.
If you open your mailbox today, chances are that there will be a catalog in it. Even with the explosion in online purchasing, paper catalogs continue to be an important part of the retail marketing mix. Whether they spur traditional mail- or telephone-ordering or, more often now, online purchasing and even foot traffic in brick and mortar stores, catalogs remain critical for retailers. They not only show consumers what is available, but they also serve as an important branding tool.
Even if the recipient does not open or thoroughly review a catalog, its cover, its size and the kind of paper it is printed on can all telegraph meaning about the sender's brand.
But isn't there much more to be gained if the consumer does open the catalog?
Based on an online survey among a panel of consumers nationwide, TRC estimates that the average household receives 3.7 catalogs per week. That is nearly 200 in the course of a year!
So how can catalog marketers break through the mailbox clutter and inspire consumers to look at what is actually inside their materials? We asked our national panel about some factors that influence their decisions to open (or not open) a catalog they receive. A key learning is something catalog marketers would certainly confirm: targeting is critical. Product interest and perceived need account for a large share of the decision to open a catalog, so getting the catalog to the right person is of course essential.
But once the catalog is in the right mailbox, it is clear that what the recipient sees on its cover will be important in whether or not the catalog is opened. First and foremost is the specific offer (sale, percent off, etc.) highlighted on that cover. Cover imagery also plays a role, particularly if the brand is familiar to the recipient.
Take a look at the accompanying chart, and note that we asked some respondents to think about catalogs they might receive from familiar companies, while others considered catalogs from companies they had not heard of before. All of those answering had indicated earlier in the survey that they receive and open/look through catalogs in a typical week.
Knowing that the cover can be so important in whether a catalog is opened, TRC believes it is well worth it to devote resources to ensure that the right cover is used. While some catalog marketers will test multiple covers prior to full mail launches, it is impractical to test more than just a few. Those few are typically selected from among a broader set – based on “gut feel” or simple preferences on the part of the design team.
But what if there was an efficient, consumer data driven method to select a “winning” cover from among a broad set of candidates? TRC has developed just that method: our approach leverages our proprietary Bracket™ survey technology to submit a large number of cover designs to a tournament-type evaluation that yields rankings and relative distance across the entire set of designs. An even more streamlined approach, Message Test Express™ or MTE™, can provide similar insights for up to 16 cover designs – in around a week and for a cost of approximately $10,000.
Considering the volume that any catalog must compete against in the typical recipient’s mailbox, isn’t it practical to maximize the likelihood that the catalog will be opened? Concise, consumer-driven metrics on likely success have been shown in our experience to be superior to “gut feel” evaluations and are certainly more affordable than in-market testing of even a small number of options. Why risk missing a great opportunity by overlooking an optimal cover execution?