We are about to launch a new product called Idea Mill™, which uses a quantitative system to generate and evaluate ideas in a single step. Our goal was to create a fast and inexpensive means of generating ideas. Since each additional interview we conduct adds cost, we wondered what the ideal number would be.
To determine that, we ran a test in which we asked 400 respondents for an idea. We then coded the responses into four categories:
Unique Ideas – Something that no other previous respondent had generated.
Variations on a Theme – An idea that had previously been generated but this time something unique or different was added to it.
Identical – Ideas that didn’t add anything significantly different from what we’d seen before.
No Answer – Respondents who had nothing to offer.
Our assumption was that at some point we would see a drop in the number of new ideas generated and a rise in duplication of effort, and that is exactly what we saw:
After just 150 respondents, only 12% were coming up with new ideas. At 250, that dropped to just 6%. By the same token, repeat ideas accounted for more than half of all responses after just 200 interviews.
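The tallying described above is simple to reproduce. The sketch below, with invented category labels standing in for the study's actual coded data, shows one way to compute each category's share across successive batches of respondents so you can watch "unique" decline and "identical" grow as interviews accumulate (the function and data are illustrative, not part of the Idea Mill™ system):

```python
from collections import Counter

# The four codes mirror the article's coding scheme.
CATEGORIES = ("unique", "variation", "identical", "no_answer")

def category_shares(codes, batch_size=50):
    """Share of each category within successive batches of respondents.

    `codes` is one category label per respondent, in interview order.
    Returns a list of (respondents_so_far, {category: share}) tuples,
    one per batch.
    """
    results = []
    for start in range(0, len(codes), batch_size):
        batch = codes[start:start + batch_size]
        counts = Counter(batch)
        shares = {c: counts.get(c, 0) / len(batch) for c in CATEGORIES}
        results.append((start + len(batch), shares))
    return results

# Toy example (invented labels, not the study's data): unique ideas
# cluster early, identical ideas cluster late.
codes = (["unique"] * 30 + ["variation"] * 10 + ["identical"] * 10
         + ["unique"] * 5 + ["variation"] * 10 + ["identical"] * 30
         + ["no_answer"] * 5)
for n, shares in category_shares(codes, batch_size=50):
    print(n, round(shares["unique"], 2), round(shares["identical"], 2))
```

With the toy data, the unique share falls from 60% in the first batch of 50 to 10% in the second, while identical ideas rise from 20% to 60% — the same declining-returns pattern the study observed.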
So, given the relatively low return of ideas (1 out of 8 respondents), does it make sense to keep going after 200? The answer is a great big "it depends":
Even one idea in eight (or more) could justify the expense if it is a good one.
We might learn more even if we are not generating purely new ideas.
In terms of learning, the "variations on a theme" category points to some real possibilities. Even as the number of unique ideas slid and the number of identical ideas grew, "variations" held pretty consistent: roughly 15% of respondents provided an idea that we already had, but with a new take on it. These might have been additional benefits, or just different ways of achieving the same benefit.
Data like these are worthy of consideration, especially in a world of tight budgets and timelines. It is well worth questioning the need to, for example, ask the same open-ended question 1,000 times (unless you are planning to cut the coded data into sub-segments, you probably won't learn much from the 1,000th respondent that you don't already know).
Before rushing off to use these data, however, it is important to understand how they were generated. First, we used our proprietary gamification tool, Smart Incentives™, which generates more, and better, ideas in response to questions that require creativity. So chances are a more standard "Why did you choose to purchase product X?" type of question would require more respondents before you saw declining returns.
Second, this experiment was part of a larger one in which we also asked respondents to evaluate the ideas others had presented. To do this, each respondent was shown 6 ideas and asked to choose the best using our proprietary choice methods. Only then were they asked to generate their own idea. This and past efforts showed us that doing it in this order actually reduces duplication: respondents rarely offer the same idea, or a variation on the idea, they have just seen. It is impossible to say whether a more standard question (even an ideation one) would reach declining returns sooner or later than we saw here.
My point, however, remains valid. Budgets don't allow us to do everything we want, so it is best to question the value of each and every aspect of your research project.
Rich brings a passion for quantitative data and the use of choice to understand consumer behavior to his blog entries. His unique perspective has allowed him to muse on subjects as far afield as dinosaurs and advanced technology, with insight into what each can teach us about doing better research.