MaxDiff: the secret to getting the truth from customers about what they actually care about
Because asking customers to rank features or attributes on a scale from 1-5 is so 2000-and-late.
All signs pointed to evaluating pricing and doing more product discovery to really understand what customers were struggling with.
And the founder, to his credit, only put up a small fight and yielded with a thoughtful pause and a smile.
“Okay, Asia. Let’s run the survey. Perhaps I’ve assumed for too long what customers want from us.”
He runs a SaaS product with a tiny technical team in the productivity space — an industry I would not wish on my worst enemies given its competition and investment necessary to avoid the long, slow SaaS ramp of death.
The greatest growth challenge he faced?
A 40% net cohort revenue retention rate after 12 months.
This meant that every year, he’s gotta replace more than half of his revenue. And without a marketing team or enough funds to invest in growth, it also meant that growing the business would feel like a struggle unless he was able to retain more customers long-term (and therefore revenue).
After some digging, it seemed part of the culprit was that pricing had been changed a few times in prior years without a real strategy.
The other part, however, was a lack of focus from a GTM perspective.
They didn’t have a clear customer profile with a clear story to tell, and this was reflected in the marketing website, the onboarding experience, and the product roadmapping process.
The fastest way to understand this is of course to conduct customer interviews, but given the client’s budget, he opted for a survey instead.
The Old Way
Our objectives were clear. We needed to understand a few things:
Who their best-paying customers actually were
What they actually valued about the product
What features they wanted to see from the product in the future
How price-sensitive they were
What aspects of the product they’d be more willing to pay for
Many of these questions are pretty straightforward.
Add in a question around their experience-level with solutions like this. Ask about their role, type of company, industry, etc. Ask about their struggle moments that led to looking for a solution and what they hoped to accomplish.
But when we get to what they value about the product and what features they want to see, we’ll notice there are a few ways this can go wrong.
So normally, in a survey context, the temptation for a project like this would be to ask a question like “Which of these features are more valuable to you? Please rank each one on a scale from 1-5 with 5 being extremely important and 1 being not important at all.”
And you end up with a chart like this:
And then you’re like, “Ooooh. Okay. Looks like pretty much everything is at least really important — except for `Tracking progress`, but that still seems to be at least important.”
You walk away assuming that from a feature perspective and value perspective, customers seem to value all of it.
You do your best to prioritize this information when you go to your product roadmap, but you struggle a little with knowing which one actually comes first. You assume it’s “user-friendly interface” based on the chart, but you’re skeptical that an average of “4.73” is really that different from “4.45.”
When you get to the marketing website, you’d ideally like to prioritize the messages on the page based on the survey, but again you’re at a loss, since slapping “user-friendly” at the top of the page seems pointless and “Overview of Projects” and “Visual customization of charts” both got close scores.
If that sounds like you or something you’ve experienced before, just know that it’s not you — it’s those damn Likert scales and their cloaks of false-comfort when you analyze the results later.
Likert scales were originally designed to help researchers understand the range of sentiment among respondents. It’s a way of understanding and observing the underlying psychographics of the people being surveyed.
Ideally, Likert is only used in scenarios where each question is something the respondent either has a strong reaction to or a relatively neutral one. For example, the statement “I believe I am a consistent person” may draw a wide range of responses, from Strongly Disagree to Neutral to Strongly Agree on a 5- or 7-point scale.
The problem is when marketers, product strategists, and founders try to use it in a context where each question doesn’t elicit a strong reaction — firmly placing you in the land of “everything seems important and I’m not sure which way is up.”
Here, the question might sound like “How would you rate the importance of user-friendliness in our product?” with options ranging from Not Important At All to Extremely Important.
Except this, of course, just gives us a chart like the above rather than something that’s actionable.
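To make that wash-out concrete, here’s a minimal sketch with entirely made-up Likert responses for two hypothetical features — the feature names and numbers are illustrative, not real survey data:

```python
# Hypothetical 1-5 Likert ratings for two features. Notice how
# "importance" questions cluster near the top of the scale.
ui_scores = [5, 5, 4, 5, 4, 5, 5, 4, 5, 5]        # "user-friendly interface"
overview_scores = [4, 5, 4, 5, 4, 4, 5, 5, 4, 5]  # "overview of projects"

mean_ui = sum(ui_scores) / len(ui_scores)
mean_overview = sum(overview_scores) / len(overview_scores)

print(mean_ui, mean_overview)  # 4.7 vs 4.5 -- close enough to tell you nothing
```

Both averages land near the ceiling of the scale, which is exactly the “everything seems important” trap.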
The New Way: MaxDiff survey
Enter the MaxDiff question.
MaxDiff is pretty simple: instead of having customers rate each attribute or feature on a Likert scale, ask them which one attribute is most important to them and which one attribute is least important.
Then, for each attribute, you subtract the number of people who picked it as Least Important from the number who picked it as Most Important across all the responses. You’re taking the “difference” and forcing a clear prioritization across the attributes.
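That counting step can be sketched in a few lines of Python — the attribute names and responses below are hypothetical stand-ins, not data from the actual survey:

```python
from collections import Counter

# Each respondent picks one "most important" and one "least important"
# attribute from the set they were shown.
responses = [
    {"most": "Overview of projects", "least": "Resource management"},
    {"most": "Overview of projects", "least": "Tracking progress"},
    {"most": "Visual customization", "least": "Resource management"},
    {"most": "Tracking progress",    "least": "Visual customization"},
    {"most": "Overview of projects", "least": "Resource management"},
]

most = Counter(r["most"] for r in responses)
least = Counter(r["least"] for r in responses)

# MaxDiff score = (# times chosen most) - (# times chosen least)
attributes = set(most) | set(least)
scores = {a: most[a] - least[a] for a in attributes}

for attr, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(attr, score)
```

Unlike the Likert averages, the scores spread out: clear winners go positive, clear losers go negative, and ties sit at zero.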
When you analyze the data, you end up instead with a chart that looks like this:
Now this is far more helpful than whatever that other chart was doing. I can look at this chart above and know exactly where to spend my time (probably the Overview of the projects) and what to de-prioritize (probably not the asset or resource management stuff).
But what’s even better is if you know that you can segment this data further to get even more interesting insights (because… you know… you read The Work and you know that I’ve already taught you how to segment stuff and not just settle for the high-level shenanigans 😉).
Segmenting for further insights
Now that I’ve asked this question along with other questions in my survey, I can segment based on other data attributes I have such as:
LTV
Persona or segment of the customer
Current lifecycle status (such as trialist, free, paying, churned, etc.)
Industry
Role
Sentiment about the product (such as NPS or affinity)
Literally whatever else you want
That allows me to see what’s more or less important in various scenarios. Who could have guessed, for example, that Industry 3 valued this product differently than users in Industry 2 (specifically the ability to track progress)?
Or that companies from Industry 1 appeared to be the least price-sensitive across all the potential segments?
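Assuming each survey response is tagged with a segment attribute like the ones listed above, the per-segment scoring might look like this — the industries and responses here are hypothetical, just to show the shape of the analysis:

```python
from collections import Counter, defaultdict

# Hypothetical responses, each tagged with a segment attribute (industry here).
responses = [
    {"industry": "Industry 2", "most": "Overview of projects", "least": "Tracking progress"},
    {"industry": "Industry 2", "most": "Visual customization", "least": "Tracking progress"},
    {"industry": "Industry 3", "most": "Tracking progress",    "least": "Resource management"},
    {"industry": "Industry 3", "most": "Tracking progress",    "least": "Visual customization"},
]

def maxdiff_by_segment(responses, segment_key):
    """Compute most-minus-least MaxDiff scores within each segment."""
    scores = defaultdict(Counter)
    for r in responses:
        scores[r[segment_key]][r["most"]] += 1
        scores[r[segment_key]][r["least"]] -= 1
    return {segment: dict(counts) for segment, counts in scores.items()}

by_industry = maxdiff_by_segment(responses, "industry")
```

Swap `"industry"` for any other attribute you captured (LTV tier, persona, lifecycle status) to slice the same scores a different way.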
Out of context, this information might not seem all that interesting, but for the founder, it completely changed the way he thought about his customers and his roadmap.
He’d been operating under the assumption that spending his time prioritizing all of these various segments would be in his best interest and building more and more features related to Visual customization of the charts and graphs would make him more money.
Once we ran this MaxDiff survey (along with a price sensitivity analysis), it was clear that more time and energy on certain aspects of the product wouldn’t necessarily drive more value for the customers or his bank account at the end of the day.
That, and it was also pretty clear that there was a specific industry and persona-type that seemed to value the product more than others (which meant a clearer GTM focus).
So what do you think? Will you be using the MaxDiff-style questions in your next survey? Sound off below 👇