Category Archives: Research

The Key to Making Market Research Actionable, Part II

In the first part of this two-part series we discussed the importance of timing in ensuring that market research is actionable.  The moral was to time the arrival of research (e.g., win/loss reporting, NPS surveys, awareness and marketing funnel studies) with the cadence of your company’s quarterly and annual strategic goal setting process.

Research that arrives asynchronously gets read (if you’re lucky) and then forgotten.  Research that arrives synchronously becomes a homework assignment for the meeting and a session on the agenda.  That way, its findings are top-of-mind when you sit down to decide priorities and hammer out OKRs.

In part II, we’ll take a more strategic look at the question.  Ultimately, to make market research actionable, you need to ensure five things:

  • Good timing.  It must show up at a time when you’re ready to absorb and action it.  The subject of the first post in this series.
  • High relevance.  It needs to help answer your most important strategic questions.
  • Action-oriented framing.  Work to ask questions in a way that provides action-oriented answers.  You can ask, “do you have plans to move to the cloud in the next five years?” or you can ask, “do you plan to move to the cloud in the next year, and if so, what are your top three evaluation criteria?”
  • Time for consensus building.  You can’t just spit out the answer from a black box.  At each stage of the process, you want discussion and buy-in, so that when the end is reached, people feel the process was valid and accept the conclusions.
  • A qualitative component.  Quantitative answers what, but not why.  Qualitative can lead to understanding why.  Pair surveys with interviews for this reason.

Put differently, as my friends at Topline Strategy say, market research that gets turned into action is market research that was designed from the outset to be actionable.

Let’s drill into relevance and action-orientation a bit more.  To ensure you’re asking relevant questions, you should do two things.

First, create what I call the hypothesis file.  This is a file where you write down, over the course of the year, every time you hear an assertion or a hypothesis that you’d love to validate.  Examples:

  • The problem is nobody’s ever heard of us.  We’re just not seeing enough deals.
  • The issue is we’re not making the short list, and that’s because we’re not seen as a leader.
  • We can’t sell use-case A because we’re seen as weak on features 1, 2, and 3.
  • The leakage point in our funnel is demo.  We lose too many deals there and that’s because of our UI.
  • We’re not speaking to the business buyer’s priorities.
  • Everyone’s tired of talking about, say, data culture; we need a new message.
  • If we just focused on BigCo replacements, we could do the numbers on that alone.

These are rarely offered as hypotheses.  They’re usually statements, often presented as self-evident facts.  You need to tune your ears to hear them and write them down.  Don’t fight every one in real time.  But be keenly aware that these are the foundations of your internal corporate mythology — and it’d sure be nice to know if they’re true or not.

When it’s time to do a market study, review the questions in the file and decide which ones you want answered.  Picking the hottest questions will guarantee that people will be champing at the bit to see the results.

Second, to ensure high relevance, try to identify the core strategic questions you’re facing, whether they appear in the hypothesis file or not.  Such as:

  • In which segment are we really most successful, not just at winning deals but renewing and expanding them?
  • If our market is transitioning to a platform, what are the key elements that must be included?  Where do customers want us to partner so they can buy best of breed?
  • How can we easily regain some product differentiation that matters to customers with our limited R&D capacity?
  • Is our Microsoft partnership a key asset on which to double down or a liability to unwind?
  • Will our category be entirely absorbed into a broader suite or can we sustain a moat of product differentiation to protect us?

By debating these questions, and which ones to include in the study, you again guarantee a high degree of interest in learning the answers once they are available.

To ensure you’re asking action-oriented questions, as you build the study you need to ask yourself, question by question, “what would I do differently based on the answer?”

  • For multiple choice, what would I do differently if the majority answer were A vs. C?
  • For rankings, what would I do knowing the top three ranked choices were 1, 2, and 3?  Or the bottom three were 7, 8, and 9?
  • For progression questions, what would I do if I noticed everyone was dropping out at stage 3 of our funnel?
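To make the progression example concrete, here’s a minimal sketch of spotting a funnel’s leakage point by computing stage-to-stage conversion rates.  The stage names and counts are hypothetical, not figures from this post:

```python
# Hypothetical funnel counts; stage names and numbers are illustrative only.
funnel = [("Lead", 1000), ("MQL", 400), ("Demo", 300), ("Proposal", 60), ("Win", 30)]

# Stage-to-stage conversion rates reveal where prospects drop out.
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {n / prev_n:.0%}")
# Here the Demo -> Proposal step converts at only 20%, making it the leakage point.
```

The lowest conversion rate marks the stage to investigate, which is exactly the kind of finding that begs the follow-up question “why?”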

Sometimes, the answer is to ask more questions.  So build follow-up and drill-down questions into the survey.  Sometimes, the answer is that you’re circling the wrong question.  Think:  we asked a lot of questions that will help us determine the TAM.  Say we conclude it’s $20B.  Then what?  I think we’re asking the wrong question.  I don’t want to know what the TAM is; I want to know the velocity with which it’s coming to market.  Ultimately, how many deals will happen in the space next year, and how many of those do we need to participate in to make our numbers?

Simply put, you can research how big the TAM is, or you can research how much of it is coming to market next year.  The latter is a lot more actionable than the former.
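As a back-of-the-envelope illustration of why the velocity framing is more actionable, the question reduces to simple arithmetic.  All figures below are hypothetical assumptions, not numbers from this post:

```python
# Hypothetical back-of-the-envelope: how much of next year's market must we win?
# All inputs are illustrative assumptions.

def required_market_share(deals_next_year, avg_deal_size, revenue_target, win_rate):
    """Return (deals to win, share of market deals we must compete in)."""
    deals_to_win = revenue_target / avg_deal_size     # deals we must close
    deals_to_compete_in = deals_to_win / win_rate     # deals we must be in to close those
    return deals_to_win, deals_to_compete_in / deals_next_year

# Say 2,000 deals come to market next year at a $500K average size,
# we need $50M in new bookings, and we win 25% of deals we compete in.
wins, participation = required_market_share(2_000, 500_000, 50_000_000, 0.25)
print(wins)           # 100.0 -> deals we must win
print(participation)  # 0.2  -> must compete in 20% of next year's deals
```

A $20B TAM tells you none of this; a deal-velocity estimate feeds directly into the calculation above.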

So let’s wrap up.  If we want to ensure that our market research is actionable, then we need to:

  • Time its arrival
  • Study the right questions
  • Ask those questions in a way that provides action-oriented answers

The Key to Making Market Research Actionable

Ever heard any of these?

  • “Yes, we run a quarterly customer satisfaction (CSAT) and NPS survey, but I feel like we don’t really take any action based on it.”
  • “Win/loss reporting, well yes, we do it, but I don’t think it’s very effective and I don’t think we do much in response to the data.”
  • “Wow, we ran an amazing overall market study last year, but I can’t think of a single strategic change made as a result of the findings.”

What’s wrong here?  Why are we going to the trouble of doing good market research, but not making any use of it?

Sure, sometimes it’s politics or change resistance or the ostrich effect.

But more often than not, in my experience, the reason is simple:  timing.  Companies know they should do this research.  They know their peers do.  They know it’s a best practice.  So they do it.

But it arrives asynchronously.  Like the win/loss report that arrives the week after the ops review.  Or the strategic analysis that arrives six months before the strategy offsite.  Or the CSAT survey that arrives mid-quarter.

When these deliverables arrive asynchronously, people do their best to read them.  They share them on Slack or email and make a few interesting observations.  But then they forget them.  There is so much other data.  And so much else to do.

How do we fix this?  Simple.  Fix the timing.  If your company has a quarterly business review (QBR) where next-quarter OKRs are discussed and assigned, then ensure the CSAT report arrives days before that meeting.  Better yet, make the CSAT report review a standing item on the QBR agenda.  Along with a review of the win/loss report.  Time the annual state-of-the-market study so it arrives a week or two before the strategy offsite.

Don’t fight your company’s cadence.  Instead, slide into it.  Design the surveys to be actionable and time them so they can be.  Work with your vendors to move to new timing.  Otherwise, you’re spending $50K to dump a report into a drawer (or a PDF into a shared drive).

Market research doesn’t improve with age.  Build it and time it appropriately, and you’ll find that it’s a lot more actionable than you might think.

For the second part of this series, see The Key to Making Market Research Actionable, Part II.

(Expanded to two-part series on 2/14/23)

Video of my Presentation at SaaStr 2021: A CEO’s Guide to Marketing

About two weeks ago I spoke at SaaStr Annual 2021, giving a presentation entitled A CEO’s Guide to Marketing, which discusses why marketing is sometimes seen as a dark art and then discusses 5 things that every CEO (and startup exec) should know about marketing in order to work best with the marketing team.

The slides from that session are here.  Below please find a video captured as part of the Stage A stream.  I start presenting at 8:01 and go for 30 mins.

Thanks for watching!

The Dogshit Bar: A Memorable Market Research Concept

I can’t tell you the number of times I’ve seen market research that suffers from one key problem.  It goes something like this:

  • What do you think of PRODUCT’s user interface?
  • Do you think PRODUCT should be part of suite or a standalone module?
  • Is the value of PRODUCT best measured per-user or per-byte?
  • Is the PRODUCT’s functionality best delivered as a native application or via a browser?
  • Would you like PRODUCT priced per-user or per-consumption?
  • Rank the importance of features 1-4 in PRODUCT.

The problem is, of course, that you’ve never asked the one question that actually matters (would you buy this product?) and are presupposing the need for the product, and that someone would pay something to fulfill that need.

So try this:  substitute “Dogshit Bar” (i.e., a candy bar made of dog shit) for every instance of PRODUCT in one of your market research surveys and see what happens.  Very quickly, you’ll realize that you’re asking questions equivalent to:

  • Should the Dogshit Bar be delivered in a paper or plastic wrapper?
  • Would you prefer to buy the Dogshit Bar in a 3, 6, or 9 oz size?
  • Should the Dogshit Bar be priced by ounce or some other metric?

So before drilling into all the details that product management can obsess over, step back, and ask some fundamental questions first.

  • Does the product solve a problem faced by your organization?
  • How high a priority is that problem?  (Perhaps ranked against a list of high-level priorities for the buyer.  It’s not enough that it solves a problem, it needs to solve an important problem.)
  • What would be the economic value of solving that problem?  (That is, how much value can this product provide.)
  • Would you be willing to pay for it and, if so, how much?  (Which starts to factor in not just value but the relative cost of alternative solutions.)

So why do people make this mistake?

I believe there’s some feeling that it’s heretical to ask the basic questions about the startup’s core product or the big company’s new strategic initiative that the execs dreamed up at an offsite.  While the execs can dream up new product ideas all day long, there’s one thing they can’t do:  force people to buy them.

That’s why you need to ask the most basic, fundamental questions in market research first, before proceeding on to analyzing packaging, interface, feature trade-offs, platforms, etc.  You can generate lots of data to go analyze about whether people prefer paper or plastic packaging or the 3, 6, or 9 ounce size.  But none of it will matter.  Because no one’s going to buy a Dogshit Bar.

Now, before wrapping this up, we need to be careful of the Bradley Effect in market research, an important phenomenon in live research (as opposed to anonymous polls) and one of several reasons why pollsters generally called Trump vs. Clinton incorrectly in the 2016 Presidential election.

I’ll apply the Bradley Effect to product research as follows:  while there are certain exception categories where people say they won’t buy something that they actually will (e.g., pornography), in general:

  • If someone says they won’t buy something, then they won’t
  • If someone says they will buy something, then they might

Why?  Perhaps they’re trying to be nice.  Perhaps they do see some value, but just not enough.  Perhaps there is a social stigma associated with saying no.

I first learned about this phenomenon reading Ogilvy on Advertising, a classic marketing text by the father of advertising, David Ogilvy.  Early in his career, Ogilvy got lucky and learned an important lesson.  While working for George Gallup, he was assigned to do polling about a movie entitled Abe Lincoln in Illinois.  While the research determined the movie was going to be a roaring success, the film ended up a flop.  Why?  The participants lied.  After all, who wants to sound unpatriotic and tell a pollster that they won’t go see a movie about Abe Lincoln?  Here’s a picture of Ogilvy doing that research.  Always remember it.