
Three People To Call When You Need Help with Positioning

Lately, I’ve received some consulting inquiries where companies are asking for help with positioning and messaging.  While that’s definitely an area of interest and passion for me, my business model is advice-as-a-service (AaaS) — I work with a smaller set of companies, on a broader set of issues, over a longer period of time.  So I’m not really looking for such consulting projects myself.

Thus the purpose of this post is to offer a little quick advice on the subject and then refer readers to three people I’d recommend to help with positioning and messaging in enterprise software.

Quick advice:

The three people I’d call for help with positioning would be:

  • Crispin Read, the single best positioning and messaging person with whom I’ve ever worked.  With a scalpel of a marketing mind, he’s not going to tell you what you want to hear, but he will cut through the junk in your thinking and distill your message to its essence.  I’m not sure how much consulting he’s doing these days because he’s trying to drive scale with his product marketing community (PMMHive) and Product Marketing Edge.  But I’d ping him.
  • Jeffrey Pease, who runs an NY-based consulting business, Message Mechanics.  Like Crispin, he was on the marketing team at Business Objects back in the day, and he is very, very good at messaging.  He popped back into my life via Bluecore, who kept going on about this messaging wizard they loved working with — only for me to discover that I’d worked with him in the past.  Testimonials on Jeffrey’s website include Bluecore, Coupa, Veeva, and, well, Crispin (when he was at Microsoft).  So it’s really all just one big, happy positioning family.
  • April Dunford.  This one’s slightly premature — as I’ve not yet finished her book and haven’t worked with her yet.  But based on the part of the book I’ve read, her Twitter feed, and her work with related portfolio companies and PE sponsors, I am simply certain that we are kindred positioning spirits and that I’m going to love working with her — as we’re slated to do in upcoming months with one of my portfolio companies.

Good luck, happy positioning, and keep it simple out there.

Product Power Breakfast with Chris McLaughlin on Big/Small, US/Euro, and Marketing/Product

This week’s episode of the SaaS Product Power Breakfast is Thursday, June 10th, at 8am Pacific and we welcome a special friend and unique guest, Chris McLaughlin, currently CMO at France-based powerhouse LumApps, a collaboration and communications platform backed by top European investors including Idinvest and Goldman Sachs.

I got to know Chris when we worked together during his prior gig as joint CMO and CPO at Nuxeo, a France-based content services platform that had a great exit earlier this year to Thoma Bravo / Hyland Software, and where I sat on the board of directors for the past four years.

Chris has a unique background built on dualities, having worked:

  • As a senior executive for both US-based and European-based companies.
  • At both growth startups and large megavendors (e.g., EMC/Documentum, IBM/FileNet).
  • In leadership roles on both the Product and the Marketing side.

In this week’s episode we — and the audience — will ask Chris many questions, including:

  • How to get product and marketing working together, especially when they aren’t under a common boss.
  • How European startups should organize their go-to-market functions to enter and grow in the US market.
  • The role of both the product and marketing leaders in startups with either a technical or a business founder.
  • When to hire your first CPO and/or CMO.
  • How to align product, marketing, and sales around a strategy — and how to deal with the normal challenges in focusing that strategy.

See you there, Thursday 6/10 at 8 am Pacific — and bring a friend.

As always, the room will be recorded and posted.  We think of the show as a podcast recorded in front of a live studio audience.

SaaS Product Power Breakfast with Stephanie McReynolds on Category Creation

Please join us for our next episode of the SaaS Product Power Breakfast at 8am Pacific on 5/20/21 for a discussion with former Alation CMO Stephanie McReynolds on category creation and what she learned as she helped drive the creation of the data catalog category and establish Alation as its leader [1].

In addition to her gig at Alation, Stephanie’s had a great career at many leading and/or category-defining vendors including E.piphany, Business Objects, PeopleSoft, Oracle, Aster Data, ClearStory, and Trifacta.

Questions we’ll address include:

  • Does a vendor create a category or do market forces?
  • In creating a category do you lead with product or solution?
  • How do you know if you should try to create a category?
  • What role do industry analysts play in category creation?
  • What happens once you’ve successfully created a category?  What next?

This should be a great session on a hot topic.  See you there.  And if you can’t make it, the session will be available in podcast form.  We think of our show, like Dr. Phil, as a podcast recorded before a live (Clubhouse) studio audience.

# # #

[1] I am an angel investor in and member of the board of directors at Alation.

Navel Gazing, Market Research, and the Hypothesis File

Ask most startups about their go-to-market (GTM) these days and they’ll give you lots of numbers.  Funnel metrics.  MQLs, SQLs, demos [1], and associated funnel conversion rates.  Seen over time, cut by segment.  Win/loss rates and close rates as well, similarly sliced.  Maybe an ABM scorecard, if applicable.

Or maybe more financial metrics like customer acquisition cost (CAC) ratio, lifetime value (LTV), or net dollar retention (NDR) rate.  Maybe a Rule of 40 score to show how they’re balancing growth and profitability.
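For readers who want the arithmetic behind those acronyms, here’s a minimal sketch in Python, using hypothetical numbers and one common set of definitions (actual definitions vary from company to company, so treat it as illustrative rather than canonical):

# Hypothetical figures in $M; definitions of these metrics vary by company.
new_arr = 3.0                 # new ARR added in the period
sales_marketing_cost = 4.5    # S&M expense in the period
cohort_arr_year_ago = 20.0    # ARR of last year's customer cohort, a year ago
cohort_arr_today = 22.4       # ARR of that same cohort today
gross_margin = 0.75
annual_churn_rate = 0.10      # gross ARR churn rate
growth_rate = 0.35            # year-over-year revenue growth
fcf_margin = 0.05             # free cash flow margin

cac_ratio = sales_marketing_cost / new_arr           # S&M spent per $1 of new ARR
ndr = cohort_arr_today / cohort_arr_year_ago         # net dollar retention
ltv_multiple = gross_margin / annual_churn_rate      # simple LTV as a multiple of ARR
rule_of_40 = (growth_rate + fcf_margin) * 100        # growth % plus profitability %

print(f"CAC ratio: {cac_ratio:.2f}")          # 1.50
print(f"NDR: {ndr:.0%}")                      # 112%
print(f"LTV: {ltv_multiple:.1f}x ARR")        # 7.5x
print(f"Rule of 40 score: {rule_of_40:.0f}")  # 40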

And then you’ll have a growth strategy conversation and you’ll hear things like:

  • People don’t know who we are
  • But the people who know us love us
  • We’re just not seeing enough deals
  • Actually, we are seeing enough deals, but we’re not making the short list enough
  • Or, we’re making the short list enough, but not winning enough

And there are always reasons offered:

  • We’re not showing enough value
  • We’re not speaking to the economic buyer
  • We’re a vitamin, not a pain killer
  • We’re not aligned with their business priorities
  • People don’t know you can solve problem X with our solution
  • Prospects can’t see any differentiation among the offerings; we all sound the same [3]
  • They don’t see us as a leader
  • They don’t know they need one
  • They know they need one but need to finish higher priorities first

It’s an odd situation.  We are literally drowning in funnel data, but when it comes to actually understanding what’s happening, we know almost nothing.  Every one of the above explanatory assertions is an assumption.  They’re aggregated anecdotes [4].  The CRM system can tell us a lot about what happens to prospects once they’re in our funnel, but:

  1. We’re navel gazing.  We’re only looking at that portion of the market we engaged with.  It’s humbling to take those assertions and mentally preface them with:  “In that slice of the market that found us and engaged with us, we see XYZ.”  We’re assuming our slice is representative.  If you’re an early-stage or mid-stage startup, there’s no reason to assume that.  It’s probably not.
  2. Quantitative funnel analysis is far better at telling you what happened than why it happened.  If only 8% of our stage 2 opportunities close within 6 quarters, well, that’s a fact [5].  But companies don’t even attempt to address most of the above explanatory assertions in their CRM, and even when they do (e.g., reason codes for lost deals), the data is, in my experience, usually junk [6].  And even on the rare occasion when it’s not junk, it’s still the salesrep’s opinion as to what happened, and the salesrep is not exactly an unbiased observer [7].
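To make the “what happened” point concrete, here is a minimal sketch in Python of the kind of cohort math a CRM export supports; the field names and figures are hypothetical, not taken from any particular CRM:

from datetime import date

# Hypothetical opportunities exported from a CRM: when each entered stage 2,
# how it ended, and when it closed.
opportunities = [
    {"id": 1, "stage2_date": date(2019, 1, 15), "outcome": "won",      "close_date": date(2019, 11, 1)},
    {"id": 2, "stage2_date": date(2019, 2, 10), "outcome": "lost",     "close_date": date(2019, 9, 20)},
    {"id": 3, "stage2_date": date(2019, 3, 5),  "outcome": "won",      "close_date": date(2021, 6, 30)},  # won, but after the 6-quarter window
    {"id": 4, "stage2_date": date(2019, 4, 1),  "outcome": "derailed", "close_date": None},
]

SIX_QUARTERS_DAYS = 6 * 91  # rough approximation of six quarters

def won_within_window(opp):
    return (
        opp["outcome"] == "won"
        and opp["close_date"] is not None
        and (opp["close_date"] - opp["stage2_date"]).days <= SIX_QUARTERS_DAYS
    )

conversion = sum(won_within_window(o) for o in opportunities) / len(opportunities)
print(f"Stage 2 to closed-won within ~6 quarters: {conversion:.0%}")  # the what, not the why

That calculation tells you what happened; nothing in it tells you why, which is exactly the gap the market research discussed below is meant to fill.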

What’s the fix here?  We need to go old school.  Let’s complement that wonderful data we have from the CRM with custom market research that costs maybe $30K to $50K and that we run maybe 1-2x/year, ideally right before our strategic planning process starts [8].  Better yet, as we go about our business, every time someone says something that sounds like a fact but is really an assumption, let’s put it into a “hypothesis file” that becomes a list of questions we want answered heading into our strategic and growth planning.

After all, market research can tell us:

  • If people are aware of us, but perhaps don’t pick us for the long list because they have a negative opinion of us
  • How many deals are happening per quarter and what percent of those deals we are in
  • Who the economic buyer is and ergo if we are speaking to them
  • What the economic buyer’s priorities are and if we are aligning to them
  • Which features are most important to customers shopping in the category
  • What problems-to-be-solved (or use-cases) they associate with the category
  • Perceived differences among offerings in the category
  • Satisfaction with various offerings in the category
  • If and when they intend to purchase in the category
  • And much more

Net — I think companies should:

  • Keep instilling rigor and discipline around their pipeline and funnel
  • Complement that information with custom market research, run maybe 1-2x/year
  • Drive that research from a list of questions, captured as they appear in real time and prompted by observing that many of these assertions are hypotheses, not facts — and that we can and should test them with market research.

 

# # #

Notes

[1] As many people use “demo” as a sales process stage.  Not one I’m particularly fond of [2], I might add, but I do see a lot of companies using demo as an intermediate checkpoint between sales-accepted opportunity and closed deal — e.g., “our demo-to-close rate is X%”

[2] I’m not fond of using demo as a stage for two reasons:  it’s vendor-out, not customer-in, and it assumes a demo (or worse yet, a labor-intensive custom demo) is what’s required as proof for the customer when many alternatives may be what they want — e.g., a deep dive, customer references, etc.  The stage, looking outside-in, is typically where the customer is trying to answer either (a) can this solve my problem, or (b) of those that can solve my problem, is this the one I want to use?

[3] This is likely true, by the way.  In most markets, the products effectively all look the same to the buyer!  Marketing tries to accentuate differentiation and sales tries to make that accentuated differentiation relevant to the problem at hand, but my guess is that, more often than not, product differentiation is the stated explanation for the selection but not the actual driver — which is more likely something like safety / mistake aversion, the desire to work with a particular vendor / relationship, word-of-mouth recommendations, or the belief that success is more likely with vendor X than with vendor Y, even if vendor X may (perhaps, for now) have an inferior product.

[4] As the saying goes, the plural of anecdote is not data.

[5] And a potentially meaningless one if you don’t have good discipline around stages and pipeline.

[6] I don’t want to be defeatist here, but most startups barely have their act together on defining and enforcing / scrubbing basics like stages and close dates.  Few have well thought-out reason codes.

[7] If one is the loneliest number, salespersonship is the loneliest loss reason code.

[8] The biggest overlooked secret in making market research relevant to your organization — by acting on it — is strategically timing its arrival.  For example, win/loss reports that arrive just in time for a QBR are way more relevant than those that arrive off-operational-cycle.

What a Pipeline Coverage Target of >3x Says To Me

I’m working with a lot of different companies these days and one of the perennial topics is pipeline.

One pattern I’m seeing is CROs increasingly saying that they need more than the proverbial 3x pipeline coverage ratio to hit their numbers [2] [3].  I’m hearing 3.5x, 4x, or even 5x.  Heck — and I’m not exaggerating here — I even met one company that said they needed 100x.  Proof that once you start down the >3x slippery slope, you can slide all the way into patent absurdity.

Here’s what I think when a company tells me they need >3x pipeline coverage [4]:

  • The pipeline isn’t scrubbed.  If you can’t convert 33% of your week 3 pipeline, you likely have a pipeline that’s full of junk opportunities (oppties).  Rough math:  if 1/3rd slips or derails [5] [6] and you go 50-50 on the remaining 2/3rds, you convert 33% (see the sketch after this list).
  • You lose too much.  If you need 5x pipeline coverage because you convert only 20% of it, maybe the problem isn’t lack of pipeline but lack of winning [7].  Perhaps you are better off investing in sales training, improved messaging, win/loss research, and competitive analysis than simply generating more pipeline, only to have it leak out of the funnel.
  • The pipeline is of low quality.  If the pipeline is scrubbed and your deal execution is good, then perhaps the problem is the quality of pipeline itself.  Maybe you’re better off rethinking your ideal customer profile and/or better targeting your marketing programs than simply generating more bad pipeline [8].
  • Sales is more powerful than marketing.  By (usually arbitrarily) setting an unusually high bar on required coverage, sales tees up lack-of-pipeline as an excuse for missing numbers.  Since marketing is commonly the majority pipeline source [1], this often puts the problem squarely on the back of marketing.
  • There’s no nurture program.  Particularly when you’re looking at annual pipeline (which I generally don’t recommend), if you’re looking three or four quarters out, you’ll often find “fake opportunities” that aren’t actually sales opportunities, but are really just attractive prospects who said they might start an evaluation later.  Are these valid sales opportunities?  No.  Should they be in the pipeline?  No.  Do they warrant special treatment?  Yes.  Ideally that treatment comes from a sophisticated nurture program; lacking one, reps can and should nurture accounts themselves.  But they shouldn’t use the opportunity management system to do so; it creates “rolling hairballs” in the pipeline.
  • Salesreps are squatting.  The less altruistic interpretation of fake long-term oppties is squatting.  In this case, a rep does not create a fake Q+3 opportunity as a self-reminder to nurture, but instead to stake a claim on the account to protect against its loss in a territory reorganization [9].   In reality, this is simply a sub-case of the first bullet (the pipeline isn’t scrubbed), but I break it out both to highlight it as a frequent problem and to emphasize that pipeline scrubbing shouldn’t just mean this- and next-quarter pipeline, but all-quarter pipeline as well [10].
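To make the rough math in the first bullet concrete, here is a minimal sketch in Python; the rates are hypothetical, and the point is the shape of the calculation rather than the specific numbers:

def required_coverage(slip_or_derail_rate, win_rate_on_remainder):
    """Pipeline conversion = (share that doesn't slip or derail) x (win rate on what's left);
    required coverage is the reciprocal of that conversion rate."""
    conversion = (1 - slip_or_derail_rate) * win_rate_on_remainder
    return 1 / conversion

# Scrubbed pipeline, solid execution: 1/3rd slips or derails, 50-50 on the rest -> 3x.
print(round(required_coverage(1/3, 0.50), 1))   # 3.0

# Same slippage, but you only win a third of what survives -> you "need" 4.5x.
print(round(required_coverage(1/3, 1/3), 1))    # 4.5

# Junk-filled pipeline where half slips or derails and you win 40% of the rest -> 5x.
print(round(required_coverage(0.50, 0.40), 1))  # 5.0

In other words, a coverage ask well above 3x is usually a statement about scrubbing, win rates, or pipeline quality, not about marketing’s pipeline generation.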

# # #

Notes

[1] e.g., from marketing, sales, SDRs, alliances.  I haven’t yet blogged on this, and I really need to.  It’s on the list!

[2] Pipeline coverage is ARR pipeline divided by the new ARR target.  For example, if your new ARR target for a given quarter is $3,000K and you have $9,000K in that-quarter pipeline covering it, then you have a 3x pipeline coverage ratio.  My primary coverage metric is snapshotted in week 3, so week 3 pipeline coverage of 3x implies a required week 3 pipeline conversion rate of 33%.
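As a minimal sketch of that arithmetic in Python (same hypothetical numbers as the example above, in $K):

# Week 3 snapshot for the quarter.
new_arr_target = 3000
week3_pipeline = 9000

coverage = week3_pipeline / new_arr_target              # 3.0x
required_conversion = new_arr_target / week3_pipeline   # 33% of week 3 pipeline must convert

print(f"Week 3 pipeline coverage: {coverage:.1f}x")
print(f"Required week 3 pipeline conversion: {required_conversion:.0%}")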

[3] Note that it’s often useful to segment pipeline coverage.  For example, new logo pipeline tends to convert at a lower rate (and require higher coverage) than expansion pipeline, which often converts at a rate near or even over 100% (as the reps sometimes don’t enter the oppties until the close date — an atrocious habit!).  So when you’re looking at aggregate pipeline coverage, as I often do, you must remember that it works best when the mix of pipeline by segment and the conversion rate of each segment are relatively stable.  The more that’s not true, the more you must do segmented pipeline analysis.
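Here is a minimal sketch in Python of why the blended number can mislead; the segments, pipeline amounts, and conversion rates are all hypothetical:

new_arr_target = 3000  # $K

# Two hypothetical quarters with identical 3x blended coverage but a different mix.
scenarios = {
    # scenario: {segment: (week 3 pipeline in $K, expected conversion rate)}
    "expansion-heavy": {"new logo": (6000, 0.25), "expansion": (3000, 0.90)},
    "new-logo-heavy":  {"new logo": (8500, 0.25), "expansion": (500, 0.90)},
}

for name, segments in scenarios.items():
    pipeline = sum(p for p, _ in segments.values())
    expected_yield = sum(p * c for p, c in segments.values())
    print(f"{name}: blended coverage {pipeline / new_arr_target:.1f}x, "
          f"expected yield {expected_yield / new_arr_target:.0%} of target")

# expansion-heavy: 3.0x blended coverage, ~140% of target
# new-logo-heavy:  3.0x blended coverage, ~86% of target

Same headline coverage, very different quarters; that is the point of doing the segmented analysis.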

[4] See note 2.  Note also the ambiguity in simply saying “pipeline coverage” as I’m not sure when you snapshotted it (it’s constantly changing) or what time period it’s covering.  Hence, my tendency is to say “week 3 current-quarter pipeline coverage” in order to be precise.  In this case, I’m being a little vague on purpose because that’s how most folks express it to me.

[5] In my parlance, slip means the close date changes and derail means the project was cancelled (or delayed outside your valid opportunity timeframe).  In a win, we win; in a loss, someone else wins; in a derail, no one wins.  Note that — pet peeve alert — not making the short list is not a derail, but a loss to an as-yet-unknown competitor (so don’t require losses to name a single competitor, and do ensure missed-short-list is a possible lost-to selection).

[6] Where sales management should be scrubbing the close date as well as other fields like stage, forecast category, and value.

[7] To paraphrase James Mason in The Verdict, salesreps “aren’t paid to do their best, they’re paid to win.”  Not just to have 33% odds of winning a deal with a three-vendor short list.  If we’re really good, we’re winning half or more of those.

[8] The nuance here is that sales did accept the pipeline, so it’s presumably always above some objective quality standard.  The reality is that the pipeline acceptance bar is not fixed but floating:  the more (and better quality) oppties a rep has, the higher the acceptance bar.  And conversely:  even junk oppties look great to a starving rep who’s being flogged by their manager to increase their pipeline.  This is one reason why clear written definitions are so important:  the bar will always float around somewhat, but you can get some control with clear definitions.

[9] In such cases, companies will often “grandfather” the oppty into the rep’s new territory even if it ordinarily would not have been included.

[10] Which it all too often doesn’t.

What is a Minimum Viable Product, Anyway? My Favorite MVP Analogy.

The concept of minimum viable product (MVP) has been popularized in the past decade thanks to the success of the wonderful book, The Lean Startup.  It’s thrown around so casually, and you hear it so often, that sometimes you wonder how — or even if — people define it.

In this post, I’ll describe how I think about MVPs, first using one real-life example and then using my favorite MVP analogy.

The concept of a minimum viable product is simple:

  • Every startup is basically a hypothesis (e.g., we think people will buy an X).
  • Instead of doing a big build-up during a lengthy stealth phase concluding in a triumphant (if often ill-fated) product unveiling, let’s build and ship something basic quickly — and start iterating.
  • By taking this lean approach we can test our hypothesis, learn, and iterate more quickly — and avoid tons of work and waste in the process.

The trick is, of course, those two pesky words, minimum and viable.  In my worldview:

  • Minimum means the least you can do to test your hypothesis.
  • Viable means the product actually does the thing it’s supposed to do, even in some very basic way.

I’ll use an old, but concrete, example of an MVP from my career at Business Objects.  It’s the late 1990s.  The Internet is transforming computing.  We sell a high-functionality query & reporting tool, capable of everything from ad hoc query to complex, highly-formatted reports to interactive multidimensional analysis.  That tool is a client/server Windows application and we need to figure out our web strategy.  We are highly constrained technologically, because it’s still the early days of the web browser (e.g., browsers had no print functionality) [1].

After much controversy, John Ball and the WebIntelligence team agreed on (what we’d now call) the following MVP:

  • A catalog of reports that users can open/browse
  • End-user ad hoc query
  • Production of very basic tabular reports
  • Semi-compatibility with our existing product [2]

But it would work in a browser without any plug-ins, web native.  No multi-block reports.  No pages.  No printing.  No interactive analysis.  No multidimensional analysis.  No charting.  No cross-tabs.  No headers, no footers.  Effectively, the world’s most basic reporting tool — but it let users run an ad hoc query over the web and produce a simple report.  That was the MVP.  That was the hypothesis — that people would want to buy that and evolve with us over time.

Because of that tightly focused MVP we were able to build the product quickly, position it clearly within the product line [3], and eventually use it as the basis for an entirely new line of business [4].

Now, let’s do the analogy.  Pretend for a moment we’re in a world where there are no four-wheel drive cars.  We have invented the four-wheel drive car.  We imagine numerous use-cases [5] and a big total available market (TAM).

What should be our MVP?  Meet the 1947 Jeep Willys [6] [7].

No roof.  No back seat.  In some cases, no windscreen.  No doors.  No air conditioning.  No entertainment system.  No navigation.  No cup holders.  No leather.  No cruise control.  No rearview camera.  No ABS.  No seatbelts.  No airbags.

No <all that shit that too many product managers say are requirements because they don’t understand what MVP means>.

Just the core:  a seat, a steering wheel, an engine, a transmission, a clutch, and four traction tires.

  • Is it missing all kinds of functionality?  Yes
  • In this case, would it even be legal to sell?  No.  Well, maybe off-road, but we’re in analogy-mode here.
  • But can it get you across a muddy field or down a muddy road?  YES.

And that’s the point.  It’s minimum because it’s missing all kinds of things we can easily imagine people wanting, later.  It’s viable because it does the one thing that no other car does.  So if you need to cross a muddy field or go down a muddy road, you’ll buy one.

As Steve Blank says:  “You’re selling the vision and delivering the minimum feature set to visionaries, not everyone” [8]. 

So next time you think someone is focused on jamming common but non-core attributes into an MVP, tell them they’re counting cupholders in a Willys and point them here.

# # #

Notes

[1] And printing is a pretty core requirement for a reporting application!

[2] This was key.  WebIntelligence could not even open a BusinessObjects report.  Instead, we opted for compatibility one layer deeper, at the semantic layer (which defined data objects and security), not the reporting layer.

[3] If you want all that power, use BusinessObjects.  If you want web native, use WebIntelligence.  And you can share semantic layer definitions and security.

[4] BI extranets.

[5] From military off-road applications, to emergency vehicles in off-road and/or slippery conditions, to recreational use on sand, to family vehicles in snow, and many more.

[6] Which in some ways literally was the MVP for Jeeps.

[7] Popularized by the Grateful Dead in Sugar Magnolia (“… jump like a Willys in four-wheel drive.”)

[8] Where I’ll define a visionary as someone who has the problem we’re trying to solve and is willing to use a new technology to solve it.  It’s a little easier to think of someone trying a next-generation database system as a “technology visionary” than the Army buying a Jeep, but it’s the same characteristic.  They need a currently unsolvable problem solved, and are willing to try unconventional solutions to do it.