Category Archives: Metrics

Appearance on the Metrics That Measure Up Podcast

“Measure or measure not.  There is no try.”

— My response to being called the Yoda of SaaS metrics.

Just a quick post to highlight my recent appearance on the Metrics That Measure Up podcast, hosted by Ray Rike, founder and CEO of RevOps^2, a firm focused on SaaS metrics and benchmarking.

Ray’s a great guy, passionate about metrics, unafraid of diving into the details, and the producer of a great metrics-focused podcast that has featured many quality guests including Byron Deeter, Tom Reilly, David Appel, Elay Cohen, Mark Petruzzi / Paul Melchiorre, Sally Duby, Amy Volas, and M.R. Rangaswami.

In the episode, Ray and I discuss:

  • Top SaaS metrics — e.g., annual recurring revenue (ARR), ARR growth, net dollar retention (NDR), net promoter score (NPS), employee NPS, and customer acquisition cost (CAC) ratio
  • How metrics vary with scale
  • Avoiding survivor bias, both in calculating churn rates and in comparisons to public-company benchmarks (comps) [1]
  • How different metrics impact the enterprise value to revenue (EV/R) multiple — and a quick place to examine those correlations (i.e., the Meritech comps microsite).
  • Win rates and milestone vs. cohort analysis
  • Segmenting metrics, such as CAC and LTV/CAC, and looking at sales CAC vs. marketing CAC.
  • Blind adherence to metrics and benchmarks
  • Consumption-based pricing (aka, usage-based pricing)
  • Career advice for would-be founders

If you enjoy this episode I’m sure you’ll enjoy Ray’s whole podcast, which you can find here.

# # #

Notes

[1] Perhaps more availability bias (or, as Ray calls it, selection bias) than survivor bias, but either way, a bias to understand.

Navel Gazing, Market Research, and the Hypothesis File

Ask most startups about their go-to-market (GTM) these days and they’ll give you lots of numbers.  Funnel metrics.  MQLs, SQLs, demos [1], and associated funnel conversion rates.  Seen over time, cut by segment.  Win/loss rates and close rates as well, similarly sliced.  Maybe an ABM scorecard, if applicable.

Or maybe more financial metrics like customer acquisition cost (CAC) ratio, lifetime value (LTV) or net dollar retention (NDR) rate.  Maybe a Rule of 40 score to show how they’re balancing growth and profitability.
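The Rule of 40 arithmetic is simple enough to sketch in a few lines.  A minimal example follows; the function name is mine, and the use of free cash flow margin as the profitability term is an assumption (some use EBITDA or operating margin instead), but the rule itself just sums growth and profitability:

```python
def rule_of_40(growth_rate_pct: float, profit_margin_pct: float) -> float:
    """Rule of 40 score: revenue growth rate plus profit margin, in points.

    Profit margin is commonly free cash flow margin, though some companies
    use EBITDA or operating margin -- pick one and apply it consistently.
    """
    return growth_rate_pct + profit_margin_pct

# A company growing 55% with a -10% FCF margin scores 45 -- above the 40 bar.
score = rule_of_40(55.0, -10.0)
print(score)  # 45.0
```

Note that the same score can be reached by very different growth/profitability mixes, which is exactly why the metric is used to discuss balance.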

And then you’ll have a growth strategy conversation and you’ll hear things like:

  • People don’t know who we are
  • But the people who know us love us
  • We’re just not seeing enough deals
  • Actually, we are seeing enough deals, but we’re not making the short list enough
  • Or, we’re making the short list enough, but not winning enough.

And there are always reasons offered:

  • We’re not showing enough value
  • We’re not speaking to the economic buyer
  • We’re a vitamin, not a pain killer
  • We’re not aligned with their business priorities
  • People don’t know you can solve problem X with our solution
  • Prospects can’t see any differentiation among the offerings; we all sound the same [3]
  • They don’t see us as a leader
  • They don’t know they need one
  • They know they need one but need to finish higher priorities first

It’s an odd situation.  We are literally drowning in funnel data, but when it comes to actually understanding what’s happening, we know almost nothing.  Every one of the above explanatory assertions is an assumption.  They’re aggregated anecdotes [4].  The CRM system can tell us a lot about what happens to prospects once they’re in our funnel, but

  1. We’re navel gazing.  We’re only looking at that portion of the market we engaged with.  It’s humbling to take those assertions and mentally preface them with:  “In that slice of the market who found us and engaged with us, we see XYZ.”  We’re assuming our slice is representative.  If you’re an early-stage or mid-stage startup, there’s no reason to assume that.  It’s probably not.
  2. Quantitative funnel analysis is far better at telling you what happened than why it happened.  If only 8% of our stage 2 opportunities close within 6 quarters, well, that’s a fact [5].  But companies don’t even attempt to address most of the above explanatory assertions in their CRM, and even those times when they do (e.g., reason codes for lost deals), the data is, in my experience, usually junk [6].  And even on the rare occasion when it’s not junk, it’s still the salesrep’s opinion as to what happened and the salesrep is not exactly an unbiased observer [7].

What’s the fix here?  We need to go old school.  Let’s complement that wonderful data we have from the CRM with custom market research that costs maybe $30K to $50K and that we run maybe 1-2x/year, ideally right before our strategic planning process starts [8].  Better yet, as we go about our business, every time someone says something that sounds like a fact but is really an assumption, let’s put it into a “hypothesis file” that becomes a list of questions that we want answered headed into our strategic and growth planning.

After all, market research can tell us:

  • If people are aware of us, but perhaps don’t pick us for the long list because they have a negative opinion of us
  • How many deals are happening per quarter and what percent of those deals we are in
  • Who the economic buyer is and ergo if we are speaking to them
  • What the economic buyer’s priorities are and if we are aligning to them
  • Which features are most important to customers shopping in the category
  • What problems-to-be-solved (or use-cases) they associate with the category
  • Perceived differences among offerings in the category
  • Satisfaction with various offerings in the category
  • If and when they intend to purchase in the category
  • And much more

Net — I think companies should:

  • Keep instilling rigor and discipline around their pipeline and funnel
  • Complement that information with custom market research, run maybe 1-2x/year
  • Drive that research from a list of questions, captured as they appear in real time and prompted by observing that many of these assertions are hypotheses, not facts — and that we can and should test them with market research.

 

# # #

Notes

[1] Many people use “demo” as a sales process stage.  It’s not one I’m particularly fond of [2], but I do see a lot of companies using demo as an intermediate checkpoint between sales-accepted opportunity and closed deal — e.g., “our demo-to-close rate is X%.”

[2] I’m not fond of using demo as a stage for two reasons:  it’s vendor-out, not customer-in, and it assumes a demo (or worse yet, a labor-intensive custom demo) is what’s required as proof for the customer when many alternatives may be what they want — e.g., a deep dive, customer references, etc.  The stage, looking outside-in, is typically where the customer is trying to answer either (a) can this solve my problem? or (b) of those that can solve my problem, is this the one I want to use?

[3] This is likely true, by the way.  In most markets, the products effectively all look the same to the buyer!  Marketing tries to accentuate differentiation and sales tries to make that accentuated differentiation relevant to the problem at hand, but my guess is that, more often than not, product differentiation is the stated explanation for the selection, not the actual driver — which might rather be things like safety / mistake aversion, the desire to work with a particular vendor / relationship, word-of-mouth recommendations, or the belief that success is more likely with vendor X than vendor Y, even if vendor X may (perhaps, for now) have an inferior product.

[4] As the saying goes, the plural of anecdote is not data.

[5] And a potentially meaningless one if you don’t have good discipline around stages and pipeline.

[6] I don’t want to be defeatist here, but most startups barely have their act together on defining and enforcing / scrubbing basics like stages and close dates.  Few have well thought-out reason codes.

[7] If one is the loneliest number, salespersonship is the loneliest loss reason code.

[8] The biggest overlooked secret in making market research relevant to your organization — by acting on it — is strategically timing its arrival.  For example, win/loss reports that arrive just in time for a QBR are way more relevant than those that arrive off-operational-cycle.

A Ten-Point Sales Management Framework for Enterprise SaaS Startups

In this post, I’ll present what I view as the minimum sales management framework for an enterprise SaaS startup — i.e., the basics you should have covered as you seek to build and scale your sales organization [1].

  1. Weekly sheet
  2. Pipeline management rules, with an optional stage matrix
  3. Forecasting rules
  4. Weekly forecast calls
  5. Thrice-quarterly pipeline scrubs
  6. Deal reviews
  7. Hiring profiles
  8. Onboarding program
  9. Quarterly metrics
  10. Gong

Weekly Sheet
A weekly sheet, such as the one used here, that allows you to track, communicate, and intelligently converse about the forecast and its evolution.  Note this is the sheet I’d use for the CEO’s weekly staff meeting.  The CRO will have their own, different one for the sales team’s weekly forecast call.

Pipeline Management Rules with Optional Stage Matrix
This is a 2-3 page document that defines a sales opportunity and the key fields associated with one, including:

  • Close date (e.g., natural vs. pulled-forward)
  • Value (e.g., socialized, placeholder, aspiration, upside)
  • Stage (e.g., solution fit, deep dive, demo, vendor of choice)
  • Forecast category (e.g., upside, forecast, commit)

Without these definitions in place and actively enforced, all the numbers in the weekly sheet are gobbledygook.  Some sales managers additionally create a one-page stage matrix that typically has the following rows:

  • Stage name (I like including numbers in stage names to accelerate conversations, e.g., s2+ pipeline or s4 conversion rate)
  • Definition
  • Mandatory actions (i.e., you can be fired for not doing these)
  • Recommended actions (i.e., to win deals we think you should be doing these)
  • Exit criteria

If your stage definitions are sufficiently simple and clear you may not need a stage matrix.  If you choose to create one, avoid these traps:  not enforcing mandatory actions (just downgrade them to recommended) and multiple and/or confusing exit criteria.  I’ve seen stage matrices where you could win the deal before completing all six of the stage-three exit criteria!

Forecasting Rules
A one-page document that defines how the company expects reps to forecast.  For example, I’d include:

  • Confidence level (i.e., the percent of the time you are expected to hit your forecast)
  • Cut rules (e.g., if you cut your forecast, cut it enough so the next move is up — aka, the always-be-upsloping rule.)
  • Timing rules (e.g., if you can forecast next-quarter deals in this quarter’s forecast)
  • Management rules (e.g., whether managers should bludgeon reps into increasing their forecast)

Weekly Forecast Calls
A weekly call with the salesreps to discuss their forecasts.  Much to my horror, I often need to remind sales managers that these calls should be focused on the numbers — because many salespeople seem to love to talk about everything but.

For accountability reasons, I like people saying things that are already in Salesforce and that I could theoretically just read myself.  Thus, I think these calls should sound like:

Manager:  Kelly, what are you calling for the quarter?
Kelly:  $450K
Manager:  What’s that composed of?
Kelly:  Three deals.  A at $150K, B at $200K, and C at $100K.
Manager:  Do you have any upside?
Kelly:  $150K.  I might be able to pull deal D forward.

I dislike storytelling on forecast calls (e.g., stories about what happened at the account last week).  If you want to focus on how to win a given deal, let’s do that in a deal review.  If we want to examine the state of a rep’s pipeline, let’s do that in a pipeline scrub.  On a forecast call, let’s forecast.

I cannot overstate the importance of separating these three types of meetings. Pipeline scrubs are about scrubbing, deal reviews are about winning, and forecast calls are about forecasting.  Blend them at your peril.

Thrice-Quarterly Pipeline Scrubs
A call focused solely on reviewing all the opportunities in the sales pipeline.  The focus should be on verification:

  • Are all the opportunities actually valid in accordance with our definition of a sales opportunity?
  • Are the four key fields (close date, value, stage, forecast category) properly and accurately completed?
  • All means all.  While we can put more focus on this-quarter and next-quarter pipeline, we need to review the entire thing to ensure that reps aren’t dumping losses in out-quarters or using fake oppties to squat on accounts.

I like when these calls are done in small groups (e.g., regions) with each rep taking their turn in the hot seat.  Too large a group wastes everyone’s time.  Too small forgoes a learning opportunity, where reps can learn by watching the scrubs of other reps.

As a non-believer in alleged continuous scrubbing, I like doing these scrubs in weeks 2, 5, and 8 so the data presented to the executive staff is clean in weeks 3, 6, and 9.  See this three-part series for more.

Deal Reviews
As a huge fan of Selling Through Curiosity, I believe a salesperson’s job is to ask great questions that both reveal what’s happening in the account and lead the customer in our direction.  Accordingly, I believe that a sales manager’s job is to ask great questions that help salesreps win deals.  That is the role of deal review.

A deal review is a separate meeting from a pipeline scrub or a forecast call, and focused on one thing:  winning.  What do we need to learn or do to win a given deal?  As such,

  • It’s typically a two-hour meeting
  • Run by sales management, but in a peer-to-peer format (meaning multiple reps attend and reps ask each other questions)
  • Where a handful of reps volunteer to present their deals and be questioned about them
  • And the focus is on asking reps (open-ended) questions that will help them win their deals

Examples:

  • What questions can you ask that will reveal more about the evaluation process?
  • Why do you think we are vendor of choice?
  • What are the top reasons the customer wouldn’t select us and how are we proactively addressing them?
  • How would we know if we were actually in first place in the evaluation process?

Hiring Profiles
A key part of building an enterprise SaaS company is proving the repeatability of your sales process.  While I have also written a three-post series on that topic, the TLDR summary is that proving repeatability begins with answering this question:

Can you hire a standard rep and onboard them in a standard way to reliably produce a standard result?

The first step is defining a hiring profile, a one-page document that outlines what we’re looking for when we hire new salesreps.  While I like this expressed in a specific form, the key points are that:

  • It’s specific and clear — so we can know when we’ve found one and can tell recruiters if they’re producing pears when we asked for apples.
  • There’s a big enough “TAM” so we can scale — e.g., if the ideal salesrep worked at some niche firm that only had 10 salespeople, then we’re going to have trouble scaling our organization.

Onboarding Program
The second key element of repeatability is onboarding.  Startups should invest early in building and refining a standard onboarding program that ideally includes:

  • Pre-work (e.g., a reading list, videos)
  • Class time (e.g., a 3-5 day live program with a mix of speakers)
  • Homework (e.g., exercises to reinforce learnings)
  • Assessment (e.g., a final exam, group exercise)
  • Mentoring (e.g., an assigned mentor for 3-6 months)
  • Reinforcement (e.g., quarterly update training)

In determining whether all this demonstrates a standard result, this chart can be helpful.

Quarterly Metrics
Like all functions, sales should participate in an estaff-level quarterly business review (QBR), presenting an update with a high-quality metrics section, presented in a consistent format.  Those metrics should typically include:

  • Performance by segment (e.g., region, market)
  • Average sales cycle (ASC) and average sales price (ASP) analysis
  • Pipeline conversion analysis, by segment
  • Next-quarter pipeline analysis, by segment
  • Customer expansion analysis
  • Win/loss analysis off the CRM system, often complemented by a separate quarterly third-party study of won and lost deals
  • Rep ramping and productivity-capacity analysis (e.g., RREs)
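The ASP and ASC analyses in the list above reduce to straightforward arithmetic over closed-won opportunities.  A minimal sketch follows; the record layout and field names are hypothetical illustrations, not a real CRM schema:

```python
from datetime import date
from statistics import mean

# Hypothetical closed-won opportunity records; field names are illustrative.
won_deals = [
    {"value": 150_000, "created": date(2023, 1, 10), "closed": date(2023, 6, 20)},
    {"value": 90_000,  "created": date(2023, 2, 1),  "closed": date(2023, 5, 15)},
    {"value": 210_000, "created": date(2023, 1, 5),  "closed": date(2023, 9, 1)},
]

# Average sales price: mean deal value across won deals
asp = mean(d["value"] for d in won_deals)

# Average sales cycle: mean days from opportunity creation to close
asc = mean((d["closed"] - d["created"]).days for d in won_deals)

print(f"ASP: ${asp:,.0f}, ASC: {asc:.0f} days")
```

In practice you’d segment both metrics (by region, market, new logo vs. expansion) before drawing conclusions, since a blended average can mask very different motions.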

Gong
As someone who prides himself on never giving blanket advice: everybody should use Gong.

I think it’s an effective and surprisingly broad tool that helps companies in ways both tactical and strategic — from note-taking to coaching to messaging to sales enablement to alerting to management to forecasting to generally connecting the executive staff to what actually happens in the trenches.  It’s an amazing tool that can benefit literally every SaaS sales organization.

# # #

Notes
[1] This post assumes the existence of functioning upstream work and processes, including (a) an agreement about goals for percentage of pipeline from the four pipeline sources (marketing, SDR/out, sales/out, and partners), (b) a philosophically aligned marketing department, (c) good marketing planning, such as the use of an inverted funnel model, (d) good sales planning, such as the use of a bookings capacity model, and (e) proper pipeline management as discussed in this three-part series.

What a Pipeline Coverage Target of >3x Says To Me

I’m working with a lot of different companies these days and one of the perennial topics is pipeline.

One pattern I’m seeing is CROs increasingly saying that they need more than the proverbial 3x pipeline coverage ratio to hit their numbers [2] [3].  I’m hearing 3.5x, 4x, or even 5x.  Heck — and I’m not exaggerating here — I even met one company that said they needed 100x.  Proof that once you start down the >3x slippery slope, you can slide all the way into patent absurdity.

Here’s what I think when a company tells me they need >3x pipeline coverage [4]:

  • The pipeline isn’t scrubbed.  If you can’t convert 33% of your week 3 pipeline, you likely have a pipeline that’s full of junk opportunities (oppties).  Rough math:  if 1/3rd slips or derails [5] [6] and you go 50-50 on the remaining 2/3rds, you convert 33%.
  • You lose too much.  If you need 5x pipeline coverage because you convert only 20% of it, maybe the problem isn’t lack of pipeline but lack of winning [7].  Perhaps you are better off investing in sales training, improved messaging, win/loss research, and competitive analysis than simply generating more pipeline, only to have it leak out of the funnel.
  • The pipeline is of low quality.  If the pipeline is scrubbed and your deal execution is good, then perhaps the problem is the quality of pipeline itself.  Maybe you’re better off rethinking your ideal customer profile and/or better targeting your marketing programs than simply generating more bad pipeline [8].
  • Sales is more powerful than marketing.  By (usually arbitrarily) setting an unusually high bar on required coverage, sales tees up lack-of-pipeline as an excuse for missing numbers.  Since marketing is commonly the majority pipeline source [1], this often puts the problem squarely on the back of marketing.
  • There’s no nurture program.  Particularly when you’re looking at annual pipeline (which I generally don’t recommend), if you’re looking three or four quarters out, you’ll often find “fake opportunities” that aren’t actually sales opportunities, but are really just attractive prospects who said they might start an evaluation later.  Are these valid sales opportunities?  No.  Should they be in the pipeline?  No.  Do they warrant special treatment?  Yes.  That should ideally be accomplished by a sophisticated nurture program; lacking one, reps can and should nurture accounts themselves.  But they shouldn’t use the opportunity management system to do so, as it creates “rolling hairballs” in the pipeline.
  • Salesreps are squatting.  The less altruistic interpretation of fake long-term oppties is squatting.  In this case, a rep does not create a fake Q+3 opportunity as a self-reminder to nurture, but instead to stake a claim on the account to protect against its loss in a territory reorganization [9].   In reality, this is simply a sub-case of the first bullet (the pipeline isn’t scrubbed), but I break it out both to highlight it as a frequent problem and to emphasize that pipeline scrubbing shouldn’t just mean this- and next-quarter pipeline, but all-quarter pipeline as well [10].
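The rough math in the first bullet generalizes easily.  A minimal sketch (the function names are mine; the 1/3rd slip/derail rate and 50-50 win rate come straight from the bullet above):

```python
def implied_conversion(slip_or_derail_rate: float, win_rate_on_rest: float) -> float:
    """Pipeline conversion implied by a slip/derail rate and a win rate on what remains."""
    return (1 - slip_or_derail_rate) * win_rate_on_rest

def required_coverage(conversion_rate: float) -> float:
    """Coverage needed to exactly hit the number at a given conversion rate."""
    return 1 / conversion_rate

# 1/3rd slips or derails, 50-50 on the remaining 2/3rds -> 33% conversion, 3x coverage
conv = implied_conversion(1 / 3, 0.5)
print(f"{conv:.0%} conversion implies {required_coverage(conv):.1f}x coverage")
```

Run the same sketch with a 20% conversion rate and you get the 5x coverage ask discussed above, which is the point: a high coverage target is usually a statement about conversion, not pipeline.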

# # #

Notes

[1] e.g., from marketing, sales, SDRs, alliances.  I haven’t yet blogged on this, and I really need to.  It’s on the list!

[2] Pipeline coverage is ARR pipeline divided by the new ARR target.  For example, if your new ARR target for a given quarter is $3,000K and you have $9,000K in that-quarter pipeline covering it, then you have a 3x pipeline coverage ratio.  My primary coverage metric is snapshotted in week 3, so week 3 pipeline coverage of 3x implies a 33% week 3 pipeline conversion rate.
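Using the note’s own numbers, the definition is a one-liner (the function name is mine):

```python
def pipeline_coverage(pipeline_arr: float, new_arr_target: float) -> float:
    """That-quarter ARR pipeline divided by the new ARR target for that quarter."""
    return pipeline_arr / new_arr_target

# $9,000K of week-3 pipeline against a $3,000K new ARR target
coverage = pipeline_coverage(9_000_000, 3_000_000)
print(f"{coverage:.1f}x coverage implies a {1 / coverage:.0%} required conversion rate")
```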

[3] Note that it’s often useful to segment pipeline coverage.  For example, new logo pipeline tends to convert at a lower rate (and require higher coverage) than expansion pipeline, which often converts at a rate near or even over 100% (as reps sometimes don’t enter the oppties until the close date — an atrocious habit!).  So when you’re looking at aggregate pipeline coverage, as I often do, remember that it works best when the mix of pipeline by segment and the conversion rate of each segment are relatively stable.  The more that’s not true, the more you must do segmented pipeline analysis.

[4] See note 2.  Note also the ambiguity in simply saying “pipeline coverage” as I’m not sure when you snapshotted it (it’s constantly changing) or what time period it’s covering.  Hence, my tendency is to say “week 3 current-quarter pipeline coverage” in order to be precise.  In this case, I’m being a little vague on purpose because that’s how most folks express it to me.

[5] In my parlance, slip means the close date changes and derail means the project was cancelled (or delayed outside your valid opportunity timeframe).  In a win, we win; in a loss, someone else wins; in a derail, no one wins.  Note that — pet peeve alert — not making the short list is not a derail, but a loss to as-yet-unknown (so don’t require losses to fill in a single competitor, and ensure missed-short-list is a possible lost-to selection).

[6] Where sales management should be scrubbing the close date as well as other fields like stage, forecast category, and value.

[7] To paraphrase James Mason in The Verdict, salesreps “aren’t paid to do their best, they’re paid to win.”  Not just to have 33% odds of winning a deal with a three-vendor short list.  If we’re really good, we’re winning half or more of those.

[8] The nuance here is that sales did accept the pipeline, so it’s presumably always above some objective quality standard.  The reality is that the pipeline acceptance bar is not fixed but floating:  the more (and better quality) oppties a rep has, the higher the acceptance bar.  And conversely:  even junk oppties look great to a starving rep who’s being flogged by their manager to increase their pipeline.  This is one reason why clear written definitions are so important:  the bar will always float around somewhat, but you can get some control with clear definitions.

[9] In such cases, companies will often “grandfather” the oppty into the rep’s new territory even if it ordinarily would not have been included.

[10] Which it all too often doesn’t.

My Two Appearances on the SaaShimi Podcast: Comprehensive SaaS Metrics Overview and Differences between PE and VC

The SaaShimi podcast just dropped the first two episodes of its second season and I’m back speaking with PNC Technology Finance banker Aznaur Midov, this time discussing some of the key differences between private equity (PE) and venture capital (VC) when it comes to philosophy, business model, portfolio company engagement, diligence, and exit processes.  You can check out the entire podcast on the web here or this episode on Spotify or Apple podcasts.

I’ve also embedded it below:

Dave Kellogg on SaaShimi Discussing Differences between Private Equity and Venture Capital.

 

If you missed it and/or you’re otherwise interested, on my prior appearance we did a pretty darn comprehensive overview of SaaS metrics, available here on Apple podcasts and here on Spotify.

I’ve embedded this episode as well, below:

Dave Kellogg on SaaShimi with a Comprehensive Overview of SaaS Metrics.

 

Thanks to Aznaur for having me.  I think he’s created a high-quality, focused series on SaaS.