
The Elements of a Good Apology

After a negative customer experience on a recent fishing trip, an old friend of mine said, “I judge people by the quality of their apologies.” Interesting idea, I thought.

This led to a discussion about the apology given to us by the proprietor of the ranch at which we stayed, roughly summarized as: “this only happened because it’s the end of a long, hard season, and there are things — things I can’t tell you about — that took a lot out of me.”

I, being something of a sucker, bought it — pardon the expression — hook, line, and sinker. “Oh you poor man, I hope you get through this.”

My friend, who is somewhat more skeptical, responded differently: “He didn’t really own it. He literally blamed it on something that he declared secret and couldn’t tell us about. And does that really matter anyway? Do we really care why something undesirable happened? Or do we want him to just own the mistake and apologize for it?”

This led to a conversation where I came up with these simple elements of a good apology.

  • Hear it. Let the customer talk. Hear what they say. Don’t interrupt. Don’t get defensive. Listen. When they’re done, repeat it back: “I understand that the door flew open, Fluffy flew out, and that terrified everyone.” Or, “I understand that the software repeatedly crashed and was basically unusable during your end-user onboarding session and that was horribly embarrassing for you personally and a waste of time and money for the company.”
  • Own it. Admit the mistake and say it was your fault. “I didn’t attach the schmidget properly and because of that the door flew open. It was my mistake.” In a tech context, “I’m sorry that the release was not adequately tested and caused the software to crash repeatedly during your user onboarding session.”
  • Apologize for it. Say, “I am sorry.” Don’t ask anyone to accept that apology, as it feels like you’re asking for absolution. You’re not. You’re apologizing.
  • Avoid deflection or transference. Don’t say, “I’m sorry that you didn’t notice the schmidget was not attached.” Or, “I’m sorry that you chose to hold your training the day after a major, new release.” Doing this is the opposite of owning it. Avoid at all costs any apology that starts with, “I’m sorry you were offended by.”
  • Optionally, say how you feel about it. “I feel terrible that your cat flew out the open door (but, happily, was uninjured).” Or, “I feel terrible that we hung you out to dry in front of your end users, especially after you went to bat to help us win the deal.”
  • Optionally, tell them what you’re doing about it. Some people will care about this and want to know how you’re preventing this from happening to others. Some won’t. Read the room. “I’m going to revise our departure checklist to add schmidget attachment.” Or, “I’m going to fly to India, show the team your picture, tell them how much you did to support us, and then tell them how this impacted you.” (This, by the way, is a real example and I did fly to India the next week and do precisely that.)
  • Don’t quibble over details. If it’s an online product review and it says, “the schmidget was not attached on the 20-foot vehicle,” do not reply, “our vehicles are 19 feet.” If you worry that failing to do this concedes incorrect facts, then say, “Some details notwithstanding, the important part here is the cat flew out the door, and we are deeply sorry about Fluffy and the trauma she endured.”
  • Optionally, offer compensation. Not everyone wants compensation. For some, it’s about principle. For others, it’s about ensuring future clients don’t have the same problem. For others, it’s all about compensation. For others still, it’s about putting some wood behind the apology arrow. Read the room. Ensure the compensation matches the problem: “I’m offering you a free day with our top guide on your next trip out.” If you’re unsure, you can offer in the hypothetical: “would it help if I were to offer you blank?” Avoid proposing illogical compensation: “I’ll give you two free days from the same plumber who misinstalled the pipes that flooded your house in the first place.” (No thanks!)
  • Finally, thank the customer for their business. “You are important to our company, that’s why I wanted to make this apology to you personally. And thank you for being a customer.”

I worked with a sales VP who began every customer conversation by saying, “thank you for being a customer.” It’s not a bad way to end one, either.

Does Your Startup Need a Sales Playbook or Just a Few Plays?

When a company is transitioning from founder-led sales (FLS) to sales-led sales (SLS), you hear the word “playbook” a lot. For early-stage companies, this rubs me the wrong way because when I hear playbook, it conjures up an image of:

  • A large sales enablement team
  • A hefty three-ring binder full of paper (or its digital equivalent)
  • A lot of templates (which perhaps commit the cardinal sin of the template leading the content)
  • A formal onboarding program that teaches playbook contents
  • And perhaps a formal sales process (e.g., MEDDIC) or methodology

That’s all great when you’re $100M+ in ARR and you’re trying to institutionalize a model that you know works — from repeated experience with scores of reps over many quarters. But for an early-stage company with fewer than a dozen reps that’s still highly dependent on the founder(s) to sell software, it’s overkill.

So when these companies say they need a playbook my retort is, “no you don’t — you don’t need a playbook; you just need a handful of plays.”

What is a Playbook?

While the term gets bandied about, few seem to define it. Many companies will tell you how to make a sales playbook. For example, Pipedrive does so in a not-so-mere 4,500 words. But if the how-to-make-one guide is nine single-spaced pages, then how big are the playbooks themselves? Usually, big. Per Pipedrive:

A sales playbook is a document that outlines your sales processes, procedures, and best practices. By following the strategies in a playbook, sales reps can increase their productivity, improve their win rates and drive revenue growth for the company. Sales playbooks typically include […] target customer profiles, stages of the sales process, how to handle customer objections, sales methodologies, sales tools and technologies, key performance indicators (KPIs), and strategic objectives.

Pipedrive’s how-to guide is a fine piece of work. It’s just way too heavy for early-stage startups. These startups can’t make large playbooks, nor should they. They don’t have the resources to build them, but far more importantly, they don’t know what to say — they simply don’t have enough experience to know what works across a wide range of buyers and situations. Sure, you can pay an intern to fill in templates, but you don’t have quality content.

That said, what’s my definition of a playbook?

A playbook is a collection of plays.

What is a Play?

My definition raises the question: what, then, is a play? So let’s define that, too.

A play is a series of steps to take in a given situation to help you win a deal.

The keywords are:

  • Steps: the things that the sales team needs to do. While different team members may do different things at different times, the quarterback of the deal is always the seller.
  • Situation: the situation for which the play is designed. For example, you might have a play for leaving a deal that you don’t think is qualified (the Polite Walk Away) or for saving a deal you know you’re losing (the Hail Mary).
  • Win: the purpose of the play is to win the deal. As James Mason said of lawyers in The Verdict, “you’re not paid to do your best, you’re paid to win.” The same is true in sales. The purpose of the play is to win.

An Example Play

Because I find the notion of a play still somewhat amorphous, I’ll provide a concrete example.

Situation. You sell BI tools. You are competing against a hot competitor with a slick user interface that’s generally preferred by end-users to your own. One feature, in particular, gets audible wows when demoed. Your product and engineering team has recently released a similar but inferior version of that feature to help. Because the competitor knows they will win in end-user demos, they encourage selection committees to “let the users decide” by having a large end-user demo near the conclusion of the selection process. Your competitor calls their play the “End Run” because they’re running around the IT group charged with the selection to the end-users.

Steps. You take the following steps in this situation.

  • Build or re-use the slickest available demo of the product that you can find.
  • Request an end-user demo session for your company, too, justified by basic process fairness.
  • Demonstrate the “wow” feature several times. Know that you are likely to still lose with the end-users, but that’s not the point. You are trying to minimize the perceived gap and convince the end-users that — even if they don’t see your solution as “best” — it’s certainly “good enough” to get the job done.
  • Call a meeting with the IT team to discuss security and administration. Convince them of the importance of security and the cost of administration. Show that your product, rightfully, is superior in both these areas.
  • Get IT to reframe the end-user vote as “input” (versus “selection”) and to ask the end-users two questions: which is your preferred solution, and can both solutions do the job?
  • Win the deal when IT selects your product based on security and administration with the end-users’ consent that your solution is good enough to do the job.

That is a play. It’s not complicated. It’s easily taught. You can and should build tools to support its execution — e.g., the wow demo and a security and administration white paper.

Plays Are Applied Marketing

Are plays marketing or sales? While plays are always executed by sales, I think of building plays as applied marketing. We start with what we know about the customer and market. We add what we know about the competition — both in terms of product strengths/weaknesses and common sales tactics. Then we apply that knowledge into making a play (i.e., a series of steps) to beat them.

What Plays Do You Need?

I think most startups need 3 or 4 plays, each of which can be described in less than a page (if not a single paragraph):

  • Replicate success. This is your primary play. If you have a few big insurance companies using your product for use-case X, then you need a play for replicating that. Who to call. What to ask. What to say. How to tell the story of your existing references. How to overcome objections. How to close.
  • Replace BigCo. If you have newer, better, faster, cheaper technology than an established (now “legacy”) vendor, you need a play for how to replace them. Who to call. What to ask. What to say. How to qualify. How to win. When to give up.
  • Beat archrival startup. If you have a head-to-head startup rival, you’ll need a play for how to beat them. This is usually a mix of product differentiators tied to use-cases combined with vision/roadmap to address objections along with strong messaging on safety, company/investor quality, and early market leadership.
  • Polite walk away. As an early-stage startup you should walk away from plenty of deals, so you should get good at it. The deals you qualify out today are next year’s opportunities, so treat them well and get good at slow nurture.

Slides from SaaS Metrics Palooza 2023: How To Present SaaS Metrics Like a Pro

Last month I spoke at SaaStr Annual 2023 on The Strategic Use and Abuse of SaaS Metrics (video here). When I wrote that presentation I found myself with something of a content blivit on my hands. I had a bunch of strategic things I wanted to say, but darn it, I had a lot of tactical things I wanted to say as well.

While the strategic use of metrics is key, poor tactical presentation of metrics can lead to anything from obfuscation to disaster. Never forget Edward Tufte’s reminder that tactical presentation mistakes can lead to quite strategic problems, demonstrated via his analysis of a PowerPoint deck that was used in a discussion of the Columbia re-entry decision.

So, instead of trying to jam everything into a single deck, I decided to write two different presentations: the strategic one for SaaStr Annual and this tactical one for SaaS Metrics Palooza.

While there is a touch of overlap between the two presentations (e.g., piecemealing), they are designed to be consumed together and reinforce each other, so please take a look at them both.

I have discovered a new mantra in building these decks. Because so many SaaS metrics problems are ultimately driven by a lack of trust, and because templates can do so much to build both trust and alignment, I am now in the habit of repeating:

Templates build trust. Templates build trust. Templates build trust.

I also have a new theme song (and walk-on music) for mistake number seven, excessive use of smoothing. Don’t be a Smooth (metrics) Operator.

I’ve embedded the slides of the SaaS Metrics Palooza presentation below. You can download a PDF of them as well. You can find a video of the presentation at the SaaS Metrics Palooza website (registration required, but free).

Thanks to those who attended the presentation and thanks to BenchmarkIt and my SaaS Talk podcast partner, metrics brother Ray Rike, for inviting me.

What Do “Pipeline Coverage” and “Forecast” Mean When Your Sales Cycle is 30 Days?

I grew up in enterprise. I have already written a post on the tricky problem of mapping one’s mindset from enterprise to velocity SaaS, meaning smaller deals, shorter contract durations (e.g., month-to-month), and/or monthly-varying pricing [1]. That post was focused on what, if anything, “annual recurring revenue” (ARR) means in such an environment, and how that impacts metrics that rely on ARR as part of their definition (e.g., CAC ratio).

In this post, I’ll continue in the velocity SaaS direction by exploring short average sales cycles (ASC), as opposed to short contracts.  Specifically, what does it mean in short ASC companies when you discuss common concepts like pipeline coverage and the sales forecast?

Let’s demonstrate the problem.

In enterprise, quarterly pipeline (defined as the sum of the values of opportunities with a close date in the quarter) is somewhat intertwined with the notion of long sales cycles. Meaning that in a company with 9–12-month sales cycles, virtually every deal that has a chance of closing within the quarter is already in the pipeline at the start of the quarter. Thus, you can meaningfully calculate “coverage” for the quarter by dividing the quarterly starting pipeline by the quarterly sales target. Most sales VPs like a 3x ratio [2].

Thus, the concept of pipeline coverage implicitly assumes a sales cycle (significantly) longer than the coverage period.  That’s why most companies don’t look at out-quarter pipeline coverage much (though they should) and if they do, they expect a much lower coverage ratio.

Now, let’s imagine an average sales cycle of 30 days and — rather than futzing with cohorts, statistics, and distributions [3] — let’s assume that all oppties are won or lost in exactly 30 days [4].

In this scenario, at the start of the quarter, what is the pipeline coverage ratio? It’s 1.0x.  Why?  We have zero pipeline for months 2 and 3 of the quarter.  If we assume that we have 3.0x coverage for month one and that the quarterly goal is evenly distributed across months, then we’d have 3.0x, 0.0x, and 0.0x for the three months of the quarter, or 1.0x overall [5].

In this example, quarterly pipeline coverage is basically meaningless because two-thirds of the pipeline you need to close during the quarter hasn’t been created yet.  Assuming a 30-day MQL-to-opportunity lag, one-third is working its way through the high funnel and the other third is still a wink in marketing’s eye.
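To make that arithmetic concrete, here’s a minimal sketch in Python, using the simplifying assumptions above and a hypothetical monthly target:

```python
# Blended quarterly coverage when all deals close in exactly 30 days:
# at quarter start, only month 1 has any pipeline.
monthly_target = 100_000              # hypothetical, evenly distributed
monthly_coverage = [3.0, 0.0, 0.0]    # months 2 and 3 have no pipeline yet

starting_pipeline = sum(c * monthly_target for c in monthly_coverage)
quarterly_coverage = starting_pipeline / (3 * monthly_target)

print(f"Quarterly pipeline coverage: {quarterly_coverage:.1f}x")  # 1.0x
```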

If quarterly pipeline coverage is basically meaningless in short ASC companies, then what is meaningful?

  • Examining monthly pipeline coverage. Instead of week-3 quarterly pipeline coverage [6], we should look at day-3 monthly pipeline coverage — dividing the starting monthly pipeline by the monthly sales target. (After that, you can use to-go pipeline coverage to get continuous insight; see the sketch after this list.)
  • Treating months 2 and 3 the way you’d treat next-quarter and the quarter thereafter in enterprise. Using a pipeline progression chart to see how the out-month pipeline is shaping up.
  • Getting marketing to forecast starting pipeline for month 2 and month 3, based on what they have already generated in the high funnel and their current pipeline generation plans for month 2.
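Here’s a minimal sketch of that first idea, with hypothetical numbers, showing both day-3 monthly coverage and to-go coverage (remaining pipeline divided by what’s left to close against plan):

```python
# Day-3 monthly coverage: starting monthly pipeline / monthly target.
monthly_target = 500_000
starting_monthly_pipeline = 1_400_000   # snapshot on day 3 of the month
day3_coverage = starting_monthly_pipeline / monthly_target  # 2.8x

# To-go coverage, mid-month: remaining pipeline / to-go to plan.
closed_so_far = 200_000
remaining_pipeline = 900_000
to_go_coverage = remaining_pipeline / (monthly_target - closed_so_far)  # 3.0x

print(f"Day-3 coverage: {day3_coverage:.1f}x")
print(f"To-go coverage: {to_go_coverage:.1f}x")
```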

Inherent in my point of view is that the definition of “coverage” is based on opportunities that already exist in the pipeline. Call me untrusting, but somehow I can’t feel covered by something that hasn’t been created yet.  Some might define quarterly coverage in this environment using month 1 pipeline plus month 2 pipeline forecast and month 3 pipeline plan.  But to me, that’s not coverage.  And it’s objectively not the same thing as pipeline coverage when we use the term in enterprise.

Now, let’s zip back to reality for a minute.  In the velocity companies that I work with, ASC is closer to 60 days and with a pretty broad distribution where maybe 90% of the deals close within 30 and 120 days.  Happily, this means you will have month 2 and month 3 opportunities in the starting quarter pipeline, but it nevertheless also means you will be increasingly reliant on to-be-generated opportunities across the months of the quarter.

In this case, I would make a three-layer forecast (a sketch follows the list):

  • Sales (from existing opportunities). Forecast month 1, 2, and 3 sales using the normal sales forecasting process.
  • Marketing, from the high funnel. Use existing MQLs and your standard conversion rates, ideally time-based (not just the total rate, but the rate split by time period).
  • Marketing, from planned demandgen. Forecast responses, then use standard conversion rates, again ideally time-based. (Ideally you can start with your inverted funnel model.)
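Here’s a minimal sketch of the three layers. All the rates and volumes are hypothetical placeholders (and, as noted above, real models should use time-based conversion rates):

```python
# Three-layer forecast: existing oppties + existing high funnel + planned demandgen.
# All numbers are hypothetical placeholders.
existing_pipeline = 2_000_000   # ARR value of existing opportunities
close_rate = 0.40               # opportunity close rate

existing_mqls = 200             # MQLs already in the high funnel
mql_to_oppty = 0.30             # MQL-to-opportunity conversion rate
asp = 25_000                    # average sales price (proxy oppty value)

planned_responses = 1_000       # from planned demandgen programs
response_to_mql = 0.08          # response-to-MQL conversion rate

layers = {
    "Sales (existing oppties)":      existing_pipeline * close_rate,
    "Marketing (high funnel)":       existing_mqls * mql_to_oppty * close_rate * asp,
    "Marketing (planned demandgen)": planned_responses * response_to_mql
                                     * mql_to_oppty * close_rate * asp,
}
total = sum(layers.values())
for name, value in layers.items():
    print(f"{name}: ${value:,.0f} ({value / total:.0%})")
```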

This approach is preferable to looking only at pipeline generation (pipegen) because a pipegen approach:

  • Tends to ignore the oppties that are already there
  • Almost always ignores the time-based nature of close rates
  • Uses an average sales price (ASP) as the proxy value for an opportunity [7].

In the example above you can clearly see how much of the forecast comes from existing opportunities (51%), how much from the existing high funnel (36%), and how much from planned demandgen activities (13%).

Finally, I have the same problem with the word “forecast” as I do with “coverage” in the short ASC world. They’re not quite the same thing as they are in enterprise. First, let me define “forecast,” along with its cousins, “plan” and “model.”

  • The plan is about accountability. It’s what we signed up for and are accountable to. Budget is a synonym [8].
  • The model is a driver-based model of the business. It’s a calculated output (e.g., opportunities generated) given assumptions for a number of inputs and the way they interact (e.g., demandgen spend, MQLs generated, conversion rates).
  • The forecast is about prediction. It’s someone’s latest prediction for an output (e.g., bookings) given all available information at the time it’s made.

The plan is what we were willing to sign up for last December (when we received board approval). The forecast is what we think is going to happen now.  We used models to help build the original plan and we can certainly re-run those models today using actuals as inputs to see what they produce.

In enterprise, the sales forecast is all about the deals in play. What if Mike closes deals A, B, and either C or D? The buyer at deal E promised me they’d give us the order. Given everything we know about Sally’s deal F, what value do we think it will close at? Sales VPs spend hours in Excel (or a modern forecasting tool like Clari) running scenarios to arrive at a number. It’s usually more about different combinations of deals than it is about probabilities and expected values.

In the velocity world, as discussed above, the forecast cannot be only about existing deals. If you want to forecast a quarter, you’ll need to include results from the high funnel and planned demandgen. I’d still call it a forecast, but I’d know that it’s not quite the same thing as a forecast in enterprise. And by presenting in the three layers above, you can remind everyone of that.

# # #

Notes

[1] Monthly-varying SaaS is a different concept, which I used in that post, featuring short contracts (e.g., month-to-month) where the spend can vary every month, usually as the result of a flexible user-based pricing model, a consumption-based pricing model, or a hybrid pricing model (e.g., base + overage).  In such environments, simple SaaS concepts like ARR can quickly lose meaning, as do the metrics that rely on them (e.g., CAC ratio).

[2] Which I think had its ancient origins in the idea that you win 33%, lose 33%, and 33% slip. (Thus assuming a 50% competitive win rate.) Regardless of its roots, 3x (starting) coverage is a widely accepted norm, so much so that I fear it’s often a self-fulfilling prophecy.

[3] We’re ignoring the distribution of average sales cycle length for closed/won deals, its standard deviation, and the fact that the three different outcomes (i.e., win, loss, slip) will likely have three different average opportunity cycle lengths (e.g., you usually lose faster than you win), each with its own distribution.

[4] And, most unrealistically, that deals never slip to a subsequent period. We’re also assuming that all opportunities are generated on the first day of the month, an exactly 30-day lag from MQL to opportunity, and that all MQLs are generated on the first day of the month and convert in exactly 30 days. (And, for the detail-oriented, that every month is 30 days.) Overall, with these simplifying assumptions, you start every month with only the opportunities generated from MQLs generated the prior month and only those opportunities. There is no leftover pipeline sloshing around to confuse things.

[5] The reality is likely somewhat less than 1.0x because we’d normally expect some backloading (“linearity”) of the quarterly target across the months of the quarter. In enterprise, that backloading is severe (e.g., most enterprise cash models assume a 10/20/70 distribution). In velocity SaaS, I’ve seen from 30/30/40 (i.e., pretty flat) to 10/20/70 (i.e., as backloaded as enterprise), typically reflecting a quarterly (as opposed to a monthly) sales cadence, which is usually a mistake in a velocity model.

[6] To intelligently compare pipeline across quarters we need to fix a point in time to snapshot it. In enterprise, I prefer day one of week three because it’s early enough to take actions (e.g., reducing expenses), but late enough so sales can no longer credibly claim they need more time for pipeline cleanup (aka, scrubbing).

[7] In enterprise, this is a major sin because deal sizes vary significantly and values should be inserted only after discovery and price-point socialization (e.g., “you do know that this costs $150K?”)  In velocity, it’s a lesser sin because the deal sizes tend to be more similar.  Either way, if all we’re doing is counting opportunities and multiplying by a constant, then why not just admit it and count opportunities directly? The more sophisticated the proxy, the more I like it (e.g., using $10K for SMB, $25K for MM, and $75K for ENT).

[8] Technically, I’d say budget is a synonym for the financial part of the plan. That is, a budget is only one part of a plan. A plan would also include strategic goals, objectives for attaining them, and organization structure.

Interpreting The Insight 2023 Sales KPI Report

Insight Partners recently published an excellent 2023 Sales KPI Report. As I went through it, I thought it could be educational and fun to write a companion guide for three distinct audiences:

  • The intimidated. Those who find SaaS benchmark reports as impenetrable as James Joyce. The post could serve as my Ulysses Guide for the interested but in need of assistance.
  • The cavalier. Those who are perhaps too comfortable, too quick to jump into the numbers, and ergo potentially misinterpreting the data. The post could serve to slow them down and make them think a bit more before diving into interpretation.
  • The interested.  Those who simply enjoy deeper thinking on this topic and who are curious about how someone like me (i.e., someone who spends far too much time thinking about SaaS metrics) approaches it.

So, let’s try it. I’ll go page-by-page through the short guide, sharing impressions and questions that arise in my mind as I read this report. As usual, this ended up being about five times as much work as I thought at the outset.

Onwards!  Grab your copy and let’s go.

Introduction (Slide 3)

Yikes, there are footnotes to the first paragraph. What do they say?

  • They’re cutting the data by size bucket (aka, “scale-up stage”). I suspect they use this specific language because Scale Up is a key element of Insight’s positioning.
  • They’re also cutting the data by go-to-market (GTM) motion: transactional, solution, or consultative. This is a cool idea, but it’s misleading because those descriptive names are simply a proxy for deal size (aka average selling price, or ASP).
  • While the names don’t really matter (they are just labels for deal size buckets), I find “transactional” clear, but I don’t see a difference between “solution” and “consultative” sales.  I’m guessing “solution” means selling a solution directly to a business buyer (e.g., selling a budgeting system to a VP of FP&A) and “consultative” means a complex sale with multiple constituents.
  • Ambiguity aside, the flaw here is the imperfect correlation between deal size and sales motion. Yes, deal size does generally imply a sales motion, but the correlation is not 100%. (I’ve seen big, rather transactional deals and small highly consultative ones). They’d be better off just saying “small, medium, and large” deals rather than trying to map them to sales motions. We need to remember that later in interpretation.

Now we can read the second paragraph of the first page.

  • Data is self-reported from 300+ software companies that Insight has worked with in the past year.
  • That’s nice, because 300 companies is a pretty large set of data.
  • But beware the “Insight has worked with.” Insight is a top-tier firm so this is not a random sample of SaaS companies. I’m guessing “working with” Insight means tried and/or succeeded in raising money from Insight. So I’d argue that this data likely contains a random blend of top-tier companies (who reasonably think they are Insight material) and non-self-aware companies (who think they are, but aren’t).
  • Nevertheless, I’m guessing this is a pretty high quality group. While some SaaS benchmarks include a broad mix of VC-backed, founder bootstrapped, and PE-owned SaaS companies, SaaS benchmarks produced by VC firms generally include only those firms who tried to raise VC — i.e., the moonshots or at least wannabe moonshots.
  • By analogy, this is the difference between comparing your SAT scores to Ivy League admittees vs. Ivy League applicants vs. all test takers. (The mid-fifty percentile for Ivy League admittees is 1468-1564, overall it’s 950-1250, and for applicants I don’t know.)
  • I’ve always felt you should, in a perfect world, cut benchmarks by aspiration. You run a company differently when you’re a VC-fueled, share-grabbing moonshot vs. a bootstrapped founder hoping to sell to a PE sponsor in 3 years. Thus, this data is most relevant when you’re trying to raise money from a firm like Insight.

Table of Contents (Slide 4)

Just kidding. Nothing to add here.

Executive Summary: Sales KPIs (Slide 5)

Here we can see key metrics, cut by size, and grouped into five areas: growth & profitability, sales efficiency, retention & churn, GTM strategy, and sales productivity.

Before we go row-by-row into the metrics, I’ll share my impressions on the table itself.

  • CAC payback period (CPP) is simply not a sales efficiency metric. While many people confuse it as one, payback periods are measured in time (e.g., months) — which is itself a clue — and they are risk metrics, not return metrics. They answer the question: how long does it take to get your money back [1]? Pathological example: CPP of 12 months and 100% churn rate means you get your money back in a year but never get anything else. It’s not measuring efficiency. It’s not measuring return. It’s measuring time to payback [2].
  • I’ve never heard of SaaS quick ratio before, but from finance class I remember that the quick ratio is a liquidity metric, so I’m curious.
  • I wouldn’t view pipeline coverage as a sales productivity metric, but agree it should be included in the list and I view its placement as harmless.

Now, I’ll share my reactions as I go row-by-row:

  • ARR growth. The rates strike me as strong, partially validating the view that these are Ivy League applicants. For example, median 106% growth between $10M and $20M is strong. For more views on best-in-class growth rate, see my post on The Rule of 56789.
  • New + expansion growth rate. This seems to reveal a common taxonomy problem. If you consider new logo ARR and expansion ARR as two independent, top-level categories you end up with no parent category or name. For this reason, I prefer new ARR to be the parent category, with new ARR having two subcategories: from existing customers (expansion ARR) and from new customers (new logo ARR). See my recent SaaS Metrics 101 talk. In Dave-speak, row 1 is ending ARR growth rate and row 2 is new ARR growth rate.
  • Efficiency rule. I haven’t heard precisely of this before but I’m guessing it’s some variation on burn multiple. We’ll review it later. Surprised they lack data for the bigger categories.
  • CAC payback period (CPP). The prior discussion aside, these numbers look very strong raising two questions: who are these companies again and are they calculating it funny?
  • SaaS quick ratio. We’ll come back to this once I know what it is. If it’s a liquidity ratio (and it turns out it’s not) then these companies would be swimming in cash.
  • Magic number. Usually this is the inverse of the CAC ratio, but sometimes (and as defined by Scale) calculated using revenue, not ARR. When I invert the magic numbers here, I see CAC ratios of 1.4, 1.1, 1.0, 1.3, and 1.3 across the five categories — which are all pretty good.
  • For fun, let’s do some metrics footing. In practice, CPP is usually around 15 / magic number [3], so I can create an implied CPP (which is 21.4, 16.7, 15.0, 18.8, and 18.8). Since those values are about 1.4x the reported CPPs, I’m pretty sure we’re not defining everything the same way. We’ll see what we find later [4].
  • S&M % of revenue. A good metric, and a quick skim again shows pretty solid numbers.  Let’s compare to RevOps Squared, which hits a broad population of SaaS companies, and shows ~35%, ~35%, 54%, 43%, and 45% across the five categories [5]. The notable difference is that Insight’s companies spend more earlier (83%, 45% in the first two categories), presumably because they’re shooting for higher growth.
  • Net revenue retention (NRR) aka net dollar retention (NDR) [6]. While there is a definitional question here, the numbers themselves look very strong (cf. RevOps Squared at ~103%, ~104%, 110%, 106%, and 102%). I believe this reflects Insight’s high-flying sample more than a calculation difference, but maybe we’ll learn differently later.
  • Gross revenue retention (GRR) aka gross dollar retention (GDR). This is an increasingly popular metric because investors are increasingly concerned that one train may hide another [7] in analyzing expansion and shrinkage, and thus want to see both NRR and GRR. The figures again look quite strong (cf. RevOps Squared at ~86%, ~87%, 88%, 88%, and 87%). This reinforces the point that we need to understand the sample differences behind benchmarks: Insight sets a much higher bar on NRR and GDR than RevOps Squared [8].
  • Annual revenue churn (rate). I’ve never heard it exactly this way, but this is some sort of churn rate.  It looks very close to 1 – GRR (i.e., plus or minus 1-2%), so it’s hard to understand why I need both.  More later.
  • NPS (net promoter score). The first question is always for which role because NPS can vary widely across end users, primary users, administrators, and economic decision makers. That can also lead to random weightings across those categories. That said, the numbers here strike me as setting a very high bar.
  • New bookings as a % of total bookings.  This is a good metric, but I look at it the other way (i.e., expansion %) and use new ARR, not bookings [9].  That is, I prefer expansion ARR as a % of new ARR and I like to run around 30%, lower when you’re smaller and higher when you’re bigger.
  • Average sales cycle (ASC) (months). This was the row that shocked me the most — with numbers like 2.5, I’d have guessed they were measuring quarters, not months. Then again, I come from an enterprise background, but I do work with some SMB companies. Let’s see if they drill into it later. And remember it’s a median; I’d love to see the distribution and a cut by deal size.
  • S&M as % of total opex. I get why people do this [10] but I don’t like it as a metric, preferring S&M as a percent of revenue. (Cf. RevOps Squared where S&M as % of revenue runs 30-50%.)
  • Sales % of S&M expense.  I like this metric a lot, and it’s happily gaining in popularity.  I prefer to track sales/marketing expense ratio, which I think is more intuitive but uses the same numbers, just compared differently.  In my experience, the sales/marketing ratio runs around 2-1, equivalent to 66% when viewing sales as a percent of S&M.  More important than baseline value, companies need to watch how this changes over time; it’s often a function of sales’ superior negotiating ability and leverage more than anything else.  See my post.
  • Sales headcount as % of total headcount.  I get where they’re coming from with this metric, but I prefer to track what I call quota carrying rep (QCR) density = QCRs / sales headcount.  I’m trying to measure the percent of the sales org that is actually carrying an incremental quota [11].  See my post, the Two Engines of SaaS, which introduces both QCR density and its product equivalent, DEV density.  Because I don’t track this one, I have no intuitive reaction to the numbers.
  • Bookings per rep. I’m imagining this is what I’d call new ARR per rep, aka sales (or AE) productivity, measured in new ARR per year. These numbers strike me as correct for enterprise, but inconsistent with a 3-month ASC — that usually connotes smaller deals and lower sales productivity on the order of $600K ARR/year. The key rule of thumb here is that bookings/rep is ideally 4x a rep’s on-target earnings (OTE). So this data implies sellers with $250K OTE.
  • Pipeline coverage. While technically speaking I don’t view pipeline coverage as a sales productivity metric, it’s an important metric and I’m glad they benchmarked it. In my experience 2.5 to 3.0x coverage is sufficient and when I see numbers above 3x, I get worried about several things (e.g., cleanliness, win rate, sales accountability, and if marketing is being proactively thrown under the bus). These numbers thus concern me, but sadly do not surprise me.
  • Pipeline conversion rate. This is notionally the inverse of pipeline coverage if both are measured for the same time period. I do track them independently because, in enterprise, starting pipeline is a mix of opportunities created in the past 1-4 quarters, and the eventual (cohort-based) close rate is not the same as the week-3 current-quarter conversion rate. The glaring inconsistency here, speaking on behalf of CMOs everywhere, is this: sales saying they want 4.0x coverage on a pipeline that closes at 44% is buying a 1.75x insurance policy on the number. I get that we all like cushion, but it’s expensive and such heavy cushion puts the monkey on the back of the pipeline sources (e.g., marketing, SDR, partners, and to a lesser extent, sales itself). Think: if we drown sales in pipeline, then we can’t miss the number! Math: if you close 44% of it, you need 2.3x coverage, not 4.0x.
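That last bit of math generalizes: the coverage you actually need is just the reciprocal of your conversion rate. A quick sketch:

```python
# Required coverage is the reciprocal of the pipeline conversion rate.
conversion_rate = 0.44
required = 1 / conversion_rate
print(f"Required coverage at {conversion_rate:.0%} conversion: {required:.1f}x")  # ~2.3x

# Asking for 4.0x coverage at a 44% close rate over-insures the number:
print(f"Insurance factor: {4.0 * conversion_rate:.2f}x")  # ~1.75x
```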

Go-To-Market Sales Motion Definitions (Slide 6)

Holy cow.  We’re only on slide six.  Thanks for reading this far and have no fear, it’s largely downhill from here — the Insight center of excellence pitch starts on slide 12, so we have only six slides to go.

I think slide six is superfluous and confusing. 

  • In reality, they are not cutting the data by sales motion, they are cutting it by deal size (ASP). 
  • They say they are using ASP as a proxy for sales motion, but I think it’s actually the other way around:  they seem to be preparing to use sales motion as a proxy for ASP, but then they don’t present any data cut by sales motion.
  • The category names are confusing.  I’ve been doing this a while and don’t get the distinction between the solution and consultative sale based on the names alone.

The reality is simple:  if they later present data cut by sales motion remember that it’s actually cut by ASP.  But they don’t.  So much ado about nothing.

Also, the ASCs by sales type look correct in this chart yet the data has a median ASC of 2-3 months.  Ergo, one must assume it’s heavily weighted towards the transactional, but that seems inconsistent with sales (bookings) productivity numbers [12].  Hum.

Growth and Profitability Metrics (Slide 7)

OK, I now realize what’s going on.  I was expecting this report to drill down in slides 7-11, presenting key metrics by subject area cut by size and/or sales motion — but that’s not where we’re headed.  I almost feel like this is the teaser for a bigger report.

Thus, we are now in the definitions section and along with each definition they present the top quartile boundary (as opposed to the medians in the summary table) for each metric. Because these top quartiles are across the whole range (i.e., from $0 to $100M+ companies) they aren’t terribly meaningful. It’d be nice if Insight presented the quartiles cut by company size and ASP a la RevOps Squared. Consider that an enhancement request.

Insight has an interesting take on the “efficiency rule,” which is what most people call the burn multiple (cash burn / net new ARR).  Insight inverts it (i.e., net new ARR / cash burn) [13] and suggests that top quartile companies score 1.0x or better. 

David Sacks suggests the following ranges for burn multiple:  <1.0 amazing (consistent with Insight’s top quartile), 1 to 1.5 great, 1.5 to 2.0 good, 2.0 to 3.0 suspect, and >3.0  bad.
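Here’s a minimal sketch of the two framings, with hypothetical numbers, plus Sacks’ rating bands from above:

```python
# Burn multiple (cash burn / net new ARR) vs. Insight's inverted
# "efficiency rule" (net new ARR / cash burn). Numbers are hypothetical.
cash_burn = 8_000_000
net_new_arr = 10_000_000

burn_multiple = cash_burn / net_new_arr     # 0.80x
efficiency_rule = net_new_arr / cash_burn   # 1.25x (>= 1.0x is Insight top quartile)

def sacks_rating(bm: float) -> str:
    """Classify a burn multiple per David Sacks' suggested ranges."""
    if bm < 1.0: return "amazing"
    if bm < 1.5: return "great"
    if bm < 2.0: return "good"
    if bm < 3.0: return "suspect"
    return "bad"

print(f"Burn multiple: {burn_multiple:.2f}x ({sacks_rating(burn_multiple)})")
print(f"Efficiency rule: {efficiency_rule:.2f}x")
```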

Finally, Insight seems to believe that the efficiency rule is only for smaller companies and I don’t quite understand that.  Perhaps it’s because their bigger companies are all cash flow positive and they don’t burn money at all!  The math still works with a negative sign and there are plenty of big, cash-burning companies out there (where the metric’s value is admittedly more meaningful) so I apply burn multiple to cash-burning companies of all sizes.

Separately, Bessemer has a related metric called cash conversion score (CCS), which is not a period metric but an inception-to-date metric. CCS = current ARR / net cash consumed from inception to date. They do an interesting regression that predicts investment IRR as a function of CCS — if you need a reminder of why VCs ultimately care about these metrics [14].

Sales Efficiency Metrics (Slide 8)

Thoughts:

  • They define CAC on a per-customer basis, don’t define CAC ratio (the same but per new ARR dollar) and don’t actually present either in the summary table.  Odd.
  • They use what I believe is a non-standard definition of CAC payback period, defining it on ARR as opposed to subscription gross profit. For most people, CAC payback period is not months of subscription revenue — it’s months of subscription gross profit — to pay back the CAC investment. This explains why their numbers look so good. To be comparable to most other benchmarks, you need to multiply their CAC payback periods by 1.25 to 1.5 (see the sketch after this list). This is a great example of why we need to understand what we’re looking at when doing benchmarking. In this case, you learn that you’re doing much better than you thought!
  • They suggest that top quartile is <12 months for small and medium deals, and <18 months for large ones, equivalent to 15 and 22.5 months assuming the more standard formula and 80% subscription gross margins.
  • They define the SaaS quick ratio, which is a bad name [15] for a good concept.  In my parlance, it’s simply = new ARR / churn ARR, i.e., the ratio between inflows and outflows of the SaaS leaky bucket.  I generally track net customer expansion = new ARR – churn ARR, so I don’t have an intuitive sense here.  They say 4x+ is top quartile.
  • They define magic number on revenue, not ARR, as does its inventor.  I prefer CAC ratio because I think it’s more intuitive (i.e., S&M required to get $1 of new ARR) and it’s based on ARR, not revenue.  For public companies, you have to use revenue because you typically don’t have ARR; for private ones, you do.  They say a 1.0x+ magic number is top quartile.
  • They say S&M as % of revenue top quartile is 37% [16].
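Here’s a minimal sketch of that conversion, assuming the standard gross-profit-based formula and an 80% subscription gross margin:

```python
# Convert an ARR-based CAC payback period (Insight's definition) to the more
# standard gross-profit-based one: divide by subscription gross margin.
def standard_cpp(arr_based_cpp_months: float, subscription_gm: float = 0.80) -> float:
    return arr_based_cpp_months / subscription_gm

print(standard_cpp(12))        # 15.0 months (their small/medium-deal top quartile)
print(standard_cpp(18))        # 22.5 months (their large-deal top quartile)
print(standard_cpp(12, 0.75))  # 16.0 months at a 75% gross margin
```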

Retention and Churn Metrics (Slide 9)

OK, just a few more slides to go:

  • For NRR and GRR, they use a bridge approach (i.e., starting + adds – subtracts = ending) which calculates what I call lazy NRR and GRR. 
  • To me, these metrics are defined in terms of cohorts/snapshots (deliberately, to float over some of the things people do in those bridges) and you should calculate them as such; a toy contrast follows this list. See my post for a detailed explanation.
  • Annual revenue churn, as defined, is pretty non-standard and a weak metric because it’s highly gameable.  You want to stop using the service?  Wait, let me renew you for one dollar.  The churn ARR masked as downsell would be invisible.  If you want to count logos, count logos — and do logo-based as well as dollar-based churn rates.  For more on churn rates and calculations, see Churn is Dead, Long Live Net Dollar Retention.
  • Net promoter score. As mentioned above, I think they’re setting a high bar on NPS, saying the benchmark is 50%+. I’d have guessed 25-30%+.
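Here’s a toy contrast of the two NRR calculations. It shows just one way a bridge can drift (by letting a mid-year customer’s expansion leak into the expansion line); the customers and numbers are made up:

```python
# Toy contrast: cohort/snapshot NRR vs. bridge-based ("lazy") NRR.
start_arr = {"a": 100, "b": 100, "c": 100}         # snapshot at t0
end_arr = {"a": 130, "b": 80, "c": 0, "new1": 50}  # one year later; new1 joined mid-year

# Cohort NRR: only customers present at t0 count in the numerator.
cohort_nrr = sum(end_arr.get(c, 0) for c in start_arr) / sum(start_arr.values())

# Lazy NRR from bridge line items. Suppose the bridge's expansion line also
# picks up 10 of expansion from new1 (who joined mid-year at 40, grew to 50).
expansion, shrinkage, churn = 30 + 10, 20, 100
start = sum(start_arr.values())
lazy_nrr = (start + expansion - shrinkage - churn) / start

print(f"Cohort NRR: {cohort_nrr:.0%}")  # 70%
print(f"Lazy NRR:   {lazy_nrr:.0%}")    # 73%, overstated by the leak
```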

GTM Strategy Metrics (Slide 10)

One more time, thoughts:

  • Selling motion is not really a metric yet it’s defined here.  Moreover, it’s differently and better defined on slide 6.  They try to classify a company’s sales motion by the motion that has 75% or more of its reps.  This won’t work for many companies with multiple motions because no one motion accounts for 75% of the team.
  • New (logo) ARR as % of new ARR.   I mapped this to my terminology for clarity.  They say 75% is top quartile, but that doesn’t make sense to me.  This is a Goldilocks metric, not a higher-is-better metric.  If you’re getting a lot more than 70% of your new ARR from new logos, I wonder why you’re not doing more with the installed base.  If you’re getting a lot less than 70%, I wonder why you aren’t winning more new customers.
  • Average sales cycle (ASC).  They say the benchmark is 3-6 months for a transactional motion (where just two rows above they use a different taxonomy of field, inside, and hybrid) and 9-12 months for consultative.  On slide 6 they say transactional is <3 months, solution is 3-9 months, and consultative is 6-12+ months.  It’s not shockingly inconsistent, but they need to clean it up.

Sales Productivity Metrics (Slide 11)

Last slide, here are my thoughts:

  • Bookings per rep. Just when we thought it was safe to finish with a simple, clear metric, we find an issue. They define bookings/rep = new ARR / number of fully-ramped reps. If the intent of the metric is to know what a typical fully-ramped rep can sell, it’s the wrong calculation. What’s the right one? Ramped AE productivity = new ARR from ramped reps / number of ramped reps. As expressed, they’re including bookings from ramping reps in the numerator and that overstates the productivity number (see the sketch after this list). See my post on the rep ramp chart for more.
  • They say top quartile is $993K/year which strikes me as good in mid-market, light in enterprise, and impossibly high in SMB.
  • Here is where they really need to segment the benchmark by sales motion yet, despite the hubbub around defining sales motions, they don’t do it.
  • Pipeline coverage is somewhat misdefined in my opinion.  By default it should be calculated relative to plan, not a projection or forecast.  It should also be calculated on a to-go basis during the quarter (remaining pipeline / to-go to plan) and, in cases where the forecast is significantly different from plan, it makes sense to calculate it on a to-forecast basis as well.  
  • Conversion rate is defined correctly, provided we have a clear and consistent understanding of “starting.” For me, it’s day 1, week 3 of the quarter — allowing sales two weeks to recover from the prior close and clean up this quarter’s pipeline. Maybe I’m too nice; it should probably be day 1, week 2. Also, remember that conversion rates are quite different for new and expansion ARR pipeline, so you should always segment this metric accordingly. I look at it overall (aka blended) as well, but I’m aware that it’s really a blended average of two different rates and if the mix changes, the rate will change along with it.
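Here’s a minimal sketch of the bookings-per-rep point, with made-up reps, showing how including ramping reps’ bookings in the numerator overstates ramped productivity:

```python
# Ramped AE productivity: count only new ARR sold by ramped reps in the numerator.
reps = [
    {"name": "r1", "ramped": True, "new_arr": 900_000},
    {"name": "r2", "ramped": True, "new_arr": 1_100_000},
    {"name": "r3", "ramped": False, "new_arr": 400_000},  # still ramping
]
ramped = [r for r in reps if r["ramped"]]

overstated = sum(r["new_arr"] for r in reps) / len(ramped)             # $1.2M
ramped_productivity = sum(r["new_arr"] for r in ramped) / len(ramped)  # $1.0M

print(f"All new ARR / ramped reps:        ${overstated:,.0f}")
print(f"Ramped ARR / ramped reps (right): ${ramped_productivity:,.0f}")
```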

Sales & CS Center of Excellence (CoE) (Slide 12)

Alas, the pitch for Insight’s CoE begins here, so our work is done.  Thanks for sticking with me thus far.  And feel free to click through the rest of Insight’s deck.

Thanks to Insight for producing this report.  I hope in this post that I’ve demonstrated that there is significantly more work than meets the eye in understanding and interpreting a seemingly simple benchmark report.

# # #

Notes

[1] Ironically, CPP doesn’t even do this well. It’s a theoretical payback period (which is very much not the intent of capital budgeting, which is typically done on a cash basis). The problem? In enterprise SaaS, you typically get paid once/year so an 8-month CPP is actually a 30-60 day CPP (i.e., the time it takes to collect receivables, known as days sales outstanding) and an 18-month CPP is, on a cash basis, actually a 365-days-plus-DSO one. That is, in enterprise, your actual CPP is always some multiple of 12 plus your DSO.

[2] You can argue it’s a quasi-efficiency metric in that a faster payback period means more efficient sales, but it might also mean higher subscription gross margin. Moreover, the trumping argument is simple: if you want to measure sales efficiency look at CAC ratio — that’s exactly what it does.

[3] CPP in months = 12 * (CAC ratio / subscription gross margin), see this post. Subscription GM usually runs around 80%, so rearranging a bit: CPP = 12 * (1/0.8) * CAC ratio = 15 * CAC ratio = 15 / magic number. Neat, huh? If you prefer assuming 75% subscription GM, then it’s 16 / magic number.

[4] I like metrics footing as a quick way to reveal differences in calculation and/or definition of metrics.

[5] The tildes indicate that I’ve eyeball-rebucketed figures because the categories don’t align at the low end.

[6] Dollar is used generically here to mean value-based, not count-based. But that’s an awkward metric name for a company that reports in Euros. Hence the world is moving to saying NRR and GRR over NDR and GDR.

[7] Referring to a sign at French railroad crossings and meaning that investors are less willing to look only at NRR, because a good NRR of 115% can be the result of 20% expansion and 5% shrinkage or 50% expansion and 35% shrinkage.

[8] I doubt there is a calculation difference here because GRR is a pretty straightforward metric.

[9] I define “bookings” as turning into cash quickly (e.g., 30-60 days). It’s a useful concept for cash modeling. See my SaaS Metrics 101 talk. Here, I don’t think they mean cash, and I think they’re forced into using “bookings” because they haven’t defined new ARR as inclusive of both new logo and expansion.

[10] Because in early-stage companies total opex is often greater than revenue, but I prefer the consistency of just doing it against revenue and knowing that the sum of S&M, G&A, and R&D as a % of revenue may well be over 100%.

[11] Not overlaid or otherwise double-counted quota, as a product overlay sales person or an alliances manager might.

[12] Bear in mind these are all medians of a distribution so it’s certainly possible there is no inconsistency, but it is suspicious.

[13] There’s a lot of “you say tomato, I say tomato” here.  Some prefer to think, “how much do I need to burn to get $1 of net new ARR?” resulting in a multiple.  Others prefer to think, “how much net new ARR do I extract from $1 of burn?” resulting in what I’d call an extraction ratio.  I prefer multiples.  The difference between Bessemer’s original CAC ratio (ARR/S&M) and what I view as today’s standard (S&M/ARR) was this same issue.

[14] Scale does a similar thing with its magic number.

[15] It’s a rotten name because the quick ratio is a liquidity ratio that compares a company’s most liquid assets (e.g., cash and equivalents, marketable securities, net accounts receivable) to its current liabilities. I think I get the intended metaphor, but it doesn’t work for me.

[16] They actually have this weird thing where they either put a number in black or orange. Black means “benchmark” but with an undefined percentile. Orange means Insight top quartile because no industry standard benchmark is available. Which calls into question what that means because there are certainly benchmarks for some of these figures out there.