
Interpreting The Insight 2023 Sales KPI Report

Insight Partners recently published an excellent 2023 Sales KPI Report. As I went through it, I thought it could be educational and fun to write a companion guide for three distinct audiences:

So, let’s try it.  I’ll go page-by-page through the short guide, sharing impressions and questions that arise in my mind as I read this report.  As usual, this has ended up being about five times as much work as I thought at the outset.

Onwards!  Grab your copy and let’s go.

Introduction (Slide 3)

Yikes, there are footnotes to the first paragraph. What do they say?

Now we can read the second paragraph of the first page.

Table of Contents (Slide 4)

Just kidding. Nothing to add here.

Executive Summary: Sales KPIs (Slide 5)

Here we can see key metrics, cut by size, and grouped into five areas: growth & profitability, sales efficiency, retention & churn, GTM strategy, and sales productivity.

Before we go row-by-row into the metrics, I’ll share my impressions on the table itself.

Now, I’ll share my reactions as I go row-by-row:

Go-To-Market Sales Motion Definitions (Slide 6)

Holy cow.  We’re only on slide six.  Thanks for reading this far and have no fear, it’s largely downhill from here — the Insight center of excellence pitch starts on slide 12, so we have only six slides to go.

I think slide six is superfluous and confusing. 

The reality is simple:  if they later present data cut by sales motion, remember that it’s actually cut by ASP.  But they don’t.  So much ado about nothing.

Also, the average sales cycles (ASCs) by sales type look correct in this chart, yet the data has a median ASC of 2-3 months.  Ergo, one must assume the data is heavily weighted towards the transactional, but that seems inconsistent with the sales (bookings) productivity numbers [12].  Hmm.

Growth and Profitability Metrics (Slide 7)

OK, I now realize what’s going on.  I was expecting this report to drill down in slides 7-11, presenting key metrics by subject area cut by size and/or sales motion — but that’s not where we’re headed.  I almost feel like this is the teaser for a bigger report.

Thus, we are now in the definitions section, and along with each definition they present the top quartile boundary (as opposed to the medians in the summary table) for each metric.  Because these top quartiles are across the whole range (i.e., from $0 to $100M+ companies) they aren’t terribly meaningful.  It’d be nice if Insight presented the quartiles cut by company size and ASP à la RevOps Squared.  Consider that an enhancement request.

Insight has an interesting take on the “efficiency rule,” which is what most people call the burn multiple (cash burn / net new ARR).  Insight inverts it (i.e., net new ARR / cash burn) [13] and suggests that top quartile companies score 1.0x or better. 

David Sacks suggests the following ranges for the burn multiple:  <1.0 amazing (consistent with Insight’s top quartile), 1.0 to 1.5 great, 1.5 to 2.0 good, 2.0 to 3.0 suspect, and >3.0 bad.
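
To make the two framings concrete, here is a minimal Python sketch (the function names and example figures are mine, purely illustrative, not Insight’s or Sacks’s) that computes both the burn multiple and Insight’s inverted efficiency ratio from the same inputs, then buckets the burn multiple using the Sacks ranges above.

def burn_multiple(cash_burn: float, net_new_arr: float) -> float:
    """Cash burned per dollar of net new ARR (the usual framing)."""
    return cash_burn / net_new_arr

def efficiency_ratio(cash_burn: float, net_new_arr: float) -> float:
    """Insight's inverted framing: net new ARR extracted per dollar of burn."""
    return net_new_arr / cash_burn

def sacks_bucket(bm: float) -> str:
    """Rough label using the David Sacks ranges quoted above."""
    if bm < 1.0:
        return "amazing"
    if bm <= 1.5:
        return "great"
    if bm <= 2.0:
        return "good"
    if bm <= 3.0:
        return "suspect"
    return "bad"

# Hypothetical example: $12M of cash burn against $8M of net new ARR.
bm = burn_multiple(12.0, 8.0)      # 1.5x burn multiple
er = efficiency_ratio(12.0, 8.0)   # ~0.67x on Insight's inverted scale
print(f"burn multiple {bm:.2f}x ({sacks_bucket(bm)}), efficiency {er:.2f}x")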

Also, Insight seems to believe that the efficiency rule is only for smaller companies, and I don’t quite understand that.  Perhaps it’s because their bigger companies are all cash flow positive and don’t burn money at all!  The math still works with a negative sign, and there are plenty of big, cash-burning companies out there (where the metric’s value is admittedly more meaningful), so I apply the burn multiple to cash-burning companies of all sizes.

Finally, Bessemer has a related metric called the cash conversion score (CCS), which is not a period metric but an inception-to-date metric:  CCS = current ARR / net cash consumed from inception to date.  They do an interesting regression that predicts investment IRR as a function of CCS, in case you need a reminder of why VCs ultimately care about these metrics [14].
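
Since CCS is inception-to-date rather than period-based, a quick sketch makes the contrast with the burn multiple clear (again my own helper and hypothetical figures, not Bessemer’s):

def cash_conversion_score(current_arr: float, net_cash_consumed_itd: float) -> float:
    """Bessemer's CCS: current ARR divided by net cash consumed from inception to date."""
    return current_arr / net_cash_consumed_itd

# Hypothetical example: $20M of ARR today against $25M of net cash consumed since founding.
print(f"CCS = {cash_conversion_score(20.0, 25.0):.2f}x")  # CCS = 0.80x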

Sales Efficiency Metrics (Slide 8)

Thoughts:

Retention and Churn Metrics (Slide 9)

OK, just a few more slides to go:

GTM Strategy Metrics (Slide 10)

One more time, thoughts:

Sales Productivity Metrics (Slide 11)

Last slide, here are my thoughts:

Sales & CS Center of Excellence (CoE) (Slide 12)

Alas, the pitch for Insight’s CoE begins here, so our work is done.  Thanks for sticking with me thus far.  And feel free to click through the rest of Insight’s deck.

Thanks to Insight for producing this report.  I hope this post has demonstrated that there is significantly more work than meets the eye in understanding and interpreting a seemingly simple benchmark report.

# # #

Notes

[1] Ironically, CPP doesn’t even do this well. It’s a theoretical payback period (which is very much not the intent of capital budgeting, which is typically done on a cash basis). The problem? In enterprise SaaS, you typically get paid once per year, so an 8-month CPP is actually a 30-60 day CPP (i.e., the time it takes to collect receivables, known as days sales outstanding, or DSO) and an 18-month CPP is, on a cash basis, actually a 365-days-plus-DSO one. That is, in enterprise, your actual CPP is always some multiple of 12 months plus your DSO.
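
A small sketch of that arithmetic (my own helper; it assumes annual prepayment, so any theoretical payback beyond a 12-month boundary waits for the next annual payment plus DSO):

import math

def cash_basis_payback_days(theoretical_cpp_months: float, dso_days: int = 45) -> int:
    """Approximate cash-basis payback for an annually prepaid enterprise contract."""
    extra_years = max(math.ceil(theoretical_cpp_months / 12) - 1, 0)
    return 365 * extra_years + dso_days

print(cash_basis_payback_days(8))   # 45: the first annual prepayment already covers CAC
print(cash_basis_payback_days(18))  # 410: you wait roughly a year plus DSO for year two's cash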

[2] You can argue it’s a quasi-efficiency metric in that a faster payback period means more efficient sales, but it might also mean higher subscription gross margin. Moreover, the trumping argument is simple:  if you want to measure sales efficiency, look at the CAC ratio, because that’s exactly what it does.

[3] CPP in months = 12 * (CAC ratio / subscription gross margin); see this post. Subscription GM usually runs around 80%, so rearranging a bit: CPP = 12 * (1/0.8) * CAC ratio = 15 * CAC ratio = 15 / magic number. Neat, huh? If you prefer assuming 75% subscription GM, then it’s 16 / magic number.
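
Or, as a quick sanity check in code (my own helper, using the same gross margin assumptions and the fact that the magic number is the inverse of the CAC ratio):

def cpp_months(cac_ratio: float, subscription_gm: float = 0.80) -> float:
    """CAC payback period in months: 12 * CAC ratio / subscription gross margin."""
    return 12 * cac_ratio / subscription_gm

magic_number = 1.0
print(cpp_months(1 / magic_number, 0.80))  # 15.0 months, i.e., 15 / magic number
print(cpp_months(1 / magic_number, 0.75))  # 16.0 months, i.e., 16 / magic number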

[4] I like metrics footing as a quick way to reveal differences in calculation and/or definition of metrics.

[5] The tildes indicate that I’ve eyeball-rebucketed figures because the categories don’t align at the low end.

[6] Dollar is used generically here to mean value-based, not count-based. But that’s an awkward metric name for a company that reports in Euros. Hence the world is moving to saying NRR and GRR over NDR and GDR.

[7] Referring to a sign at French railroad crossings and meaning that investors are less willing to look only at NRR, because a good NRR of 115% can be the result of 20% expansion and 5% shrinkage or 50% expansion and 35% shrinkage.
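
A toy calculation (a simplified single-cohort model with my own variable names, ignoring timing effects) shows how the same NRR can hide very different GRRs:

def nrr(expansion_pct: float, shrinkage_pct: float) -> float:
    """Net revenue retention on a starting cohort (simplified)."""
    return 1.0 + expansion_pct - shrinkage_pct

def grr(shrinkage_pct: float) -> float:
    """Gross revenue retention: counts only shrinkage, ignores expansion."""
    return 1.0 - shrinkage_pct

# Two companies with identical 115% NRR but very different underlying health.
for expansion, shrinkage in [(0.20, 0.05), (0.50, 0.35)]:
    print(f"NRR {nrr(expansion, shrinkage):.0%}, GRR {grr(shrinkage):.0%}")
# -> NRR 115%, GRR 95% vs. NRR 115%, GRR 65%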

[8] I doubt there is a calculation difference here because GRR is a pretty straightforward metric.

[9] I define “bookings” as turning into cash quickly (e.g., 30-60 days).  It’s a useful concept for cash modeling.  See my SaaS Metrics 101 talk.  Here, I don’t think they mean cash, and I think they’re forced into using “bookings” because they haven’t defined new ARR as inclusive of both new-logo and expansion.

[10] Because in early-stage companies total opex is often greater than revenue, but I prefer the consistency of just doing it against revenue and knowing that the sum of S&M, G&A, and R&D as a % of revenue may well be over 100%.

[11] Not overlaid or otherwise double-counted quota, as a product overlay sales person or an alliances manager might.

[12] Bear in mind these are all medians of a distribution, so it’s certainly possible there is no inconsistency, but it is suspicious.

[13] There’s a lot of “you say tomato, I say tomato” here.  Some prefer to think, “how much do I need to burn to get $1 of net new ARR?” resulting in a multiple.  Others prefer to think, “how much net new ARR do I extract from $1 of burn?” resulting in what I’d call an extraction ratio.  I prefer multiples.  The difference between Bessemer’s original CAC ratio (ARR/S&M) and what I view as today’s standard (S&M/ARR) was this same issue.

[14] Scale does a similar thing with its magic number.

[15] It’s a rotten name because the quick ratio is a liquidity ratio that compares a company’s most liquid assets (e.g., cash and equivalents, marketable securities, net accounts receivable) to its current liabilities.  I think I get the intended metaphor, but it doesn’t work for me.

[16] They actually have this weird thing where they put a number in either black or orange.  Black means “benchmark” but with an undefined percentile.  Orange means Insight top quartile, used when no industry-standard benchmark is available.  Which calls into question what “benchmark” means, because there are certainly benchmarks for some of these figures out there.
