Video of the Balderton SaaS Metrics That Matter Webinar

Just a quick post to share the recording of the webinar we did yesterday, where my Balderton Capital colleague Michael Lavner and I discussed the SaaS Metrics That Matter. 

You can find the slides here.  The video is available here.

Thanks to everyone who attended and for the great questions that kept it interactive.

Interpreting The Insight 2023 Sales KPI Report

Insight Partners recently published an excellent 2023 Sales KPI Report. As I went through it, I thought it could be educational and fun to write a companion guide for three distinct audiences:

  • The intimidated. Those who find SaaS benchmark reports as impenetrable as James Joyce. The post could serve as my Ulysses Guide for the interested but in need of assistance.
  • The cavalier. Those who are perhaps too comfortable, too quick to jump into the numbers, and ergo potentially misinterpreting the data. The post could serve to slow them down and make them think a bit more before diving into interpretation.
  • The interested.  Those who simply enjoy deeper thinking on this topic and who are curious about how someone like me (i.e., someone who spends far too much time thinking about SaaS metrics) approaches it.

So, let’s try it.  I’ll go page-by-page through the short guide, sharing impressions and questions that arise in my mind as I read this report.  As usual, this ended up being about five times as much work as I expected at the outset.

Onwards!  Grab your copy and let’s go.

Introduction (Slide 3)

Yikes, there are footnotes to the first paragraph. What do they say?

  • They’re cutting the data by size bucket (aka, “scale-up stage”). I suspect they use this specific language because Scale Up is a key element of Insight’s positioning.
  • They’re also cutting the data by go-to-market (GTM) motion: transactional, solution, or consultative. This is a cool idea, but it’s misleading because those descriptive names are simply a proxy for deal size (aka average selling price, or ASP).
  • While the names don’t really matter (they are just labels for deal size buckets), I find “transactional” clear, but I don’t see a difference between “solution” and “consultative” sales.  I’m guessing “solution” means selling a solution directly to a business buyer (e.g., selling a budgeting system to a VP of FP&A) and “consultative” means a complex sale with multiple constituents.
  • Ambiguity aside, the flaw here is the imperfect correlation between deal size and sales motion. Yes, deal size does generally imply a sales motion, but the correlation is not 100%. (I’ve seen big, rather transactional deals and small, highly consultative ones.) They’d be better off just saying “small, medium, and large” deals rather than trying to map them to sales motions. We need to remember that later in interpretation.

Now we can read the second paragraph of the first page.

  • Data is self-reported from 300+ software companies that Insight has worked with in the past year.
  • That’s nice, because 300 companies is a pretty large set of data.
  • But beware the “Insight has worked with.” Insight is a top-tier firm so this is not a random sample of SaaS companies. I’m guessing “working with” Insight means tried and/or succeeded in raising money from Insight. So I’d argue that this data likely contains a random blend of top-tier companies (who reasonably think they are Insight material) and non-self-aware companies (who think they are, but aren’t).
  • Nevertheless, I’m guessing this is a pretty high quality group. While some SaaS benchmarks include a broad mix of VC-backed, founder bootstrapped, and PE-owned SaaS companies, SaaS benchmarks produced by VC firms generally include only those firms who tried to raise VC — i.e., the moonshots or at least wannabe moonshots.
  • By analogy, this is the difference between comparing your SAT scores to Ivy League admittees vs. Ivy League applicants vs. all test takers. (The middle-fifty-percentile range for Ivy League admittees is 1468-1564, overall it’s 950-1250, and for applicants I don’t know.)
  • I’ve always felt you should, in a perfect world, cut benchmarks by aspiration. You run a company differently as a VC-fueled, share-grabbing moonshot than as a founder-bootstrapped company hoping to sell to a PE sponsor in three years. Thus, this data is most relevant when you’re trying to raise money from a firm like Insight.

Table of Contents (Slide 4)

Just kidding. Nothing to add here.

Executive Summary: Sales KPIs (Slide 5)

Here we can see key metrics, cut by size, and grouped into five areas: growth & profitability, sales efficiency, retention & churn, GTM strategy, and sales productivity.

Before we go row-by-row into the metrics, I’ll share my impressions on the table itself.

  • CAC payback period (CPP) is simply not a sales efficiency metric. While many people confuse it as one, payback periods are measured in time (e.g., months) — which is itself a clue — and they are risk metrics, not return metrics. They answer the question: how long does it take to get your money back [1]? Pathological example: CPP of 12 months and 100% churn rate means you get your money back in a year but never get anything else. It’s not measuring efficiency. It’s not measuring return. It’s measuring time to payback [2]. (See the toy sketch after this list.)
  • I’ve never heard of SaaS quick ratio before, but from finance class I remember that the quick ratio is a liquidity metric, so I’m curious.
  • I wouldn’t view pipeline coverage as a sales productivity metric, but agree it should be included in the list and I view its placement as harmless.
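
To make the payback-versus-return distinction concrete, here is a minimal sketch with hypothetical, per-customer numbers (mine, not the report’s): two companies with identical 12-month CAC payback periods but very different retention, and therefore very different returns.

```python
# Toy illustration: identical CAC payback, very different returns.
# All figures are hypothetical, per-customer, and annual.

def cumulative_gross_profit(arr, gross_margin, annual_retention, years):
    """Sum subscription gross profit over `years`, with ARR compounding
    by `annual_retention` each year (0.0 = full churn, 1.1 = 110% NRR)."""
    total, current_arr = 0.0, arr
    for _ in range(years):
        total += current_arr * gross_margin
        current_arr *= annual_retention
    return total

cac = 80_000   # cost to acquire the customer
arr = 100_000  # first-year ARR
gm = 0.80      # subscription gross margin

# Both companies pay back CAC in 12 months of subscription gross profit.
payback_months = 12 * cac / (arr * gm)

churn_and_burn = cumulative_gross_profit(arr, gm, annual_retention=0.0, years=5)
healthy = cumulative_gross_profit(arr, gm, annual_retention=1.1, years=5)

print(f"CAC payback: {payback_months:.0f} months for both")          # 12 months
print(f"5-year gross profit at 100% churn: {churn_and_burn:,.0f}")   # 80,000 (CAC back, nothing more)
print(f"5-year gross profit at 110% NRR:   {healthy:,.0f}")          # ~488,000
```

Same payback period, wildly different returns, which is the sense in which CPP is a time (risk) metric rather than an efficiency or return metric.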

Now, I’ll share my reactions as I go row-by-row:

  • ARR growth. The rates strike me as strong, partially validating the view that these are Ivy League applicants. For example, median 106% growth between $10M and $20M is strong. For more views on best-in-class growth rate, see my post on The Rule of 56789.
  • New + expansion growth rate. This seems to reveal a common taxonomy problem. If you consider new logo ARR and expansion ARR as two independent, top-level categories you end up with no parent category or name. For this reason, I prefer new ARR to be the parent category, with new ARR having two subcategories: from existing customers (expansion ARR) and from new customers (new logo ARR). See my recent SaaS Metrics 101 talk. In Dave-speak, row 1 is ending ARR growth rate and row 2 is new ARR growth rate.
  • Efficiency rule. I haven’t heard of this precise term before, but I’m guessing it’s some variation on the burn multiple. We’ll review it later. I’m surprised they lack data for the bigger categories.
  • CAC payback period (CPP). The prior discussion aside, these numbers look very strong, raising two questions: who are these companies again, and are they calculating it funny?
  • SaaS quick ratio. We’ll come back to this once I know what it is. If it’s a liquidity ratio (and it turns out it’s not) then these companies would be swimming in cash.
  • Magic number. Usually this is the inverse of the CAC ratio, but sometimes (and as defined by Scale) calculated using revenue, not ARR. When I invert the magic numbers here, I see CAC ratios of 1.4, 1.1, 1.0, 1.3, and 1.3 across the five categories — which are all pretty good.
  • For fun, let’s do some metrics footing. In practice, CPP is usually around 15 / magic number [3], so I can create an implied CPP (which is 21.4, 16.7, 15.0, 18.8, and 18.8). Since those values are about 1.4x the reported CPPs, I’m pretty sure we’re not defining everything the same way. We’ll see what we find later [4]. (See the sketch after this list for the arithmetic.)
  • S&M % of revenue. A good metric, and a quick skim again shows pretty solid numbers.  Let’s compare to RevOps Squared, which hits a broad population of SaaS companies, and shows ~35%, ~35%, 54%, 43%, and 45% across the five categories [5]. The notable difference is that Insight’s companies spend more earlier (83%, 45% in the first two categories), presumably because they’re shooting for higher growth.
  • Net revenue retention (NRR) aka net dollar retention (NDR) [6]. While there is a definitional question here, the numbers themselves look very strong (cf. RevOps Squared at ~103%, ~104%, 110%, 106%, and 102%). I believe this reflects Insight’s high-flying sample more than a calculation difference, but maybe we’ll learn differently later.
  • Gross revenue retention (GRR) aka gross dollar retention (GDR). This is an increasingly popular metric because investors are increasingly concerned that one train may hide another [7] in analyzing expansion and shrinkage, and thus want to see both NRR and GRR. The figures again look quite strong (cf. RevOps Squared at ~86%, ~87%, 88%, 88%, and 87%). This reinforces the point that we need to understand the sample differences behind benchmarks: Insight sets a much higher bar on NRR and GDR than RevOps Squared [8].
  • Annual revenue churn (rate). I’ve never heard it exactly this way, but this is some sort of churn rate.  It looks very close to 1 – GRR (i.e., plus or minus 1-2%), so it’s hard to understand why I need both.  More later.
  • NPS (net promoter score).  The first question is always for which role, because NPS can vary widely across end users, primary users, administrators, and economic decision makers.  That can also lead to random weightings across those categories.  That said, the numbers here strike me as setting a very high bar.
  • New bookings as a % of total bookings.  This is a good metric, but I look at it the other way (i.e., expansion %) and use new ARR, not bookings [9].  That is, I prefer expansion ARR as a % of new ARR and I like to run around 30%, lower when you’re smaller and higher when you’re bigger.
  • Average sales cycle (ASC) (months).  This was the row that shocked me the most — with numbers like 2.5, I’d have guessed they were measuring quarters, not months.  Then again, I come from an enterprise background, but I do work with some SMB companies.  Let’s see if they drill into it later.  And remember it’s a median; I’d love to see the distribution and a cut by deal size.
  • S&M as % of total opex.  I get why people do this [10] but I don’t like it as a metric, preferring S&M as a percent of revenue. (Cf. RevOps Squared where S&M as % of revenue runs 30-50%.)
  • Sales % of S&M expense.  I like this metric a lot, and it’s happily gaining in popularity.  I prefer to track the sales/marketing expense ratio, which I think is more intuitive but uses the same numbers, just compared differently.  In my experience, the sales/marketing ratio runs around 2:1, equivalent to 66% when viewing sales as a percent of S&M.  More important than the baseline value, companies need to watch how this changes over time; it’s often a function of sales’ superior negotiating ability and leverage more than anything else.  See my post.
  • Sales headcount as % of total headcount.  I get where they’re coming from with this metric, but I prefer to track what I call quota carrying rep (QCR) density = QCRs / sales headcount.  I’m trying to measure the percent of the sales org that is actually carrying an incremental quota [11].  See my post, the Two Engines of SaaS, which introduces both QCR density and its product equivalent, DEV density.  Because I don’t track this one, I have no intuitive reaction to the numbers.
  • Bookings per rep.  I’m imagining this is what I’d call new ARR per rep, aka sales (or AE) productivity, measured in new ARR per year.  These numbers strike me as correct for enterprise, but inconsistent with a 3-month ASC — that usually connotes smaller deals and lower sales productivity on the order of $600K ARR/year.  The key rule of thumb here is that bookings/rep is ideally 4x a rep’s on-target earnings (OTE).  So this data implies sellers with $250K OTE.
  • Pipeline coverage.  While technically speaking I don’t view pipeline coverage as a sales productivity metric, it’s an important metric and I’m glad they benchmarked it.  In my experience 2.5 to 3.0x coverage is sufficient, and when I see numbers above 3x, I get worried about several things (e.g., cleanliness, win rate, sales accountability, and whether marketing is being proactively thrown under the bus).  These numbers thus concern me, but sadly do not surprise me.
  • Pipeline conversion rate.  This is notionally the inverse of pipeline coverage if both are measured for the same time period.  I do track them independently because, in enterprise, starting pipeline is a mix of opportunities created in the past 1-4 quarters, and the eventual (cohort-based) close rate is not the same as the week-3 current-quarter conversion rate.  The glaring inconsistency here, speaking on behalf of CMOs everywhere, is this:  sales saying they want 4.0x coverage on a pipeline that closes at 44% is buying a 1.75x insurance policy on the number.  I get that we all like cushion, but it’s expensive and such heavy cushion puts the monkey on the back of the pipeline sources (e.g., marketing, SDR, partners, and to a lesser extent, sales itself).  Think:  if we drown sales in pipeline, then we can’t miss the number!  Math:  if you close 44% of it, you need 2.3x coverage, not 4.0x.
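
For readers who want to replicate the footing above, here is a minimal sketch of the arithmetic. The 80% subscription gross margin and the example inputs are my illustrative assumptions, not figures from the report.

```python
# Metrics footing: quick sanity checks that tie the summary metrics together.
# Assumes ~80% subscription gross margin (my assumption, not the report's).

def cac_ratio_from_magic_number(magic_number: float) -> float:
    """The CAC ratio is the inverse of the magic number."""
    return 1.0 / magic_number

def implied_cpp_months(magic_number: float, sub_gross_margin: float = 0.80) -> float:
    """CPP in months = 12 * CAC ratio / subscription gross margin,
    which at 80% margin reduces to 15 / magic number (see note [3])."""
    return 12.0 * cac_ratio_from_magic_number(magic_number) / sub_gross_margin

def required_pipeline_coverage(conversion_rate: float) -> float:
    """If you close X% of starting pipeline, you need 1/X coverage to hit plan."""
    return 1.0 / conversion_rate

# A 1.0x magic number implies a 1.0 CAC ratio and a ~15-month implied payback.
print(implied_cpp_months(1.0))           # 15.0
# A 44% conversion rate implies ~2.3x required coverage, so asking for 4.0x
# coverage is buying a ~1.75x insurance policy on the number.
print(required_pipeline_coverage(0.44))  # ~2.27
print(4.0 * 0.44)                        # 1.76 -- i.e., ~1.75x of plan covered
```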

Go-To-Market Sales Motion Definitions (Slide 6)

Holy cow.  We’re only on slide six.  Thanks for reading this far and have no fear, it’s largely downhill from here — the Insight center of excellence pitch starts on slide 12, so we have only six slides to go.

I think slide six is superfluous and confusing. 

  • In reality, they are not cutting the data by sales motion, they are cutting it by deal size (ASP). 
  • They say they are using ASP as a proxy for sales motion, but I think it’s actually the other way around:  they seem to be preparing to use sales motion as a proxy for ASP, but then they don’t present any data cut by sales motion.
  • The category names are confusing.  I’ve been doing this a while and don’t get the distinction between the solution and consultative sale based on the names alone.

The reality is simple:  if they later present data cut by sales motion remember that it’s actually cut by ASP.  But they don’t.  So much ado about nothing.

Also, the ASCs by sales type look correct in this chart, yet the data has a median ASC of 2-3 months.  Ergo, one must assume it’s heavily weighted towards the transactional, but that seems inconsistent with the sales (bookings) productivity numbers [12].  Hmm.

Growth and Profitability Metrics (Slide 7)

OK, I now realize what’s going on.  I was expecting this report to drill down in slides 7-11, presenting key metrics by subject area cut by size and/or sales motion — but that’s not where we’re headed.  I almost feel like this is the teaser for a bigger report.

Thus, we are now in the definitions section, and along with each definition they present the top quartile boundary (as opposed to the medians in the summary table) for each metric.  Because these top quartiles are across the whole range (i.e., from $0 to $100M+ companies), they aren’t terribly meaningful.  It’d be nice if Insight presented the quartiles cut by company size and ASP a la RevOps Squared.  Consider that an enhancement request.

Insight has an interesting take on the “efficiency rule,” which is what most people call the burn multiple (cash burn / net new ARR).  Insight inverts it (i.e., net new ARR / cash burn) [13] and suggests that top quartile companies score 1.0x or better. 

David Sacks suggests the following ranges for burn multiple:  <1.0 amazing (consistent with Insight’s top quartile), 1 to 1.5 great, 1.5 to 2.0 good, 2.0 to 3.0 suspect, and >3.0  bad.

Insight also seems to believe that the efficiency rule is only for smaller companies, and I don’t quite understand that.  Perhaps it’s because their bigger companies are all cash flow positive and they don’t burn money at all!  The math still works with a negative sign and there are plenty of big, cash-burning companies out there (where the metric’s value is admittedly more meaningful), so I apply the burn multiple to cash-burning companies of all sizes.

Finally, Bessemer has a related metric called cash conversion score (CCS), which is not a period metric but an inception-to-date metric.  CCS = current ARR / net cash consumed from inception to date.  They do an interesting regression that predicts investment IRR as a function of CCS — if you need a reminder of why VCs ultimately care about these metrics [14].
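
To keep these three related capital-efficiency metrics straight, here is a minimal sketch with illustrative numbers of my own (not Insight’s, Sacks’s, or Bessemer’s):

```python
# Three related capital-efficiency metrics, side by side.
# Figures are illustrative only.

def burn_multiple(net_cash_burn: float, net_new_arr: float) -> float:
    """Dollars burned per dollar of net new ARR (the Sacks framing)."""
    return net_cash_burn / net_new_arr

def efficiency_rule(net_cash_burn: float, net_new_arr: float) -> float:
    """Insight's efficiency rule is the inverse: net new ARR per dollar burned."""
    return net_new_arr / net_cash_burn

def cash_conversion_score(current_arr: float, net_cash_consumed_to_date: float) -> float:
    """Bessemer's CCS: current ARR / net cash consumed from inception to date."""
    return current_arr / net_cash_consumed_to_date

# A company that burned $10M this year to add $8M of net new ARR:
print(burn_multiple(10e6, 8e6))    # 1.25x -- "great" on the Sacks scale
print(efficiency_rule(10e6, 8e6))  # 0.8x -- below Insight's 1.0x top-quartile bar
# The same company at $20M ARR, having consumed $40M of cash since inception:
print(cash_conversion_score(20e6, 40e6))  # 0.5x
```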

Sales Efficiency Metrics (Slide 8)

Thoughts:

  • They define CAC on a per-customer basis, don’t define the CAC ratio (the same idea, but per dollar of new ARR), and don’t actually present either in the summary table.  Odd.
  • They use what I believe is a non-standard definition of CAC payback period, defining it on ARR as opposed to subscription gross profit.  For most people, CAC payback period is not months of subscription revenue — it’s months of subscription gross profit — to pay back the CAC investment. This explains why their numbers look so good.  To be comparable to most other benchmarks, you need to multiply their CAC payback periods by 1.25 to 1.5.  (See the sketch after this list.)  This is a great example of why we need to understand what we’re looking at when doing benchmarking.  In this case, you learn that you’re doing much better than you thought!
  • They suggest that top quartile is <12 months for small and medium deals, and <18 months for large ones, equivalent to 15 and 22.5 months assuming the more standard formula and 80% subscription gross margins.
  • They define the SaaS quick ratio, which is a bad name [15] for a good concept.  In my parlance, it’s simply = new ARR / churn ARR, i.e., the ratio between inflows and outflows of the SaaS leaky bucket.  I generally track net customer expansion = new ARR – churn ARR, so I don’t have an intuitive sense here.  They say 4x+ is top quartile.
  • They define magic number on revenue, not ARR, as does its inventor.  I prefer CAC ratio because I think it’s more intuitive (i.e., S&M required to get $1 of new ARR) and it’s based on ARR, not revenue.  For public companies, you have to use revenue because you typically don’t have ARR; for private ones, you do.  They say a 1.0x+ magic number is top quartile.
  • They say S&M as % of revenue top quartile is 37% [16].
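
Here is a minimal sketch of that definitional difference and the conversion. The inputs and the 80% gross margin are assumptions for illustration, not the report’s figures.

```python
# CAC payback period two ways: on ARR (as Insight appears to define it)
# vs. on subscription gross profit (the more common definition).
# Example inputs are hypothetical.

def cpp_on_arr(sm_spend: float, new_arr: float) -> float:
    """Months of new ARR needed to pay back the S&M investment."""
    return 12.0 * sm_spend / new_arr

def cpp_on_gross_profit(sm_spend: float, new_arr: float, sub_gross_margin: float) -> float:
    """Months of subscription gross profit needed to pay back the S&M investment."""
    return 12.0 * sm_spend / (new_arr * sub_gross_margin)

sm_spend, new_arr, gm = 10e6, 10e6, 0.80
print(cpp_on_arr(sm_spend, new_arr))               # 12.0 months
print(cpp_on_gross_profit(sm_spend, new_arr, gm))  # 15.0 months
```

At 80% subscription gross margin the conversion factor is 1.25x; at 67% it is 1.5x, which is where the 1.25-to-1.5 range above comes from.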

Retention and Churn Metrics (Slide 9)

OK, just a few more slides to go:

  • For NRR and GRR, they use a bridge approach (i.e., starting + adds – subtracts = ending) which calculates what I call lazy NRR and GRR. 
  • To me, these metrics are defined in terms of cohorts/snapshots (deliberately, to rise above some of the things people do in those bridges) and you should calculate them as such.  See my post for a detailed explanation, and the sketch after this list for a quick illustration.
  • Annual revenue churn, as defined, is pretty non-standard and a weak metric because it’s highly gameable.  You want to stop using the service?  Wait, let me renew you for one dollar.  The churn ARR masked as downsell would be invisible.  If you want to count logos, count logos — and do logo-based as well as dollar-based churn rates.  For more on churn rates and calculations, see Churn is Dead, Long Live Net Dollar Retention.
  • Net promoter score.  As mentioned above, I think they’re setting a high bar on NPS, saying the benchmark is 50%+.  I’d have guessed 25-30%+.
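
For the curious, here is a minimal sketch of the cohort/snapshot approach with toy data of my own, contrasted with a bridge-based (“lazy”) calculation:

```python
# NRR two ways: cohort/snapshot-based vs. bridge-based ("lazy") NRR.
# Toy data: ARR ($K) by customer, one year ago vs. today.

arr_year_ago = {"acme": 100, "beta": 50, "gamma": 50}
arr_today    = {"acme": 130, "beta": 40, "gamma": 0,   # gamma churned
                "delta": 60}                           # delta is a new logo

# Cohort-based NRR: only customers who existed a year ago count in the numerator.
cohort = arr_year_ago.keys()
nrr = sum(arr_today.get(c, 0) for c in cohort) / sum(arr_year_ago.values())
print(f"Cohort NRR: {nrr:.0%}")  # (130 + 40 + 0) / 200 = 85%

# Bridge-based ("lazy") NRR: starting + expansion - downsell - churn, over starting.
starting, expansion, downsell, churn = 200, 30, 10, 50
lazy_nrr = (starting + expansion - downsell - churn) / starting
print(f"Lazy NRR:   {lazy_nrr:.0%}")  # also 85% here, but the two can diverge
                                      # depending on how mid-period additions and
                                      # reclassifications are handled in the bridge
```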

GTM Strategy Metrics (Slide 10)

One more time, thoughts:

  • Selling motion is not really a metric, yet it’s defined here.  Moreover, it’s defined differently (and better) on slide 6.  They classify a company’s sales motion as the motion that accounts for 75% or more of its reps.  This won’t work for many companies with multiple motions because no one motion accounts for 75% of the team.
  • New (logo) ARR as % of new ARR.   I mapped this to my terminology for clarity.  They say 75% is top quartile, but that doesn’t make sense to me.  This is a Goldilocks metric, not a higher-is-better metric.  If you’re getting a lot more than 70% of your new ARR from new logos, I wonder why you’re not doing more with the installed base.  If you’re getting a lot less than 70%, I wonder why you aren’t winning more new customers.
  • Average sales cycle (ASC).  They say the benchmark is 3-6 months for a transactional motion (where just two rows above they use a different taxonomy of field, inside, and hybrid) and 9-12 months for consultative.  On slide 6 they say transactional is <3 months, solution is 3-9 months, and consultative is 6-12+ months.  It’s not shockingly inconsistent, but they need to clean it up.

Sales Productivity Metrics (Slide 11)

Last slide, here are my thoughts:

  • Bookings per rep.  Just when we thought it was safe to finish with a simple, clear metric, we find an issue. They define bookings/rep = new ARR / number of fully-ramped reps.  If the intent of the metric is to know what a typical fully-ramped rep can sell, it’s the wrong calculation.  What’s the right one?  Ramped AE productivity = new ARR from ramped reps / number of ramped reps.  As expressed, they’re including bookings from ramping reps in the numerator, and that overstates the productivity number.  See my post on the rep ramp chart for more, and the sketch after this list.
  • They say top quartile is $993K/year which strikes me as good in mid-market, light in enterprise, and impossibly high in SMB.
  • Here is where they really need to segment the benchmark by sales motion yet, despite the hubbub around defining sales motions, they don’t do it.
  • Pipeline coverage is somewhat misdefined in my opinion.  By default it should be calculated relative to plan, not a projection or forecast.  It should also be calculated on a to-go basis during the quarter (remaining pipeline / to-go to plan) and, in cases where the forecast is significantly different from plan, it makes sense to calculate it on a to-forecast basis as well.  
  • Conversion rate is defined correctly, provided we have a clear and consistent understanding of “starting.”  For me, it’s day 1, week 3 of the quarter — allowing sales two weeks to recover from the prior close and clean up this quarter’s pipeline.  Maybe I’m too nice; it should probably be day 1, week 2.  Also, remember that conversion rates are quite different for new and expansion ARR pipeline, so you should always segment this metric accordingly.  I look at it overall (aka blended) as well, but I’m aware that it’s really a blended average of two different rates and if the mix changes, the rate will change along with it.
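
Here is a minimal sketch of the calculations I prefer for this slide. All inputs are hypothetical.

```python
# Sales productivity and pipeline metrics, calculated as described above.
# All inputs are hypothetical.

def ramped_ae_productivity(new_arr_from_ramped: float, num_ramped_reps: int) -> float:
    """New ARR sold by fully-ramped reps / number of fully-ramped reps."""
    return new_arr_from_ramped / num_ramped_reps

def report_style_productivity(total_new_arr: float, num_ramped_reps: int) -> float:
    """The report's formula: all new ARR (including ramping reps') over ramped reps only."""
    return total_new_arr / num_ramped_reps

def to_go_pipeline_coverage(remaining_pipeline: float, plan: float, closed_to_date: float) -> float:
    """Coverage on a to-go basis: remaining pipeline / what's left to close against plan."""
    return remaining_pipeline / (plan - closed_to_date)

# 10 ramped reps sold $8M of new ARR; 4 ramping reps added another $1M.
print(ramped_ae_productivity(8e6, 10))     # $800K/rep
print(report_style_productivity(9e6, 10))  # $900K/rep -- flattering, but overstated
# Mid-quarter: $6M still in the pipeline, $10M plan, $4M already closed.
print(to_go_pipeline_coverage(6e6, 10e6, 4e6))  # 1.0x -- you'd better close all of it
```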

Sales & CS Center of Excellence (CoE) (Slide 12)

Alas, the pitch for Insight’s CoE begins here, so our work is done.  Thanks for sticking with me thus far.  And feel free to click through the rest of Insight’s deck.

Thanks to Insight for producing this report.  I hope in this post that I’ve demonstrated that there is significantly more work than meets the eye in understanding and interpreting a seemingly simple benchmark report.

# # #

Notes

[1] Ironically, CPP doesn’t even do this well. It’s a theoretical payback period (which is very much not the intent of capital budgeting, which is typically done on a cash basis). The problem? In enterprise SaaS, you typically get paid once per year, so an 8-month CPP is actually a 30-60 day CPP (i.e., the time it takes to collect receivables, known as days sales outstanding) and an 18-month CPP is, on a cash basis, actually a 365-days-plus-DSO one. That is, in enterprise, your actual CPP is always some multiple of 12 plus your DSO.

[2] You can argue it’s a quasi-efficiency metric in that a faster payback period means more efficient sales, but it might also mean higher subscription gross margin. Moreover, the trumping argument is simple:  if you want to measure sales efficiency, look at the CAC ratio — that’s exactly what it does.

[3] CPP in months = 12 * (CAC ratio / subscription gross margin), see this post. Subscription GM usually runs around 80%, so, re-arranging a bit, CPP = 12 * (1/0.8) * CAC ratio = 15 * CAC ratio = 15 / magic number. Neat, huh? If you prefer assuming 75% subscription GM, then it’s 16 / magic number.

[4] I like metrics footing as a quick way to reveal differences in calculation and/or definition of metrics.

[5] The tildes indicate that I’ve eyeball-rebucketed figures because the categories don’t align at the low end.

[6] Dollar is used generically here to mean value-based, not count-based. But that’s an awkward metric name for a company that reports in Euros. Hence the world is moving to saying NRR and GRR over NDR and GDR.

[7] Referring to a sign at French railroad crossings and meaning that investors are less willing to look only at NRR, because a good NRR of 115% can be the result of 20% expansion and 5% shrinkage or 50% expansion and 35% shrinkage.

[8] I doubt there is a calculation difference here because GRR is a pretty straightforward metric.

[9] I define “bookings” as turning into cash quickly (e.g., 30-60 days).  It’s a useful concept for cash modeling.  See my SaaS Metrics 101 talk.  Here, I don’t think they mean cash, and I think they’re forced into using “bookings” because they haven’t defined new ARR as inclusive of both new logo and expansion.

[10] Because in early-stage companies total opex is often greater than revenue, but I prefer the consistency of just doing it against revenue and knowing that the sum of S&M, G&A, and R&D as a % of revenue may well be over 100%.

[11] Not overlaid or otherwise double-counted quota, as a product overlay sales person or an alliances manager might.

[12] Bear in mind these are all medians of a distribution, so it’s certainly possible there is no inconsistency, but it is suspicious.

[13] There’s a lot of “you say tomato, I say tomato” here.  Some prefer to think, “how much do I need to burn to get $1 of net new ARR?” resulting in a multiple.  Others prefer to think, “how much net new ARR do I extract from $1 of burn?” resulting in what I’d call an extraction ratio.  I prefer multiples.  The difference between Bessemer’s original CAC ratio (ARR/S&M) and what I view as today’s standard (S&M/ARR) was this same issue.

[14] Scale does a similar thing with its magic number.

[15] It’s a rotten name because the quick ratio is a liquidity ratio that compares a company’s most liquid assets (e.g., cash and equivalents, marketable securities, net accounts receivable) to its current liabilities.  I think I get the intended metaphor, but it doesn’t work for me.

[16] They actually have this weird thing where they put each number in either black or orange.  Black means “benchmark” but with an undefined percentile.  Orange means Insight top quartile because no industry-standard benchmark is available.  Which calls into question what “benchmark” means here, because there certainly are industry benchmarks for some of these figures out there.

Slides from my Radia Accelerator Presentation: SaaS Metrics 101

Here’s a quick post to share my slides from the presentation I gave today at the Radia Accelerator, a UK-based accelerator for female SaaS entrepreneurs, as part of my work with Balderton Capital.

Our topic was SaaS Metrics 101. In building the deck, I tried to do three things:

  • Start with the basics: definitions and such.
  • Build up in layers: start simple and layer into complex.
  • Cover the metrics you are likely to be asked about as a SaaS entrepreneur: pick based not upon what I like, but on what I think you’re likely to get asked (e.g., LTV/CAC vs. NRR).

I want to thank the audience for their attendance and engagement. To use my favorite George W. Bush malapropism, I greatly misunderestimated the time I’d need to get through the material, but by taking every minute and staying reasonably interactive we made it through. And remember, there are plenty of links in the deck for people who want to dig deeper.

For those interested in this topic, I did a podcast a while back on SaaShimi which is the spiritual equivalent of this session, but in interview format and only slightly out of date.

Below are my Radia Accelerator slides. You can also download them on Drive.

Mark Tice Returns:  It’s Time For SaaS Companies To Do A Channel Check.

About three years ago, I had a conversation with an old friend that led to a post, Ten Pearls of Enterprise Software Startup Wisdom from My Friend Mark Tice.  In that (quite popular) post, I shared Mark’s top ten list of mistakes that enterprise software startups make in sales and go-to-market.  If you’ve not read it, take a look — particularly (in today’s environment) with an eye toward mistakes five through ten.

I enjoy talking with Mark because our skills and experience are complementary.  My core is marketing.  Mark’s is partners.   While we’ve both done bigger things from those foundations (e.g., Mark was a CEO and an operating partner at a PE firm), I believe you’re never quite as comfortable and fluent as you are in the area where you grew up. 

Moreover, since partners is generally considered even more of a dark art than marketing, it’s great to have a friend in the business.  When it comes to partners, the Twitter cliché “few understand this” is actually a reality.

Before diving into Mark’s guest post, below, I want to try and drain the swamp by defining some basics:

  • Partners should be used as the catch-all term to describe companies with whom you have one or more relationships that are presumably friendly and mutually beneficial.
  • Alliances are a type of partner relationship.  Alliance partners collaborate with you to help sell your software, but — and this is key — they do not sell your software.
  • Channels are another type of partner relationship.  Channel partners sell your software (i.e., “they take the paper”) and they may do so either working in collaboration with you (e.g., a local system integrator who does implementations and takes paper) or on their own (e.g., a software vendor who embeds your product in theirs and sells the composite).

The thing that took me years to learn is that we should classify relationships, not companies.  For example, Deloitte is one of several global systems integrators (GSIs).  GSI is a type of company; it describes their business.  It is not a relationship type.  In fact, a software vendor may have several different partner relationships with a GSI.  For example,

  • A North American co-sell relationship (alliance) whereby the GSI agrees to place the vendor on their recommended solutions list, work with them to sell and implement customers, and perhaps do some joint marketing programs.
  • They may have a global embedded resale relationship (channel) with the GSI’s Financial Services practice where the software vendor’s product is sold as part of a bigger vertical solution, with no involvement from the vendor.
  • There may be a services relationship (channel) where the GSI agrees to use the vendor’s strategic consultants, acquired at a wholesale price, blended into the team responsible for a project (and in order to ensure there is specific product expertise on the team).
  • There could be a value-added resale relationship (channel) in certain regions (where the vendor does not yet have a presence) where the GSI sells the software and associated services, acting as a geographic distributor in those regions.

Thus saying, “we have a GSI relationship with Deloitte,” as you can hopefully see, doesn’t make a lot of sense. GSI is a type of company. We can and often do have several different relationships with a single GSI.

To summarize, at a high level there are two types of partner relationships: channels and alliances. Channels sell software, alliances don’t.

Note that most people are not this rigorous in their thinking and tend to use partner, channel, and alliance as synonyms and refer to companies using relationship types — and confusion can sometimes result.

With those basics in place, let’s move on to Mark’s top five recommendations for how SaaS companies can do a “channel check” — well, I suppose he means partner check, but channel check does have that alliterative ring to it.

Over to Mark 🡪

The other day Dave and I were discussing how companies are adapting to the many changes in today’s landscape and I honed in on one of my favorite topics, partnerships. 

For healthy SaaS companies, 25 to 50% of ARR is positively influenced in some way by partners.  Some resell.  Some recommend.  Some create a vacuum in the market that can be exploited (e.g., technology alliance partners). And yet, in most SaaS companies, the person looking after partners has an office that’s the equivalent of Harry Potter’s bedroom underneath the stairs and their phone only rings when the company needs a quote for a press release or sponsors for the user conference.  Compound this general lack of attention with reductions in headcount and tough economic times, and partners can devolve from an afterthought to a never-thought. 

As a former channel sales manager, strategic partners executive, CEO, and operating partner at a private equity (PE) firm, I talk with a lot of companies (and investors) about how to leverage partners to grow SaaS businesses.  Here’s my list of top five partner-related issues that you can use to do a “channel check” on your SaaS company. 

Just for fun, we’ll do it in countdown format.

5.  Enablement.  More often than you might expect, partners depend not on your partner program, but on relationships with individual sales reps or marketers to get the latest news, slide decks, competitive information, and collateral. The simple fix is to spin up a portal that gives partners self-service access to basic sales tools and training.  Yes, you should be careful to keep confidential information confidential — and that might require a few edits here and there — but providing easy access to the latest and greatest information will really improve the health and abilities of your partners.

4. International.  We could dedicate the entire blog post to going international, but we’ll keep this brief by focusing on an example I recently found working with a $50M SaaS company.  They were the leader in the US market but in fourth place internationally, so I asked why they weren’t paying more attention to the international opportunity.  They answered that they needed to focus on getting another few points of market share in the US and that they viewed international as “tactical revenue.”  Now I’m not sure what they meant by tactical revenue [1], but my take is any revenue that we can get — without introducing new core product requirements [2] — is good revenue and if we can’t get it ourselves, then why not use partners to get it?  Moreover, geographic distribution is one of the cleanest forms of partnership because you can set up distributors to entirely avoid dreaded channel conflicts [3].

But in this company, the person running partners was an entry-level marketer who lacked both the experience and the influence to drive the business. They’d signed partnerships with geographic exclusivity, partners were pricing far below market, and (despite the low prices) their win rates were far below those of the direct sales force.  The worst example was a partner with long-term exclusivity in a major country who called to inform the company that they were handing the business off to their son, who was too old to keep working in construction and had zero software experience.  (Guess we won’t be selling anything in that geography for a while.)

Don’t do this.  Instead,

  • Leverage geographic distributors to sell your software in low-hanging-fruit countries [4].
  • Sign de facto preferred, but not exclusive distribution relationships.
  • Pick the de facto preferred partner based on who presents the best market development plan for the geography.
  • Hire a professional partners or channels manager to oversee the distributor relationships.

3. Pricing.  There are 3 keys to channel partner pricing. First, make sure the price list is appropriate for the intended market. This is most obvious in the international example above, but it also applies domestically — e.g., if partners are representing your company in the mid-market, be sure your pricing is appropriate for the value you deliver and your position in that space.  Second, be sure your pricing includes all of the elements required for success (e.g., starter kits).  Don’t give partners a partial price list that leaves customers hanging in deployment or solution development.  Third, always tie a channel partner’s discount to your price list and not net revenue.  (Or, if you have to use net revenue, then make sure there’s a sufficiently high floor price in the contract [5].)  Royalties based on net revenue almost always encourage partners to discount your product disproportionately and shift the revenue to the higher-margin portions of their solutions. 
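
Mark’s third point, tying partner discounts to list price rather than net revenue, is easiest to see with numbers. Here is a toy sketch of mine (not Mark’s), with hypothetical prices and rates, showing how a net-revenue royalty lets a partner shift bundle value away from your software:

```python
# Toy illustration of why net-revenue royalties invite disproportionate discounting.
# All prices and rates are hypothetical.

LIST_PRICE = 100_000     # your software's list price
PARTNER_DISCOUNT = 0.30  # scheme A: partner buys at 30% off list
VENDOR_SHARE = 0.70      # scheme B: you get 70% of the partner's net software revenue

def vendor_take_list_based() -> float:
    """Tied to list: the partner owes 70% of list, no matter what they charge."""
    return LIST_PRICE * (1 - PARTNER_DISCOUNT)

def vendor_take_net_revenue(software_price_in_bundle: float) -> float:
    """Tied to net revenue: you get a share of whatever the partner allocates to software."""
    return software_price_in_bundle * VENDOR_SHARE

# The partner sells a $180K bundle (your software plus their services).
print(vendor_take_list_based())          # $70K, regardless of how the bundle is allocated
print(vendor_take_net_revenue(100_000))  # $70K if the software is priced at list...
print(vendor_take_net_revenue(60_000))   # ...but $42K if value is shifted to their services
```

A floor price in the contract, as Mark notes, caps how far that shift can go.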

2. Sales alignment and compensation.  Getting the right alignment and compensation structure requires a Goldilocks solution.  Paying a kicker to reps when partners resell may cost you additional commissions and lost ARR if there was an opportunity to take the deal direct.  But refusing to pay a finder’s fee when a partner brings you a big deal may cost you not only that big deal but also drive the partner (and all their other deals) to a competitor. Getting clear on how you want the sales team to interact with partners and tuning sales compensation to match is key.  It’s hard work.  There is no one right answer.  And there will always be conflicts — the goal isn’t to eliminate them, but to manage them.

By the way, if you haven’t adjusted your sales and partner compensation models for a few years, then they’re unlikely to be “just right” today.  Run a review. 

1. Partner strategy.  Most SaaS companies treat partners as tactical extensions to their business – necessary evils on bad days and nice-to-haves on good ones. The key is to get clarity on what you want from partners, identify and recruit the right partners on the right business terms, put together the right enablement program, and then execute like crazy.

The strategy and elements underneath it should be revisited once a year as your business evolves. You change over time and so do your partners.  You need to revisit strategy and relationships accordingly.

What should you do to make sure you’re maximizing your partner ecosystem?  Remember these three principles:

  • Channels are about optimizing market reach versus margin. 
  • Partnerships shouldn’t be an afterthought. Tending to the basics of partnerships should be part of business as usual.
  • You get the channels/partnerships you deserve.

There’s a simple test to see if your partner strategy and program are sound. Hop on a Zoom (or get in a conference room) with your CEO, VP Sales, VP Marketing, and head of partnerships.  Ask everyone to take five minutes to write down a short description of your partner strategy and list your top 3 partners. Then spend 30 to 60 minutes talking about your answers, how well they align, where they don’t align (you might be surprised here), and what you want to do to get on the same page.  Agree on a new set of goals and then set OKRs accordingly.

Once you get more deliberate about partners, there are three things to keep in mind:

  • The 80/20 rule definitely applies to partners. You don’t have to do anything extraordinary to make the business thrive. Do the basics and do them well and that will almost always lead to success.
  • If your company is challenged to sell direct, fix that first. Don’t fall into the trap of thinking, “we can’t sell our products, so we’ll get partners to do it for us.”  It never ends well [6].
  • Most SaaS companies will exit to larger software companies and the best exits often start with partnerships. It’s never too early to partner with big players and form executive-level relationships so when the acquiring General Manager hopefully signs up to pay above-market price for your company, they’ll do so with the full confidence that you can deliver.

# # #

Notes (by Dave)

[1] They probably meant non-strategic or opportunistic revenue which is a reasonable concept when considering target markets.  That is, if you have a strategic focus on financial services, that revenue is strategic in the sense that you are actively looking for more of it and thus eager to hear about product requirements that will enhance the product for other financial services customers.  But, at the same time, if a pharma company wants to buy the product “as is,” then you should be happy to sell it to them.  With strategic revenue, you can entertain new product requirements discussions.  With opportunistic revenue, sales need to sell what’s on the truck.

[2] And if the product is properly built, localization isn’t really a core product requirement.  Moreover, localization isn’t even always required.

[3] By using either contractually exclusive (undesirable) or de facto preferred partners in any given geography.  

[4] That is, where it’s easy, where localization requirements are limited, and you don’t need to build language skills within your company to work with local staff.

[5] They can always call to request a special discount below the floor.  In practice, this usually acts as a highly desirable big deal detector.

[6] Amen to that.

Mark is currently working as an advisor.  If you want to reach him, shoot him a LinkedIn message or InMail. If you have questions or comments, you can also post them as blog comments here and I’ll make sure Mark sees them.

Slides from my SaaStock Presentation: How To Connect Your C-Suite to the Ground Truth

Earlier today I spoke at SaaStock USA in Austin and gave a presentation on connecting your c-suite to the ground truth. I think it’s important that startup founders and leaders understand how easy it is to get disconnected from the ground truth (i.e., the reality of the field, deals, and customer conversations) and how that problem only gets worse as you scale. Largely it’s caused by layers of management, but it starts early — with just one.

It’s also caused by process, specifically the review process that most startups use. On messaging, you see the blueprint (in a QBR session), but not the house. In sales, you review the playbook, but don’t see the play. Thus, constantly seeking the ground truth — what’s actually happening on the ground — is an important part of any startup leader’s job, for both operational and strategic reasons.

In this presentation, I dive into why the problem occurs, why it’s worth solving, and what you can do about it. Specifically, I offer three ways you can connect yourself and your c-suite to the ground truth:

  • Deploy conversation intelligence (CI) software. If you’ve done so already, try to climb the value stack I outline.
  • Run third-party win/loss analysis.
  • Perform an annual or event-driven proprietary market study where you answer both fairly standard questions and, ideally, a list of special questions you’ve accumulated using a hypothesis file.

The slides are embedded below [1]. You can download them on Drive [2].

Thanks to everyone who attended and to SaaStock for having me.

# # #

Notes

See my blog bio and FAQ for relevant affiliations.

[1] As a series of images using a WordPress block type. This is new for me so please excuse any bugs. I had to stop using Slideshare because they have become amazingly intrusive with advertising (so as to make the site unusable, IMHO) and I no longer want them monetizing my content.

[2] For the time being, slide downloads are only available on Drive. Of late, I had been making them available on Drive and Slideshare, but I can no longer use the latter; see note [1].