Category Archives: Customer Success

Lazy NRR is Not NRR. Accept No Imitations or Substitutes.

The other day I was looking at an ARR bridge [1] with a young finance ace.  He made a few comments and concluded with, “and net revenue retention (NRR) is thus 112%, not bad.”

I thought, “Wait, stop!  You can’t calculate NRR from an ARR bridge [2].”  It’s a cohort-based measure.  You need to look at the year-ago cohort of customers, get their year-ago ARR, get that same group’s current ARR, and then divide the current ARR by the year-ago.

“Yes, you can,” he said.  “Just take starting ARR, add net expansion, and divide by starting ARR.  Voila.”

Expecto patronum!  Protect me from this dark magic.  I don’t know what that is, I thought, but that’s not NRR.

Then I stewed on it for a bit.  In some ways, we were both right.

  • Under the right circumstances, I think you can calculate NRR using an ARR bridge [3]. But the whole beauty of the metric is to float over that definitional swamp and just divide two numbers — so I inherently don’t want to.
  • My friend’s definition, one I suspect is common in finance whiz circles, was indeed one shortcut too short. But, under the right circumstances, you can improve it into a better approximation.

The Trouble with Churn Rates
For a long time, I’ve been skeptical of calculations related to churn rates.  While my primary problems with churn rates were in the denominator [4], there are also potential problems with the numerator [5].  Worse yet, once churn rates get polluted, all downstream metrics get polluted along with them – e.g., customer lifetime (LT), lifetime value (LTV), and ergo LTV/CAC.  Those are key metrics to measure the value of the installed base — but they rely on churn rates which are easily gamed and polluted.

What if there were a better way to measure the value of the installed base?

There is.  That’s why my SaaStr 2019 session title was Churn is Dead, Long Live Net Dollar Retention [6].  The beauty of NRR is that it tells you what you want to know – once you acquire customers, what happens to them? – and you don’t have to care which of four churn rates were used.  Or how churn ARR itself was defined.  Or if mistakes were made in tracking flows.

You just need to know two things:  ARR-now and ARR-then for the “then” cohort of customers [7].
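
To make that concrete, here’s a minimal Python sketch of the snapshot- and cohort-based calculation.  The customer names and ARR figures are invented for illustration; they’re not taken from the spreadsheet for this post.

```python
# ARR by customer ($K), snapshotted a year ago and now (illustrative figures only).
arr_year_ago = {"cust_1": 100, "cust_2": 200, "cust_3": 150}   # the "then" cohort
arr_now      = {"cust_1": 130, "cust_2": 180, "cust_4": 500}   # cust_3 churned, cust_4 is new

def cohort_nrr(then_snapshot, now_snapshot):
    """NRR = current ARR of the year-ago cohort / year-ago ARR of that same cohort.
    Churned customers stay in the denominator (and contribute 0 to the numerator);
    customers acquired since then (cust_4) are excluded entirely."""
    then_total = sum(then_snapshot.values())
    now_total = sum(now_snapshot.get(cust, 0) for cust in then_snapshot)
    return now_total / then_total

print(f"NRR = {cohort_nrr(arr_year_ago, arr_now):.0%}")   # (130 + 180 + 0) / 450 = 69%
```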

A Traditional ARR Bridge
To make our point, let’s review a traditional ARR bridge.

Nothing fancy here.  Starting ARR plus new ARR of two types:  new logo customers (aka, new logo ARR) and existing customers (aka, expansion ARR).  We could have broken churn ARR into two types as well (shrinkage and lost), but we didn’t need that breakout for this exercise.

Now, let’s add my four favorite rows to put beneath an ARR bridge [8]:

Here’s a description:

  • Net new ARR = New ARR – churn ARR. How much the water level increased in the SaaS leaky bucket of ARR.  Here in 1Q21, imagine we spent $2,250K in S&M in the prior quarter.  Our CAC ratio would be a healthy 1.0 on a new ARR basis, but a far less healthy 2.1 on a net new ARR basis.  That’s due to our quarterly churn of 8% which, annualized to 32%, flies off the charts.  (See the sketch after this list.)
  • Expansion as a percent of new ARR = expansion ARR / new ARR. My sense is 30% is a magic number for an established growth-phase startup.  If you’re only at 10%, you’re likely missing the chance to expand your customers (which will also show up in NRR).  If you’re at 50%, I wonder why you can’t sell more new logo customers.  Has something changed in the market or the salesforce?
  • Net expansion = expansion ARR – churn ARR. Shows the net expansion or contraction of the customer base during the quarter.  How much of the bucket increase was due to existing (as opposed to new) customers?
  • Churn rate, quarterly. I included this primarily because it raises a point we’ll hit when discussing lazy NRR.  Many people calculate this as = churn ARR / starting ARR (quarter).  That’s what I call “simple quarterly,” and you’ll note that it’s always lower than just “quarterly,” which I define as = churn ARR / starting ARR (year) [9].  The trace-precedents arrows below show the difference.
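
For concreteness, here’s a minimal Python sketch of those four rows, plus the CAC ratios from the first bullet.  The input values are my own illustrative placeholders, chosen only to roughly reproduce the figures cited above; the real numbers live in the spreadsheet for this post.

```python
# Bridge inputs for one quarter ($K); illustrative placeholders, not the post's spreadsheet.
starting_arr_year    = 14_700   # ARR at the start of the fiscal year
starting_arr_quarter = 16_000   # ARR at the start of the current quarter
new_logo_arr         = 1_570    # new ARR from new logo customers
expansion_arr        = 680      # new ARR from existing customers
churn_arr            = 1_179    # churn ARR (shrinkage + lost)
sm_prior_quarter     = 2_250    # prior-quarter S&M spend

new_arr             = new_logo_arr + expansion_arr            # 2,250
net_new_arr         = new_arr - churn_arr                     # ~1,071
expansion_pct_new   = expansion_arr / new_arr                 # ~30%
net_expansion       = expansion_arr - churn_arr               # negative here
cac_ratio_new       = sm_prior_quarter / new_arr              # ~1.0
cac_ratio_net_new   = sm_prior_quarter / net_new_arr          # ~2.1
churn_rate_quarterly        = churn_arr / starting_arr_year      # "quarterly," ~8%
churn_rate_simple_quarterly = churn_arr / starting_arr_quarter   # "simple quarterly," always lower

print(f"Net new ARR: {net_new_arr:,.0f}K; expansion % of new: {expansion_pct_new:.0%}; net expansion: {net_expansion:,.0f}K")
print(f"CAC ratio (new): {cac_ratio_new:.1f}; CAC ratio (net new): {cac_ratio_net_new:.1f}")
print(f"Churn rate, quarterly: {churn_rate_quarterly:.0%}; simple quarterly: {churn_rate_simple_quarterly:.1%}")
```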

Lazy NRR vs. Cohort-Based NRR
With that as a rather extensive warm-up, let’s discuss what I call lazy NRR.

Lazy NRR is calculated as described above = (starting ARR + net expansion) / starting ARR.  Lazy NRR is a quarterly expansion metric.

Let’s look at a detailed example to see what’s really being measured.

This example shows the difference between cohort-based NRR and Lazy NRR:

  • Cohort-based NRR, a year-over-year metric that shows expansion of the two year-ago customers (customers 1 and 2).  This is, in my book, “real NRR.”
  • Lazy NRR, simple quarterly, which compares net expansion within the current quarter to starting ARR for that quarter.

The point of the trace-precedents arrows is to show you that, while the results might coincidentally be similar (in this case they are not), they are measuring two completely different things.

Let’s talk about the last row, lazy NRR, cohort-based approximation, which takes starting ARR from year-ago customers, then adds (all) net expansion over the year and divides by the year-ago starting ARR. The problem?  Customer 3.  They are not in the year-ago cohort, but contribute expansion to the numerator because, with only an ARR bridge, you can’t separate year-ago cohort net expansion from new-customer net expansion.  To do that, you’d need to have ARR by customer [10].
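
Here’s a small Python sketch of that problem, using hypothetical figures rather than the numbers in the post’s table: a two-customer year-ago cohort, plus a customer 3 signed mid-year whose expansion leaks into the bridge-based approximation.

```python
# Customer-level ARR ($K); hypothetical figures, not the post's example.
year_ago = {"cust_1": 100, "cust_2": 200}                   # year-ago cohort snapshot
now      = {"cust_1": 150, "cust_2": 180, "cust_3": 120}    # cust_3 signed at 100 mid-year, expanded to 120

# Real NRR: snapshot- and cohort-based.  Only customers 1 and 2 count.
cohort_nrr = (now["cust_1"] + now["cust_2"]) / (year_ago["cust_1"] + year_ago["cust_2"])
print(f"Cohort-based NRR = {cohort_nrr:.0%}")               # 330 / 300 = 110%

# Lazy NRR, cohort-based approximation: all you have is the bridge, so customer 3's
# 20 of post-signing expansion pollutes the numerator along with the cohort's net expansion.
starting_arr  = sum(year_ago.values())                       # 300
net_expansion = (150 - 100) + (180 - 200) + (120 - 100)      # +50 - 20 + 20 = 50
lazy_nrr = (starting_arr + net_expansion) / starting_arr
print(f"Lazy NRR (cohort approximation) = {lazy_nrr:.0%}")   # 350 / 300 = 117%
```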

Lazy NRR is not NRR.  NRR is defined as snapshot- and cohort-based.  Accept no substitutes or imitations.  Always calculate NRR using snapshots and cohorts and you’ll never go wrong.

Layer Cakes Tell No Lies
While I’m usually quite comfortable with tables of numbers and generally prefer receiving them in board reports, this is one area where I love charts, such as this layer cake that stacks annual cohorts atop each other.  I like these layer cakes for several reasons:

  • They’re visual and show you what’s happening with annual cohorts.
  • Like snapshot- and cohort-based NRR, they leave little to no room for gaming.  (They’re even harder to survivor bias as you’d have to omit the prior-period ARR.)
  • Given my now-distant geophysics background, they sometimes remind me of sedimentary rock.  (Hopefully yours don’t look like that, as unmetamorphosed sedimentary rock represents an NRR of only 100%!)

The spreadsheet for this post is available here.

(The post was revised a few times after initial publication to fix mistakes and clarify points related to the cohort-based approximation.  In the end, the resultant confusion only convinced me more to only and always calculate NRR using cohorts and snapshots.)

# # #

Notes
Edited 10/8/22 to replace screenshots and fix spreadsheet bug in prior version.

[1] Starting ARR + new ARR (from new logo and expansion) – churn ARR (from shrinkage and lost) = ending ARR

[2] I probably should have said “shouldn’t.”  Turns out, I think you can, but I know you shouldn’t.  We’ll elaborate on both in this post.

[3] Those conditions include a world where customers expand or contract only on an annual basis (since an ARR bridge doesn’t separate out expansion or contraction from customers signed during the year, you can’t exclude it) and, of course, a clear and consistent definition of churn, fair play with no gaming designed to understate churn or overstate expansion, and no mistakes in the calculations.

[4] Churn rates based on the whole ARR pool can be half (or less than half of) those based on the available-to-renew (ATR) pool, for example if a company’s mean contract duration is 2 or 3 years.  With $10M of ARR on two-year contracts, only about $5M comes up for renewal in a given year, so $500K of churn is 5% of the whole pool but 10% of the ATR pool.  ARR churn rates are probably better for financial calculations, but ATR churn rates are a better indicator of customer satisfaction.

[5] Examples of potential problems, not all strictly related to calculation of churn ARR, but presented for convenience.

  • Expansion along the way. Consider a customer who buys a 100-unit contract, expands to 140 the next quarter (without signing a new one-year agreement that extends the contract), and then at the annual renewal renews for 130.  The VP of CS wants to penalize the account’s CSM for 10 units of churn whereas the CFO wants to tell investors it’s 30 units of expansion.  Which is it?  The best answer IMHO is 40 units of expansion in the second quarter and 10 units of churn at the renewal, but I’ve seen people/systems that don’t do it that way.  NRR sees a 130% rate regardless of how you count expansion and churn.  (See the sketch after this list.)
  • Potential offsets and the definition of customer – division 1 has 100 units and shrinks to 80 at renewal while a small 40-unit new project starts at division 2. Is that two customers (one with 20 units of churn and one new 40-unit customer) or is it one customer with 20 units of expansion?  NRR sees either an 80% rate or a 120% rate as a function of customer definition, but I’d hope the NRR framing would make you challenge yourself to ask:  was division 2 really a customer, and does it ergo belong in the year-ago cohort?
  • Potential offsets and the definition of product – a customer has 100 units of product A, is unhappy, and shrinks A to 60 units while buying your new product B for 40. Did any churn happen?  In most systems, the answer is no because churn is calculated at the account level.  Unless you’re also tracking product-level churn, you might have trouble seeing that your new product is simply being given away to placate customers unhappy with your first one.  NRR is inherently account-level and doesn’t solve this problem – unless you decide to calculate product-level NRR, to see which products are expanding and which are shrinking.
  • Adjustments.  International companies need to adjust ARR for fluctuations in exchange rates.  Some companies adjust ARR for bad debt or non-standard contracts.  Any and all of these adjustments complicate the calculation of churn ARR and churn rates.
  • Gaming.  Counting trials as new customers and new ARR, but excluding customers <$5K from churn ARR calculations (things won’t foot but few people check).  Renewing would-be churning customers at $1 for two years to delay count-based churn reporting (ARR churn rates and NRR will see through this).  Survivor biasing calculations by excluding discontinuing customers.  Deferring ARR churn by renewing would-be churning customers with net 360 payables and a handshake (e.g., side letter) to not collect unless thing XYZ can be addressed (NRR won’t see through this, but cash and revenue won’t align).
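
As a quick illustration of the first bullet, here’s a tiny Python sketch (my own, reusing the unit figures from that example) showing that however you split the flows between expansion and churn, NRR only sees the two snapshots.

```python
# A customer starts at 100 units, expands mid-term to 140, then renews at 130.
start, mid_term, renewal = 100, 140, 130

# Accounting A (the answer I prefer): +40 expansion in the second quarter, 10 churn at renewal.
expansion_a, churn_a = 40, 10
# Accounting B (the CFO's framing): +30 net expansion, zero churn.
expansion_b, churn_b = 30, 0

# Both accountings net to the same ending ARR; NRR only compares the snapshots.
assert start + expansion_a - churn_a == start + expansion_b - churn_b == renewal
nrr = renewal / start
print(f"NRR = {nrr:.0%}")   # 130% either way
```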

[6] Since I now work frequently with Europe as part of my EIR job with Balderton Capital, I increasingly say “NRR” instead of “NDR” (net dollar retention), because for many of the companies I work with it’s actually net Euro retention.  The intent of “dollar” was never to indicate a currency, but instead to say:  “ARR-based, not count-based.”  NRR accomplishes that.

[7] Some companies survivor bias their NRR calculation by using the now-value and then-value of the now cohort, eliminating discontinuing customers from the calculation.   Think:  of the mutual funds we didn’t shut down, the average annual return was 12%.

[8] If you download the spreadsheet and expand the data groups you can see some other interesting rows as well.

[9] The flaw in “simple quarterly” churn is that, in a world that assumes pure annual contracts, you’re including in the calculation people who were not customers at the start of the year and ergo cannot possibly churn.  While you use the same numerator in each case, you’re using a larger denominator, with no valid reason for doing so.  See here for more.

[10] In which case you might as well calculate NRR as defined, using the current and year-ago snapshots.

 

Crash Course in Customer Success SaaS Metrics: Appearance on the ChurnZero podcast.

Earlier this week I appeared on a webinar with You Mon Tsang, founder and CEO of ChurnZero, a SaaS application aimed at helping subscription businesses reduce churn.

In this post, I’ll share the video of the event, provide links to the slides and to the Q&A wrap-up they posted, embed the video and slides below, and finish with a quick summary of the topics we discussed.

Here’s the video:

Here’s a copy of the slides:

Here’s a quick list of the topics we discussed:

  • ARR and MRR, and when to use which
  • Logo retention rate, why a count-based rate works best when your customers are more or less “all the same” on deal size, and that you should use a dollar-based rate when they’re not.
  • Available-to-renew (ATR) logo retention rate, which factors in only those customers who had a chance to renew or not.  If you’re an ARR-based company but do multi-year contracts, not every customer has the chance to get out every year.
  • Gross revenue retention rate, and why it’s gathering steam as an important metric.  (Sometimes great expansion is hiding major churn and just looking at churn before expansion will reveal that.)
  • Net revenue retention (NRR), aka net dollar retention (NDR) for those who work only in dollars, which is probably the hottest SaaS metric after ARR and ARR growth.
  • Lifetime value (LTV), and its fairly severe limitations.  I gave a talk on this at SaaStr two years back.
  • Customer acquisition cost (CAC) and the CAC ratio.  How it differs for new customer and expansion ARR.
  • LTV/CAC ratio.  An attempt to measure what something costs against what it’s worth, but one that has generally failed and is now being replaced by NRR.
  •  Benchmarks for many of these metrics from the KeyBanc 2021 SaaS Survey.

Thanks to all those who attended and thanks to You Mon for inviting me and Cori for executing it so well.

Appearance on the Precursive Podcast: The Role of Services in Today’s SaaS Market

A few weeks back, I sat down with Jonathan Corrie, cofounder and CEO of Precursive — a Salesforce-native professional services (PS) delivery cloud that provides PS automation, task, and resource management — to discuss one of my favorite topics, the role of professional services in today’s SaaS businesses.

Jonathan released the 48-minute podcast today, available on both Apple and Spotify.

Topics we discussed included:

  • The Hippocratic oath and executive compensation plans (do no harm).
  • How to frame the sales / services working relationship (i.e., no chucking deals over the fence).
  • Why to put an andon cord in place to stop zero-odds-of-success deals early in the sales process.
  • How to package services, including the risks of t-shirt-sized QuickStart packages.
  • How to market methodology instead of packages to convince customers of what matters:  success.
  • The myth of services cannibalization of ARR.  (This drives me crazy.)
  • The alternatives test:  would a customer pay someone else to be successful with your software?
  • Selling mistake-avoidance to IT vs. selling success to line-of-business executives.
  • How and why to bridge “air gaps” between functions (e.g., sales, customer success, services).
  • How to position the sales-to-CSM “handoff” as à la prochaine (“until next time”) and not adieu (“farewell”).
  • The perils of checklist-driven onboarding approaches.
  • The beauty of defining organizational roles with self-introductions (e.g., “my name is Dave and my job is to get your renewal”).
  • The three types of CSMs — the best friend, the seller, and the consultant — and how to blend them and build career paths within the organization.
  • Top professional services metrics.  Caring about (versus maximizing) services margin via compensation plan gates.
  • The loose coupling between NPS and renewal.

Thanks again to Jonathan for having me, and the episode is available here.

Pulse 2021 Slides: Net Dollar Retention (NDR) Benchmarks and Thoughts

This is a quick post to share the slides I presented today at the GainSight Pulse Everywhere 2021 conference in a session entitled Net Dollar Retention, Key Benchmarks at $50M, $200M, and $1B in annual recurring revenue (ARR).

In the session we discuss:

  • The answer, which is 104%.  (Median NDR is surprisingly invariant across size.  Exception:  the public company NDR median is 111%.)
  • Problems with historical installed-base valuation metrics such as churn, customer lifetime (CLT), and lifetime value (LTV), building on my SaaStr 2020 presentation, Churn is Dead, Long Live NDR.
  • The rise of NDR as the SaaS metric of choice.
  • How NDR is currently the most powerful predictor (among common alternatives) of a company’s revenue multiple (EV/R).
  • The “dollar” in net dollar retention and why global companies should look at NDR using constant currencies, not dollars converted at a spot rate.
  • How NDR should vary as a function of stage, expansion model, business model, target market, sales motion, and pricing model.
  • How usage-based (aka, consumption-based) pricing models will be as transformational to subscription SaaS as subscription SaaS was to perpetual license software.

The deck has a rich appendix with interesting information clipped from a variety of my favorite sources, including RevOps^2, Meritech Enterprise Public Comps, OpenView Expansion SaaS Benchmarks, OpenView Usage-Based Playbook, Bessemer State of the Cloud, KeyBanc SaaS Survey (PDF), SEC filings, and others.

Here are the slides and I’ve embedded them below:

I’d like to thank Ray Rike at RevOps^2 for giving me early access to his upcoming FY20 B2B SaaS Benchmarks report.

If GainSight makes a video available online, I’ll add a link to it, here.  Meantime, thanks to GainSight for having me and hope you enjoy the presentation.

Should Customer Success Report into the CRO or the CEO?

The CEO.  Thanks for reading.

# # #

I was tempted to stop there because I’ve been writing a lot of long posts lately and because I do believe the answer is that simple.  First let me explain the controversy and then I’ll explain my view on it.

In days of yore, chief revenue officer (CRO) was just a gussied-up title for VP of Sales.  If someone was particularly good, particularly senior, or particularly hard to recruit you might call them CRO.  But the job was always the same:  go sell software.

Back in the pre-subscription era, basically all the revenue — save for a little bit of services and some maintenance that practically renewed itself — came from sales anyway.  Chief revenue officer meant chief sales officer meant VP of Sales.  All basically the same thing.  By the way, as the person responsible for effectively all of the company’s revenue, one heck of a powerful person in the organization.

Then the subscription era came along.  I remember the day at Salesforce when it really hit me.  Frank, the head of Sales, had a $1B number.  But Maria, the head of Customer Success [1], had a $2B number.  There’s a new sheriff in SaaS town, I realized, the person who owns renewals always has a bigger number than the person who runs sales [2], and the bigger you get the larger that difference.

Details of how things worked at Salesforce aside, I realized that the creation of Customer Success — particularly if it owned renewals — represented an opportunity to change the power structure within a software company. It meant Sales could be focused on customer acquisition and that Customer Success could be, definitionally, focused on customer success because it owned renewals.  It presented the opportunity to have an important check and balance in an industry where companies were typically sales-dominated to a fault.  Best of all, the check would be coming not just from a well-meaning person whose mission was to care about customer success, but from someone running a significantly larger amount of revenue than the head of Sales.

Then two complications came along.

The first complication was expansion ARR (annual recurring revenue).  Subscriptions are great, but they’re even better when they get bigger every year — and heck, you need a certain amount of that just to offset the natural shrinkage (i.e., churn) that occurs when customers unsubscribe.  Expansion takes two forms:

  • Incidental:  price increases, extra seats, edition upsells, the kind of “fries with your burger” sales that are a step up from order-taking, but don’t require a lot of salespersonship.
  • Non-incidental:  cross-selling a complementary product, potentially to a different buyer within the account (e.g., selling Service Cloud to a VP of Service where the VP of Sales is using Sales Cloud) or an effectively new sale into a different division of an existing account (e.g., selling GE Lighting when GE Aviation is already a customer).

While it was usually quite clear that Sales owned new customer acquisition and Customer Success owned renewals, expansion threw a monkey wrench in the machinery.  New sales models, and new metaphors to go with them, emerged. For example:

  • Hunter-only.  Sales does everything, new customer acquisition, both types of expansion, and even works on renewals.  Customer success is more focused on adoption and technical support.
  • Hunter/farmer.  Sales does new customer acquisition and non-incidental expansion and Customer Success does renewals and incidental expansion.
  • Hunter/hunter.  Where Sales itself is effectively split in two, with one team owning new customer acquisition after which accounts are quickly passed to a very sales-y customer success team whose primary job is to expand the account.
  • Farmers with shotguns.  A variation of hunter/hunter where an initial penetration Sales team focuses on “land” (e.g., with a $25K deal) and then passes the account to a high-end enterprise “expand” team chartered with major expansions (e.g., to $1M).

While different circumstances call for different models, expansion significantly complicated the picture.

The second complication was the rise of the chief revenue officer (CRO).  Generally speaking, sales leaders:

  • Didn’t like their diminished status, owning only a portion of company revenue
  • Were attracted to the buffer value in managing the ARR pool [3]
  • Witnessed too many incidents where Customer Success (who they often viewed as overgrown support people) bungled expansion opportunities and/or failed to maximize deals
  • Could exploit the fact that the check-and-balance between Sales and Customer Success resulted in the CEO getting sucked into a lot of messy operational issues

On this basis, Sales leaders increasingly (if not selflessly) argued that it was better for the CEO and the company if all revenue rolled up under a single person (i.e., me).  A lot of CEOs bought it.  While I’ve run it both ways, I was never one of them.

I think Customer Success should report into the CEO in early- and mid-stage startups.  Why?

  • I want the sales team focused on sales.  Not account management.  Not adoption.  Not renewals.  Not incidental expansion.  I want them focused on winning new deals either at new customers or different divisions of existing customers (non-incidental expansion).  Sales is hard.  They need to be focused on selling.  New ARR is their metric.
  • I want the check and balance.  Sales can be tempted in SaaS companies to book business that they know probably won’t renew.  A smart SaaS company does not want that business.  Since the VP of Customer Success is going to be measured, inter alia, on gross churn, they have a strong incentive to call sales out and, if needed, put processes in place to prevent inception churn.  The only thing worse than dealing with the problems caused by this check and balance is not hearing about those problems.  When one exec owns pouring water into the bucket and a different one owns stopping it from leaking out, you create a healthy tension within the organization.
  • They can work together without reporting to a single person.  Or, better put, they are always going to report to a single person (you or the CRO) so the question is who?  If you build compensation plans and operational models correctly, Customer Success will flip major expansions to Sales and Sales will flip incidental expansions back to Customer Success.  Remember the two rules in building a Customer Success model — never pair our farmer against the competitor’s hunter, and never use a hunter when a farmer will do.
  • I want the training ground for sales.  A lot of companies take fresh sales development reps (SDRs) and promote them directly to salesreps.  While it sometimes works, it’s risky.  Why not have two paths?  One where they can move directly into sales and one where they can move into Customer Success, close 12 deals per quarter instead of 3, hone their skills on incidental expansion, and, if you have the right model, close any non-incidental expansion the salesrep thinks they can handle?
  • I want the Customer Success team to be more sales-y than support-y.  Ironically, when Customer Success is in Sales you often end up with a more support-oriented Customer Success team.  Why?  The salesreps have all the power; they want to keep everything sales-y to themselves, and Customer Success gets relegated to a more support-like role.  It doesn’t have to be this way; it just often is.  In my generally preferred model, Customer Success is renewals- and expansion-focused, not support-focused, and that enables them to add more value to the business.  For example, when a customer is facing a non-support technical challenge (e.g., making a new set of reports), their first instinct will be to sell them professional services, not simply build it for the customer themselves.  The latter turns Customer Success into free consulting and support, starting a cycle that only spirals.  The former keeps Customer Success focused on leveraging the resources of the company and its partners to drive adoption, successful achievement of business objectives, renewals, and expansion.

Does this mean a SaaS company can’t have a CRO role if Customer Success does not report into them?  No.  You can call the person chartered with hitting new ARR goals whatever you want to — EVP of Sales, CRO, Santa Claus, Chief Sales Officer, or even President/CRO if you must.  You just shouldn’t have Customer Success report into them.

Personally, I’ve always preferred Sales leaders who like the word “sales” in their title.  That way, as one of my favorites always said, “they’re not surprised when I ask for money.”

# # #

[1] At Salesforce then called Customers for Life.

[2] Corner cases aside and assuming either annual contracts or that ownership is ownership, even if every customer technically isn’t renewing every year.

[3] Ending ARR is usually a far less volatile metric than new ARR.