
Video of my SaaStr 2018 Presentation: Ten Non-Obvious Things About Scaling SaaS

While I’ve blogged about this presentation before, I only recently stumbled across this full-length video of the session — a 30-minute blaze through some subtle SaaS basics.  Enjoy!

I look forward to seeing everyone again at SaaStr Annual 2019.

The Big Mistake You Might Be Making In Calculating Churn: Failing to Annualize Multi-Year ATR Churn Rates

Most of the thinking, definitions, and formulas regarding SaaS unit economics are based on assumptions that no longer reflect the reality of the enterprise SaaS environment.  For example, thinking in terms of MRR (monthly recurring revenue) is outdated because most enterprise SaaS companies run on annual contracts, and thus we should think in terms of ARR (annual recurring revenue) instead.

Most enterprise SaaS companies today do a minimum one-year contract and many do either prepaid or non-prepaid multi-year contracts beyond that. In the case of prepaid multi-year contracts, metrics like the CAC payback period break (or at the very least, get difficult to interpret).  In the case of multi-year contracts, calculating churn correctly gets a lot more complicated – and most people aren’t even aware of the issue, let alone analyze it correctly.

If your company does multi-year contracts and you are not either sidestepping this issue (by using only ARR-pool-based rates) or correcting for it in your available-to-renew (ATR) churn calculations, keep reading.  You are possibly making a mistake and overstating your churn rate.

A Multi-Year Churn Example
Let’s demonstrate my point with an example where Company A does 100% one-year deals and Company B does 100% three-year deals.  For simplicity’s sake, we are going to ignore price increases and upsell [1].  We’re also not going to argue the merits of one- vs. three-year contracts; our focus is simply how to calculate churn in a world of them.

In the example below, you can see that Company A has an available-to-renew-based (ATR-based) [2] churn rate of 10%.  Company B has a 27% ATR-based churn rate.  So we can quickly conclude that Company A’s a winner, and Company B is a loser, right?

[Table: ATR-based churn rates for Company A (one-year deals, 10%) vs. Company B (three-year deals, 27%)]

Not so fast.

At the start of year 4, a cohort of Company A customers is worth 72.9 units, the exact same as a cohort of Company B customers.  In fact, if you look at lifetime value (LTV), the Company B cohort is worth nearly 10% more than the Company A cohort [3].

[Chart: cohort ARR decay for Company A vs. Company B, with both cohorts at 72.9 units at the start of year 4]

Wait a minute!  How can a company with a 27% churn rate be “better” than a company with a 10% churn rate?

It’s All About Exposure:  How Often are Deals Exposed to the Churn Rate?
One big benefit of multi-year deals is that they are exposed to the churn rate less frequently than one-year deals.  When you exclude the noise (e.g., upsell, discounts, and price increases), and look at churn solely as a decay function, you see that the N-year retention rate [4] is (1-churn rate)^N.  With 10% churn, your 2-year retention rate is (1-0.1)^2 = 0.9^2 = 0.81.  Your 3-year retention rate is (1-0.1)^3 = 0.9^3 = 0.729, or a retention rate of 73%, equivalent to a churn rate of 27%.
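
If it helps to see that decay function in code, here is a minimal sketch (in Python) using the 10% churn rate from the example:

```python
# N-year retention = (1 - churn rate)^N, and the equivalent
# cumulative N-year churn rate is 1 - (1 - churn rate)^N.

def n_year_retention(annual_churn, n_years):
    """Retention after n_years, treating churn as a pure decay function."""
    return (1 - annual_churn) ** n_years

def n_year_churn(annual_churn, n_years):
    """Cumulative churn over n_years."""
    return 1 - n_year_retention(annual_churn, n_years)

print(n_year_retention(0.10, 2))  # 0.81  -> 81% two-year retention
print(n_year_retention(0.10, 3))  # 0.729 -> 73% three-year retention
print(n_year_churn(0.10, 3))      # 0.271 -> the "27%" three-year churn rate
```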

Simply put, churn compounds so exposing a contract to the churn rate less often is a good thing:  multi-year deals do this by excluding contracts from the ATR pool, typically for one or two years, before they come up for renewal [5].  This also means that you cannot validly compare churn rates on contracts with different duration.

This is huge.  As we have just shown, a 10% churn rate on one-year deals is equivalent to a 27% churn rate on three-year deals, but few people I know recognize this fact.

I can imagine two VCs talking:

“Yo, Trey.”

“Yes.”

“You’re not going to believe it, I saw a company today with a 27% churn rate.”

“No way.”

“Yep, and it crushed their LTV/CAC — it was only 1.6.”

“Melting ice cube.  Run away.”

“I did.”

Quite sad, in fact, because with a correct (annualized) churn rate of 10% and holding the other assumptions constant [6], the LTV/CAC jumps to a healthy 4.4.  But any attempt to explain a 27% churn rate is as likely to be seen as a lame excuse for a bad number as it is to be seen as valid analysis.
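
For the curious, here is the arithmetic behind those two LTV/CAC figures: a sketch using the simple LTV formula (ARR × gross margin / churn rate) together with the assumptions in note [6].  The formula choice is mine, but the inputs come from the example.

```python
# LTV = ARR * gross margin / churn rate, and CAC = CAC ratio * ARR,
# so LTV/CAC reduces to gross margin / (churn rate * CAC ratio).

def ltv_to_cac(churn_rate, cac_ratio=1.8, gross_margin=0.8):
    return gross_margin / (churn_rate * cac_ratio)

print(round(ltv_to_cac(0.27), 1))  # 1.6 -- using the nominal 3-year churn rate
print(round(ltv_to_cac(0.10), 1))  # 4.4 -- using the correct, annualized churn rate
```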

Best Alternative Option:  Calculate Churn Rates off the Entire ARR Pool
I’m going to define the 27% figure as the nominal ATR-based churn rate.  It’s what you get when you take churn ARR / ATR in any given period.  I call it a nominal rate because it’s not annualized and it doesn’t reflect the varying distribution of 1Y, 2Y, and 3Y deals that are mixed in the ATR pool in any given quarter.  I call it nominal because you can’t validly compare it to anything [7].

Because correcting this to a more meaningful rate is going to involve a lot of brute force math, I’ll first advise you to do two things:

  • Banish any notion from your mind that ATR rates are somehow “more real” than churn rates calculated against the entire ARR pool [8].
  • Then use churn rates calculated against the entire ARR pool and sidestep the mess we’re about to enter in the next section [9] where we correct ATR-based churn rates.

In a world of mixed-duration contracts, calculating churn rates off the entire ARR pool effectively auto-corrects for the inability of some contracts to churn.  I have always believed that if you were going to use the churn rate in a math function (e.g., as the discount rate in an NPV calculation) then you should only use churn rates calculated against the entire ARR pool because, in a mixed multi-year contract world, only some of the contracts come up for renewal in any given period.  In one sense you can think of some contracts as “excluded from the available-to-churn (ATC) pool.”  In another, you can think of them as auto-renewing.  Either way, it doesn’t make sense in a mixed pool to apply the churn rate of those contracts up for renewal against the entire pool, which includes contracts that are not.

If you want to persist in using ATR-based churn rates, then we must correct for two problems:  we need to annualize the multi-year rates, and we then need to calculate ATR churn using an ATR-weighted average of the annualized churn rates by contract duration.

Turning Nominal ATR Churn into Effective, Annualized ATR Churn
Here’s how to turn nominal ATR churn into an effective, annualized ATR churn rate [10] [11]:

Step 1:  categorize your ATR and churn ARR by contract duration.  Calculate a 1Y churn rate and nominal 2Y and 3Y ATR churn rates.

Step 2:  annualize the nominal multi-year (N-year) churn rates by flipping to retention rates and taking the Nth root of the retention rate.  For example, our 27% 3-year churn rate is equivalent to a 73% 3-year retention rate, so take the cube root of 0.73 to get 0.9.  Then flip back to churn rates and get 10%.

Step 3:  do an ATR-weighted average of the 1Y and annualized 2Y and 3Y churn rates.  Say your ATR was 50% 1Y, 25% 2Y, and 25% 3Y contracts and your annualized churn rates were 10%, 12%, and 9%.  Then the weighted average would be (0.5*0.10) + (0.25*0.12) + (0.25*0.09) = 10.25%, as your annualized, effective ATR churn rate.

That’s it.  You’ve now produced an ATR churn rate that is comparable to one at a company that does only one-year contracts.
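
Here is a minimal sketch of those three steps, using the illustrative figures above (a 50/25/25 ATR split across 1Y/2Y/3Y contracts and annualized churn rates of 10%, 12%, and 9%):

```python
def annualize(nominal_churn, years):
    """Step 2: flip to a retention rate, take the Nth root, flip back to churn."""
    return 1 - (1 - nominal_churn) ** (1 / years)

# Example from the text: a 27% nominal 3-year churn rate annualizes to ~10%.
print(round(annualize(0.27, 3), 3))  # ~0.1

# Step 3: ATR-weighted average of the annualized rates by contract duration.
atr_weights = {"1Y": 0.50, "2Y": 0.25, "3Y": 0.25}
annualized_churn = {"1Y": 0.10, "2Y": 0.12, "3Y": 0.09}
effective_atr_churn = sum(atr_weights[d] * annualized_churn[d] for d in atr_weights)
print(effective_atr_churn)  # 0.1025 -> 10.25% annualized, effective ATR churn rate
```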

Conclusion
If nothing else, I hope I have convinced you that it is invalid to compare churn rates on contracts of different duration and ergo that it is generally simpler to calculate churn rates off the entire ARR pool.  If, however, you still want to see ATR-based churn rates, then I hope I’ve convinced you that you must do the math and calculate ATR churn as a weighted average of annualized one-, two-, and three-year ATR churn rates.

# # #

Notes
[1] In a world of zero upsell there is no difference between gross and net churn rates, thus I will simply say “churn rate” in this post.

[2] As soon as you start doing multi-year contracts then the entire ARR base is no longer up for renewal each year.  You therefore need a new concept, available to renew (ATR), which reflects only that ARR up for renewal in a given period.

[3] Thanks to its relatively flatter step-wise decay compared to Company A’s more linear decay.

[4] Retention rate = 1 – churn rate.

[5] If it helps, you can think of the ATR pool in a glass half-empty way as the available-to-churn pool.

[6] Assuming a CAC ratio of 1.8 and subscription gross margins of 80%.

[7] Unless your company has a fixed distribution of deals by contract duration – e.g., a degenerate case being 100% 3Y deals.  For most companies the average contract duration in the inbound ATR pool is going to vary each quarter.  Ergo, you can’t even validly compare this rate to itself over time without factoring in the blending.

[8] Most people I meet seem to think ATR rates are more real than rates based on the entire ARR pool.  Sample conversation — “What’s your churn rate?”  “6%.”  “Gross or net?”  “Gross.”  “No, I mean your real churn rate – what gets churned divided only by what was up for renewal.”  The mistake here is in thinking that using ATR makes it comparable to a pure one-year churn rate – and it doesn’t.

[9] Gross churn = churn ARR / starting period ARR.  Net churn = (churn ARR – upsell ARR) / starting period ARR.

[10] I thought about trying a less brute-force way using average contract duration (ACD) of the ATR pool, but decided against it because this method, while less elegant, is more systematic.

[11] Note that this method will still understate the LTV advantage of the more step-wise multi-year contract decay because it’s not integrating the area under the curve, but instead looking at what’s left of the cohort after N years.  In our first example, the 1Y and 3Y cohorts both had about 73 units of ARR, but because the multi-year cohort decayed more slowly its LTV to that point was about 10% higher.

The Use of Ramped Rep Equivalents (RREs) in Sales Analytics and Modeling

[Editor’s note:  revised 7/18, 6:00 PM to fix spreadsheet error and change numbers to make example easier to follow, if less realistic in terms of hiring patterns.]

How many times have you heard this conversation?

VC:  how many sales reps do you have? 

CEO:  Uh, 25.  But not really.

VC:  What do you mean, not really?

CEO:  Well, some of them are new and not fully productive yet.

VC:  How long does it take for them to fully ramp?

CEO:  Well, to full productivity, four quarters.

VC:  So how many fully-ramped reps do you have?

CEO:  9 fully ramped, but we have 15 in various stages of ramping, and 1 who’s brand new …

There’s a better way to have this conversation, to perform your sales analytics, and to build your bookings capacity waterfall model.  That better way involves creating a new metric called ramped rep equivalents (RREs). Let’s build up to talking about RREs by first looking at a classical sales bookings waterfall model.

[Spreadsheet: classical sales bookings capacity waterfall model]

I love building these models and they’re a lot of fun to play with, doing what-if analysis, varying the drivers (which are in the orange cells) and looking at the results.  This is a simplified version of what most sales VPs look at when trying to decide next year’s hiring, next year’s quotas [1], and next year’s targets.  This model assumes one type of salesrep [2]; a distribution of existing reps by tenure of 1 first-quarter, 3 second-quarter, 5 third-quarter, 7 fourth-quarter, and 9 steady-state reps; a hiring pattern of 1, 2, 4, and 6 reps across the four quarters of 2019; and a salesrep productivity ramp whereby reps are expected to sell 0% of steady-state productivity in their first quarter with the company, then 25%, 50%, and 75% in quarters 2 through 4, becoming fully productive in quarter 5 and selling at the steady-state productivity level of $1,000K in new ARR per year [3].

Using this model, a typical sales VP — provided they believed the productivity assumptions [4] and that they could realistically set quotas about 20% above the target productivity — would typically sign up for around a $22M new ARR bookings target for the coming year.
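
For readers who prefer code to spreadsheets, here is a minimal sketch of the same waterfall arithmetic under the driver assumptions listed above.  One assumption I’m making explicit: the single 1Q19 hire is the same rep as the “1 first-quarter” rep in the starting tenure distribution, which keeps 1Q19 at 25 reps (consistent with the RRE example later in the post).

```python
RAMP = [0.00, 0.25, 0.50, 0.75, 1.00]   # % of steady-state sold in tenure quarters 1-5+
STEADY_STATE_QTR = 1000 / 4             # $1,000K/year of new ARR -> $250K/quarter

# Reps by tenure at the start of 1Q19: [1st-qtr, 2nd-qtr, 3rd-qtr, 4th-qtr, steady-state].
reps = [1, 3, 5, 7, 9]
later_hires = [2, 4, 6]                 # hires in 2Q19, 3Q19, 4Q19

total_capacity = 0.0
for q in range(1, 5):
    if q > 1:
        # Age everyone by one quarter; 4th-quarter reps join the steady-state bucket.
        reps = [later_hires[q - 2], reps[0], reps[1], reps[2], reps[3] + reps[4]]
    capacity = sum(n * pct for n, pct in zip(reps, RAMP)) * STEADY_STATE_QTR
    total_capacity += capacity
    print(f"Q{q} 2019: {sum(reps)} reps, ${capacity:,.0f}K bookings capacity")

print(f"Full-year capacity: ${total_capacity:,.0f}K")  # ~$22,500K, consistent with a ~$22M target
```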

While these models work just fine, I have always felt like the second block (bookings capacity by tenure), while needed for intermediate calculations, is not terribly meaningful by itself.  The lost opportunity here is that we’re not creating any concept to more easily think about, discuss, and analyze the productivity we get from reps as they ramp.

Enter the Ramped Rep Equivalent (RRE)
Rather than thinking about the partial productivity of whole reps, we can think about partial reps against whole productivity — and build the model that way instead.  This has the by-product of creating a very useful number, the RRE.  Then, to get bookings capacity, just multiply the number of RREs by the steady-state productivity.  Let’s see an example below:

[Spreadsheet: the same bookings capacity model rebuilt using ramped rep equivalents (RREs)]

This provides a far more intuitive way of thinking about salesrep ramping.  In 1Q19, the company has 25 reps, only 9 of whom are fully ramped, and the rest combine to give the productivity of 8.5 additional reps, resulting in an RRE total of 17.5.

“We have 25 reps on board, but thanks to ramping, we only have the capacity equivalent to 17.5 fully-ramped reps at this time.”

This also spits out three interesting metrics:

  • RRE/QCR ratio:  an effective vs. nominal capacity ratio — in 1Q19, nominally we have 25 reps, but we have only the effective capacity of 17.5 reps.  17.5/25 = 70%.
  • Capacity lost to ramping (dollars):  to make the prior figure more visceral, think of the sales capacity lost due to ramping (i.e., the delta between your nominal and effective capacity) expressed in dollars.  In this case, in 1Q19 we’re losing $1,875K of our bookings capacity due to ramping.
  • Capacity lost to ramping (percent):  the same concept as the prior metric, simply expressed in percentage terms.  In this case, in 1Q19 we’re losing 30% of our bookings capacity due to ramping.
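
Continuing the sketch from the waterfall above, here is how the 1Q19 RRE figure and the three metrics just described fall out of the same assumptions:

```python
RAMP = [0.00, 0.25, 0.50, 0.75, 1.00]
reps_by_tenure = [1, 3, 5, 7, 9]        # 1Q19 headcount by tenure quarter
steady_state_qtr = 1000 / 4             # $250K per fully ramped rep per quarter

total_reps = sum(reps_by_tenure)
rre = sum(n * pct for n, pct in zip(reps_by_tenure, RAMP))

rre_qcr_ratio = rre / total_reps
capacity_lost_dollars = (total_reps - rre) * steady_state_qtr
capacity_lost_pct = 1 - rre_qcr_ratio

print(rre)                                # 17.5 ramped rep equivalents
print(f"{rre_qcr_ratio:.0%}")             # 70% -- effective vs. nominal capacity
print(f"${capacity_lost_dollars:,.0f}K")  # $1,875K of capacity lost to ramping
print(f"{capacity_lost_pct:.0%}")         # 30% of capacity lost to ramping
```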

Impacts and Cautions
If you want to move to an RRE mindset, here are a few tips:

  • RREs are useful for analytics, like sales productivity.  When looking at actuals you can measure sales productivity not just by starting-period or average-period reps, but by RRE.  It will provide a much more meaningful metric.
  • You can use RREs to measure sales effectiveness.  At the start of each quarter recalculate your theoretical capacity based on your actual staffing.  Then divide your actuals by that start-of-quarter theoretical capacity and you will get a measure of how well you are performing, i.e., the utilization of the quarterly starting capacity in your sales force.  When you’re missing sales targets it is typically for one of two reasons:  you don’t have enough capacity or you’re not making use of the capacity you have.  This helps you determine which.
  • Beware that if you have multiple types of reps (e.g., corporate and field), you may be tempted to blend them in the same way you do whole reps today — i.e., when asked “how many reps do you have?” most people say “15” and not “9 enterprise plus 6 corporate.”  You have the same problem with RREs.  While it’s OK to present a blended RRE figure, just remember that it’s blended, and if you want to calculate capacity from it, you should calculate RREs by rep type and then get capacity by multiplying the RRE for each rep type by their respective steady-state productivity.

I recommend moving to an RRE mindset for modeling and analyzing sales capacity.  If you want to play with the spreadsheet I made for this post, you can find it here.

Thanks to my friend Paul Albright for being the first person to introduce me to this idea.

# # #

Notes
[1] This is actually a productivity model, based on actual sales productivity — how much people have historically sold (and ergo should require little/no cushion before sales signs up for it).  Most people I know work with a productivity model and then uplift the desired productivity by 15 to 25% to set quotas.

[2] Most companies have two or three types (e.g., corporate vs. field), so you typically need to build a waterfall for each type of rep.

[3] To build this model, you also need to know the aging of your existing salesreps — i.e., how many second-, third-, fourth-, and steady-state-quarter reps you have at the start of the year.

[4] The glaring omission from this model is sales turnover.  In order to keep it simple, it’s not factored in here. While some people try to factor in sales turnover by using reduced sales productivity figures, I greatly prefer to model realistic sales productivity and explicitly model sales turnover in creating a sales bookings capacity model.

[5] This is one reason it’s so expensive to build an enterprise software sales force.  For several quarters you often get 100% of the cost and 50% of the sales capacity.

[6] Which should be a weighted average productivity by type of rep, weighted by the number of reps of each type.

Important Subtleties in Calculating Quarterly, Annual, and ATR-based Churn Rates

This post won’t save your life, or your company.  But it might save you a few precious hours at 2:00 AM if you’re working on your company’s SaaS metrics and can’t foot your quarterly and annual churn rates while preparing a board or investor deck.

The generic issue is that a lot of SaaS metrics gurus define metrics generically, in terms of “periods,” without paying attention to some subtleties that can arise in calculating those metrics for a quarter vs. a year.  The specific issue is that, if you do what many people do, your quarterly and annual churn rates won’t foot — i.e., the sum of your quarterly churn rates won’t equal your annual churn rate.

Here’s an example to show why.

[Table: quarterly and annual churn on a 10,000 starting ARR base with 1,250 of annual churn, calculated under churn rate 1 and churn rate 2]

If I asked you to calculate the annual churn rate in the above example, virtually everyone would get it correct.  You’d look at the rightmost column, see that 2018 started with 10,000 in ARR, see that there were 1,250 dollars of churn on the year, divide 1,250 by 10,000 and get 12.5%.  Simple, huh?

However, if I hid the last column, and then asked you to calculate quarterly churn rates, you might come up with churn rate 1, thinking churn rate = period churn / starting period ARR.  You might then multiply by 4 to annualize the quarterly rates and make them more meaningful.  Then, if I asked you to add an annual column, you’d sum the quarterly (non-annualized) rates for the annual churn and either average the annualized quarterly rates or simply gray-out the box as I did because it’s redundant [1].

You’d then pause, swear, and double-check the sheet for errors because the sum of your quarterly rates (10.2%) doesn’t equal your annual rate (12.5%).

What’s going on?  The trap is thinking churn rate = period churn / starting period ARR.

That works in a world of one-year contracts when you look at churn on an annual basis (every contract in the starting ARR base of 10,000 faces renewal at some point during the year), but it breaks on a quarterly basis.  Why?  Because starting ARR is increasing every quarter due to new sales that aren’t in the renewal base for the year.  This depresses your churn rates relative to churn rate 2, which defines quarterly churn as churn in the quarter divided by starting-year ARR.  When you use churn rate 2, the sum of the quarterly rates equals the annual rate, so you can mail out that board deck and go back to bed [2].
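
Here is a minimal sketch of the two definitions.  The quarterly new-ARR and churn figures are hypothetical stand-ins (the table above has its own), but they show the footing behavior: quarterly rates under churn rate 1 don’t sum to the annual rate, while rates under churn rate 2 do.

```python
starting_year_arr = 10_000
new_arr = [1_500, 1_500, 1_500, 1_500]   # hypothetical new sales by quarter
churn =   [  250,   300,   300,   400]   # hypothetical churn by quarter (sums to 1,250)

starting_arr = starting_year_arr
rate1, rate2 = [], []
for n, c in zip(new_arr, churn):
    rate1.append(c / starting_arr)        # churn rate 1: churn / starting-period ARR
    rate2.append(c / starting_year_arr)   # churn rate 2: churn / starting-year ARR
    starting_arr += n - c                 # next quarter's starting ARR

print(f"{sum(rate1):.1%}")  # ~10.5% -- quarterly rates don't foot to the annual rate
print(f"{sum(rate2):.1%}")  # 12.5%  -- these foot: 1,250 / 10,000
```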

Available to Renew (ATR-based) Churn Rates

While we’re warmed up, let’s have some more fun.  If you’ve worked in enterprise software for more than a year, you’ll know that the 10,000 dollars of starting ARR is most certainly not distributed evenly across quarters:  enterprise software sales are almost always backloaded, ergo enterprise software renewals follow the same pattern.

So if we want more accurate [3] quarterly churn rates, shouldn’t we do the extra work, figure out how much ARR we have available to renew (ATR) in each quarter, and then measure churn rates on an ATR basis?  Why not!

Let’s first look at an example that shows available to renew (ATR) split in a realistic, backloaded way across quarters [4].

[Table: ATR-based quarterly churn rates with realistically backloaded ATR]

In some sense, ATR churn rates are cleaner because you’re making fewer implicit assumptions:  here’s what was up for renewal and here’s what we got (or lost).  While ATR rates get complicated fast in a world of multi-year deals, for today, we’ll stay in a world of purely one-year contracts.

Even in that world, however, a potential footing issue emerges.  If I calculate annual ATR churn by looking at annual churn vs. starting ARR, I get the correct answer of 12.5%.  However, if I try to average my quarterly rates, I get a different answer of 13.7%, which I put in red because it’s incorrect.

Quiz:  what’s going on?

Hint:  let me show the ATR distributed in a crazy way to demonstrate the problem more clearly.

[Table: ATR-based quarterly churn rates with a deliberately skewed (“crazy”) ATR distribution]

The issue is you can’t get the annual rate by averaging the quarterly ATR rates because the ATR is not evenly distributed.  By using the crazy distribution above, you can see this more clearly because the (unweighted) average of the four quarterly rates is 53.6%, pulled way up by the two quarters with 100% churn rates.  The correct way to foot this is to instead use a weighted average, weighting on an ATR basis.  When you do that (supporting calculations in grey), the average then foots to the correct annual number.
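
Here is a minimal sketch of that weighted-average fix.  The quarterly ATR and churn figures are hypothetical (deliberately lumpy, in the spirit of the crazy distribution above), so the unweighted average lands near, rather than exactly at, the 53.6% in the table:

```python
atr   = [500, 250, 7_000, 2_250]    # hypothetical ATR by quarter (sums to 10,000)
churn = [500, 250,   300,   200]    # hypothetical churn by quarter (sums to 1,250)

quarterly_rates = [c / a for c, a in zip(churn, atr)]
unweighted_avg = sum(quarterly_rates) / len(quarterly_rates)
weighted_avg = sum(churn) / sum(atr)    # same as weighting each quarterly rate by its ATR

print([f"{r:.0%}" for r in quarterly_rates])  # ['100%', '100%', '4%', '9%']
print(f"{unweighted_avg:.1%}")                # ~53.3% -- pulled up by the 100% quarters
print(f"{weighted_avg:.1%}")                  # 12.5%  -- foots to the annual rate
```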

# # #

Notes:

[1] The sum of the quarterly rates (A, B, C, D) will always equal the average of the annualized quarterly rates because (4A+4B+4C+4D)/4 = A+B+C+D.

[2] I won’t go so far as to say that churn rate 1 is “incorrect” while churn rate 2 is “correct.”  Churn rate 1 is simple and gives you what you asked for: period churn / starting period ARR.  (You just need to realize that your quarterly rates will only sum to your annual rate if you have zero new sales, and ergo you should calculate the annual rate off the yearly churn and starting ARR.)  Churn rate 2 is somewhat more complicated.  If you live in a world of purely one-year contracts, I’d recommend churn rate 2.  But in a world of mixed one- and multi-year contracts, lots of contracts in starting-period ARR aren’t in the renewal base for the year, so why would I exclude only some of them (i.e., those signed in the year) as opposed to others?

[3] Dividing by the whole ARR base basically assumes that the base renews evenly across quarters.  Showing churn rates based on available-to-renew (ATR) is more accurate but becomes complicated quickly in a world of mixed, multi-year contracts of different duration (where you will need to annualize the rates on multi-year contracts and then blend the average to get a single, meaningful, annualized rate).  In this post, we’ll assume a world of exclusively one-year contracts, which sidesteps that issue.

[4] ATR is normally backloaded because enterprise sales are normally backloaded.  Here the linearity is 15%, 17.5%, 25%, 42.5% or a 32.5/67.5 split across the first vs. second half of the year (which is pretty backloaded even for enterprise software).

[5] The spreadsheet I used is available here if you want to play with it.

The Two Engines of SaaS: QCRs and DEVs

I remember one day, years ago, when I was a VP at a $10M startup, and Larry, the head of sales, came in handing out t-shirts that said:

“Code, sell, or get out of the way.”

Neither I, nor the rest of the marketing team, took this particularly well because the shirt obviously devalued the contributions of F&A, HR, and marketing.  But, ever seeking objectivity, I did concede that the shirt had a certain commonsense appeal.  If you could only hire one person at a startup, it would be someone to write the product.  And if you could only hire one more, it would be someone to sell it.

This became yet another event that reconfirmed my belief in my “marketing exists to make sales easier” mantra.  After all, if you’re not coding or selling, at least you can help someone who is.

Over time, Larry’s t-shirt morphed in my mind into a new mantra:

“A SaaS company is a two-engine plane.  The left engine is DEVs.  The right is QCRs.”

QCR meaning quota-carrying (sales) representative and DEV meaning developer (or, for symmetry and emphasis, storypoint-burning developer).  People who sell with truly incremental quota, and people who write code and burn down storypoints in the process.

It’s a much nicer way of saying “code, sell, or get out of the way,” but it’s basically the same idea.  And it’s true.  While Larry was coming from a largely incorrect “protest overhead and process” viewpoint, I’m coming from a different one:  hiring.

The two hardest lines in a company headcount plan to keep at-plan are (guess which two?) QCRs and DEVs.  Forget other departments for a minute — I’m saying the hardest line for the VP of Engineering to stay fully staffed on is DEVs, and the hardest line for the VP of Sales to stay fully staffed on is QCRs.

Why is this?

  • They are two, critical highly in-demand positions, so the market is inherently tight.
  • Given their importance, the hiring VPs can be gun-shy about making mistakes and lose candidates due to hesitation or indecision.
  • Both come with a short-term tax and mid-term payoff because on-boarding new hires slows down the rest of the team, a possible source of passive resistance.
  • Sales managers dislike splitting territories because it makes them unpopular, which could drive more foot-dragging.
  • It’s just plain easier to find the associated support functions (e.g., program managers, QA engineers, techops, salesops, sales productivity, overlays, CSMs, managers in general) than it is to find the QCRs and DEVs.

Let me be clear:  this is not to say that all the supporting functions within sales and engineering do not add value, nor is this to say that supporting corporate functions beyond sales and engineering do not add value — it is to say, however, that far too often companies take their eye off the ball and staff the support functions before, not after, those they are supporting.  That’s a mistake.

What happens if you manage this poorly?  On the sales side, for example, you end up with an organization that has 1 SVP of Sales, 1 VP of sales consulting, 4 sales consultants, 1 director of sales ops, 1 director of sales productivity, 1 manager of sales development reps (SDRs), 4 SDRs, an executive assistant, and 4 quota-carrying salespeople.  So only 22% of the people in your sales organization actually carry a quota.

“Uh, other than QCRs, we’re doing great on sales hiring,”  says the sales VP.  “Other than that, Mrs. Lincoln, how did you find the play?” thinks the board.

Because I’ve seen this happen so often, and because I’ve seen companies accused of it both rightfully and unjustly, I decided to create two new metrics:

  • QCR density = number of QCRs / total sales headcount
  • DEV density = number of DEVs / total engineering headcount
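
As a quick sanity check, here is the QCR density math for the cautionary sales org described above:

```python
# QCR density = number of QCRs / total sales headcount, using the example org above.
sales_headcount = {
    "SVP of Sales": 1, "VP of sales consulting": 1, "sales consultants": 4,
    "director of sales ops": 1, "director of sales productivity": 1,
    "SDR manager": 1, "SDRs": 4, "executive assistant": 1, "QCRs": 4,
}
qcr_density = sales_headcount["QCRs"] / sum(sales_headcount.values())
print(f"{qcr_density:.0%}")   # 22% of the sales org actually carries a quota
```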

The bad news is I don’t have a lot of benchmark data to share here.  In my experience, both numbers want to run in the 40% range.

The good news is that if you run a ratio-driven staffing model (which you should do for both sales and engineering), you should be able to calculate what these densities should be when you are fully staffed.

Let’s conclude with a simple model that does just that on the sales side, producing a result in the 38% to 46% range.

[Spreadsheet: ratio-driven sales staffing model producing a fully staffed QCR density of 38% to 46%]

Finally, let me add that having such a model helps you understand whether, for example, your QCR density is low due to slow QCR hiring (and/or bad retention) against a good model, or on-pace hiring against a “fat” model.  The former is an execution problem, the latter is a problem with your model.

“Always Scrubbing the Pipeline” Means “Never Scrubbing the Pipeline.”

Perhaps you’ve seen this movie:

CEO:  “Wow the quarterly pipeline dropped 20% this week.  What’s going on sales VP?”

Sales VP:  “Well, that’s because we cleaned it up this week.”

CEO:  “That sounds great, but you said that last week.”

VP of Sales: “Well, that’s because we scrubbed it then, too.”

CEO:  “So shouldn’t it have been clean after last week’s cleaning?  Why did it require so much more cleaning that it dropped another 20% this week?”

VP of Sales:  “Well, you know it’s a big job and you can’t clean up the whole pipeline in a week.”

CEO:  “Should I expect it to drop another 20% next week?”

VP of Sales:  “Uh.”

CEO:  “Soon you’re going to say that we don’t have enough to make our numbers.”

VP of Sales:  “Well, I did mean to mention that I’ve been thinking of cutting the forecast because we just don’t have enough opportunities to work on.”

CEO:  “But we started the quarter with 3.2x pipeline coverage, shouldn’t that be enough?”

VP of Sales:  “Normally, yes.  But the pipeline wasn’t really clean.  Some of those opportunities weren’t real opportunities.” [1]

CEO:  “What does ‘clean’ mean?  When does it get clean?  Once clean, how long does it stay clean?”

VP of Sales:  “Well, look our view here is that we should always be scrubbing, so we’re constantly scrubbing the pipeline, always finding new things.”

What’s wrong with this conversation?  A lot. This Sales VP:

  • Has no clear definition of a scrubbed pipeline.
  • Has no process for scrubbing the pipeline.
  • Takes no accountability for the pipeline and its quality.

In my experience, the statement “we always scrub the pipeline” means precisely one thing:  “we never scrub the pipeline.”

Should that matter?  Well, using some quick assumptions [2], the average first-line enterprise sales manager is managing pipeline that cost $50,000 to generate per rep, so if they’re managing 6-8 reps they are managing pipeline that cost the company $300,000 – $400,000.  Sales managers need to manage that pipeline.  The way to manage it is through periodic, disciplined scrubs [3].

Now some managers don’t play the “always scrubbing” card.  Instead, they say “we scrub the pipeline every week on my sales forecast call.”  But once you understand what a pipeline scrub looks like and remember the purpose of a forecast call [4], you realize that it’s impossible to do both at once.

How to Properly Scrub the Pipeline

While everyone will want to take their own unique angle on how to approach this, the core of a pipeline scrub is to review all the opportunities (this quarter and out quarters) in every sales rep’s pipeline to ensure that they are classified correctly with respect to:

  • Close date (which determines what quarter pipeline it’s in)
  • Stage (along a series of well-defined and verifiable stages)
  • Forecast category (e.g., forecast, commit, upside)
  • Value (following specific rules about how and when to value opportunities)

These rules should be documented in a living document called something like Pipeline Management Rules (PMR) to which managers should refer during the pipeline scrub (e.g., “Jimmy, tell me what’s the rule for picking a close date in the PMR document”).
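
If you want to systematize part of the scrub, here is a hypothetical sketch of a pre-check you could run before the review; the field names and rules are illustrative only, and the real rules belong in your PMR document:

```python
from datetime import date

# Illustrative (not prescriptive) classification rules for a scrub pre-check.
VALID_STAGES = {"stage 2", "stage 3", "stage 4", "stage 5"}
VALID_FORECAST_CATEGORIES = {"pipeline", "upside", "forecast", "commit"}

def scrub_flags(opp, today):
    """Return the list of reasons an opportunity needs attention in the scrub."""
    flags = []
    if opp["close_date"] < today:
        flags.append("close date in the past")
    if opp["stage"] not in VALID_STAGES:
        flags.append("not a valid, sales-accepted stage")
    if opp["forecast_category"] not in VALID_FORECAST_CATEGORIES:
        flags.append("unknown forecast category")
    if opp["value"] <= 0:
        flags.append("missing or non-positive value")
    return flags

opp = {"close_date": date(2018, 3, 31), "stage": "stage 2",
       "forecast_category": "commit", "value": 150_000}
print(scrub_flags(opp, today=date(2018, 7, 1)))  # ['close date in the past']
```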

The other important thing about pipeline scrubs is timing, because pipeline scrubs will affect your sales analytics (e.g., pipeline coverage ratios, pipeline conversion rates, stage- and forecast-category weighted expected values).  Ergo, I picked a few fixed weeks per quarter (weeks 3, 6, and 9) to present scrubbed pipeline and then we typically use the week 3 snapshot for most of our early-quarter pipeline analytics [5].

The goal of the pipeline scrub is to ensure that the entire pipeline is fairly represented with respect to those rules.  By following this disciplined procedure you can ensure that your sales forecasting and analytics are not a castle built on a sand foundation, but an edifice built on bedrock.

Notes

[1] If you haven’t gone insane yet, this one should push you over.  Wait, whose job is it to accept opportunities into the pipeline?  Sales!  Once an opportunity gets into what’s known as either “stage 2” or “sales accepted lead” status, sales doesn’t get to play that card.  This represents a total failure to accept accountability.

[2] 10 this-quarter and 10 out-quarter opportunities per rep * $2,500 mean cost per opportunity = $50,000.

[3]  I am not arguing that you can’t also clean up opportunities along the way, but that needs to be a supplement to, not a substitute for, a proper pipeline scrubbing process.

[4] A forecast call is usually focused on the current quarter and on the opportunities that are expected to close in order to make the forecast.  Thus, low-probability and out-quarter opportunities are easily overlooked.

[5] Implying of course that sales performs the scrubs during weeks 2, 5, and 8 so the results can be presented on Monday morning of weeks 3, 6, and 9.

The Leaky Bucket, Net New ARR, and the SaaS Growth Efficiency Index

My ears always perk up when I hear someone say “net new ARR” — because I’m trying to figure out which of (typically) two ways they are using the term:

  • To mean ARR from net new customers, in which case, I don’t know why they need the word “net” in there.  I call this new business ARR (sometimes abbreviated to newbiz ARR), and we’ll discuss this more down below.
  • To mean net change in ARR during a period, meaning for example, if you sold $2,000K of new ARR and churned $400K during a given quarter, that net new ARR would be $1,600K.  This is the correct way to use this term.

Let’s do a quick review of what I call leaky bucket analysis.  Think of a SaaS company as a leaky bucket full of ARR.

  • Every quarter, sales dumps new ARR into the bucket.
  • Every quarter, customer success does its best to keep water from leaking out.

Net new ARR is the change in the water level of the bucket.  Is it a useful metric?  Yes and no.  On the yes side:

  • Sometimes it’s all you get.  For public companies that either release (or where analysts impute) ARR, it’s all you get.  You can’t see the full leaky bucket analysis.
  • It’s useful for measuring overall growth efficiency with metrics like cash burn per dollar of net new ARR or S&M expense per dollar of net new ARR.  Recall that customer acquisition cost (CAC) focuses only on sales efficiency and won’t detect the situation where it’s cheap to add new ARR only to have it immediately leak out.

If I were to define an overall SaaS growth efficiency index (GEI), I wouldn’t do it the way Zuora does (which is effectively an extra-loaded CAC), I would define it as:

Growth efficiency index = -1 * (cashflow from operations) / (net new ARR)

In English, how much cash are you burning to generate a dollar of net new ARR.  I like this because it’s very macro.  I don’t care if you’re burning cash as a result of inefficient sales, high churn, big professional services losses, or high R&D investment.  I just want to know how much cash you’re burning to make the water level move up by one dollar.
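
Here is a minimal sketch using the net new ARR example from above ($2,000K of new ARR, $400K of churn) and a hypothetical operating cash flow figure:

```python
cashflow_from_operations = -2_000    # $K, hypothetical quarterly figure
new_arr, churned_arr = 2_000, 400    # $K, from the example above
net_new_arr = new_arr - churned_arr  # 1,600

gei = -1 * cashflow_from_operations / net_new_arr
print(gei)   # 1.25 -> burning $1.25 of cash per dollar of net new ARR
```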

So we can see that net new ARR is already a useful metric, if a sometimes confused term.  However, on the no side, here’s what I don’t like about it.

  • Like any compound metric, as they say at French railroad crossings, un train peut en cacher un autre (one train can hide another).  This means that while net new ARR can highlight a problem, you won’t immediately know where to go fix it — is weak net new ARR driven by a sales problem (poor new ARR), a product-driven churn problem, a customer-success-driven churn problem, or all three?

Finally, let’s end this post by taking a look and then a deeper look at the SaaS leaky bucket and how I think it’s best presented.

[Table: simple leaky bucket analysis — starting ARR, new ARR, churn ARR, net new ARR, ending ARR by quarter]

For example, above, you can quickly see that a massive 167% year-over-year increase in churn ARR was the cause of weak 1Q17 net new ARR.  While this format is clear and simple, it hides the difference between new ARR from new customers (newbiz ARR) and new ARR from existing customers (upsell ARR).  Since that can be an important distinction (as struggling sales teams often over-rely on sales to existing customers), this slightly more complex form breaks that out as well.

[Table: leaky bucket analysis with new ARR broken out into newbiz ARR and upsell ARR, plus percentage rows]

In addition to breaking out new ARR into its two sub-types, this format adds three rows of percentages, the most important of which is upsell % of new ARR, which shows to what extent your new ARR is coming from existing versus new customers.  While the “correct” value will vary as a function of your market, your business model, and your evolutionary phase, I generally believe that figures below 20% indicate that you may be failing to adequately monetize your installed base and figures above 40% indicate that you are not getting enough new business and the sales force may be too huddled around existing customers.
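
To tie the two formats together, here is a sketch of the fuller leaky-bucket math with hypothetical quarterly figures, including the upsell % of new ARR check:

```python
# Hypothetical quarterly figures in $K.
starting_arr = 10_000
newbiz_arr   = 1_400
upsell_arr   = 600
churn_arr    = 400

new_arr = newbiz_arr + upsell_arr        # 2,000
net_new_arr = new_arr - churn_arr        # 1,600
ending_arr = starting_arr + net_new_arr  # water level of the bucket at quarter end
upsell_pct_of_new = upsell_arr / new_arr

print(ending_arr)                  # 11,600
print(f"{upsell_pct_of_new:.0%}")  # 30% -- inside the rough 20-40% comfort zone
```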