Category Archives: Metrics

How to Train Your VP of Sales to Think About the Forecast

Imagine a board meeting.

Director:  What’s the forecast for new ARR this quarter?

Sales VP:  $4.3M, with a best case of $5.0M.

Director:  So what’s the most likely outcome?

Sales VP:  $4.3M.

Director:  What are you really going to do?  (The classic noob trap question.)

Sales VP:  I think we can come in North of that.

Director:  What’s the worst case?

Sales VP:  $3.5M.

Director:  What are the odds of coming in at or above the forecast? 

Sales VP:  I always make my forecast.

Director:   What do you mean by worst case?

Sales VP:  You know, well, if the stars align in a bad way – a lot of stuff would have to go wrong – but if that happened, then we could end up at $3.5M.

Director:  So, let’s say a 10% chance of being at/below the worst case?

Sales VP:  I’d say more like 5%.

Director:  What do you mean by best case?

Sales VP:  Well, if we really struck it rich and everything lined up just the way I wanted, that would be best case.

Director:  You mean if all the deals came in — so best case basically equals pipeline?

Sales VP:  No, that never happens, I’ve made about 10 scenarios of different deal closing combinations and in 2 of them I can get to the best case.

You see the problem?  Does it sound familiar?  Do you realize how much time we spend talking in board meetings about “forecast,” “best case,” and “worst case” without ever discussing what we mean by those terms?

Do you see how this is compounded by the sales VP’s natural, intuitive view of the outcomes?  Do you see the obvious mathematical contradictions?  “I always make my forecast” says it’s a 100% number, but then the VP says it’s the “most likely” number which implies 50%.  Then the VP says there’s a 5% chance of coming in at/less than worst case (which is much lower) and then kind of implies that there’s a 20% chance of beating best case – but the 2 out of 10 is meaningless because it’s not a probability, it’s just a count of scenarios.  Nothing adds up.

The result is, if you’re not careful, the board ends up counting angels on pinheads.  What can we do to fix this?  It’s simple:  teach (and if need be, force) your sales VP to think probabilistically.  Ask him/her how often:

  • It is reasonable to miss the forecast (a typical answer might be 10%).
  • It is likely to come in at/below the worst case (typical answer: 5%).
  • It is likely to meet/beat the best case (typical answer: 20%).

So, with those three questions, we’ve now established that we want the sales VP to give us:

  • A 90% number on being at/above the forecast
  • A 20% number on being at/above the best case
  • A 5% number on being at/below the worst case

Put differently, when the sales VP decides what number to forecast, they should be thinking:

  • I should come in under my forecast once every 2.5 years (10 quarters).
  • I should hit/beat the best case about once every 5 quarters (a bit less than once a year).
  • I should come in/under the worst case once every 20 quarters (once every 5 years, or for most minds, basically never).

The beauty here is that when you work at a company a long time you can get enough quarters under your belt to start really seeing how you’re doing relative to these frequencies.  What’s more, by converting the probabilities into frequencies (e.g., once every 10 quarters) you make it more intuitive for the sales VP and the organization to think this way.

In addition, you have a basis for conversations like this one which, among other things, is about overconfidence:

CEO:  You need to work on your forecasting.

Sales VP:  You know it’s hard out there, very competitive, and we don’t have much deal flow.  Back when I was at { Salesforce | Oracle | SAP }, I was much better at forecasting because we had more volume.

CEO:  But we agreed your forecast should be a 90% number and you’ve missed it 2 out of the past 4 quarters.

Sales VP:  Yes, but as I’ve said it’s tough to forecast in this market.

CEO:  Then forecast a lower number so you can beat it 90% of the time.  I’m asking you for a 90% number and empirically you’re giving me a 50% number. 

Sales VP:  OK.

CEO:  Plus, when those two big deals slipped last quarter you didn’t drop your forecast, why?

Sales VP:  Because where I grew up, you don’t cut the forecast.  You try like crazy to hold it.  Do you know the morale problems it causes when I cut the forecast – especially if it’s below plan? So, yes, when those two deals slipped it added more risk to the forecast – and I told you and the board that — but I didn’t cut forecast, no. 

CEO:  But “adding risk” here is meaningless.  In reality, “adding risk” means it’s not a 90% number anymore.  You’ve taken what was a 90% number and it’s now more like a 60% or 70% number.  So I want you to forget what they taught you growing up in sales and always – every week – give me a number that based on all available information you are 90% sure you can beat.  If that means dropping the forecast so be it.

This also helps with the board and the inevitable sandbagger issue.  In my experience (and with a bit of exaggeration) you always seem to be in one of two situations:  (1) intermittently missing plan and in trouble or (2) consistently making plan and a “sandbagger” – it feels like there’s nothing in between.

Well, if you establish with the board that your company forecast is a 90% number it means you are supposed to beat it 9 times out of 10, so you can only really be labelled a sandbagger when you’re 15 for 15 or 20 for 20.  It also reminds them that you’re supposed to arrive at the forecast so that you miss once every 10 quarters, so they shouldn’t freak out if, once every 2.5 years, that happens — it’s supposed to happen in this system.  (Just don’t let a once-in-ten-quarter event happen twice in a row.)

I like this quantitative basis for sales forecasting and I carry it down to the salesrep and pipeline level.  I believe that each “forecast category” should have a probability associated with it.  For example, at the opportunity level, you should link probabilities to categories, such as:

  • Commit = 90%
  • Forecast = 70%
  • Upside = 30%

This, in turn, means that over time, a given salesrep should close 90% of their committed deals, 70% of their forecast deals, and 30% of their upside.  Deviations from this over time indicate that the rep is mis-categorizing the deals because the probability should be the basis for the forecast category assignment [1].
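To make that calibration check concrete, here’s a minimal sketch in Python.  The deal outcomes are hypothetical; only the category names and target probabilities come from the list above.

```python
# Minimal sketch: compare a rep's actual close rate by forecast category
# against the target probabilities (deal outcomes below are hypothetical).
targets = {"Commit": 0.90, "Forecast": 0.70, "Upside": 0.30}

# (forecast category, closed-won?) for deals that have reached an outcome
deals = [
    ("Commit", True), ("Commit", True), ("Commit", False),
    ("Forecast", True), ("Forecast", False),
    ("Upside", True), ("Upside", False), ("Upside", False),
]

for category, target in targets.items():
    outcomes = [won for cat, won in deals if cat == category]
    if outcomes:
        actual = sum(outcomes) / len(outcomes)
        print(f"{category}: actual {actual:.0%} vs. target {target:.0%}")
```

Over enough quarters, persistent gaps between the actual and target columns are the signal that deals are being mis-categorized.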

Finally, I do believe that salesreps should give quarterly forecasts [2] that reflect their sense for how things will come in given all the odd things that can happen to deals (e.g., size changes, acceleration, slippage).  I believe those forecasts should be a 70% number because the sales manager will be managing across a  portfolio of them and while there is little room for a company to miss at the VP of Sales level, there is more room for and more variance in performance across salesreps.

While I know this will not necessarily come naturally to all sales VPs — and some may push back hard — this is a simple, practical, and rigorous way to think about the forecast.

# # #

[1] Some people do this through an independent (orthogonal) field in the CRM system called probability.  I think that’s unnecessary because in my mind forecast category should effectively equal probability and your options for picking a probability should be bucketed.  No one can say a deal is 43% vs. 52%, and if forecast category doesn’t indicate some probability of closing, then what use is it and on what basis should you classify something as forecast vs. upside?

[2] Some people believe that only managers should make forecasts, but I believe both reps and managers should forecast for two reasons:  (1) provided it’s left independent and not “managed” by the managers, the aggregated salesrep-level forecast provides another, Wisdom of Crowds-y, view into the sales forecast and (2) it’s never too early to teach salesreps how to forecast which is best learned through the experience of trial and error over many quarters.

In-Memory Analytics: The Other Kind – A Key Success Factor for Your Career

I’m not going to talk about columnar databases, compression, horizontal partitioning, SAP Hana, or real-time vs. pre-aggregated summarization in this post on in-memory analytics.  I’m going to talk about the other kind of in-memory analytics.  The kind that can make or break your career.

What do you mean, the other kind of in-memory analytics?  Quite simply, the kind you keep in your head (i.e., in human memory).  Or, better put, the kind you should be expected to keep in your head and be able to recite on demand in any business meeting.

I remember when I worked at Salesforce, I covered for my boss a few times at the executive staff meeting when he was traveling or such.  He told me:  “Marc expects everyone to know the numbers, so before you go in there, make sure you know them.”  And I did.  On the few times I attended in his place, I made a cheat sheet and studied it for an hour to ensure that I knew every possible number that could reasonably be asked.  I’d sit in the meeting, saying little, and listening to discussion not directly related to our area.  Then, boom, out of left field, Marc asked:  “what is the Service Cloud pipeline coverage ratio for this quarter in Europe?”

“3.4,” I replied succinctly.  If I hadn’t known the number I’m sure it would have been an exercise in plucking the wings off a butterfly.  But I did, so the conversation quickly shifted to another topic, and I lived to fight another day.

Frankly, I was happy to work in an organization where executives were expected to know — in their heads, in an instant — the values of the key metrics that drive their business.  In weak organizations you constantly hear “can I get back to you on that” or “I’m going to need to look that one up.”

If you want to run a business, or a piece of one,  and you want to be a credible leader — especially in a metrics-driven organization — you need to have “in-memory” the key metrics that your higher-ups and peers would expect you to know.

This is as true of a CEO pitching a venture capitalist and being asked about CAC ratios and churn rates as it is of a marketing VP being asked about keywords, costs, and conversions in an online advertising program.  Or a sales manager being asked about their forecast.

In fact, as I’ve told my sales directors a time or two:  “I should be able to wake you up at 3:00 AM and ask your forecast, upside, and pipeline and you should be able to answer, right then, instantly.”

That’s an in-memory metric.  No “let me check on that.”  No “I’ll get back to you.”  No “I don’t know, let me ask my ops guy,” which always makes me think: who runs the department, you or the ops guy — and if you need to ask the ops guy all the numbers maybe he/she should be running the department and not you?

I have bolded the word “expect” four times above because this issue is indeed about expectations and expectations are not a precise science.  So, how can you figure out the expectations for which analytics you should hold in-memory?

  • Look at your department’s strategic goals and determine which metrics best measure progress on them.
  • Ask peers inside the company what key metrics they keep in-memory and design your set by analogy.
  • Ask peers who perform the same job at different companies what key metrics they track.
  • When in doubt, ask the boss or the higher-ups what metrics they expect you to know.

Finally, I should note that I’m not a big believer in the whole “cheat sheet” approach I described above.  Because that was a special situation (covering for the boss), I think the cheat sheet was smart, but the real way to burn these metrics into your memory is to track them every week at your staff meeting, watching how they change week by week and constantly comparing them to prior periods and to a plan/model if you have one.

The point here is not “fake it until you make it” by running your business in a non-metrics-focused way and memorizing figures before a big meeting, but instead to burn the metrics review into your own weekly team meeting and then, naturally, over time you will know these metrics so instinctively that someone can wake you up at 3:00 AM and you can recite them.

That’s the other kind of in-memory analytics.  And, much as I love technology, the more important kind for your career.

A Fresh Look at How to Measure SaaS Churn Rates

[Editor’s note:  revised 3/27/17 with changes to some definitions.]

It’s been nearly three years since my original post on calculating SaaS renewal rates and I’ve learned a lot and seen a lot of new situations since then.  In this post, I’ll provide a from-scratch overhaul on how to calculate churn in an enterprise SaaS company [1].

While we are going to need to “get dirty” in the detail here, I continue to believe that too many people are too macro and too sloppy in calculating these metrics.  The details matter because these rates compound over time, so the difference between a 10% and 20% churn rate turns into a 100% difference in cohort value after 7 years [2].  Don’t be too busy to figure out how to calculate them properly.

The Leaky Bucket Full of ARR

I conceptualize SaaS companies as leaky buckets full of annual recurring revenue (ARR).  Every time period, the sales organization pours more ARR into the bucket and the customer success (CS) organization tries to prevent water from leaking out [3].

This drives the leaky bucket equation, which I believe should always be the first four lines of any SaaS company’s financial statements:

Starting ARR + new ARR – churn ARR = ending ARR

Here’s an example, where I start with those four lines and add two extra (one to show a year-over-year growth rate and another to show “net new ARR,” which offsets new ARR against churn ARR):

[Figure: leaky bucket example]

For more on how to present summary SaaS startup financials, go here.
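To make the equation concrete, here’s a minimal sketch in Python.  The numbers are illustrative only, and the churn line uses account-level churn ARR, as discussed below.

```python
# Illustrative leaky bucket: the four core lines plus the two extras.
starting_arr = 10_000_000
new_arr = 2_000_000
churn_arr = 500_000                                # account-level churn ARR
ending_arr = starting_arr + new_arr - churn_arr    # 11,500,000

year_ago_ending_arr = 8_000_000                    # assumed, for the growth line
yoy_growth = ending_arr / year_ago_ending_arr - 1  # ~44%
net_new_arr = new_arr - churn_arr                  # 1,500,000
```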

Half-Full or Half-Empty:  Renewals or Churn?

Since the renewal rate is simply one minus the churn rate, the question is which one we should calculate.  In the past, I favored splitting the difference [4], whereas I now believe it’s simpler just to talk about churn.  While this may be the half-empty perspective, it’s more consistent with what most people talk about and is more directly applicable, because a common use of a churn rate is as a discount rate in a net present value (NPV) formula.

Thus, I now define the world in terms of churn and churn rates, as opposed to renewals and renewal rates.

Terminology: Shrinkage and Expansion

For simplicity, I define the following two terms:

  • Shrinkage = anything that makes ARR decrease. For example, if the customer dropped seats or was given a discount in return for signing a multi-year renewal [5].
  • Expansion = anything that makes ARR increase, such as price increases, seat additions, upselling from a bronze to a gold edition, or cross-selling new products.

Key Questions to Consider

The good news is that any churn rate calculation is going to be some numerator over some denominator.  We can then start thinking about each in more detail.

Here are the key questions to consider for the numerator:

  • What should we count? Number of accounts, annual recurring revenue (ARR), or something else like renewal bookings?
  • If we’re counting ARR should we think at the product-level or account-level?
  • To what extent should we offset shrinkage with expansion in calculating churn ARR? [6]
  • When should we count what? What about early and late renewals?  What about along-the-way expansion?  What about churn notices or non-payment?

Here are the key questions to consider for the denominator:

  • Should we use the entire ARR pool, that portion of the ARR pool that is available to renew (ATR) in any given time period, or something else?
  • If using the ATR pool, for any given renewing contract, should we use its original value or its current value (e.g., if there has been upsell along the way)?

What Should We Count?  Logos and ARR

I believe the two metrics we should count in churn rates are

  • Logos (i.e., number of customers). This provides a gross indication of customer satisfaction [7] unweighted by ARR, so you can answer the question:  what percent of our customer base is turning over?
  • ARR. This provides a very important indication of the value of our SaaS annuity, answering the question:  what is happening to our ARR pool?

I would stay completely away from any SaaS metrics based on bookings (e.g., a bookings CAC, TCV, or bookings-based renewals rate).  These run counter to the point of SaaS unit economics.

Gross and Net Shrinkage; Account-Level Churn

Let’s look at a quick example to demonstrate how I now define gross and net shrinkage as well as account-level churn [8].

[Figure: gross and net shrinkage example]

Gross shrinkage is the sum of all the shrinkage. In the example, 80 units.

Net shrinkage is the sum of the shrinkage minus the sum of the expansion. In the example, 80-70 = 10 units.

To calculate account-level churn, we proceed account by account and look at the change in contract value, separating upsell from churn.  The idea is that while it’s OK to offset shrinkage with expansion within an account, we should not do so across accounts when working at the account level [9].  This has the effect of splitting expansion into offset (used to offset shrinkage within an account) and upsell (leftover expansion after all account-level shrinkage has been offset) [10].  In the example, account-level churn is 30 units.
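Here’s a minimal sketch of the three calculations.  The per-account split is hypothetical, chosen only to reproduce the totals in the example (80 units of shrinkage, 70 of expansion).

```python
# Hypothetical per-account ARR changes that sum to the example's totals.
accounts = {
    "Alpha": {"shrinkage": 80, "expansion": 50},
    "Bravo": {"shrinkage": 0,  "expansion": 20},
}

gross_shrinkage = sum(a["shrinkage"] for a in accounts.values())                  # 80
net_shrinkage = gross_shrinkage - sum(a["expansion"] for a in accounts.values())  # 10

# Account-level: offset shrinkage with expansion only within each account.
account_level_churn = sum(max(a["shrinkage"] - a["expansion"], 0)
                          for a in accounts.values())                             # 30
upsell = sum(max(a["expansion"] - a["shrinkage"], 0)
             for a in accounts.values())                                          # 20
```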

Note here that how we calculate churn – and specifically how we use expansion ARR to offset shrinkage – affects not only our churn rates, but our reported upsell rates as well.  Should we proudly claim 70 units of upsell (and less proudly 80 units of churn), 30 units of churn and 20 of upsell, or simply 10 units of churn?  I vote for the second.

While working at the account-level may seem odd, it is how most SaaS companies work operationally.  First, because they charter customer success managers (CSMs) to think at the account level, working account by account doing everything they can to preserve and/or increase the value of the account.  Second, because most systems work at and finance people think at the account level – e.g., “we had a customer worth 100 units last year, and they are worth 110 units this year so that means upsell of 10 units.  I don’t care how much is price increase vs. swapping some of product A for product B.” [11]

So, when a SaaS company reports “churn ARR,” in its leaky bucket analysis, I believe they should report neither gross churn nor net churn, but account-level churn ARR.

Timing Issues and the Available to Renew (ATR) Concept

Churn calculations bring some interesting challenges such as early/late renewals, churn notices, non-payment, and along-the-way expansion.

A renewals booking should always be taken in the period in which it is received.  If a contract expires on 6/30 and the renewal is received on 6/15 it should show up in 2Q, and if received on 7/15 it should show up in 3Q.

For churn rate calculations, however, the customer success team needs to forecast what is going to happen with a late renewal.  For example, if we have a board meeting on 7/12 and a $150K ARR renewal due 6/30 has not yet been received, we need to proceed based on what the customer has said.  If the customer is actively using the software and the CFO has promised a renewal but is tied up on a European vacation, I would mark the numbers “preliminary” and count the contract as renewed.  If, however, the customer has not used the software in months and will not return our phone calls, I would count the contract as churned.

Suppose we receive a churn notice on 5/1 for a contract that renews on 6/30.  When should we count the churn?  A Bessemer SaaS fanatic would point to their definition of committed monthly recurring revenue (CMRR) [12] and say we should remove the contract from the MRR base on 5/1.  While I agree with Bessemer’s views in general — and specifically on things like preferring ARR/MRR to ACV and TCV — I get off the bus on the whole notion of “committed” ARR/MRR and the ensuing need to remove the contract on 5/1.  Why?

  • In point of fact the customer has licensed and paid for the service through 6/30.
  • The company will recognize revenue through 6/30 and it’s much easier to do so correctly when the ARR is still in the ARR base.
  • Operationally, it’s defeatist. I don’t want our company to give up and say “it’s over, take them out of the ARR base.” I want our reaction to be, “so they think they don’t want to renew – we’ve got 60 days to change their mind and keep them in.” [13]

We should use the churn notice (and, for that matter, every other communication with the customer) as a way of improving our quarterly churn forecast, but we should not count churn until the contract period has ended, the customer has not renewed, and the customer has maintained their intent not to renew in coming weeks.

Non-payment, while hopefully infrequent, is another tricky issue.  What do we do if a customer gives us a renewal order on 6/30, payable in 30 days, but hasn’t paid after 120?  While the idealist in me wants to match the churn ARR to the period in which the contract was available to renew, I would probably just show it as churn in the period in which we gave up hope on the receivable.

Expansion Along the Way (ATW)

Non-payment starts to introduce the idea of timing mismatches between ARR-changing events and renewals cohorts.  Let’s consider a hopefully more frequent case:  ARR expansion along the way (ATW).  Consider this example.

[Figure: along-the-way (ATW) expansion example]

To decide how to handle this, let’s think operationally, both about how our finance team works and, more importantly, about how we want our customer success managers (CSMs) to think.  Remember that we want each CSM to own a set of customers, and we want them not only to protect the ARR of each customer but to expand it over time.  If we credit along-the-way upsell in our rate calculations at renewal time, we are shooting ourselves in the foot.  Look at customer Charlie.  He started out with 100 units and bought 20 more in 4Q15, so as we approach renewal time, Charlie actually has 120 units available to renew (ATR), not 100 [14].  We want our CSMs basing their success on the 120, not the 100.  So the simple rule is to base everything not on the original cohort but on the available to renew (ATR) entering the period.

This begs two questions:

  • When do we count the along-the-way upsell bookings?
  • How can we reflect those 40 units in some sort of rate?

The answer to the first question is, as your finance team will invariably conclude, to count them as they happen (e.g., in 4Q15 in the above example).

The answer to the second question is to use a retention rate, not a churn rate.  Retention rates are cohort-based, so to calculate the net retention rate for the 2Q15 cohort, we divide its present value of 535 by its original value of 500 and get 107%.

Never, ever calculate a retention rate in reverse – i.e., starting with a group of current customers and looking backward at their ARR one year ago.  You will produce a survivor-biased answer which, stunningly, I have seen some public companies publish.  Always run cohort analyses forward to eliminate survivor bias.
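A minimal sketch of the forward calculation; only the 500 and 535 totals come from the example, and the per-account detail is hypothetical.

```python
# Forward (cohort-based) net retention for the 2Q15 cohort: start from the
# original cohort, including any accounts that have since churned to zero.
cohort = {
    "Alpha":   {"original_arr": 200, "current_arr": 190},
    "Bravo":   {"original_arr": 180, "current_arr": 205},
    "Charlie": {"original_arr": 120, "current_arr": 140},
}

original = sum(c["original_arr"] for c in cohort.values())   # 500
current = sum(c["current_arr"] for c in cohort.values())     # 535
print(f"Net retention rate: {current / original:.0%}")       # 107%
```

Running the calculation backward from today’s customers would silently drop any account that churned to zero, which is exactly the survivor bias warned about above.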

Off-Cycle Activity

Finally, we need to consider how to address off-cycle (or extra-cohort) activity in calculating churn and related rates.  Let’s do this by using a big picture example that includes everything we’ve discussed thus far, plus off-cycle activity from two customers who are not in the 2Q16 ATR cohort:  (1) Foxtrot, who purchased in 3Q14, renewed in 3Q15, and who has not paid, and (2) George, who purchased in 3Q15, who is not yet up for renewal, but who purchased 50 units of upsell in 2Q16.

[Figure: big-picture example including off-cycle activity]

Foxtrot should count as churn in 2Q16, the period in which we either lost hope of collection or in which our collections policy dictated that we de-book the deal. [15]

George should count as expansion in 2Q16, the period in which the expansion booking was taken.

The trick is that neither Foxtrot nor George is on a 2Q renewal cycle, so neither is included in the 2Q16 ATR cohort.  I believe the correct way to handle this is:

  • Both should be factored into gross, net, account-level churn, and upsell.
  • For rates where we include them in the numerator, for consistency’s sake we must also include them in the denominator. That means putting the shrinkage in the numerator and adding the ATR of a shrinking (or lost) account to the denominator of a rate calculation.  I’ll call this the “+” concept, and define ATR+ as inclusive of such additional logos or ARR resulting from off-cycle accounts [16].

Rate Calculations

We are now in the position to define and calculate the churn rates that I use and track:

  • Simple churn rate = net shrinkage / starting period ARR * 4.  Or, in English, the net change in ARR from existing customers divided by starting period ARR (multiplied by 4 to annualize the rate which is measured against the entire ARR base). As the name implies, this is the simplest churn rate to calculate. This rate will be negative whenever expansion is greater than shrinkage. Starting period ARR includes both ATR and non-ATR contracts (including potentially multi-year contracts) so this rate takes into account the positive effects of the non-cancellability of multi-year deals.  Because it takes literally everything into account, I think this is the best rate for valuing the annuity of your ARR base.
  • Logo churn rate = number of discontinuing logos / number of ATR+ logos. This rate tells us the percent of customers who, given the chance, chose to discontinue doing business with us.  As such, it provides an ARR-unweighted churn rate, providing the best sense of “how happy” our customers are, knowing that there is a somewhat loose correlation between happiness and renewal [17].  Remember that ATR+ means to include any discontinuing off-cycle logos, so the calculation is 1/16 = 6.3% in our example.
  • Retention rate = current ARR [time cohort] / time-ago ARR [time cohort]. In English, the current ARR from some time-based cohort (e.g., 2Q15) divided by the year-ago ARR from that same cohort.  Typically we do this for the one-year-ago or two-years-ago cohorts, but many companies track each quarter’s new customers as a cohort which they measure over time.  Like simple churn, this is a great macro metric that values the ARR annuity, all in.
  • Gross churn rate = gross shrinkage / ATR+. This churn rate is important because it reveals the difference between companies that have high shrinkage offset by high expansion and companies which simply have low shrinkage.  Gross churn is a great metric because it simply shows the glass half-empty view:  at what rate is ARR leaking out of your bucket before you offset it with refills in the form of expansion ARR.
  • Account-level churn rate = account-level churn / ATR+. This churn rate foots to the reported churn ARR in our leaky bucket analysis (which uses account-level churn), partially offsets shrinkage with expansion at an account level, and is how most SaaS companies actually calculate churn.  While perhaps counter-intuitive, it reflects a philosophy of examining, on an account basis, what happens to the value of each of our customers when we allow shrinkage to be offset by expansion (which is what we want our CSM reps doing), leaving any excess as upsell.  This should be our primary churn metric.
  • Net churn rate = net shrinkage / ATR+.  This churn rate offsets shrinkage with expansion not at the account level, but overall.  This is similar to the simple churn rate but with the disadvantage of looking only at ATR and not factoring in the positive effects of non-cancellability of multi-year deals.    Ergo, I prefer using the simple churn rate to the net churn rate in valuing the SaaS annuity.
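Putting the definitions above into one quick sketch; all of the input numbers here are hypothetical (only the 1/16 logo example comes from the text above).

```python
# Hypothetical quarter, used only to illustrate the rate definitions.
starting_arr = 10_000            # entire ARR base entering the quarter
atr_plus = 2_500                 # ATR ARR, plus ARR from off-cycle churn/expansion accounts
gross_shrinkage = 300
expansion = 220
account_level_churn = 150        # shrinkage offset by expansion only within accounts
net_shrinkage = gross_shrinkage - expansion        # 80

simple_churn_rate = net_shrinkage / starting_arr * 4        # 3.2% annualized
gross_churn_rate = gross_shrinkage / atr_plus               # 12.0%
account_level_churn_rate = account_level_churn / atr_plus   # 6.0%
net_churn_rate = net_shrinkage / atr_plus                   # 3.2%
logo_churn_rate = 1 / 16                                    # 6.3%, per the logo example above
```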

# # #

Notes

[1] Replacing these posts in the process.

[2] The 10% churn group decays from 100 units to 53 in value after 7 years, while the 20% group decays to 26.

[3] We’ll sidestep the question of who is responsible for installed-based expansion in this post because companies answer it differently (e.g., sales, customer success, account management) and the good news is we don’t need to know who gets credited for expansion to calculate churn rates.

[4] Discussing churn in dollars and renewals in rates.

[5] For example, if a customer signed a one-year contract for 100 units and then was offered a 5% discount to sign a three-year renewal, you would generate 5 units of ARR churn.

[6] Or, as I said in a prior post, should I net first or sum first?

[7] And yes, sometimes unhappy customers do renew (e.g., if they’ve been too busy to replace you) and happy customers don’t (e.g., if they get a new key executive with different preferences) but counting logos still gives you a nice overall indication.

[8] Note that I have capitulated to the norm of saying “gross” churn means before offset and thus “net” churn means after netting out shrinkage and expansion.  (Beware confusion as this is the opposite of my prior position where I defined “net” to mean “net of expansion,” i.e., what I’d now call “gross.”)

[9] Otherwise, you can just look at net shrinkage which offsets all shrinkage by all expansion.  The idea of account-level churn is to restrict the ability to offset shrinkage with expansion across accounts, in effect, telling your customer success reps that their job is to, contract by contract, minimize shrinkage and ensure expansion.

[10] “Offset” meaning ARR used to offset shrinkage that ends up neither churn nor upsell.

[11] While this approach works fine for most (inherently single-product) SaaS startups, it does not work as well for large multi-product SaaS vendors where the failure of product A might be totally or partially masked by the success of product B.  (In our example, I deliberately had all the shrinkage coming from downsell of product A to make that point.  The product or general manager for product A should own the churn number for that product and be trying to find out why it churned 80 units.)

[12] MRR = monthly recurring revenue = 1/12th of ARR.  Because enterprise SaaS companies typically run on an annual business rhythm, I prefer ARR to MRR.

[13] Worse yet, if I churn them out on 5/1 and do succeed in changing their mind, I might need to recognize it as “new ARR” on 6/30, which would also be wrong.

[14] The more popular way of handling this would have been to try and extend the original contract and co-terminate with the upsell in 4Q16, but that doesn’t affect the underlying logic, so let’s just pretend we tried that and it didn’t work for the customer.

[15] Whether you call it a de-booking or bad receivable, Foxtrot was in the ARR base and needs to come out.  Unlike the case where the customer has paid for the period but is not using the software (where we should churn it at the end of the contract), in this case the 3Q15 renewal was effectively invalid and we need to remove Foxtrot from the ARR base at some defined number of days past due (e.g., 90) or when we lose hope of collection (e.g., bankruptcy).

[16] I think the smaller you are the more important this correction is to ensure the quality of your numbers.  As a company gets bigger, I’d just drop the “+” concept whenever it’s only changing things by a rounding error.

[17] Use NPS surveys for another, more precise, way of measuring happiness.  See [7] as well.

The Four Levers of SaaS

There are a lot of SaaS posts out there with some pretty fancy math in them.  I’m a math guy, so I like to geek on SaaS metrics myself.  But, in the heat of battle running a SaaS company, sometimes you just need to keep it simple.

Here’s the picture I keep on my wall to help me do that.

It reminds me that new ARR in any given period is the product of four levers.

  • The number of marketing qualified leads (MQLs) generated by marketing.
  • The MQL to stage 2 opportunity conversion rate (MTS2CR), the rate at which MQLs convert to stage 2, or sales-accepted, opportunities.  Typically they pass through a stage 1 phase first, when a sales development rep (SDR) believes there is a real opportunity but a salesperson has not yet agreed.
  • The stage 2 to close rate (S2TCR), the rate at which stage 2 opportunities close into deals, and avoid being lost to a competitor or derailed (e.g., having the evaluation project cancelled).
  • The annual recurring revenue average sales price (ARR ASP), the average deal size, expressed in ARR.

That’s it.  Those four levers will predict your quarterly new ARR every time.
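In sketch form, with purely illustrative numbers:

```python
# Quarterly new ARR as the product of the four levers (illustrative values).
mqls = 1_000
mql_to_stage2_rate = 0.25    # MTS2CR
stage2_to_close_rate = 0.20  # S2TCR
arr_asp = 50_000             # average deal size, in ARR

new_arr = mqls * mql_to_stage2_rate * stage2_to_close_rate * arr_asp
print(f"Quarterly new ARR: ${new_arr:,.0f}")   # $2,500,000
```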

Aside:  before diving into each of the four levers, let me note that sales velocity is omitted from this model.  That keeps it simple, but it does overlook a potentially important lever.  So if you think you have a sales velocity (i.e., sales cycle length) problem, go look at a different model that includes this lever and suggests ways to decrease it.

So now that we have identified the four levers, let’s focus on what we can do about them in order to increase our quarterly new ARR.

Marketing Qualified Leads (MQLs)

Getting MQLs is the domain of marketing, which should be constantly measuring the cost effectiveness of various marketing programs in terms of generating MQLs (cost/MQL).  This isn’t easy because most leads will require numerous touches over time in order to graduate to MQL status, but marketing needs to stay atop that complexity (e.g., by assigning credits to various programs as MQL-threshold points accumulate).

The best marketers understand that demand is variable and have designed their program mix so they can scale spending quickly in response to increased needs.  Nothing is worse than an MQL shortage and a marketing department that’s not ready to spend incremental money to address it.

The general rule is to constantly A/B test your programs and nurture streams and do more of what’s working and less of what isn’t.

MQL to Stage 2 Opportunity Conversion Rate

Increasing the MQL to stage 2 opportunity conversion rate (MTS2CR) requires either generating better MQLs or doing a better job handling them so that they convert into stage 2 opportunities.

Generating better MQLs can be accomplished by analyzing past programs to determine which generated the best-converting MQLs and increasing them, putting a higher gate on what you pass over to sales (using predictive or behavioral scoring), or using buyer personas to optimize what you say to buyers, when, and through which channels.

Doing a better job handling your existing MQLs comes down to ensuring your operational processes work and that you don’t let leads fall through the cracks.  Basic activity and aging reports are a start.  Establishing a formal service-level agreement between sales and marketing is a common next step.

Moving up a level and checking that your whole process fits well with the customer’s buying journey is also key.  While each step of your process might individually make sense, when assembled the process may not — e.g., are you irritating customers by triple-qualifying them with an SDR, a salesrep, and a solution consultant each doing basic discovery?

The Stage 2 to Close Rate

Once created, one of three things can happen to a stage 2 opportunity:  you can win it, you can lose it, or it can derail (i.e., anything else, such as project cancellation or “slips” to the distant future).

Increasing your win rate can be accomplished through better product positioning, sales tools, and sales training, improved competitive intelligence, improved buzz/aura, improved case studies and customer references, and better pricing and discounting strategy.  That’s not to mention more strategic approaches via improved sales methodology and process or product improvements, in terms of functionality, non-functional requirements, and product design.

Decreasing your loss rate can be accomplished through better up-front sales qualification, better sales tools and training, improved competitive strategy and tactics, and better pricing and discounting.  Improved sales management can also play a key role in catching in-trouble deals early and escalating to get the necessary resources deployed to win.

Reducing your derail rate is hard because project slips or cancellations seem mostly out of your control.  What’s the best way to reduce your derail rate?  Focus on velocity — take deals off the table before the company has a chance to prioritize another project, do a reorganization, or hire a new executive that kills it.  The longer a deal hangs around, the more likely something bad happens to it.  As the adage goes, time kills all deals.

ARR ASP

The easiest way to increase ARR ASP is to not shrink it through last-minute discounting.  Adopt a formal discount policy with approvals so that, in the words of one famous sales leader, “your rep is more afraid of his/her sales manager than the customer” when it comes to speaking about discounts.

Selling value and product differentiation are two other discount reduction strategies.  The more customers see real value and a concrete return for their business the less they will focus on price.  Additionally, the more they see your offering as unique, the less price pressure you will face from the competition.  Conversely, the more they see your product as a cost and your company as one of several suppliers from whom they can buy the same capabilities, the more discount pressure you will face.

Up-selling to a higher edition or cross-selling (“fries with your burger?”) are both ways to increase your ASP as well.  Just be careful to avoid customers feeling nickel-and-dimed in the process.

For SaaS businesses, remember that multi-year deals typically do not help your ARR ASP (though, if prepaid, they do help with year-one cash).  In fact, it’s usually the opposite — a small ARR discount is typically traded for the multi-year commitment.  My general rule of thumb is to offer a multi-year discount that’s less than your churn rate and everybody wins.
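Here’s a rough sketch of why that rule of thumb works from the vendor’s side; the numbers are made up for illustration.

```python
# With 10% annual churn, an 8% multi-year discount still leaves the vendor
# ahead on expected year-two revenue (and the customer pays less than list).
arr = 100_000
annual_churn = 0.10    # chance the customer would not renew a one-year deal
discount = 0.08        # multi-year discount, kept below the churn rate

expected_year2_without_commitment = arr * (1 - annual_churn)   # $90,000
year2_with_commitment = arr * (1 - discount)                   # $92,000
print(year2_with_commitment > expected_year2_without_commitment)  # True
```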

Conclusion

Hopefully this framework will make it easier for you to diagnose and act upon the problems that can impede achieving your company’s new ARR goals.  Always remember that any new ARR problem can be broken down into some combination of an MQL problem, an MQL to stage 2 conversion rate problem, a stage 2 to close rate problem, or an average sales price problem.  By focusing on these four levers, you should be able to optimize the productivity of your SaaS sales model.


CAC Payback Period:  The Most Misunderstood SaaS Metric

The single most misunderstood software-as-a-service (SaaS) metric I’ve encountered is the CAC Payback Period (CPP), a compound metric that is generally defined as the months of contribution margin to pay back the cost of acquiring a customer.   Bessemer defines the CPP as:

[Figure: Bessemer CAC Payback Period definition]

I quibble with some of the Bessemerisms in the definition.  For example, (1) most enterprise SaaS companies should use annual recurring revenue (ARR), not monthly recurring revenue (MRR), because most enterprise companies are doing annual, not monthly, contracts, (2) the “committed” MRR concept is an overreach because it includes “anticipated” churn which is basically impossible to measure and often unknown, and (3) I don’t know why they use the prior period for both S&M costs and new ARR – almost everybody else uses prior-period S&M divided by current-period ARR in customer acquisition cost (CAC) calculations on the theory that last quarter’s S&M generated this quarter’s new ARR.

Switching to ARR nomenclature, and with a quick sleight of mathematical hand for simplification, I define the CAC Payback Period (CPP) as follows:

[Figure: Kellblog CAC Payback Period definition]

Let’s run some numbers.

  • If your company has a CAC ratio of 1.5 and subscription gross margins of 75%, then your CPP = 24 months.
  • If your company has a CAC ratio of 1.2 and subscription gross margins of 80%, then your CPP = 18 months.
  • If your company has a CAC ratio of 0.8 and subscription gross margins of 80%, then your CPP = 12 months.
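Since the formula image isn’t reproduced here, note that the three bullets above (and note [2] below) imply the following calculation:

```python
# CPP in months, backed out from the examples above: CAC ratio divided by
# subscription gross margin, annualized to months.
def cac_payback_months(cac_ratio, subscription_gross_margin):
    return 12 * cac_ratio / subscription_gross_margin

print(f"{cac_payback_months(1.5, 0.75):.0f} months")   # 24 months
print(f"{cac_payback_months(1.2, 0.80):.0f} months")   # 18 months
print(f"{cac_payback_months(0.8, 0.80):.0f} months")   # 12 months
```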

All seems pretty simple, right?  Not so fast.  There are two things that constantly confound people when looking at CAC Payback Period (CPP).

  • They forget payback metrics are risk metrics, not return metrics
  • They fail to correctly interpret the impact of annual or multi-year contracts

Payback Metrics are for Risk, Not Return

Quick, basic MBA question:  you have two projects, both require an investment of 100 units, and you have only 100 units to invest.  Which do you pick?

  • Project A: which has a payback period of 12 months
  • Project B: which has a payback period of 6 months

Quick, which do you pick?  Well, project B.  Duh.  But wait — now I tell you this:

  • Project A has a net present value (NPV) of 500 units
  • Project B has an NPV of 110 units

Well, don’t you feel silly for picking project B?

Payback is all about how long your money is committed (so it can’t be used for other projects) and at risk (meaning you might not get it back).  Payback doesn’t tell you anything about return.  In capital budgeting, NPV tells you about return.  In a SaaS business, customer lifetime value (LTV) tells you about return.

There are situations where it makes a lot of sense to look at CPP.  For example, if you’re running a monthly SaaS service with a high churn rate then you need to look closely at how long you’re putting your money at risk because there is a very real chance you won’t recoup your CAC investment, let alone get any return on it.  Consider a monthly SaaS company with a $3500 customer acquisition cost, subscription gross margin of 70%, a monthly fee of $150, and 3% monthly churn.  I’ll calculate the ratios and examine the CAC recovery of a 100-customer cohort.

[Figure: monthly SaaS cohort CAC recovery example]

While the CPP formula outputs a long 33.3 month CAC Payback Period, reality is far, far worse.  One problem with the CPP formula is that it does not factor in churn and how exposed a cohort is to it — the more chances customers have to not renew during the payback period, the more you need to consider the possibility of non-renewal in your math [1].  In this example, when you properly account for churn, you still have $6 worth of CAC to recover after 30 years!  You literally never get back your CAC.
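Here’s a minimal sketch that reproduces that result; the one assumption is that each month’s fee is collected before that month’s 3% churn is applied, which is what lands the shortfall at roughly $6 for the cohort.

```python
# 100-customer monthly cohort: $3,500 CAC, $150/month fee, 70% gross margin,
# 3% monthly churn applied after each monthly payment.
cohort_size = 100
cac = 3_500 * cohort_size           # $350,000 to recover
monthly_contribution = 150 * 0.70   # $105 per surviving customer per month

outstanding = cac
for month in range(1, 361):         # 30 years
    surviving = cohort_size * 0.97 ** (month - 1)
    outstanding -= surviving * monthly_contribution

print(f"CAC still unrecovered after 30 years: ${outstanding:,.2f}")   # ~$6
```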

Soapbox:  this is another case where using a model is infinitely preferable to back-of-the-envelope (BOTE) analysis using SaaS metrics.  If you want to understand the financials of a SaaS company, then build a driver-based model and vary the drivers.  In this case and many others, BOTE analysis fails due to subtle complexity, whereas a well-built model will always produce correct answers, even if they are counter-intuitive.

Such cases aside, the real problem with being too focused on CAC Payback Period is that CPP is a risk metric that tells you nothing about returns.  Companies are in business to get returns, not simply to minimize risk, so to properly analyze a SaaS business we need to look at both.

The Impact of Annual and Multi-Year Prepaid Contracts on CAC Payback Period

The CPP formula outputs a payback period in months, but most enterprise SaaS businesses today run on an annual rhythm.  Despite pricing that is sometimes still stated per-user, per-month, SaaS companies realized years ago that enterprise customers preferred annual contracts and actually disliked monthly invoicing.  Just as MRR is a bit of a relic from the old SaaS days, so is a CAC Payback Period stated in months.

In a one-hundred-percent annual prepaid contract world, the CPP formula should output in multiples of 12, rounding up for all values greater than 12.  For example, if a company’s CAC Payback Period is notionally 13 months, in reality it is 24 months because the leftover 1/13 of the cost isn’t collected until the customer’s second payment at month 24.  (And that’s only if the customer chooses to renew — see above discussion of churn.)

In an annual prepaid world, if your CAC Payback Period is less than or equal to 12 months, then it should be rounded down to one day because you are invoicing the entire year up-front and at-once.  Even if the formula says the CPP is notionally 12.0 months, in an annual prepaid world your CAC investment money is at risk for just one day.

So, wait a minute.  What is the actual CAC Payback Period in this case?  12.0 months or 1 day?  It’s 1 day.

Anyone who argues 12.0 months is forgetting the point of the metric.  Payback periods are risk metrics and measured by the amount of time it takes to get your investment back [2].  If you want to look at S&M efficiency, look at the CAC ratio.  If you want to know about the efficiency of running the SaaS service, look at subscription gross margins.  If you want to talk about lifetime value, then look at LTV/CAC.  CAC Payback Period is a risk metric that measures how long your CAC investment is “on the table” before getting paid back.  In this instance the 12 months generated by the standard formula is incorrect because the formula misses the prepayment and the correct answer is 1 day.

A lot of very smart people get stuck here.  They say, “yes, sure, it’s 1 day – but really, it’s not.  It’s 12 months.”  No.  It’s 1 day.

If you want to look at something other than payback, then pick another metric.  But the CPP is 1 day.  You asked how long it takes for the company to recoup the money it spends to acquire a customer.  For CPPs less than or equal to 12 in a one-hundred percent annual prepaid world, the answer is one day.
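For a one-hundred-percent annual prepaid business, the rounding rules above reduce to something like this little helper (my own illustration of the rule, not a standard formula):

```python
import math

# Round a notional CPP for a 100% annual prepaid business: anything covered by
# the first annual invoice is effectively recovered on day one; otherwise you
# wait for the renewal payment at the next 12-month boundary.
def annual_prepaid_cpp(notional_months):
    if notional_months <= 12:
        return "1 day"
    return f"{math.ceil(notional_months / 12) * 12} months"

print(annual_prepaid_cpp(12.0))   # 1 day
print(annual_prepaid_cpp(13.0))   # 24 months
```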

It gets harder.  Imagine a company that sells in a sticky category (e.g., where typical lifetimes may be 10 years) and thus is a high-consideration purchase where prospective customers do deep evaluations before making a decision (e.g., ERP).  As a result of all that homework, customers are happy to sign long contracts and thus the company does only 3-year prepaid contracts.  Now, let’s look at CAC Payback Period.  Adapting our rules above, any output from the formula greater than 36 months should be rounded up in multiples of 36 months and, similarly, any output less than or equal to 36 months should be rounded down to 1 day.

Here we go again.  Say the CAC Payback Period formula outputs 33 months.  Is the real CPP 33 months or 1 day?  Same argument.  It’s 1 day.  But the formula outputs 33 months.  Yes, but the CAC recovery time is 1 day.  If you want to look at something else, then pick another metric.

It gets even harder.  Now imagine a company that does half 1-year deals and half 3-year deals (on an ARR-weighted basis).  Let’s assume it has a CAC ratio of 1.5, 75% subscription gross margins, and thus a notional CAC Payback Period of 24 months.  Let’s see what really happens using a model:

[Figure: 50/50 one-year / three-year mix model]

Using this model, you can see that the actual CAC Payback Period is 1 day. Why?  We need to recoup $1.5M in CAC.  On day 1 we invoice $2.0M, resulting in $1.5M in contribution margin, and thus leaving $0 in CAC that needs to be recovered.
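As a sketch, assuming $1.0M of new ARR split evenly (ARR-weighted) between the one-year and three-year prepaid deals, which matches the $1.5M CAC and $2.0M day-one invoicing above:

```python
# 50/50 mix of 1-year and 3-year prepaid deals (ARR-weighted), CAC ratio 1.5,
# 75% subscription gross margin.
new_arr = 1_000_000
cac = 1.5 * new_arr                                     # $1.5M to recover
day_one_invoicing = 0.5 * new_arr + 0.5 * new_arr * 3   # $0.5M + $1.5M = $2.0M
day_one_contribution = day_one_invoicing * 0.75         # $1.5M

print(day_one_contribution >= cac)   # True: CAC recovered on day one
```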

While I have not yet devised general rounding rules for this situation, the model again demonstrates the key point – that the mix of 1-year and 3-year payment structure confounds the CPP formula resulting in a notional CPP of 24 months, when in reality it is again 1 day.  If you want to make rounding rules beware the temptation to treat the average contract duration (ACD) as a rounding multiple because it’s incorrect — while the ACD is 2 years in the above example, not a single customer is paying you at two-year intervals:  half are paying you every year while half are paying you every three.  That complexity, combined with the reality that the mix is pretty unlikely to be 50/50, suggests it’s just easier to use a model than devise a generalized rounding formula.

But pulling back up, let’s make sure we drive the key point home.  The CAC Payback Period is the single most often misunderstood SaaS metric because people forget that payback metrics are about risk, not return, and because the basic formulas – like those for many SaaS metrics – assume a monthly model that simply does not apply in today’s enterprise SaaS world, and fail to handle common cases like annual or multi-year prepaid contracts.

# # #

Notes

[1] This is a huge omission for a metric that was defined in terms of MRR and which thus assumes a monthly business model.  As the example shows, the formula (which fails to account for churn) outputs a CAC payback of 33 months, but in reality it’s never.  Quite a difference!

[2] If I wanted to be even more rigorous, I would argue that you should not include subscription gross margin in the calculation of CAC Payback Period.  If your CAC ratio is 1.0 and you do annual prepaid contracts, then you immediately recoup 100% of your CAC investment on day 1.  Yes, a new customer comes with a future liability attached (you need to bear the costs of running the service for them for one year), but if you’re looking at a payback metric that shouldn’t matter.  You got your money back.  Yes, going forward, you need to spend about 30% (a typical subscription COGS figure) of that money over the next year to pay for operating the service, but you got your money back in one day.  Payback is 1 day, not 1/0.7 = 17 months as the formula calculates.

What Marketing Costs Should be Included in CAC Calculations?

Dear Kellblog:

I’m working on my CAC calculations and I’m trying to determine whether I should include all marketing costs or just my direct demand generation costs.  I’ve talked to many of my CMO peers and can’t get a consistent answer to the question.

Thanks / Bewildered CMO

Dear Bewildered CMO:

My gut reaction is that you should include all marketing costs.  Don’t try to argue that PR and product marketing don’t work on customer acquisition.  Don’t try to argue that people aren’t programs as a way to exclude the cost of your demandgen team.

Why?  Three reasons:

  • Demandgen people and program dollars should be fungible.  PR and product marketing had better be doing things that help acquire customers, even if indirectly.
  • Playing counting games can hurt your credibility.  VCs aren’t just trying to compare metrics, they’re trying to get to know you by seeing how you think about and/or calculate them.  I’d think you were a weasel if I found you excluding these costs without really good reason.
  • To the extent that people try to compare these things between private and public companies, remember that there is no way to split marketing apart (or split customer success from sales) for public companies, which suggests that by default you should include everything.

Best / Kellblog

For fun, let’s go quickly look at some sources for CAC definitions and see what we find regarding this issue:

Kellblog defines the CAC as:

[Figure: Kellblog CAC ratio definition]

S&M, by default, needs to include all S&M costs, so you can’t cut anything out.

(Side note:  to the extent you amortize commissions, I would prefer to say cash sales expense as opposed to GAAP sales expense, because the latter will hide some costs — but that has nothing to do with marketing.)

The 2015 Pacific Crest Private SaaS Company Survey defines the CAC as:

How much do you spend on a fully-loaded sales & marketing cost basis to acquire $1 of new ACV from a new customer.

This seems to close one door (i.e., you better include IT and facilities allocations to your sales costs — as GAAP would require anyway), but open another because it defines the CAC not in terms of total new ACV, but new ACV from new customers.  So if, for example, you had installed base upsell marketing programs, then I would not count those costs in the CAC calculation because they are not marketing costs spent to win new ARR from new customers.  Is PR?  Is product marketing?  It’s a slippery slope.  I’m not in love with this definition for that reason.  You could never do it for public companies.

David Skok defines the CAC as:

Note that while Skok is calculating a cost to acquire a new customer as opposed to $1 of new ARR, his definition is clear when it comes to splitting marketing costs:  include all S&M costs.

Bessemer prefers talking about a CAC payback period and defines it as:

[Figure: Bessemer CAC Payback Period definition]

Again, this definition is clear — include all S&M costs.

The Perils of Measuring a SaaS Business on Total Contract Value (TCV)

It’s a frothy time and during such times people can develop a tendency to get sloppy about their numbers.  The first sign of froth is when people routinely discuss company size using market capitalization instead of revenue.  This happened constantly during Bubble 1.0 and started again several years ago – e.g., all the talk of unicorns, private companies with $1B+ valuations.

Oneupsmanship becomes the name of the game in frothy times.  If your competitor’s site had 1M pageviews to your own site’s 750K, marketing would quickly come up with a new metric on which you could win:  “we had 1.5M eyeballs.”  This kind of gaming, pardon the pun, is seen through rather easily.

The more disturbing distortions are those intended to impress industry influencers to validate strategy.  Analysts – whose job is supposedly to analyze – have a troubling tendency to judge strategies not on their logical merits but on their results.  So if a vendor has a silly, unfocused, or simply bad strategy, the vendor doesn’t need to argue that it actually makes sense; they just need to find a way to show that it is producing results – and the ensuing Halo Effects will serve as validation.

Public companies try to demonstrate results through revenue allocation games, robbing from non-strategic SKUs to pump up strategic ones (e.g., the “cloudwashing” the megavendors are now often accused of).   Private companies have free rein and can either point to unverifiable lofty financing valuations as supposed proof that their strategy is working, or to unverifiable sales growth figures where “sales” is typically defined as the metric that looked best last quarter.

Most people would quickly agree that at a SaaS business, the best metric for measuring sales is growth in new annual recurring revenue (ARR).  They’d also agree that the best metric for valuing the business is ending ARR and its growth.  (LTV/CAC would come in right behind.)  Using my leaky bucket analogy, the best way to measure sales is by how fast they pour water in the bucket.  The best way to measure the value of the business is the water level of the bucket and how fast it is going up.

But it’s a frothy time, and sometimes the correct SaaS measures don’t produce numbers that, well, sufficiently impress.  So what’s a poor CEO to do?  Embellish.  The Wall Street Journal recently ran a piece that compared company claims about size/growth made while the company was still private to those later revealed in the S-1.  The results were disappointing, if not perhaps surprising.

Put differently, what’s the SaaS equivalent of “eyeballs”?

The answer is simple:  bookings or, more precisely, total contract value (TCV) bookings.  To show this, we’ll need to define some terms.

  • ARR = annual recurring revenue, the annual subscription fee
  • NSB = new subscription bookings, the prepaid (and – no gaming — quickly collectible) portion of the contract. Since enterprise SaaS contracts are often multi-year and can be fully, partially, or only first-year prepaid, we need a metric to understand the cash implications of the deal.
  • TCV = total contract value, including both prepaid and non-prepaid subscription as well as services. TCV is the largest metric because it includes everything.  Some people exclude services but, to me, total means total.

Now, let’s look at several ways to transform a simple $100K ARR deal in the following spreadsheet:

[Table: TCV under varying deal terms for a $100K ARR deal]

Note that in each case, the ARR is $100K.  But by varying deal terms the TCV can vary from $150K to $750K.  Now in the real world, if someone was going to pay you $100K for a one-year deal, they are unlikely to pay $300K for a three-year prepay or contractual commitment.  They will want something in return: typically a discount.

Let’s combine these ideas in one more example.  Say you run a SaaS company and want to impress everyone that you’re doing really well.  The trouble is you’re not.  You sold $10M in new ARR in 2014 (all one-year, prepaid) and think you can sell $10M again in 2015 on those same terms.   If you measure yourself on new ARR growth, that’s 0% and no one is going to think you are cool or write you up on the tech blogs.  But if you switch to TCV and increase your contract duration, you get a lot more flexibility:

[Table: TCV growth under varying contract terms]

If you switch to TCV, the good news is you can grow literally as fast as you want just by playing with contract terms.  Want to grow at 60%?  Switch to 2-year prepaids and give a 20% discount.  That’s not fast enough and you want to grow at 101%?  Move to 3-year prepaids by effectively doing a year-long “buy 2 get 1 free” promotion.   That’s not good enough?  Move to 5-year non-prepaids and you can grow at a dazzling 235% and get nice TechCrunch articles about your strategic vision, your hypergrowth, and your unique culture (that is, most probably, just like everyone else’s unique culture).
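Here’s a rough sketch of the game; the baseline is the $10M of one-year prepaid business above, and the discount assumptions are mine, chosen to land close to (though not exactly on) the growth rates quoted.

```python
# TCV "growth" produced purely by changing contract terms on the same business.
baseline_tcv = 10_000_000   # 2014: $10M new ARR, all 1-year prepaid

scenarios = {
    "2-year prepaid, 20% discount":       (2, 0.20),
    "3-year prepaid, buy-2-get-1-free":   (3, 1 / 3),
    "5-year non-prepaid, same discount":  (5, 1 / 3),
}

for name, (years, discount) in scenarios.items():
    new_arr = 10_000_000 * (1 - discount)   # note the ARR itself shrinks
    tcv = new_arr * years
    growth = tcv / baseline_tcv - 1
    print(f"{name}: new ARR ${new_arr:,.0f}, TCV ${tcv:,.0f}, growth {growth:.0%}")
```

Note that in every scenario the new ARR line, the one that actually matters, is shrinking even as the TCV “growth” looks spectacular.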

This is great.  Why doesn’t everybody do it?  Because you’re mortgaging the future:

  • The discounts you’re giving to get multi-year deals are crushing ARR; new ARR growth is shrinking in all cases.
  • You are therefore crushing both revenue and cash collections over the time period(s)
  • The prepaid deals create a drug addiction problem because you’re not collecting cash in the out years. So you build a dependency either on lots of capital or lots more prepaid deals.
  • Worse yet, on the non-prepaid deals you may not ever collect the money at all.

Wait, what did he say?

In my opinion, non-prepaid multi-year deals are often not worth the paper they are written on.  Why?  Just look at it from the customer’s perspective.  Say you sign a $100K five-year deal with only the first year paid up-front.  And say the software’s not delivering.  It took more work to implement than you thought.  It’s fallen short on the requirements.  It’s not performing very well.  You’ve called for help but the company can’t fix it because they’re too busy doing other 5-year non-prepaid deals with other customers.

What do you do?  Simple:  you don’t pay the invoice when it comes.  Technically,  yes, you are very much breaking the contract that you signed — but if the software really isn’t delivering, when the vendor calls you say:  “sue me.”

Since software companies generally don’t like suing customers, the vendor – especially if they know the implementation failed – will generally walk away and write off your receivable as bad debt.   If they are particularly devious (and incorrect) they might not even take it as churn until the end of the five-year period when the contract is supposed to renew.   I wouldn’t be shocked if you could find a company that did it this way.

Most sophisticated SaaS people know that SaaS companies shouldn’t be run on TCV or bookings and are well aware of the problems doing so creates with ARR, revenue, and cash.

However, I have never heard anyone make the simple additional point I’m making here:  in a frothy environment dubious companies can create a fictitious bubble around themselves using TCV.  However, because non-prepaid multi-year deals only work when the customers are happy, if the company is out over its skis on promises and implementations, then many of the customers will not end up happy, and the company will never collect much of that TCV.  Meaning, that it was never really “value” in the first place.

Beware Greeks bearing gifts and SaaS vendors talking TCV.