The SaaStr Annual conference was delayed this year, but Jason & crew know that the show must go on. So this year’s event has been rechristened SaaStr Annual @ Home and is being held in virtual, online format on September 2nd and 3rd. The team at SaaStr have assembled a strong, diverse line-up of speakers to provide what should be another simply amazing program.
I think for many sales-aggressive enterprise SaaS startups, a fair amount of churn actually happens at inception. For example, back in 2013, shortly after I joined Host Analytics, I discovered that there were a number of deals that sales had signed with customers that our professional services (PS) team had flat out refused to implement. (Huh?) Sales being sales, they found partners willing to do the implementations and simply rode over the objections of our quite qualified PS team.
When I asked our generally sales-supportive PS team why they refused to do these implementations, they said, “because there was a 0% chance that the customer could be successful.” And they, of course, were right. 100% of those customers failed in implementation and 100% of them churned.
I call this “inception churn,” because it’s churn that’s effectively built in from inception — the customer is sent, along with a partner, on a doomed journey to solve a problem that the system was never designed to solve. Sales may be in optimistic denial. Pre-sales consulting knows deep down that there’s a problem, but doesn’t want to admit it — after all, they usually work in the sales team. Professional services can see the upcoming trainwreck but doesn’t know how to stop it, so they are either forced to try and catch the falling anvil or, better yet, duck out and let a partner — particularly a new one who doesn’t know any better — try to catch it instead.
In startups that are largely driven by short-term, sales-oriented metrics, there will always be the temptation to take a high-risk deal today, live to fight another day, and hope that someone can make it work before it renews. This problem is compounded when customers sign two- or three-year deals because the eventual day of reckoning is pushed into the distant future, perhaps beyond the mean survival expectation of the chief revenue officer (CRO).
Quality startups simply cannot allow these deals to happen:
They burn money because you don’t earn back your CAC. If your customer acquisition cost ratio is 1.5 and your gross margins are 75%, it takes you two years simply to break even on the cost of sale. When a 100-unit customer fails to renew after one year, you spent 175 units, received 100 units, and thus lost 75 units on the transaction — not even looking at G&A costs.
They burn money in professional services. Let’s say your PS team can’t refuse the implementation. You take a 100-unit customer, sell them 75 units of PS to do the implementation, probably spend 150 units of PS trying to get the doomed project to succeed, eventually fail, and lose another 75 units in PS. (And that’s only if they actually pay you for the first 75.) So on a 100-unit sale, you are now down 150 to 225 units.
They destroy your reputation in the market. SaaS startup markets are small. Even if the eventual TAM is large, the early market is small in the sense that you are probably selling to a close-knit group of professionals, all in the same geography, all doing the same job. They read the same blogs. They talk to the same analysts and consultants. They meet each other at periodic conferences and cocktail parties. You burn one of these people and they’re going to tell their friends — either via these old-school methods over drinks or via more modern methods such as social media platforms (e.g., Twitter) or software review sites (e.g., G2).
They burn out your professional services and customer success teams. Your PS consultants get burned out trying to make the system do something they know it wasn’t designed to do. Your customer success managers (CSMs) get tired of being handed customers who are DOA (dead on arrival) where there’s virtually zero chance of avoiding churn.
They wreck your SaaS metrics and put future financings in danger. These deals drive up your churn rate, reduce your expansion rate, and reduce your customer lifetime value. If you mix enough of them into an otherwise-healthy SaaS business, it starts looking sick real fast.
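The CAC arithmetic in the first point above can be sketched in a few lines. A minimal back-of-envelope model, using only the assumptions stated in the text (a CAC ratio of 1.5, 75% gross margin, and a 100-unit deal that churns after one year):

```python
# Economics of a 100-unit deal that churns after one year.
deal_arr = 100          # units of ARR
cac_ratio = 1.5         # S&M spent per unit of new ARR
gross_margin = 0.75     # so COGS is 25% of ARR per year

cost_of_sale = deal_arr * cac_ratio     # 150 units of S&M
cogs = deal_arr * (1 - gross_margin)    # 25 units to run the service for a year

spent = cost_of_sale + cogs             # 175 units out
received = deal_arr                     # 100 units in (one year's subscription)
loss = spent - received                 # 75 units lost, before G&A

print(f"Lost {loss:.0f} units on a {deal_arr}-unit deal")
```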
So what can we do about all this? Clearly, some sort of check-and-balance is needed, but what?
Pay salespeople on the renewal, so they care if the customer is successful? Maybe this could work, but most companies want to keep salespeople focused on new sales.
Pay the CRO on renewal, so he/she keeps an honest eye on sales and sales management? This might help, but again, if a CRO is missing new sales targets, he/she is probably in a lot more trouble than missing renewals — especially if he/she can pin the renewal failures on the product, professional services, or partners.
Separate the CRO and CCO (Chief Customer Officer) jobs as two independent direct reports to the CEO. I am a big believer in this because now you have a powerful, independent voice representing customer success and renewals outside of the sales team. This is a great structure, but it only tells you about the problems after, sometimes quarters or years after, they occur. You need a process that tells you about them before they occur.
The Prospective Customer Success Review Committee
Detecting and stopping inception churn is hard, because there is so much pressure on new sales in startups and I’m proposing to literally create the normally fictitious “sales prevention team” — which is how sales sometimes refers to corporate in general, making corporate the butt of many jokes. More precisely, however, I’m saying to create the bad-sales prevention team.
To do so, I’m taking an idea from Japanese manufacturing, the Andon Cord, and attaching a committee to it. The Andon Cord is a cord that runs the length of an assembly line and gives anyone working along the line the power to stop it in order to address problems. If you see a car where the dashboard is not properly installed, rather than letting it just move down the line, you can pull the cord, stop the line, and get the problem fixed upstream, rather than hoping QA finds it later or shipping a defective product to a customer.
To prevent inception churn, we need two things:
A group of people who can look holistically at a high-risk deal and decide if it’s worth taking. I call that group the Prospective Customer Success Review Committee (the PCSRC). It should have high-level members from sales, presales, professional services, customer success, and finance.
And a means of flagging a deal for review by that committee — that’s the Andon Cord idea. You need to let everyone who works on deals know that there is a mechanism (e.g., an email list, a field in SFDC) by which they can flag a deal for PCSRC review. Your typical flaggers will be in either pre-sales or post-sales consulting.
I know there are lots of potential problems with this. The committee might fail to do its job and yield to pressure to always say yes. Worse, sales can start to punish those who flag deals such that suspect deals are never flagged and/or people feel they need an anonymous way to flag them. But these are manageable problems in a healthy culture.
Moreover, simply calling the group together to talk about high-risk deals has two, potentially non-obvious, benefits:
In some cases, lower risk alternatives can be proposed and presented back to the customer, to get the deal more into the known success envelope.
In other cases, sales will simply stop working on bad deals early, knowing that they’ll likely end up in the PCSRC. In many ways, I think this is the actual success metric — the number of deals that we not only didn’t sign, but where we stopped work early, because we knew the customer had little to no chance of success.
I don’t claim to have either fully deployed or been 100% successful with this concept. I do know we made great strides in reducing inception churn at Host and I think this was part of it. But I’m also happy to hear your ideas on either approaching the problem from scratch and/or improving on the basic framework I’ve started here.
# # #
 Especially if they are prepaid.
 If CROs last on average only 19 to 26 months, then how much does a potentially struggling CRO actually care about a high-risk deal that’s going to renew in 24 months?
 150 units in S&M to acquire them and 25 units in cost of goods sold to support their operations.
 I can’t claim to have gotten this idea working at more than 30-40% at Host. For example, I’m pretty sure you could find people at the company who didn’t know about the PCSR committee or the Andon Cord idea; i.e., we never got it fully ingrained. However, we did have success in reducing inception churn and I’m a believer that success in such matters is subtle. We shouldn’t measure success by how many deals we reject at the meeting, but instead by how much we reduce inception churn by not signing deals that never should have been signed.
 Anonymous can work if it needs to. But I hope in your company it wouldn’t be required.
[Editor’s note: revised 3/27/17 with changes to some definitions.]
It’s been nearly three years since my original post on calculating SaaS renewal rates and I’ve learned a lot and seen a lot of new situations since then. In this post, I’ll provide a from-scratch overhaul on how to calculate churn in an enterprise SaaS company.
While we are going to need to “get dirty” in the detail here, I continue to believe that too many people are too macro and too sloppy in calculating these metrics. The details matter because these rates compound over time, so the difference between a 10% and 20% churn rate turns into a 100% difference in cohort value after 7 years. Don’t be too busy to figure out how to calculate them properly.
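The compounding claim is easy to verify. A sketch, assuming a fixed cohort with constant annual churn and six renewal cycles (i.e., the cohort’s value entering year 7, which matches the footnote’s 53 vs. 26):

```python
# Value of a 100-unit cohort under constant annual churn.
def cohort_value(start_arr, churn_rate, renewal_cycles):
    return start_arr * (1 - churn_rate) ** renewal_cycles

v10 = cohort_value(100, 0.10, 6)   # ~53 units at 10% churn
v20 = cohort_value(100, 0.20, 6)   # ~26 units at 20% churn
print(round(v10), round(v20))
```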
The Leaky Bucket Full of ARR
I conceptualize SaaS companies as leaky buckets full of annual recurring revenue (ARR). Every time period, the sales organization pours more ARR into the bucket and the customer success (CS) organization tries to prevent water from leaking out.
This drives the leaky bucket equation, which I believe should always be the first four lines of any SaaS company’s financial statements:
Starting ARR + new ARR – churn ARR = ending ARR
Here’s an example, where I start with those four lines, and added two extra (one to show a year over year growth rate and another to show “net new ARR” which offsets new vs. churn ARR):
For more on how to present summary SaaS startup financials, go here.
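The leaky bucket equation, plus the two extra lines, can be sketched directly; all numbers here are illustrative:

```python
# Leaky bucket equation: starting ARR + new ARR - churn ARR = ending ARR.
starting_arr = 10000
new_arr = 3000
churn_arr = 1000

ending_arr = starting_arr + new_arr - churn_arr   # 12000
net_new_arr = new_arr - churn_arr                 # 2000: new ARR net of churn
growth_rate = ending_arr / starting_arr - 1       # 20% growth for the period

print(ending_arr, net_new_arr, f"{growth_rate:.0%}")
```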
Half-Full or Half-Empty: Renewals or Churn?
Since the renewal rate is simply one minus the churn rate, the question is which we should calculate. In the past, I favored splitting the difference, whereas I now believe it’s simpler just to talk about churn. While this may be the half-empty perspective, it’s more consistent with what most people talk about and is more directly applicable, because a common use of a churn rate is as a discount rate in a net present value (NPV) formula.
Thus, I now define the world in terms of churn and churn rates, as opposed to renewals and renewal rates.
Terminology: Shrinkage and Expansion
For simplicity, I define the following two terms:
Shrinkage = anything that makes ARR decrease. For example, if the customer dropped seats or was given a discount in return for signing a multi-year renewal.
Expansion = anything that makes ARR increase, such as price increases, seat additions, upselling from a bronze to a gold edition, or cross-selling new products.
Key Questions to Consider
The good news is that any churn rate calculation is going to be some numerator over some denominator. We can then start thinking about each in more detail.
Here are the key questions to consider for the numerator:
What should we count? Number of accounts, annual recurring revenue (ARR), or something else like renewal bookings?
If we’re counting ARR should we think at the product-level or account-level?
To what extent should we offset shrinkage with expansion in calculating churn ARR? 
When should we count what? What about early and late renewals? What about along-the-way expansion? What about churn notices or non-payment?
Here are the key questions to consider for the denominator:
Should we use the entire ARR pool, that portion of the ARR pool that is available to renew (ATR) in any given time period, or something else?
If using the ATR pool, for any given renewing contract, should we use its original value or its current value (e.g., if there has been upsell along the way)?
What Should We Count? Logos and ARR
I believe the two metrics we should count in churn rates are
Logos (i.e., number of customers). This provides a gross indication of customer satisfaction unweighted by ARR, so you can answer the question: what percent of our customer base is turning over?
ARR. This provides a very important indication on the value of our SaaS annuity. What is happening to our ARR pool?
I would stay completely away from any SaaS metrics based on bookings (e.g., a bookings CAC, TCV, or bookings-based renewals rate). These run counter to the point of SaaS unit economics.
Gross and Net Shrinkage; Account-Level Churn
Let’s look at a quick example to demonstrate how I now define gross and net shrinkage as well as account-level churn.
Gross shrinkage is the sum of all the shrinkage. In the example, 80 units.
Net shrinkage is the sum of the shrinkage minus the sum of the expansion. In the example, 80-70 = 10 units.
To calculate account-level churn, we proceed, account by account, and look at the change in contract value, separating upsell from the churn. The idea is that while it’s OK to offset shrinkage with expansion within an account that we should not do so across accounts when working at the account level . This has the effect of splitting expansion into offset (used to offset shrinkage within an account) and upsell (leftover expansion after all account-level shrinkage has been offset). In the example, account-level churn is 30 units.
Note the important point here that how we calculate churn — and specifically how we use expansion ARR to offset shrinkage — affects not only our churn rates, but our reported upsell rates as well. Should we proudly claim 70 units of upsell (and less proudly 80 units of churn), 30 units of churn and 20 of upsell, or simply 10 units of churn? I vote for the second.
While working at the account level may seem odd, it is how most SaaS companies work operationally. First, because they charter customer success managers (CSMs) to think at the account level, working account by account and doing everything they can to preserve and/or increase the value of the account. Second, because most systems work at, and finance people think at, the account level — e.g., “we had a customer worth 100 units last year, and they are worth 110 units this year, so that means upsell of 10 units. I don’t care how much is price increase vs. swapping some of product A for product B.”
So, when a SaaS company reports “churn ARR,” in its leaky bucket analysis, I believe they should report neither gross churn nor net churn, but account-level churn ARR.
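The three definitions can be made concrete in code. A minimal sketch, with hypothetical account names and per-account shrinkage/expansion chosen to reproduce the totals in the example (gross 80, net 10, account-level churn 30, upsell 20):

```python
# Per-account ARR shrinkage and expansion for the period (units of ARR).
accounts = {
    "Alpha":   {"shrinkage": 50, "expansion": 20},
    "Bravo":   {"shrinkage": 30, "expansion": 30},
    "Charlie": {"shrinkage": 0,  "expansion": 20},
}

gross_shrinkage = sum(a["shrinkage"] for a in accounts.values())                  # 80
net_shrinkage = gross_shrinkage - sum(a["expansion"] for a in accounts.values())  # 10

# Account-level: offset shrinkage with expansion only within each account.
account_level_churn = sum(max(a["shrinkage"] - a["expansion"], 0)
                          for a in accounts.values())                             # 30
# Upsell is whatever expansion is left after offsetting within the account.
upsell = sum(max(a["expansion"] - a["shrinkage"], 0)
             for a in accounts.values())                                          # 20

print(gross_shrinkage, net_shrinkage, account_level_churn, upsell)
```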
Timing Issues and the Available to Renew (ATR) Concept
Churn calculations bring some interesting challenges such as early/late renewals, churn notices, non-payment, and along-the-way expansion.
A renewals booking should always be taken in the period in which it is received. If a contract expires on 6/30 and the renewal is received on 6/15 it should show up in 2Q, and if received on 7/15 it should show up in 3Q.
For churn rate calculations, however, the customer success team needs to forecast what is going to happen with a late renewal. For example, if we have a board meeting on 7/12 and a $150K ARR renewal due 6/30 has not yet come in, we need to proceed based on what the customer has said. If the customer is actively using the software and the CFO has promised a renewal but is tied up on a European vacation, I would mark the numbers “preliminary” and count the contract as renewed. If, however, the customer has not used the software in months and will not return our phone calls, I would count the contract as churned.
Suppose we receive a churn notice on 5/1 for a contract that renews on 6/30. When should we count the churn? A Bessemer SaaS fanatic would point to their definition of committed monthly recurring revenue (CMRR) and say we should remove the contract from the MRR base on 5/1. While I agree with Bessemer’s views in general — and specifically on things like preferring ARR/MRR to ACV and TCV — I get off the bus on the whole notion of “committed” ARR/MRR and the ensuing need to remove the contract on 5/1. Why?
In point of fact the customer has licensed and paid for the service through 6/30.
The company will recognize revenue through 6/30 and it’s much easier to do so correctly when the ARR is still in the ARR base.
Operationally, it’s defeatist. I don’t want our company to give up and say “it’s over, take them out of the ARR base.” I want our reaction to be, “so they think they don’t want to renew – we’ve got 60 days to change their mind and keep them in.” 
We should use the churn notice (and, for that matter, every other communication with the customer) as a way of improving our quarterly churn forecast, but we should not count churn until the contract period has ended, the customer has not renewed, and the customer has maintained their intent not to renew in coming weeks.
Non-payment, while hopefully infrequent, is another tricky issue. What do we do if a customer gives us a renewal order on 6/30, payable in 30 days, but hasn’t paid after 120? While the idealist in me wants to match the churn ARR to the period in which the contract was available to renew, I would probably just show it as churn in the period in which we gave up hope on the receivable.
Expansion Along the Way (ATW)
Non-payment starts to introduce the idea of timing mismatches between ARR-changing events and renewals cohorts. Let’s consider a hopefully more frequent case: ARR expansion along the way (ATW). Consider this example.
To decide how to handle this, let’s think operationally, both about how our finance team works and, more importantly, about how we want our customer success managers (CSMs) to think. Remember that we want CSMs to each own a set of customers, and we want them not only to protect the ARR of each customer but to expand it over time. If we credit along-the-way upsell in our rate calculations at renewal time, we are shooting ourselves in the foot. Look at customer Charlie. He started out with 100 units and bought 20 more in 4Q15, so as we approach renewal time, Charlie actually has 120 units available to renew (ATR), not 100. We want our CSMs basing their success on the 120, not the 100. So the simple rule is to base everything not on the original cohort but on the available to renew (ATR) entering the period.
This raises two questions:
When do we count the along-the-way upsell bookings?
How can we reflect those 40 units in some sort of rate?
The answer to the first question is, as your finance team will invariably conclude, to count them as they happen (e.g., in 4Q15 in the above example).
The answer to the second question is to use a retention rate, not a churn rate. Retention rates are cohort-based, so to calculate the net retention rate for the 2Q15 cohort, we divide its present value of 535 by its original value of 500 and get 107%.
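Run forwards from the cohort, the calculation is a one-liner; a sketch using the numbers above (the 2Q15 cohort, originally 500 units, worth 535 today):

```python
# Cohort-based net retention: current cohort ARR / original cohort ARR.
cohort_original_arr = 500   # 2Q15 cohort at signing
cohort_current_arr = 535    # same cohort today, after renewals, shrinkage,
                            # churn, and along-the-way upsell

net_retention = cohort_current_arr / cohort_original_arr
print(f"{net_retention:.0%}")   # 107%
```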
Never, ever calculate a retention rate in reverse — i.e., starting with a group of current customers and looking backwards at their ARR one year ago. You will produce a survivor-biased answer which, stunningly, I have seen some public companies publish. Always run cohort analyses forwards to eliminate survivor bias.
Finally, we need to consider how to address off-cycle (or extra-cohort) activity in calculating churn and related rates. Let’s do this by using a big picture example that includes everything we’ve discussed thus far, plus off-cycle activity from two customers who are not in the 2Q16 ATR cohort: (1) Foxtrot, who purchased in 3Q14, renewed in 3Q15, and who has not paid, and (2) George, who purchased in 3Q15, who is not yet up for renewal, but who purchased 50 units of upsell in 2Q16.
Foxtrot should count as churn in 2Q16, the period in which we either lost hope of collection or our collections policy dictated that we needed to de-book the deal.
George should count as expansion in 2Q16, the period in which the expansion booking was taken.
The trick is that neither Foxtrot nor George is on a 2Q renewal cycle, so neither is included in the 2Q16 ATR cohort. I believe the correct way to handle this is:
Both should be factored into gross, net, account-level churn, and upsell.
For rates where we include them in the numerator, for consistency’s sake we must also include them in the denominator. That means putting the shrinkage in the numerator and adding the ATR of a shrinking (or lost) account in the denominator of a rate calculation. I’ll call this the “+” concept, and define ATR+ as inclusive of such additional logos or ARR resulting from off-cycle accounts.
We are now in the position to define and calculate the churn rates that I use and track:
Simple churn rate = net shrinkage / starting period ARR * 4. Or, in English, the net change in ARR from existing customers divided by starting period ARR (multiplied by 4 to annualize the rate which is measured against the entire ARR base). As the name implies, this is the simplest churn rate to calculate. This rate will be negative whenever expansion is greater than shrinkage. Starting period ARR includes both ATR and non-ATR contracts (including potentially multi-year contracts) so this rate takes into account the positive effects of the non-cancellability of multi-year deals. Because it takes literally everything into account, I think this is the best rate for valuing the annuity of your ARR base.
Logo churn rate = number of discontinuing logos / number of ATR+ logos. This rate tells us the percent of customers who, given the chance, chose to discontinue doing business with us. As such, it provides an ARR-unweighted churn rate, providing the best sense of “how happy” our customers are, knowing that there is a somewhat loose correlation between happiness and renewal. Remember that ATR+ means to include any discontinuing off-cycle logos, so the calculation is 1/16 = 6.3% in our example.
Retention rate = current ARR [time cohort] / time-ago ARR [time cohort]. In English, the current ARR from some time-based cohort (e.g., 2Q15) divided by the year-ago ARR from that same cohort. Typically we do this for the one-year-ago or two-years-ago cohorts, but many companies track each quarter’s new customers as a cohort which they measure over time. Like simple churn, this is a great macro metric that values the ARR annuity, all in.
Gross churn rate = gross shrinkage / ATR+. This churn rate is important because it reveals the difference between companies that have high shrinkage offset by high expansion and companies which simply have low shrinkage. Gross churn is a great metric because it simply shows the glass half-empty view: at what rate is ARR leaking out of your bucket before you offset it with refills in the form of expansion ARR.
Account-level churn rate = account-level churn / ATR+. This churn rate foots to the reported churn ARR in our leaky bucket analysis (which uses account-level churn), partially offsets shrinkage with expansion at an account level, and is how most SaaS companies actually calculate churn. While perhaps counter-intuitive, it reflects a philosophy of examining, on an account basis, what happens to the value of each of our customers when we allow shrinkage to be offset by expansion (which is what we want our CSM reps doing), leaving any excess as upsell. This should be our primary churn metric.
Net churn rate = net shrinkage / ATR+. This churn rate offsets shrinkage with expansion not at the account level, but overall. This is similar to the simple churn rate but with the disadvantage of looking only at ATR and not factoring in the positive effects of non-cancellability of multi-year deals. Ergo, I prefer using the simple churn rate to the net churn rate in valuing the SaaS annuity.
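To pull the definitions together, here is a sketch computing each rate from a single set of hypothetical quarterly inputs (the logo numbers match the 1/16 example above; the ARR figures are illustrative):

```python
# Quarterly inputs (units of ARR except logo counts).
starting_arr = 4000            # entire ARR base entering the quarter
atr_plus_arr = 1000            # ATR+ ARR: available to renew, incl. off-cycle items
atr_plus_logos = 16            # logos up for renewal, incl. off-cycle
discontinuing_logos = 1
gross_shrinkage = 80
expansion = 70
account_level_churn_arr = 30   # shrinkage offset by expansion within accounts only

net_shrinkage = gross_shrinkage - expansion              # 10

simple_churn = net_shrinkage / starting_arr * 4          # annualized: 1.0%
logo_churn = discontinuing_logos / atr_plus_logos        # 6.3%
gross_churn = gross_shrinkage / atr_plus_arr             # 8.0%
account_churn = account_level_churn_arr / atr_plus_arr   # 3.0%
net_churn = net_shrinkage / atr_plus_arr                 # 1.0%
```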
 The 10% churn group decays from 100 units to 53 in value after 7 years, while the 20% group decays to 26.
 We’ll sidestep the question of who is responsible for installed-based expansion in this post because companies answer it differently (e.g., sales, customer success, account management) and the good news is we don’t need to know who gets credited for expansion to calculate churn rates.
 Discussing churn in dollars and renewals in rates.
 For example, if a customer signed a one-year contract for 100 units and then was offered a 5% discount to sign a three-year renewal, you would generate 5 units of ARR churn.
 And yes, sometimes unhappy customers do renew (e.g., if they’ve been too busy to replace you) and happy customers don’t (e.g., if they get a new key executive with different preferences) but counting logos still gives you a nice overall indication.
 Note that I have capitulated to the norm of saying “gross” churn means before offset and thus “net” churn means after netting out shrinkage and expansion. (Beware confusion as this is the opposite of my prior position where I defined “net” to mean “net of expansion,” i.e., what I’d now call “gross.”)
 Otherwise, you can just look at net shrinkage which offsets all shrinkage by all expansion. The idea of account-level churn is to restrict the ability to offset shrinkage with expansion across accounts, in effect, telling your customer success reps that their job is to, contract by contract, minimize shrinkage and ensure expansion.
 “Offset” meaning ARR used to offset shrinkage that ends up neither churn nor upsell.
 While this approach works fine for most (inherently single-product) SaaS startups, it does not work as well for large multi-product SaaS vendors where the failure of product A might be totally or partially masked by the success of product B. (In our example, I deliberately had all the shrinkage coming from downsell of product A to make that point. The product or general manager for product A should own the churn number for that product and be trying to find out why it churned 80 units.)
 MRR = monthly recurring revenue = 1/12th of ARR. Because enterprise SaaS companies typically run on an annual business rhythm, I prefer ARR to MRR.
 Worse yet, if I churn them out on 5/1 and do succeed in changing their mind, I might need to recognize it as “new ARR” on 6/30, which would also be wrong.
 The more popular way of handling this would have been to try and extend the original contract and co-terminate with the upsell in 4Q16, but that doesn’t affect the underlying logic, so let’s just pretend we tried that and it didn’t work for the customer.
 Whether you call it a de-booking or bad receivable, Foxtrot was in the ARR base and needs to come out. Unlike the case where the customer has paid for the period but is not using the software (where we should churn it at the end of the contract), in this case the 3Q15 renewal was effectively invalid and we need to remove Foxtrot from the ARR base at some defined number of days past due (e.g., 90) or when we lose hope of collection (e.g., bankruptcy).
 I think the smaller you are the more important this correction is to ensure the quality of your numbers. As a company gets bigger, I’d just drop the “+” concept whenever it’s only changing things by a rounding error.
 Use NPS surveys for another, more precise, way of measuring happiness.
I was chatting with a fellow SaaS executive the other day and the conversation turned to churn and renewal rates. I asked how he calculated them and he said:
Well, we take every customer who was also a customer 12 months ago and then add up their ARR 12 months ago and add up their ARR today, and then divide today’s ARR by year-ago ARR to get an overall retention or expansion rate.
Well, that sounds dandy until you think for a minute about survivor bias, the often inadvertent logical error in analyzing data from only the survivors of a given experiment or situation. Survivor bias is subtle, but here are some common examples:
I first encountered survivor bias in mutual funds when I realized that look-back studies of prior 5- or 10-year performance include only the funds still in existence today. (In the same spirit: if you eliminate my bogeys, I’m actually a below-par golfer.)
My favorite example comes from World War II, when analysts examined the pattern of anti-aircraft damage on returning bombers and argued to strengthen them in the places that were most often hit. This was exactly wrong — the places where returning bombers were hit were already strong enough. You needed to reinforce the places where the downed bombers were hit.
So let’s turn back to churn rates. If you’re going to calculate an overall expansion or retention rate, which way should you approach it?
Start with a list of customers today, look at their total ARR, and then go compare that to their ARR one year ago, or
Start with a list of customers from one year ago and look at their ARR today.
Number 2 is the obvious answer. You should include the ARR from customers who choose to stop being customers in calculating an overall churn or expansion rate. Calculating it the first way can be misleading because you are looking at the ARR expansion only from customers who chose to continue being customers.
Let’s make this real via an example.
The ARR today is contained in the boxed area. The survivor bias question comes down to whether you include or exclude the orange rows from year-ago ARR. The difference can be profound. In this simple example, the survivor-biased expansion rate is a nice 111%. However, the non-biased rate is only 71% which will get you a quick “don’t let the door hit your ass on the way out” at most VCs. And while the example is contrived, the difference is simply one of calculation off identical data.
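The two calculations can be sketched on a small hypothetical data set (names and numbers invented, chosen so the biased rate comes out near 111% and the unbiased rate near 71%, as in the example):

```python
# Year-ago vs. today ARR for five customers; D and E churned during the year.
customers = [
    {"name": "A", "arr_year_ago": 300, "arr_today": 340},
    {"name": "B", "arr_year_ago": 300, "arr_today": 330},
    {"name": "C", "arr_year_ago": 300, "arr_today": 330},
    {"name": "D", "arr_year_ago": 250, "arr_today": 0},   # churned
    {"name": "E", "arr_year_ago": 250, "arr_today": 0},   # churned
]

survivors = [c for c in customers if c["arr_today"] > 0]

# Survivor-biased: start from today's customers and look backwards.
biased = (sum(c["arr_today"] for c in survivors) /
          sum(c["arr_year_ago"] for c in survivors))      # 1000/900 = ~111%

# Unbiased: start from the year-ago cohort and look forwards.
unbiased = (sum(c["arr_today"] for c in customers) /
            sum(c["arr_year_ago"] for c in customers))    # 1000/1400 = ~71%

print(f"biased: {biased:.0%}, unbiased: {unbiased:.0%}")
```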
Do companies use survivor-biased calculations in real life? Let’s look at my post on the Hortonworks S-1 where I quote how they calculate their net expansion rate:
We calculate dollar-based net expansion rate as of a given date as the aggregate annualized subscription contract value as of that date from those customers that were also customers as of the date 12 months prior, divided by the aggregate annualized subscription contract value from all customers as of the date 12 months prior.
When I did my original post on this, I didn’t even catch it. But therein lies the subtle head of survivor bias.
# # #
I have not tracked Hortonworks in the meantime, so I don’t know if they still report this metric, at what frequency, how they currently calculate it, etc.
To the extent that “everyone calculates it this way” is true, then companies might report it this way for comparability, but people should be aware of the bias. One approach is to create a present back-looking and a past forward-looking metric and show both.
See my FAQ for additional disclaimers, including that I am not a financial analyst and do not make recommendations on stocks.
One thing that amazes me is when I hear people talk about how they analyze churn in a cloud, software as a service (SaaS), or other recurring revenue business.
You hear things like:
“17% of our churn comes from emerging small business (ESB) segment, which is normal because small businesses are inherently unstable.”
“22% of our churn comes from companies in the $1B+ revenue range, indicating that we may have a problem meeting enterprise needs.”
“40% of the customers in the residential mortgage business churned, indicating there is something wrong with our product for that vertical.”
There are three fallacies at work here.
The first is assumed causes. If you know that 17% of your churn comes from the ESB segment, you know one and only one thing: that 17% of your churn comes from the ESB segment. Asserting small business instability as the cause is pure speculation. Maybe they did go out of business or get bought. Or maybe they didn’t like your product. Or maybe they did like your product, but decided it was overkill for their needs. If you want to know how much of your churn came from a given segment, ask a finance person. If you want to know why a customer churned, ask the customer. Companies with relatively small customer bases can do it by phone. Companies with big bases can use an online survey. It’s not hard. Use metrics to figure out where your churn comes from. Use surveys to figure out why.
The second is not looking at propensities and the broader customer base. If I said that 22% of your annual recurring revenue (ARR) comes from $1B+ companies, then you shouldn’t be surprised that 22% of your churn comes from them as well. If I said that 50% of your ARR comes from $1B+ companies (and they were your core target market), then you’d be thrilled that only 22% of your churn comes from them. The point isn’t how much of your churn comes from a given segment: it’s how much of your churn comes from a given segment relative to how much of your overall business comes from that segment. Put differently: what is the propensity to churn in one segment versus another?
And you can’t perform that analysis without getting a full data set — of both customers who did churn and customers who didn’t. That’s why I say you can’t analyze churn by analyzing churn. Too many people, when tasked with churn analysis, say: “quick, get me a list of all the customers who churned in the past 6 months and we’ll look for patterns.” At that instant you are doomed. All you can do is decompose churn into buckets; you will know nothing of propensities.
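As a sketch of what that full-data-set analysis looks like, here is a small Python example. The segment names and dollar figures are invented for illustration (chosen so the enterprise segment produces 22% of churn, echoing the numbers above); the key is comparing each segment’s share of churn against its propensity to churn, i.e., churned ARR over total ARR in that segment:

```python
# Hypothetical ARR and churned ARR by segment, in $M (illustrative numbers only).
segments = {
    "$1B+ enterprise": {"arr": 50.0, "churned_arr": 2.2},
    "mid-market":      {"arr": 30.0, "churned_arr": 4.0},
    "ESB":             {"arr": 20.0, "churned_arr": 3.8},
}

total_churn = sum(s["churned_arr"] for s in segments.values())
for name, s in segments.items():
    share_of_churn = s["churned_arr"] / total_churn  # the bucketed view
    propensity = s["churned_arr"] / s["arr"]         # churn rate within the segment
    print(f"{name:16s} share of churn: {share_of_churn:5.0%}  propensity: {propensity:5.1%}")
```

With these numbers, the enterprise segment produces 22% of total churn yet has by far the lowest churn rate (4.4%), while ESB churns at over four times that rate. Bucketing alone would have pointed the inquiry at exactly the wrong segment.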
For example, if you noticed that in one country that a stunning 99% of churn came from customers with blue eyes, you might be prompted to launch an immediate inquiry into how your product UI somehow fails for blue-eyed customers. Unless, of course, the country was Estonia where 99% of the population has blue eyes, and ergo 99% of your customers do. Bucketing churn buys you nothing without knowing propensities.
The last is correlation vs. causation. Knowing that a large percentage of customers in the residential mortgage segment churned (or even have higher propensity to churn) doesn’t tell you why they are churning. Perhaps your product does lack functionality that is important in that segment. Or perhaps it’s 2008, the real estate crisis is in full bloom, and those customers aren’t buying anything from anybody. The root cause is the mortgage crisis, not your product. Yes, there is a high correlation between customers in that vertical and their churn rate. But the cause isn’t a poor product fit for that vertical, it’s that the vertical itself is imploding.
A better, and more fun, example comes from The Halo Effect, which tells the story that a famous statistician once showed a precise correlation between the increase in the number of Baptist preachers and the increase in arrests for public drunkenness during the 19th Century. Do we assume that one caused the other? No. In fact, the underlying driver was the general increase in the population — with which both were correlated.
So, remember these two things before starting your next churn analysis:
If you want to know why someone churned, ask them.
If you want to analyze churn, don’t just look at who churned; compare who churned to who didn’t.
I’m Dave Kellogg, advisor, director, consultant, angel investor, and blogger focused on enterprise software startups. I am an executive-in-residence (EIR) at Balderton Capital and principal of my own eponymous consulting business.
I bring an uncommon perspective to startup challenges having 10 years’ experience at each of the CEO, CMO, and independent director levels across 10+ companies ranging in size from zero to over $1B in revenues.
From 2012 to 2018, I was CEO of cloud EPM vendor Host Analytics, where we quintupled ARR while halving customer acquisition costs in a competitive market, ultimately selling the company in a private equity transaction.
Previously, I was SVP/GM of the $500M Service Cloud business at Salesforce; CEO of NoSQL database provider MarkLogic, which we grew from zero to $80M over 6 years; and CMO at Business Objects for nearly a decade as we grew from $30M to over $1B in revenues. I started my career in technical and product marketing positions at Ingres and Versant.
I love disruption, startups, and Silicon Valley and have had the pleasure of working in varied capacities with companies including Bluecore, FloQast, GainSight, Hex, MongoDB, Pigment, Recorded Future, and Tableau.