
Survivor Bias in Churn Calculations: Say It’s Not So!

I was chatting with a fellow SaaS executive the other day and the conversation turned to churn and renewal rates.  I asked how he calculated them and he said:

Well, we take every customer who was also a customer 12 months ago and then add up their ARR 12 months ago and add up their ARR today, and then divide today’s ARR by year-ago ARR to get an overall retention or expansion rate.

Well, that sounds dandy until you think for a minute about survivor bias, the often inadvertent logical error in analyzing data from only the survivors of a given experiment or situation.  Survivor bias is subtle, but here are some common examples:

  • I first encountered survivor bias in mutual funds when I realized that look-back studies of prior 5- or 10-year performance include only the funds still in existence today.
  • If you eliminate my bogeys, I’m actually a below-par golfer.
  • My favorite example comes from World War II, when analysts examined the pattern of anti-aircraft fire on returning bombers and argued for strengthening them in the places that were most often hit.  This was exactly wrong — the places where returning bombers were hit were already strong enough.  You needed to reinforce the places where the downed bombers were hit.

So let’s turn back to churn rates.  If you’re going to calculate an overall expansion or retention rate, which way should you approach it?

  1. Start with a list of customers today, look at their total ARR, and then go compare that to their ARR one year ago, or
  2. Start with a list of customers from one year ago and look at their ARR today.

Number 2 is the obvious answer.  You should include the ARR from customers who choose to stop being customers in calculating an overall churn or expansion rate.  Calculating it the first way can be misleading because you are looking at the ARR expansion only from customers who chose to continue being customers.

Let’s make this real via an example.

[Table: survivor-bias example, with churned customers highlighted in orange rows]

The ARR today is contained in the boxed area.  The survivor bias question comes down to whether you include or exclude the orange rows from year-ago ARR.  The difference can be profound.  In this simple example, the survivor-biased expansion rate is a nice 111%.  However, the non-biased rate is only 71%, which will get you a quick “don’t let the door hit your ass on the way out” at most VCs.  And while the example is contrived, the difference is simply one of calculation off identical data.
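To make the arithmetic concrete, here’s a minimal Python sketch of both calculations.  The customer list and ARR figures are invented to reproduce the 111% vs. 71% split above; note that the only thing that changes is the denominator.

```python
# Hypothetical customer list (ARR in $K), invented for illustration.
# year_ago > 0 and today == 0 marks a churned customer (an "orange row").
customers = [
    {"name": "A", "year_ago": 300, "today": 360},
    {"name": "B", "year_ago": 300, "today": 320},
    {"name": "C", "year_ago": 300, "today": 320},
    {"name": "D", "year_ago": 250, "today": 0},   # churned
    {"name": "E", "year_ago": 250, "today": 0},   # churned
]

today_arr = sum(c["today"] for c in customers)

# Survivor-biased: only customers still here today count in the base.
biased_base = sum(c["year_ago"] for c in customers if c["today"] > 0)

# Unbiased: every customer from a year ago counts in the base.
unbiased_base = sum(c["year_ago"] for c in customers)

print(f"Survivor-biased expansion rate: {today_arr / biased_base:.0%}")    # 111%
print(f"Unbiased expansion rate:        {today_arr / unbiased_base:.0%}")  # 71%
```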

Do companies use survivor-biased calculations in real life?  Let’s look at my post on the Hortonworks S-1 where I quote how they calculate their net expansion rate:

We calculate dollar-based net expansion rate as of a given date as the aggregate annualized subscription contract value as of that date from those customers that were also customers as of the date 12 months prior, divided by the aggregate annualized subscription contract value from all customers as of the date 12 months prior.

When I did my original post on this, I didn’t even catch it.  But therein lies the subtlety of survivor bias.

# # #

Disclaimers:

  • I have not tracked Hortonworks in the meantime, so I don’t know if they still report this metric, at what frequency, how they currently calculate it, etc.
  • To the extent that “everyone calculates it this way” is true, companies might report it this way for comparability, but people should be aware of the bias.  One approach is to calculate both a backward-looking metric (starting from today’s customers) and a forward-looking metric (starting from last year’s customers) and show both.
  • See my FAQ for additional disclaimers, including that I am not a financial analyst and do not make recommendations on stocks.

Joining the Granular Board of Directors

I’m very happy to say that I’ve joined the Board of Directors of Granular.  In this post, I’ll provide some commentary that goes beyond the formal announcement.

I think all CEOs should sit on boards because it makes you a better CEO.  You get to remove the blinders that come from your own (generally all-consuming) company, you build a network of people you can rely upon for answering typical CEO questions, and most importantly, you get to turn the tables and better understand how things might look from the board’s perspective at your own company.

Let’s share a bit about Granular.

  • Granular is a cloud computing company, specifically a vertical SaaS company, aimed at improving the efficiency of farms.
  • They have a world-class team with the usual assortment of highly intelligent overachievers and with an unusual number of physicists on the executive team, which is always a good thing in a big data company.  (While you might think data scientists are computer science or stats majors, a large number of them seem to come from physics.)

To get a sense of the team’s DNA, here’s a word cloud of the leadership page.

[Image: word cloud of the Granular leadership page]

Finally, let’s share a bit about why I decided to join the board.

  • As mentioned, they have a world-class team and I love working with supersmart people.
  • I like vertical strategies.  At MarkLogic, we built the company using a highly vertical strategy.  At Versant, a decade earlier, we turned the company around with a vertical strategy.  At BusinessObjects, while we grew to $1B largely horizontally, as we began to hit scale we used verticals as “+1” kickers to sustain growth.  As a marketeer by trade, I love getting into the mind of and focusing on the needs of the customer, and verticals are a great way to do that.
  • I love the transformational power of the cloud.  (Wait, do I sound too much like @Benioff?)  While cloud computing has many benefits, one of my favorites is that the cloud can bring software to markets and businesses where the technology was previously inaccessible.  This is particularly true with farming, which is a remote, fragmented, and “non-sexy” industry by Silicon Valley standards.
  • I like their angle.  While a lot of farming technology thus far has been focused on precision ag, Granular is taking more of a financial and operations platform approach that is a layer up the stack.  Granular helps farmers make better operational decisions (e.g., which field to harvest when), tracks those decisions, and then as a by-product produces a bevy of data that can be used for big data analysis.
  • I love their opportunity.  Not only is this a huge, untapped market, but there is a two-fer opportunity:  [1] a software service that helps automate operations and [2] an information service opportunity derived from the collected big data.
  • Social good.  The best part is that all these amazing people and great technology come packaged with a built-in social good.  Helping farmers be more productive not only helps feed the world but helps us maximize planetary resource efficiency in so doing.

I thank the Granular team for bringing me onto the board, and look forward to a bright, transformational future.

Don’t Be a Metrics Slave

I love metrics.  I live for metrics.  Every week and every quarter I drown my team in metrics reviews.  Why?  Because metrics are the instrumentation — the flight panel — of our business.   Good metrics provide clear insights.  They cut through politics, spin, and haze.  They spark amazing debates.   They help you understand your business and compare it to others.

I love metrics, but I’ll never be a slave to them.  Far too often in business I see people who are metrics slaves.  Instead of mastering metrics to optimize the business, the metrics become the master and the manager a slave.

I define metrics slavery as what happens when managers stop thinking and work blindly towards achieving a metric, regardless of whether they believe doing so is best for the business.

One great thing about sports analytics is that despite an amazing slew of metrics, everyone remembers it’s the team with the most goals that wins, not the one who took the most shots.  In business, we often get that wrong in both subtle and not-so-subtle ways.

Here are metrics mistakes that often lead to metrics slavery.

  1. Dysfunctional compensation plans, where managers actively and openly work on what they believe are the wrong priorities in response to a compensation plan that drives them to do so. The more coin-operated the type of people in a department, the more carefully you must define incentives.  While strategic marketers might challenge a poorly aligned compensation plan, most salespeople will simply behave exactly as dictated by the compensation plan.  Be careful what you ask for, because you will often get it.
  2. Poor metric selection. Marketers who count leads instead of opportunities are counting shots instead of goals.  I can’t stand to see tradeshow teams giving away valuable items so they can run the card of every passing attendee.  They might feel great about getting 500 leads by the end of the day, but if 200 are people who will never buy, then they are not only useless but actually have negative value because the company’s nurture machine is going to invest fruitless effort in converting them.
  3. Lack of leading indicators. Most managers are more comfortable with solid lagging indicators than they are with squishier leading indicators.  For example, you might argue that leads are a great leading indicator of sales, and you’d be right to the extent that they are good leads.  This then requires you to define “good,” which is typically done using some ABC-style scoring system.  But because the scoring system is complex, subjective, and requires iteration and regression to define, some managers find the whole thing too squishy and say “let’s just count leads.” That’s the equivalent of counting shots, including shots off-goal that never could have scored.  While leading indicators require a great deal of thought to get right, you must include them in your key metrics, lest you create a company of backwards-looking managers.
  4. Poorly-defined metrics. The plus/minus metric in hockey is one of my favorite sports metrics because it measures teamwork, something I’d argue is pretty hard to measure [1].  However, there is a known problem with the plus/minus rating.  It includes time spent on power plays [2] and penalty kills [3].  Among other problems, this unfairly penalizes defenders on the penalty-killing unit, diluting the value of the metric.  Yet, as far as I know, no one has fixed this problem.  So while it’s tracked, people don’t take it too seriously because of its known limitations.  Do you have metrics like this at your company?  If so, fix them.
  5. Self-fulfilling metrics. These are potential leading metrics where management loses sight of the point and accidentally makes their value a self-fulfilling prophecy.  Pipeline coverage (value of opportunities in the pipeline / plan) is such a metric.  Long ago, it was a good leading indicator of plan attainment, but over the past decade literally every sales organization I know has institutionalized beating salespeople unless they have 3x coverage.  What’s happened?  Today, everyone has 3x coverage.  It just doesn’t mean anything anymore.  See this post for a long rant on this topic.
  6. Ill-defined metrics, which happen a lot in benchmarking where we try to compare, for example, our churn rate to an industry average. If you are going to make such comparisons, you must begin with clear definitions or else you are simply counting angels on pinheads.  See this post where I give an example where, off the same data, I can calculate a renewals rate of 69%, 80%, 100%, 103%, 120%, 208%, or 310%, depending on how you choose to calculate.  If you want to do a meaningful benchmark, you better be comparing the 80% to the 80%, not the 208%.
  7. Blind benchmarking. The strategic mistake that managers make in benchmarking is that they try to converge blindly to the industry average.  This reminds me of the Vonnegut short story where ballerinas have to wear sash-weights and the intelligentsia have music blasted into their ears in order to make everyone equal.  Benchmarks should be tools of understanding, not instruments of oppression.  In addition, remember that benchmarks definitionally blend industry participants with different strategies.  One company may heavily invest in R&D as part of a product-leadership strategy.  One may heavily invest in S&M as part of a market-share leadership strategy.  A third may invest heavily in supply chain optimization as part of a cost-leadership strategy.  Aspiring to the average of these companies is a recipe for failure, not success, as you will end up in a strategic No Man’s Land.  In my opinion, this is the most dangerous form of metrics slavery because it happens at the boardroom level, and often with little debate.
  8. Conflicting metrics. Let’s take a concrete example here.  Imagine you are running a SaaS business that’s in a turnaround.  This year bookings growth was flat.  Next year you want to grow bookings 100%.  In addition, you want to converge your P&L over time to an industry average of S&M expenses at 50% of revenues, whereas today you are running at 90%.  While that may sound reasonable, it’s actually a mathematical impossibility.  Why?  Because the company is changing trajectories and in a SaaS business revenues lag bookings by a year.  So next year revenue will be growing slowly [4] and that means you need to grow S&M even slower if you want to meet the P&L convergence goal.  But if you want to meet the 100% bookings growth goal, even with improving efficiency, you’ll need to increase S&M cost by say 70%.  It’s impossible.  #QED.  There will always be a tendency to split the difference in such scenarios, but that is a mistake.  The question is which is the better metric off which to anchor?  The answer, in a SaaS business, is bookings.  Ergo, the correct answer is not to split the difference (which will put the bookings goal at risk) but to recognize that bookings is the better metric and anchor S&M expense to bookings growth (see the sketch just after this list).  This requires a deep understanding of the metrics you use and the courage to confront two conflicting rules of conventional wisdom in so doing.
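To see the impossibility in numbers, here’s a toy Python sketch.  All figures are invented for illustration, including the assumed 20% revenue growth (the lag effect of flat bookings) and a generous 15% sales-efficiency gain:

```python
# Toy turnaround scenario, all figures invented ($M).
this_rev = 100                 # this year's revenue
sm_now = 0.90 * this_rev       # S&M currently runs at 90% of revenue

# Revenue lags bookings by ~a year in SaaS, so next year's revenue
# grows slowly regardless of next year's bookings.  Assume +20%.
next_rev = this_rev * 1.20

# Goal 1: converge S&M toward 50% of revenues.
sm_pnl_goal = 0.50 * next_rev           # = 60, i.e., a 33% *cut* in S&M

# Goal 2: double bookings, even assuming 15% better sales efficiency.
sm_bookings_need = sm_now * 2.0 * 0.85  # = 153, i.e., a 70% *increase*

print(f"S&M allowed by P&L convergence goal: ${sm_pnl_goal:.0f}M")
print(f"S&M needed by bookings growth goal:  ${sm_bookings_need:.0f}M")
# 60 vs. 153: no amount of difference-splitting satisfies both goals.
```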

In the end, metrics slavery, while all too common, is more about the people than the metrics.  Managers need to be challenged to understand metrics.  Managers need to be empowered to define new and better metrics.  Managers must be told to use their brains at all times and never do something simply to move a metric.

If you’re always thinking critically, you’ll never be a metrics slave.  The day you stop, you’ll become one.

# # #

[1] The way it works is simple:  if you’re on the ice when your team scores, you get +1.  If you’re on the ice when the opponent scores you get -1.  When you look at someone’s plus/minus rating over time, you can see, for example, which forwards hustle back on defense and which don’t.

[2] When, thanks to an opponent’s penalty, you have more players on the ice than they do.

[3] When, thanks to your team’s penalty, your opponent has more players on the ice than you do.

[4] Because bookings grew slowly this year.

Churn:  Net-First or Sum-First?

While I’ve already done a comprehensive post on the subject of churn in SaaS companies and some perils in how companies analyze it, in talking with fellow SaaS metrics lovers of late, I’ve discovered a new problem that isn’t addressed by my posts.

The question?  When calculating churn, should you sum first (adding up all the shrinkage ARR) or net first (netting shrinkage against expansion ARR per customer and then summing)?  It seems like a simple question, but like so many subtleties in SaaS metrics, whether you net-first or sum-first, and how you report in so doing, can make a big difference in how you see the business through the numbers.

Let’s see an example.

[Table: per-customer churn detail, breaking out shrinkage and expansion ARR]

So what’s our churn rate:  a healthy -1% or a scary 15%?  The answer is both.  In my other post, I define about five churn rates; when you sum first you get my “net ARR churn” rate [1], which comes in at a rather disturbing 15%.  When, however, you net first, you end up with a healthy -1% (“gross ARR churn”) rate because expansion ARR has more than offset shrinkage.  At my company we track both rates because each tells you a different story.
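Here’s a minimal Python sketch of both calculations, with invented per-customer figures chosen to land on the 15% and -1% rates (ARR in $K, against a $1,000K beginning base):

```python
# Invented per-customer shrinkage and expansion since last renewal ($K).
base_arr = 1000
customers = {
    "customer 1": {"shrink": 50, "expand": 40},
    "customer 2": {"shrink": 40, "expand": 60},
    "customer 3": {"shrink": 60, "expand": 60},
}

# Sum-first: add up all shrinkage, ignoring expansion ("net ARR churn").
sum_first = sum(c["shrink"] for c in customers.values()) / base_arr

# Net-first: net expansion against shrinkage per customer, then sum
# ("gross ARR churn" in my terminology; see footnote [1]).
net_first = sum(c["shrink"] - c["expand"] for c in customers.values()) / base_arr

print(f"Sum-first churn: {sum_first:.0%}")   # 15% -- disturbing
print(f"Net-first churn: {net_first:.0%}")   # -1% -- looks healthy
```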

Thanks to the wonders of math, both the net-first and sum-first calculations take you to the same ending ARR number.  That’s not the problem.

The problem is that many companies report churn not in a format like my table above, but in something simpler that looks like this below [2].

[Table: simplified net-first churn report, one net number per customer]

As you can see, this net-first format doesn’t show expansion and shrinkage by customer.  I think this is dangerous because it can obscure real problems when shrinkage ARR is offset, or more than offset, by expansion ARR.

For example, customer 2 looks great in the second chart (“wow, $20K in negative churn!”).  In the first chart, however, you can see that customer 2 dropped 4 seats of product A and more than offset that by buying 8 seats of product B.  In fact, in the first chart you can see that everyone is dropping product A and buying product B, a pattern hidden in the second chart, which neither breaks out shrinkage from expansion nor provides a comment as to what’s going on.  My advice is simple:  do sum-first churn and report both the “net ARR” and “gross ARR” renewal rates and you’ll get the whole picture.

Aside 1:  The Reclaimed ARR Issue
This debate prompted a second one with my Customers For Life (CFL) team who wanted to introduce a new metric called “reclaimed ARR,” the ARR that would have been lost on renewal but was saved by CFL through cross-sells, up-sells, and price increases.  Thus far, I’m not in love with the concept as it adds complexity, but I understand why they like it and you can see how I’d calculate it below.

[Table: reclaimed ARR calculation]

Aside 2:  Saved ARR
The first aside was prompted by the fact that CFL/renewals teams primarily play defense, not offense.  Like goalies on a hockey team, they get measured by a negative metric (i.e., the churn ARR that got away).  Even when they deliver offsetting expansion ARR, there is still some ARR that gets away, and a lot of their work (in the customer support and customer success parts of CFL) is not about offsetting upsell, it’s about protecting the core of the renewal.  For that reason, so as to reflect that important work in our metrics, we’ve taken a lesson from baseball and the notion of a “save.”  Once the renewals come in, we add up all the ARR that came from customers who were, at any point in time since their last renewal, in our escalated accounts program and call that Saved ARR.  It’s the best metric we’ve found thus far to reflect that important work.

# # #

[1] I have backed into the rather unfortunate position of using the word “net” in two different ways.  When I say “net ARR churn” I mean churn ARR net of (i.e., exclusive of) expansion ARR.  When I say net-first churn, I mean netting out shrinkage vs. expansion first, before summing across customers to get total churn.

[2] Note that I properly inverted the sign because negative churn is good and positive churn is bad.

Average Contract Duration and SaaS Renewals: All Is Not As It Appears

Chatting with some SaaS buddies the other day, we ran into a fun — and fairly subtle — SaaS metrics question.  It went something like this:

VP of Customer Success:  “Our average contract duration (ACD) on renewals was 1.5 years last quarter and –”

VP of Sales:  “– Wait a minute, our ACD on new business is 2.0 years.  If customers are renewing for shorter terms than those of the initial sale, it means they are less confident about future usage at renewal time than they were at the initial purchase.  Holy Moly, that means we have a major problem with the product or with our customer success program.”

Or do we?  At first blush, the argument makes perfect sense.  If new customers sign two-year contracts and renewing ones sign 1.5-year contracts, that would seem to indicate that renewing customers are indeed less bullish on future usage than new ones.  Having drawn that conclusion, you are instantly tempted to blame the product, the customer success team, technical support, or some other factor for the customers’ confidence reduction.

But is there a confidence reduction?  What does it actually mean when your renewals ACD is less than your new business ACD?

The short answer is no.  We’re seeing what I call the “why are there so many frequent flyers on airplanes” effect.  At first blush, you’d think that if ultra-frequent flyers (e.g., United 1K) represent the top 1%, then a 300-person flight might have three or four on board, while in reality it’s more like 20-30.  But that’s just it:  frequent flyers are over-represented on airplanes because they fly more, just like one-year contracts are over-represented in renewals because they renew more.

Let’s look at an example.  We have a company that signs one-year, two-year, and three-year deals.  Let’s assume customers renew for the same duration as their initial contract — so there is no actual confidence reduction in play.  Every deal is $100K in annual recurring revenue (ARR).  We’ll calculate ACD on an ARR-weighted basis.  Let’s assume zero churn.

If we sign five one-year, ten two-year, and fifteen three-year deals, we end up with $3M in new ARR and an ACD of 2.3 years.

[Table: renewals and ACD by year for the example deal mix]

In year 1, only the one-year deals come up for renewal and (since we’ve assumed everyone renews for the same length as their initial term) we have a renewals ACD of one year.  The VP of Sales is probably panicking:  “OMG, customers have cut their ACD from 2.3 to 1.0 years!  Who’s to blame?  What’s gone wrong?!”

Nothing.  Only the one-year contracts had a shot at renewing and they all renewed for one year.

In year 2, both the (re-renewing) one-year and the (initially renewing) two-year contracts come up for renewal.  The ACD is 1.7 — again lower than the 2.3-year new business ACD.  While, again, the decrease in ACD might lead you to suspect a problem, there is nothing wrong.  It’s just math and the fact that the shorter-duration contracts renew more often, which pulls down the renewals ACD.
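If you want to check the math, here’s a short Python sketch that reproduces the example exactly (same deal mix, same renew-for-the-same-duration and zero-churn assumptions):

```python
# 5 one-year, 10 two-year, and 15 three-year deals, $100K ARR each.
deals = [1] * 5 + [2] * 10 + [3] * 15  # contract durations in years

def arr_weighted_acd(durations):
    # Every deal carries identical ARR, so the ARR-weighted ACD
    # reduces to a simple average of durations.
    return sum(durations) / len(durations)

print(f"New business ACD: {arr_weighted_acd(deals):.1f} years")  # 2.3

# A deal of duration d comes up for renewal in years d, 2d, 3d, ...
for year in (1, 2):
    renewing = [d for d in deals if year % d == 0]
    print(f"Year {year} renewals ACD: {arr_weighted_acd(renewing):.1f} years")
# Year 1: 1.0 years; Year 2: 1.7 years -- lower than 2.3 with zero churn.
```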

What To Do About This?
First, understand it.  As with many SaaS metrics, it’s counter-intuitive.

As I’ve mentioned before, SaaS metrics and unit economics are often misunderstood.  While I remain a huge fan of using them to run the business, I strongly recommend taking the time to develop a deep understanding of them.  In addition, the more I see counter-intuitive examples, the more I believe in building full three- to five-year financial models of SaaS businesses in order to correctly see the complex interplay among drivers.

For example, if a company does one-year, two-year, and three-year deals, a good financial model should have drivers for both new business contract duration (i.e., percent of 1Y, 2Y, and 3Y deals) and a renewals duration matrix that has renewals rates for all nine combinations of {1Y, 2Y, 3Y} x {1Y, 2Y, 3Y} deals (e.g., a 3Y-to-1Y renewal rate).  This will produce an overall renewals rate and an overall ACD for renewals.  (In a really good model, both the new business breakdown and the renewals matrix should vary by year.)
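As a rough illustration of what that renewals-matrix driver might look like, here’s a minimal Python sketch; every rate below is an invented placeholder, not a benchmark:

```python
# renewal_rates[i][j]: share of expiring i-year contract ARR that
# renews as a j-year contract.  Rows need not sum to 1; the gap is churn.
renewal_rates = {
    1: {1: 0.70, 2: 0.10, 3: 0.05},
    2: {1: 0.15, 2: 0.65, 3: 0.10},
    3: {1: 0.10, 2: 0.10, 3: 0.70},  # e.g., a 3Y-to-1Y rate of 10%
}

def renew(expiring_arr):
    """Map {duration: expiring ARR} to renewed ARR by new duration."""
    renewed = {1: 0.0, 2: 0.0, 3: 0.0}
    for i, arr in expiring_arr.items():
        for j, rate in renewal_rates[i].items():
            renewed[j] += arr * rate
    return renewed

# Example: $500K of 1Y and $1,000K of 3Y contracts expire this year.
print(renew({1: 500, 3: 1000}))  # {1: 450.0, 2: 150.0, 3: 725.0}
```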

Armed with that model, built with assumptions based on both history and future goals for the new business breakdown and the renewals matrix, you can then have meaningful conversations about how ACD is varying on new and renewals business relative to plan.  Without that, by just looking at one number and not understanding how it’s produced, you run the very real risk of reacting to a math effect that sets off a false alarm on renewals.

The Ultimate SaaS Metric: The Customer Lifetime Value to Customer Acquisition Cost Ratio (LTV/CAC)

I’m a big fan of software-as-a-service (SaaS) metrics.  I’ve authored very deep posts on SaaS renewals rates and customer acquisition costs, and I routinely point readers to other great posts on the topic.

But in today’s post, I’m going to examine the question:  of the literally scores of SaaS metrics out there, if you could only pick one single metric, which one would it be?

Let’s consider some candidates:

  • Revenue is bad because it’s a lagging indicator in a SaaS business.
  • Bookings is good because it’s a leading indicator of both revenue and cash, but tells you nothing about the existing customer base.
  • ARR (annual recurring revenue) is good because it’s a leading indicator of revenue and includes the effects of both new sales and customer churn.  However, there are two ways to have slow ending ARR growth:  high sales and high churn or low sales and low churn — and they are very different.
  • Cashflow is good because it tends to net-out a lot of other effects, but can be misleading unless you understand the structure of a company’s bookings mix and payment terms.
  • Gross margin (GM) is nice because it gives you an indicator of how efficiently the service is run, but unfortunately tells you nothing else.
  • The churn rate is good because it helps you value the existing customer annuity, but tells you nothing about new sales.
  • Customer acquisition cost (CAC) is a great measure of sales and marketing efficiency, but by itself is not terribly meaningful because you don’t know what you’re buying:  are you paying, for example, $12K in sales and marketing (S&M) expense for a $1K/month customer who will renew for 3 months or 120?  There’s a big difference between the two.
  • Lifetime value (LTV) is a good measure of the annuity value of your customer base, but says nothing about new sales.

Before revealing my single best-choice metric, let me make what might be an unfashionable and counter-intuitive statement.  While I love SaaS “unit economics” as much as anybody, to me there is nothing better than a realistic, four-statement, three-year financial model that factors everything into the mix.  I say this not only because my company makes tools to create such models, but more importantly because unit economics can be misleading in a complicated world of varying contract duration (e.g., 1 to 3+ years), payment terms (e.g., quarterly, annual, prepaid, non-prepaid), long sales cycles (typical CAC calculations assume prior-quarter S&M drives current-quarter sales), and renewals which may differ from the original contract in both duration and terms.

Remember that SaaS unit economics were born in an era of monthly recurring revenue (MRR), so the more your business runs monthly, the better those metrics work — and conversely.  For example, consider two companies:

  • Company A does month-to-month contracts charging $100/month and has a CAC ratio of 1.0.
  • Company B does three-year prepaid deals and has a CAC ratio of 2.0.

If both companies have 80% subscription gross margins (GM), then the CAC payback period is 15 months for company A and 30 months for company B.  (CAC payback period is months of subscription gross margin to recover CAC.)
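As a quick check on that payback arithmetic, here’s a tiny Python sketch (the CAC ratio is S&M spend per $1 of new ARR; payback is the months of subscription gross margin needed to recover it):

```python
def cac_payback_months(cac_ratio, subscription_gm):
    # One year of $1 ARR yields subscription_gm dollars of gross margin,
    # so recovering cac_ratio dollars takes this many months.
    return 12 * cac_ratio / subscription_gm

print(cac_payback_months(1.0, 0.80))  # Company A: 15.0 months
print(cac_payback_months(2.0, 0.80))  # Company B: 30.0 months
```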

This implies company B is much riskier than company A because company B’s payback period is twice as long and company B’s money is at risk for a full 30 months until it recovers payback.

But it’s completely wrong.  Note that because company B does pre-paid deals, its actual cash payback period is not 30 months, but 1 day.  Despite ostensibly having half the CAC payback period, company A is far riskier because it has to wait 15 months to recover its S&M investment and each month presents an opportunity for non-renewal.  (Or, as I like to say, “is exposed to the churn rate.”)  Thus, while company B will recoup its S&M investment (and then some) every time, company A will only recoup it some percentage of the time as a function of its monthly churn rate.

Now this is not to say that three-year prepaid deals are a panacea and that everyone should do them.  From the vendor perspective, they are good for year 1 cashflow, but bad in years 2 and 3.  From the customer perspective, three-year deals make plenty of sense for “high consideration” purchases (where once you have completed your evaluation, you are pretty sure of your selection), but make almost no sense in try-and-buy scenarios.  So the point is not “long live the three-year deal,” but instead “examine unit economics, but do so with an awareness of both their origins and limitations.”

This is why I think nothing tells the story better than a full four-statement, three-year financial model.  Now I’m sure there are plenty of badly-built over-optimistic models out there.  But don’t throw the baby out with the bathwater.   It is just not that hard to model:

  • The mix of the different types of deals your company does by duration and prepayment terms — and how that changes over time.
  • The existing renewals base and the matrix of deals of one duration that renew as another.
  • The cashflow ramifications of prepaid and non-prepaid multi-year contracts.
  • The impact on ARR and cashflow of churn rates and renewals bookings.
  • The impact of upsell to the existing customer base.

Now that I’ve disclaimed all that, let’s answer the central question posed by this post:  if you could know just one SaaS metric, which would it be?

The LTV/CAC ratio.

Why?  Because what you pay for something should be a function of what it’s worth.

Some people say, for example, that a CAC of 2.0 is bad.  Well, if you’re selling a month-to-month product where most customers discontinue by month 9, then a CAC of 2.0 is horrific.  However, if you’re selling sticky enterprise infrastructure, replacing systems that have been in place for a decade with applications that might well be in place for another decade, then a CAC of 2.0 is probably fine.  That’s the point:  there is no absolute right or wrong answer to what a company should be willing to pay for a customer.  What you are willing to pay for a customer should be a function of what they are worth.

The CAC ratio captures the cost of acquiring customers.  In plain English, the CAC ratio is the multiple you are willing to pay for $1 of annual recurring revenue (ARR).  With a CAC ratio of 1.5, you are paying $1.50 for $1 of ARR, implying an 18-month payback period on a revenue basis, and 18 months divided by subscription GM on a gross margin basis.

Lifetime value (LTV) attempts to calculate what a customer is worth and is typically calculated using gross margin (the profit from a customer after paying the cost of operating the service) as opposed to simply revenue.  LTV is calculated first by inverting the annual churn rate (to get the average customer lifetime in years) and then multiplying by subscription-GM.

For example, with a churn rate of 10%, subscription GM of 75%, and a CAC ratio of 1.5, the LTV/CAC ratio is (1/10%) * 0.75 / 1.5 = 5.0.
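Or, the same arithmetic as a one-function Python sketch:

```python
def ltv_to_cac(annual_churn, subscription_gm, cac_ratio):
    lifetime_years = 1 / annual_churn          # 10% churn -> 10-year lifetime
    return lifetime_years * subscription_gm / cac_ratio

print(ltv_to_cac(0.10, 0.75, 1.5))  # 5.0
```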

The general rule of thumb is that LTV/CAC should be 3.0 or higher, with of course, the higher the better.

There are three limitations I am aware of in working with LTV/CAC as a metric.

  • Churn rate.  Picking the right churn rate isn’t easy and is made complicated in the presence of a mix of single- and multi-year deals.  All in, I think simple churn is the best rate to use as it reflects the “auto-renewal” of multi-year deals as well as the very real negative churn generated by upsell.
  • Statistics and distributions.  I’m not a hardcore stats geek, but I secretly worry that many different distributions can produce an average of 10%, and thus inverting a 10% churn rate to produce an average 10-year customer lifetime scares me a bit.  It’s the standard way to do things, but I do worry late at night that averages can be misleading.
  • Light from a distant star.  Remember that today’s churn rate is a function of yesterday’s deals.  The more you change who you sell to and how, the less reflective yesterday’s churn is of tomorrow’s.  It’s like light arriving from a star that’s three light-years away:  what you see today happened three years ago.  To the extent that LTV is a forward-looking metric, beware that it’s based on churn, which is backward-looking.  In a perfect world, you’d use predicted churn in an LTV calculation, but since calculating that would be difficult and controversial, we take the next best thing:  past churn.  But remember that the future doesn’t always look like the past.


You Can’t Analyze Churn by Analyzing Churn

One thing that amazes me is when I hear people talk about how they analyze churn in a cloud, software as a service (SaaS), or other recurring revenue business.

You hear things like:

  • “17% of our churn comes from the emerging small business (ESB) segment, which is normal because small businesses are inherently unstable.”
  • “22% of our churn comes from companies in the $1B+ revenue range, indicating that we may have a problem meeting enterprise needs.”
  • “40% of the customers in the residential mortgage business churned, indicating there is something wrong with our product for that vertical.”

There are three fallacies at work here.

The first is assumed causes.  If you know that 17% of your churn comes from the ESB segment, you know one and only one thing:  that 17% of your churn comes from the ESB segment.  Asserting small-business instability as the cause is pure speculation.  Maybe they did go out of business or get bought.  Or maybe they didn’t like your product.  Or maybe they did like your product, but decided it was overkill for their needs.  If you want to know how much of your churn came from a given segment, ask a finance person.  If you want to know why a customer churned, ask them.  Companies with relatively small customer bases can do it via phone.  Companies with big bases can use an online survey.  It’s not hard.  Use metrics to figure out where your churn comes from.  Use surveys to figure out why.

The second is not looking at propensities and the broader customer base. If I said that 22% of your annual recurring revenue (ARR) comes from $1B+ companies, then you shouldn’t be surprised that 22% of your churn comes from them as well.  If I said that 50% of your ARR comes from $1B+ companies (and they were your core target market), then you’d be thrilled that only 22% of your churn comes from them.  The point isn’t how much of your churn comes from a given segment:  it’s how much of your churn comes from a given segment relative to how much of your overall business comes from that segment.  Put differently, what is the propensity to churn in one segment versus another?

And you can’t perform that analysis without getting a full data set — of both customers who did churn and customers who didn’t.  That’s why I say you can’t analyze churn by analyzing churn.  Too many people, when tasked with churn analysis, say, “quick, get me a list of all the customers who churned in the past 6 months and we’ll look for patterns.”  At that instant you are doomed.  All you can do is decompose churn into buckets; you learn nothing of propensities.

For example, if you noticed that in one country that a stunning 99% of churn came from customers with blue eyes, you might be prompted to launch an immediate inquiry into how your product UI somehow fails for blue-eyed customers.  Unless, of course, the country was Estonia where 99% of the population has blue eyes, and ergo 99% of your customers do.  Bucketing churn buys you nothing without knowing propensities.
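Here’s a minimal Python sketch of the difference; the segment figures are invented for illustration, and chosen so that a modest-looking share of churn hides an above-average propensity to churn:

```python
# Invented year-ago ARR by segment ($M): churned vs. total.
segments = {
    "ESB":   {"churned": 17, "total": 100},
    "$1B+":  {"churned": 22, "total": 220},
    "Other": {"churned": 61, "total": 680},
}

total_churn = sum(s["churned"] for s in segments.values())

for name, s in segments.items():
    share = s["churned"] / total_churn       # "X% of our churn comes from..."
    propensity = s["churned"] / s["total"]   # what actually matters
    print(f"{name:6s} share of churn: {share:4.0%}   propensity: {propensity:4.0%}")
# ESB is only 17% of total churn, yet churns at 17% of its base --
# well above the overall 10% propensity.  Buckets alone hide this.
```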

The last is correlation vs. causation.  Knowing that a large percentage of customers in the residential mortgage segment churned (or even have higher propensity to churn) doesn’t tell you why they are churning.  Perhaps your product does lack functionality that is important in that segment.  Or perhaps it’s 2008, the real estate crisis is in full bloom, and those customers aren’t buying anything from anybody.  The root cause is the mortgage crisis, not your product.   Yes, there is a high correlation between customers in that vertical and their churn rate.  But the cause isn’t a poor product fit for that vertical, it’s that the vertical itself is imploding.

A better, and more fun, example comes from The Halo Effect, which tells the story that a famous statistician once showed a precise correlation between the increase in the number of Baptist preachers and the increase in arrests for public drunkenness during the 19th Century.  Do we assume that one caused the other?  No.  In fact, the underlying driver was the general increase in the population — with which both were correlated.

So, remember these two things before starting your next churn analysis:

  • If you want to know why someone churned, ask them.
  • If you want to analyze churn, don’t just look at who churned — compare who churned to who didn’t.