Category Archives: Cloud

Why has Standalone Cloud BI been such a Tough Slog?

I remember that when I left Business Objects back in 2004, it was still early days in the cloud.  We were using Salesforce internally (and were one of their larger customers at the time), so I was familiar with, and a proponent of, cloud-based applications, but I never felt great about BI in the cloud.  Despite that, Business Objects and others were aggressively ramping up on-demand offerings, all of which amounted to pretty much nothing a few years later.

Startups were launched, too.  Specifically, I remember:

  • Birst, née Success Metrics, founded in 2004 by Siebel BI veterans Brad Peters and Paul Staelin, which was originally supposed to build vertical-industry analytic applications.
  • LucidEra, founded in 2005 by Salesforce and Siebel veteran Ken Rudin (et alia) whose original mission was to be to BI what Salesforce was to CRM.
  • PivotLink, which did their series A in 2007 (but was founded in 1998), positioned as on-demand BI and later moved into more vertically focused apps in retail.
  • GoodData, founded in 2007 by serial entrepreneur Roman Stanek, which early on focused on SaaS embedded BI and later moved to more of a high-end enterprise positioning.

These were great people — Brad, Ken, Roman, and others were brilliant, well educated veterans who knew the software business and their market space.

These were great investors — names like Andreessen Horowitz, Benchmark, Emergence, Matrix, Sequoia, StarVest, and Tenaya invested over $300M in those four companies alone.

This was theoretically a great, straightforward cloud-transformation play of a $10B+ market, a la Siebel to Salesforce.

But of the four companies named above, only GoodData is doing well and still in the fight (with a high-end enterprise platform strategy that bears little resemblance to a straight cloud transformation play), while the other three all came to uneventful exits.

So, what the hell happened?

Meantime, recall that Tableau, founded in 2003 and armed in its early years with a measly $15M in venture capital and an exclusively on-premises business model, blew right past all the cloud BI vendors, going public in May 2013.  Despite the stock being cut by more than half since its July 2015 peak, it is still worth $4.2B today.

I can’t claim to have the definitive answer to the question I’ve posed in the title.  In the early days I thought it was related to technical issues like trust/security, trust/scale, and the complexities of cloud-based data integration.  But those aren’t issues today.  For a while back in the day I thought maybe the cloud was great for applications, but perhaps not for platforms or infrastructure.  While SaaS was the first cloud category to take off, we’ve obviously seen enormous success with both platforms (PaaS) and infrastructure (IaaS) in the cloud, so that can’t be it.

While some analysts lump EPM under BI, cloud-based EPM has not had similar troubles.  At Host, and at our top competitors, we have never struggled with focus or positioning, and we are all basically running slightly different variations on the standard cloud transformation play.  I’ve always believed that lumping EPM under BI is a mistake because, while they use similar technologies, they are sold to different buyers (IT vs. finance) and the value proposition is totally different (tool vs. application).  While there’s plenty of technology in EPM, it is an applications play — you can’t sell it or implement it without domain knowledge in finance, sales, marketing, or whatever domain for which you’re building the planning system.  So I have no trouble explaining why cloud EPM hasn’t been a slog while cloud BI absolutely has been.

My latest belief is that the business model wasn’t the problem in BI.  The technology was.  Cloud transformation plays are all about business model transformation.  On-premises applications business models were badly broken:  the software cost $10s of millions to buy and $10s of millions more to implement (for large customers).  SMBs were often locked out of the market because they couldn’t afford the ante.  ERP and CRM were exposed because of this and the market wanted and needed a business model transformation.

With BI, I believe, the business model just wasn’t the problem.  Compared to ERP and CRM, BI was a fraction of the cost to buy and implement.  A modest BusinessObjects license might have cost $150K, and less than that to implement.  The problem was not that the BI business model was broken; it was that the technology never delivered on the democratization promise it made.  Despite shouting “BI for the masses” in 1995, BI never really made it beyond the analyst’s desk.

Just as RDBMS themselves failed to deliver information democracy with SQL (which, believe it or not, was part of the original pitch — end users could write SQL to answer their own queries!), BI tools — while they helped enable analysts — largely failed to help Joe User.  They weren’t easy enough to use.  They lacked information discovery.  They lacked, importantly, easy-yet-powerful visualization.

That’s why Tableau, and to a lesser extent Qlik, prospered while the cloud BI vendors struggled.  (It’s also why I find it profoundly ironic that Tableau is now in a massive rush to “go cloud” today.)  It’s also one reason why the world now needs companies like Alation — the information democracy brought by Tableau has turned into information anarchy and companies like Alation help rein that back in (see disclaimers).

So, I think that cloud BI proved to be such a slog because the cloud BI vendors solved the wrong problem.  They fixed a business model that wasn’t fundamentally broken, all while missing the ease of use, data discovery, and visualization power that solved the real problems users faced (capabilities that, at the time, required the horsepower of on-premises software).

I suspect it’s simply another great, if simple, lesson in solving your customer’s problem.

Feel free to weigh in on this one as I know we have a lot of BI experts in the readership.

Host Analytics World: Some Key Takeaways

We are having an amazing time at Host Analytics World this week in San Francisco.  I’m thrilled with the size (over 700 people), the positive energy, and the learning and sharing that’s taking place at this event.


Probably the single best thing I’ve heard from customers at the conference is this:

“I use a lot of cloud software and … the relationship you have with your customers is unique.”

The reason this makes me so happy is that’s what our strategy is all about.  We are a 100% customer-focused SaaS vendor, and a huge part of my strategy here is to build a real, deep, sincere customer-success culture.  So any time I hear an echo back from our customers that this is what they are seeing and feeling, it makes me very happy.  And I’ve heard plenty of those echoes this week.

The other big things I’ve seen thus far:

  • Tremendous interest in modeling and our new Modeling Cloud offering.  Organizations are doing more modeling than ever before and they want a modeling solution that leverages Excel and ties together disparate departmental models into a single enterprise model.
  • Huge support for our intelligent leverage of Excel strategy.  AirLiftXL, SpotLightXL, and our web-based Excel grid allow customers to leverage their existing models and, more importantly, skills / human capital in the context of a proper planning system.
  • Major interest in tying together sales and financial planning.  This is a real hot button in finance right now, as sales planning is increasingly done by sales ops and/or sales strategy groups outside of finance, and in software not linked to the central planning system.
  • Big interest in our new Aviso partnership as part of our strategy to better link sales and finance.  Aviso delivers predictive analytics that not only help forecast sales but actually guide sales management to the most important opportunities in the pipeline.  In general, customers seem to support our strategy to stay focused on EPM and not extend ourselves into adjacent fields where best-of-breed players already exist.
  • And finally, I’d be remiss if I didn’t introduce our new mascots, Tick and Tie.


Survivor Bias in Churn Calculations: Say It’s Not So!

I was chatting with a fellow SaaS executive the other day and the conversation turned to churn and renewal rates.  I asked how he calculated them and he said:

Well, we take every customer who was also a customer 12 months ago and then add up their ARR 12 months ago and add up their ARR today, and then divide today’s ARR by year-ago ARR to get an overall retention or expansion rate.

Well, that sounds dandy until you think for a minute about survivor bias, the often inadvertent logical error in analyzing data from only the survivors of a given experiment or situation.  Survivor bias is subtle, but here are some common examples:

  • I first encountered survivor bias in mutual funds, when I realized that look-back studies of prior 5- or 10-year performance include only the funds still in existence today.
  • If you eliminate my bogeys, I’m actually a below-par golfer.
  • My favorite example is from World War II, when analysts examined the pattern of anti-aircraft fire on returning bombers and argued for strengthening them in the places that were most often hit.  This was exactly wrong — the places where returning bombers were hit were already strong enough.  You needed to reinforce the places where the downed bombers were hit.

So let’s turn back to churn rates.  If you’re going to calculate an overall expansion or retention rate, which way should you approach it?

  1. Start with a list of customers today, look at their total ARR, and then go compare that to their ARR one year ago, or
  2. Start with a list of customers from one year ago and look at their ARR today.

Number 2 is the obvious answer.  You should include the ARR from customers who choose to stop being customers in calculating an overall churn or expansion rate.  Calculating it the first way can be misleading because you are looking at the ARR expansion only from customers who chose to continue being customers.

Let’s make this real via an example.

[Table: survivor-bias example showing current ARR (boxed) and year-ago ARR, with churned customers in orange rows]

The ARR today is contained in the boxed area.  The survivor bias question comes down to whether you include or exclude the orange rows from year-ago ARR.  The difference can be profound.  In this simple example, the survivor-biased expansion rate is a nice 111%.  However, the non-biased rate is only 71% which will get you a quick “don’t let the door hit your ass on the way out” at most VCs.  And while the example is contrived, the difference is simply one of calculation off identical data.
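
To make the arithmetic concrete, here is a minimal Python sketch.  The customer names and ARR figures are hypothetical (the actual table is in the image above), but they are chosen to reproduce the two rates just quoted; the unbiased rate divides today’s ARR by the entire year-ago base, churned customers included.

```python
# Hypothetical customer list: (customer, ARR one year ago, ARR today).
# Churned customers show $0 today; the figures are illustrative only.
customers = [
    ("A", 300_000, 350_000),
    ("B", 300_000, 325_000),
    ("C", 300_000, 325_000),
    ("D", 250_000, 0),  # churned
    ("E", 250_000, 0),  # churned
]

arr_today = sum(now for _, _, now in customers)
survivor_base = sum(ago for _, ago, now in customers if now > 0)  # survivors only
full_base = sum(ago for _, ago, _ in customers)                   # everyone

print(f"Survivor-biased expansion rate: {arr_today / survivor_base:.0%}")  # ~111%
print(f"Unbiased expansion rate: {arr_today / full_base:.0%}")             # ~71%
```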

Do companies use survivor-biased calculations in real life?  Let’s look at my post on the Hortonworks S-1 where I quote how they calculate their net expansion rate:

We calculate dollar-based net expansion rate as of a given date as the aggregate annualized subscription contract value as of that date from those customers that were also customers as of the date 12 months prior, divided by the aggregate annualized subscription contract value from all customers as of the date 12 months prior.

When I did my original post on this, I didn’t even catch it.  But therein lies the subtle head of survivor bias.

# # #

Disclaimers:

  • I have not tracked Hortonworks in the meantime, so I don’t know if they still report this metric, at what frequency, how they currently calculate it, etc.
  • To the extent that “everyone calculates it this way” is true, companies might report it this way for comparability, but people should be aware of the bias.  One approach is to create both metrics (one starting from today’s customers and looking back, one starting from the year-ago customers and looking forward) and show both.
  • See my FAQ for additional disclaimers, including that I am not a financial analyst and do not make recommendations on stocks.

Don’t Be a Metrics Slave

I love metrics.  I live for metrics.  Every week and every quarter I drown my team in metrics reviews.  Why?  Because metrics are the instrumentation — the flight panel — of our business.   Good metrics provide clear insights.  They cut through politics, spin, and haze.  They spark amazing debates.   They help you understand your business and compare it to others.

I love metrics, but I’ll never be a slave to them.  Far too often in business I see people who are metrics slaves.  Instead of mastering metrics to optimize the business, the metrics become the master and the manager a slave.

I define metrics slavery as the case when managers stop thinking and work blindly towards achieving a metric, regardless of whether they believe doing so is best for the business.

One great thing about sports analytics is that despite an amazing slew of metrics, everyone remembers it’s the team with the most goals that wins, not the one who took the most shots.  In business, we often get that wrong in both subtle and not-so-subtle ways.

Here are metrics mistakes that often lead to metrics slavery.

  1. Dysfunctional compensation plans, where managers actively and openly work on what they believe are the wrong priorities in response to a compensation plan that drives them to do so. The more coin-operated the type of people in a department, the more carefully you must define incentives.  While strategic marketers might challenge a poorly aligned compensation plan, most salespeople will simply behave exactly as dictated by the compensation plan.  Be careful what you ask for, because you will often get it.
  2. Poor metric selection. Marketers who count leads instead of opportunities are counting shots instead of goals.  I can’t stand to see tradeshow teams giving away valuable items so they can run the card of every passing attendee.  They might feel great about getting 500 leads by the end of the day, but if 200 are people who will never buy, then those leads are not only useless but actually have negative value, because the company’s nurture machine is going to invest fruitless effort in converting them.
  3. Lack of leading indicators. Most managers are more comfortable with solid lagging indicators than they are with squishier leading indicators.  For example, you might argue that leads are a great leading indicator of sales, and you’d be right to the extent that they are good leads.  This then requires you to define “good,” which is typically done using some ABC-style scoring system.  But because the scoring system is complex, subjective, and requires iteration and regression to define, some managers find the whole thing too squishy and say “let’s just count leads.”  That’s the equivalent of counting shots, including shots off-goal that never could have scored.  While leading indicators require a great deal of thought to get right, you must include them in your key metrics, lest you create a company of backwards-looking managers.
  4. Poorly-defined metrics. The plus/minus metric in hockey is one of my favorite sports metrics because it measures teamwork, something I’d argue is pretty hard to measure [1].  However, there is a known problem with the plus/minus rating.  It includes time spent on power plays [2] and penalty kills [3].  Among other problems, this unfairly penalizes defenders on the penalty-killing unit, diluting the value of the metric.  Yet, as far as I know, no one has fixed this problem.  So while it’s tracked, people don’t take it too seriously because of its known limitations.  Do you have metrics like this at your company?  If so, fix them.
  5. Self-fulfilling metrics. These are potential leading metrics where management loses sight of the point and accidentally makes their value a self-fulfilling prophecy.  Pipeline coverage (value of oppties in the pipeline / plan) is such a metric.  Long ago, it was a good leading indicator of plan attainment, but over the past decade literally every sales organization I know has institutionalized beating salespeople unless they have 3x coverage.  What’s happened?  Today, everyone has 3x coverage.  It just doesn’t mean anything anymore.  See this post for a long rant on this topic.
  6. Ill-defined metrics, which happen a lot in benchmarking, where we try to compare, for example, our churn rate to an industry average. If you are going to make such comparisons, you must begin with clear definitions or else you are simply counting angels on pinheads.  See this post where I give an example where, off the same data, I can calculate a renewals rate of 69%, 80%, 100%, 103%, 120%, 208%, or 310%, depending on how you choose to calculate it.  If you want to do a meaningful benchmark, you had better be comparing the 80% to the 80%, not the 208%.
  7. Blind benchmarking. The strategic mistake that managers make in benchmarking is that they try to converge blindly to the industry average.  This reminds me of the Vonnegut short story where ballerinas have to wear sash-weights and the intelligentsia have music blasted into their ears in order to make everyone equal.  Benchmarks should be tools of understanding, not instruments of oppression.  In addition, remember that benchmarks definitionally blend industry participants with different strategies.  One company may invest heavily in R&D as part of a product-leadership strategy.  One may invest heavily in S&M as part of a market-share-leadership strategy.  A third may invest heavily in supply chain optimization as part of a cost-leadership strategy.  Aspiring to the average of these companies is a recipe for failure, not success, as you will end up in a strategic No Man’s Land.  In my opinion, this is the most dangerous form of metrics slavery because it happens at the boardroom level, and often with little debate.
  8. Conflicting metrics. Let’s take a concrete example here (see the sketch after this list).  Imagine you are running a SaaS business that’s in a turnaround.  This year bookings growth was flat.  Next year you want to grow bookings 100%.  In addition, you want to converge your P&L over time to an industry average of S&M expenses at 50% of revenues, whereas today you are running at 90%.  While that may sound reasonable, it’s actually a mathematical impossibility.  Why?  Because the company is changing trajectories, and in a SaaS business revenues lag bookings by a year.  So next year revenue will be growing slowly [4], which means you need to grow S&M even more slowly if you want to meet the P&L convergence goal.  But if you want to meet the 100% bookings growth goal, even with improving efficiency, you’ll need to increase S&M cost by, say, 70%.  It’s impossible.  #QED.  There will always be a tendency to split the difference in such scenarios, but that is a mistake.  The question is which is the better metric off which to anchor.  The answer, in a SaaS business, is bookings.  Ergo, the correct answer is not to split the difference (which will put the bookings goal at risk) but to recognize that bookings is the better metric and anchor S&M expense to bookings growth.  This requires a deep understanding of the metrics you use and the courage to confront two conflicting rules of conventional wisdom in so doing.
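
Here is the back-of-the-envelope sketch behind item 8, under the simplification that revenue lags bookings by roughly a year.  All of the numbers are assumed round figures of mine, not from the scenario above, but they show why the two goals cannot both hold.

```python
# Back-of-the-envelope illustration of item 8 (all numbers are assumed).
bookings_this_year = 10.0            # $M, flat vs. last year
bookings_next_year = 2 * bookings_this_year   # $20M goal (100% growth)
revenue_next_year = 10.5             # $M, grows slowly because revenue lags bookings
sm_this_year = 0.90 * 10.0           # S&M at 90% of revenue today = $9.0M

# Goal 1: converge S&M toward 50% of revenue next year.
sm_cap_from_pnl_goal = 0.50 * revenue_next_year       # $5.25M

# Goal 2: double bookings next year, even assuming S&M efficiency improves
# so that S&M "only" needs to grow ~70%.
sm_needed_for_bookings = sm_this_year * 1.70          # ~$15.3M

print(f"S&M allowed by the P&L convergence goal: ${sm_cap_from_pnl_goal:.2f}M")
print(f"S&M needed for 100% bookings growth:     ${sm_needed_for_bookings:.2f}M")
# The two goals imply wildly different S&M budgets; you cannot hit both.
```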

In the end, metrics slavery, while all too common, is more about the people than the metrics.  Managers need to be challenged to understand metrics.  Managers need to be empowered to define new and better metrics.  Managers must be told to use their brains at all times and never do something simply to move a metric.

If you’re always thinking critically, you’ll never be a metrics slave.  The day you stop, you’ll become one.

# # #

[1] The way it works is simple:  if you’re on the ice when your team scores, you get +1.  If you’re on the ice when the opponent scores you get -1.  When you look at someone’s plus/minus rating over time, you can see, for example, which forwards hustle back on defense and which don’t.

[2] When, thanks to an opponent’s penalty, you have more players on the ice than they do.

[3] When, thanks to your team’s penalty, your opponent has more players on the ice than you do.

[4] Because bookings grew slowly this year.

Average Contract Duration and SaaS Renewals: All Is Not As It Appears

Chatting with some SaaS buddies the other day, we ran into a fun — and fairly subtle — SaaS metrics question.  It went something like this:

VP of Customer Success:  “Our average contract duration (ACD) on renewals was 1.5 years last quarter and –”

VP of Sales:  “– Wait a minute, our ACD on new business is 2.0 years.  If customers are renewing for shorter terms than those of the initial sale, it  means they are less confident about future usage at renewals time than they are at the initial purchase. Holy Moly, that means we have a major problem with the product or with our customer success program.”

Or do we?  At first blush, the argument makes perfect sense.  If new customers sign two-year contracts and renewing ones sign 1.5-year contracts, it would seem to indicate that renewing customers are indeed less bullish on future usage than existing ones.  Having drawn that conclusion, you are instantly tempted to blame the product, the customer success team, technical support, or some other factor for the customers’ confidence reduction.

But is there a confidence reduction?  What does it actually mean when your renewals ACD is less than your new business ACD?

The short answer is no.  We’re seeing what I call the “why are there so many frequent flyers on airplanes” effect.  At first blush, you’d think that if ultra-frequent flyers (e.g., United 1K) represent the top 1%, then a 300-person flight might have three or four on board, while in reality it’s more like 20-30.  But that’s just it — frequent flyers are over-represented on airplanes because they fly more, just as one-year contracts are over-represented in renewals because they come up for renewal more often.

Let’s look at an example.  We have a company that signs one-year, two-year, and three-year deals.  Let’s assume customers renew for the same duration as their initial contract — so there is no actual confidence reduction in play.  Every deal is $100K in annual recurring revenue (ARR).  We’ll calculate ACD on an ARR-weighted basis.  Let’s assume zero churn.

If we sign five one-year, ten two-year, and fifteen three-year deals, we end up with $3M in new ARR and an ACD of 2.3 years.

[Table: new business mix and year-by-year renewals for the ACD example]

In year 1, only the one-year deals come up for renewal (and since we’ve assumed everyone renews for the same length as their initial term), we have an ACD of one year.  The VP of Sales is probably panicking — “OMG, customers have cut their ACD from 2.3 to 1.0 years!  Who’s to blame?  What’s gone wrong?!”

Nothing.  Only the one-year contracts had a shot at renewing and they all renewed for one year.

In year 2, both the (re-renewing) one-year and the (initially renewing) two-year contracts come up for renewal.  The ACD is 1.7, again lower than the 2.3-year new business ACD.  While, again, the decrease in ACD might lead you to suspect a problem, there is nothing wrong.  It’s just math:  shorter-duration contracts come up for renewal more often, which pulls down the renewals ACD.
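
A quick Python sketch of the math in this example (deal counts, deal size, and the same-duration renewal assumption are exactly as stated above):

```python
# Deal counts from the example: 5 one-year, 10 two-year, 15 three-year deals,
# each at $100K of ARR, with every customer renewing for its original duration.
ARR_PER_DEAL = 100_000
new_deals = {1: 5, 2: 10, 3: 15}   # {duration in years: number of deals}

def acd(deals):
    """ARR-weighted average contract duration in years."""
    total_arr = sum(n * ARR_PER_DEAL for n in deals.values())
    weighted = sum(years * n * ARR_PER_DEAL for years, n in deals.items())
    return weighted / total_arr

print(f"New business ACD:    {acd(new_deals):.1f} years")       # 2.3

year1_renewals = {1: 5}          # only the one-year deals come up in year 1
year2_renewals = {1: 5, 2: 10}   # one-year deals re-renew, two-year deals renew
print(f"Year-1 renewals ACD: {acd(year1_renewals):.1f} years")  # 1.0
print(f"Year-2 renewals ACD: {acd(year2_renewals):.1f} years")  # 1.7
```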

What To Do About This?
First, understand it.  As with many SaaS metrics, it’s counter-intuitive.

As I’ve mentioned before, SaaS metrics and unit economics are often misunderstood.  While I remain a huge fan of using them to run the business, I strongly recommend taking the time to develop a deep understanding of them.  In addition, the more I see counter-intuitive examples, the more I believe in building full three- to five-year financial models of SaaS businesses in order to correctly see the complex interplay among drivers.

For example, if a company does one-year, two-year, and three-year deals, a good financial model should have drivers for both new business contract duration (i.e., percent of 1Y, 2Y, and 3Y deals) and a renewals duration matrix that has renewal rates for all nine combinations of {1Y, 2Y, 3Y} x {1Y, 2Y, 3Y} deals (e.g., a 3Y-to-1Y renewal rate).  This will produce an overall renewals rate and an overall ACD for renewals.  (In a really good model, both the new business breakdown and the renewals matrix should vary by year.)
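
As a rough sketch of what those drivers might look like in code (all of the rates and ARR figures below are placeholders I made up, not recommendations):

```python
# Hypothetical drivers for one model year (the rates below are placeholders).
new_mix = {1: 0.20, 2: 0.30, 3: 0.50}   # share of new ARR signed as 1Y / 2Y / 3Y deals
                                        # (would drive next year's expiring base in a full model)

# renewal_matrix[original_duration][renewal_duration] = share of expiring ARR
# of that original duration that renews at each new duration.
renewal_matrix = {
    1: {1: 0.70, 2: 0.10, 3: 0.05},   # i.e., 85% of expiring 1Y ARR renews
    2: {1: 0.10, 2: 0.65, 3: 0.10},
    3: {1: 0.05, 2: 0.10, 3: 0.75},
}

def renewals_summary(expiring_arr):
    """Blend the matrix into an overall renewal rate and a renewals ACD."""
    renewed = weighted_years = 0.0
    for original, arr in expiring_arr.items():
        for duration, rate in renewal_matrix[original].items():
            renewed += arr * rate
            weighted_years += arr * rate * duration
    return renewed / sum(expiring_arr.values()), weighted_years / renewed

rate, renewals_acd = renewals_summary({1: 500_000, 2: 1_000_000, 3: 1_500_000})
print(f"Overall renewal rate: {rate:.0%}, renewals ACD: {renewals_acd:.1f} years")
```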

Armed with that model, built with assumptions based on both history and future goals for the new business breakdown and the renewals matrix, you can then have meaningful conversations about how ACD is varying on new and renewals business relative to plan.  Without that, by just looking at one number and not understanding how it’s produced, you run the very real risk of reacting to a math effect that sets off a false alarm on renewals.

The Ultimate SaaS Metric: The Customer Lifetime Value to Customer Acquisition Cost Ratio (LTV/CAC)

I’m a big fan of software-as-a-service (SaaS) metrics.  I’ve authored very deep posts on SaaS renewals rates and customer acquisition costs.  I also routinely point readers to other great posts on the topic.

But in today’s post, I’m going to examine the question:  of the literally scores of SaaS metrics out there, if you could only pick one single metric, which one would it be?

Let’s consider some candidates:

  • Revenue is bad because it’s a lagging indicator in a SaaS business.
  • Bookings is good because it’s a leading indicator of both revenue and cash, but tells you nothing about the existing customer base.
  • ARR (annual recurring revenue) is good because it’s a leading indicator of revenue and includes the effects of both new sales and customer churn.  However, there are two ways to have slow ending ARR growth:  high sales and high churn or low sales and low churn — and they are very different.
  • Cashflow is good because it tends to net-out a lot of other effects, but can be misleading unless you understand the structure of a company’s bookings mix and payment terms.
  • Gross margin (GM) is nice because it gives you an indicator of how efficiently the service is run, but unfortunately tells you nothing else.
  • The churn rate is good because it helps you value the existing customer annuity, but tells you nothing about new sales.
  • Customer acquisition cost (CAC) is a great measure of sales and marketing efficiency, but by itself is not terribly meaningful because you don’t know what you’re buying:  are you paying, for example, $12K in sales and marketing (S&M) expense for a $1K/month customer who will renew for 3 months or 120?  There’s a big difference between the two.
  • Lifetime value (LTV) is a good measure of the annuity value of your customer base, but says nothing about new sales.

Before revealing my single best-choice metric, let me make what might be an unfashionable and counter-intuitive statement.  While I love SaaS “unit economics” as much as anybody, to me there is nothing better than a realistic, four-statement, three-year financial model that factors everything into the mix.  I say this not only because my company makes tools to create such models, but more importantly because unit economics can be misleading in a complicated world of varying contract duration (e.g., 1 to 3+ years), payment terms (e.g., quarterly, annual, prepaid, non-prepaid), long sales cycles (typical CAC calculations assume prior-quarter S&M drives current-quarter sales), and renewals which may differ from the original contract in both duration and terms.

Remember that SaaS unit economics were born in an era of monthly recurring revenue (MRR), so the more your business runs monthly, the better those metrics work — and conversely.  For example, consider two companies:

  • Company A does month-to-month contracts charging $100/month and has a CAC ratio of 1.0.
  • Company B does three-year prepaid deals and has a CAC ratio of 2.0.

If both companies have 80% subscription gross margins (GM), then the CAC payback period is 15 months for company A and 30 months for company B.  (CAC payback period is months of subscription gross margin to recover CAC.)
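
In code, the payback arithmetic behind those two numbers is simply:

```python
def cac_payback_months(cac_ratio, subscription_gm):
    """Months of subscription gross margin needed to recover the CAC ratio."""
    # The CAC ratio is S&M spent per $1 of ARR, so payback is cac_ratio * 12
    # months on a revenue basis; dividing by gross margin converts that to a
    # gross-margin payback.
    return cac_ratio * 12 / subscription_gm

print(cac_payback_months(1.0, 0.80))   # Company A: 15.0 months
print(cac_payback_months(2.0, 0.80))   # Company B: 30.0 months
```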

This implies company B is much riskier than company A because company B’s payback period is twice as long and company B’s money is at risk for a full 30 months until it recovers payback.

But it’s completely wrong.  Note that because company B does pre-paid deals its actual, cash payback period is not 30 months, but 1 day.  Despite ostensibly having half the CAC payback period, company A is far riskier because it has to wait 15 months until recovering its S&M investment and each month presents an opportunity for non-renewal.  (Or, as I like to say, “is exposed to the churn rate.”)  Thus, while company B will recoup its S&M investment (and then some) every time, company A will only recoup it some percentage of the time as a function of its monthly churn rate.

Now this is not to say that three-year prepaid deals are a panacea and that everyone should do them.  From the vendor perspective, they are good for year 1 cashflow, but bad in years 2 and 3.  From the customer perspective, three-year deals make plenty of sense for “high consideration” purchases (where once you have completed your evaluation, you are pretty sure of your selection), but make almost no sense in try-and-buy scenarios.  So the point is not “long live the three-year deal,” but instead “examine unit economics, but do so with an awareness of both their origins and limitations.”

This is why I think nothing tells the story better than a full four-statement, three-year financial model.  Now I’m sure there are plenty of badly-built over-optimistic models out there.  But don’t throw the baby out with the bathwater.   It is just not that hard to model:

  • The mix of the different types of deals your company does by duration and prepayment terms — and how that changes over time.
  • The existing renewals base and the matrix of deals of one duration that renew as another.
  • The cashflow ramifications of prepaid and non-prepaid multi-year contracts.
  • The impact on ARR and cashflow of churn rates and renewals bookings.
  • The impact of upsell to the existing customer base.

Now that I’ve disclaimed all that, let’s answer the central question posed by this post:  if you could know just one SaaS metric, which would it be?

The LTV/CAC ratio.

Why?  Because what you pay for something should be a function of what it’s worth.

Some people say, for example, that a CAC of 2.0 is bad.  Well, if you’re selling a month-to-month product where most customers discontinue by month 9, then a CAC of 2.0 is horrific.  However, if you’re selling sticky enterprise infrastructure, replacing systems that have been in place for a decade with applications that might well be in place for another decade, then a CAC of 2.0 is probably fine.  That’s the point:  there is no absolute right or wrong answer to what a company should be willing to pay for a customer.  What you are willing to pay for a customer should be a function of what they are worth.

The CAC ratio captures the cost of acquiring customers.  In plain English, the CAC ratio is the multiple you are willing to pay for $1 of annual recurring revenue (ARR).  With a CAC ratio of 1.5, you are paying $1.50 for $1 of ARR, implying an 18-month payback period on a revenue basis, and 18 months divided by subscription GM on a gross-margin basis.

Lifetime value (LTV) attempts to calculate what a customer is worth and is typically calculated using gross margin (the profit from a customer after paying the cost of operating the service) as opposed to simply revenue.  LTV is calculated first by inverting the annual churn rate (to get the average customer lifetime in years) and then multiplying by subscription-GM.

For example, with a churn rate of 10%, subscription GM of 75%, and a CAC ratio of 1.5, the LTV/CAC ratio is (1/10%) * 0.75 / 1.5 = 5.0.
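
That calculation, expressed as a small Python helper using the same inputs as the example above (per $1 of ARR):

```python
def ltv_to_cac(annual_churn, subscription_gm, cac_ratio):
    """LTV/CAC per $1 of ARR: (customer lifetime in years * GM) / CAC ratio."""
    customer_lifetime_years = 1 / annual_churn         # e.g., 10% churn -> 10 years
    ltv = customer_lifetime_years * subscription_gm    # gross-margin dollars per $1 of ARR
    return ltv / cac_ratio

print(ltv_to_cac(annual_churn=0.10, subscription_gm=0.75, cac_ratio=1.5))   # 5.0
```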

The general rule of thumb is that LTV/CAC should be 3.0 or higher, with of course, the higher the better.

There are three limitations I am aware of in working with LTV/CAC as a metric.

  • Churn rate.  Picking the right churn rate isn’t easy and is made complicated in the presence of a mix of single- and multi-year deals.  All in, I think simple churn is the best rate to use as it reflects the “auto-renewal” of multi-year deals as well as the very real negative churn generated by upsell.
  • Statistics and distributions.  I’m not a hardcore stats geek, but I secretly worry that many different distributions can produce an average of 10%, and thus inverting a 10% churn rate to produce an average 10-year customer lifetime scares me a bit.  It’s the standard way to do things, but I do worry late at night that averages can be misleading.
  • Light from a distant star.  Remember that today’s churn rate is a function of yesterday’s deals.  The more you change who you sell to and how, the less reflective yesterday’s churn is of tomorrow’s.  It’s like light arriving from a star that’s three light-years away:  what you see today happened three years ago.  To the extent that LTV is a forward-looking metric, beware that it’s based on churn, which is backward-looking.  In a perfect world, you’d use predicted churn in an LTV calculation, but since calculating that would be difficult and controversial, we take the next best thing:  past churn.  But remember that the future doesn’t always look like the past.

 

You Can’t Analyze Churn by Analyzing Churn

One thing that amazes me is when I hear people talk about how they analyze churn in a cloud, software as a service (SaaS), or other recurring revenue business.

You hear things like:

  • “17% of our churn comes from the emerging small business (ESB) segment, which is normal because small businesses are inherently unstable.”
  • “22% of our churn comes from companies in the $1B+ revenue range, indicating that we may have a problem meeting enterprise needs.”
  • “40% of the customers in the residential mortgage business churned, indicating there is something wrong with our product for that vertical.”

There are three fallacies at work here.

The first is assumed causes.  If you know that 17% of your churn comes from the ESB segment, you know one and only one thing:  that 17% of your churn comes from the ESB segment.  Asserting small business instability as the cause is pure speculation.  Maybe they did go out of business or get bought.  Or maybe they didn’t like your product.  Or maybe they did like your product, but decided it was overkill for their needs.  If you want to know how much of your churn came from a given segment, ask a finance person.  If you want to know why a customer churned, ask them.  Companies with relatively small customer bases can do it via a phone call.  Companies with big bases can use an online survey.  It’s not hard.  Use metrics to figure out where your churn comes from.  Use surveys to figure out why.

The second is not looking at propensities and the broader customer base. If I said that 22% of your annual recurring revenue (ARR) comes from $1B+ companies, then you shouldn’t be surprised that 22% of your churn comes from them as well.  If I said that 50% of your ARR comes from $1B+ companies (and they were your core target market), then you’d be thrilled that only 22% of your churn comes from them.  The point isn’t how much of your churn comes from a given segment:  it’s how much of your churn comes from a given segment relative to how much of your overall business comes from that segment.  Put differently, what is the propensity to churn in one segment versus another?

And you can’t perform that analysis without getting a full data set — of both customers who did churn and customers who didn’t.  That’s why I say you can’t analyze churn by analyzing churn.  Too many people, when tasked with churn analysis, say:  “Quick, get me a list of all the customers who churned in the past 6 months and we’ll look for patterns.”  At that instant you are doomed.  All you can do is decompose churn into buckets, while learning nothing of propensities.

For example, if you noticed that in one country that a stunning 99% of churn came from customers with blue eyes, you might be prompted to launch an immediate inquiry into how your product UI somehow fails for blue-eyed customers.  Unless, of course, the country was Estonia where 99% of the population has blue eyes, and ergo 99% of your customers do.  Bucketing churn buys you nothing without knowing propensities.
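
To make the propensity point concrete, here is a minimal sketch with hypothetical segments and numbers:  the same churned ARR looks very different when compared to each segment’s own base rather than to total churn.

```python
# Hypothetical year-ago ARR and churned ARR by segment, in $K.
segments = {
    "ESB":        {"arr": 2_000, "churned": 340},
    "Mid-market": {"arr": 5_000, "churned": 400},
    "Enterprise": {"arr": 3_000, "churned": 260},
}

total_churn = sum(s["churned"] for s in segments.values())

for name, s in segments.items():
    share_of_churn = s["churned"] / total_churn   # the misleading "bucket" view
    propensity = s["churned"] / s["arr"]          # churn rate within the segment itself
    print(f"{name:11s} share of churn: {share_of_churn:4.0%}   propensity to churn: {propensity:4.0%}")
```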

The last is correlation vs. causation.  Knowing that a large percentage of customers in the residential mortgage segment churned (or even have higher propensity to churn) doesn’t tell you why they are churning.  Perhaps your product does lack functionality that is important in that segment.  Or perhaps it’s 2008, the real estate crisis is in full bloom, and those customers aren’t buying anything from anybody.  The root cause is the mortgage crisis, not your product.   Yes, there is a high correlation between customers in that vertical and their churn rate.  But the cause isn’t a poor product fit for that vertical, it’s that the vertical itself is imploding.

A better, and more fun, example comes from The Halo Effect, which tells the story that a famous statistician once showed a precise correlation between the increase in the number of Baptist preachers and the increase in arrests for public drunkenness during the 19th Century.  Do we assume that one caused the other?  No.  In fact, the underlying driver was the general increase in the population — with which both were correlated.

So, remember these two things before starting your next churn analysis:

  • If you want to know why someone churned, ask them.
  • If you want to analyze churn, don’t just look at who churned — compare who churned to who didn’t.