Category Archives: SaaS

Don’t Be a Metrics Slave

I love metrics.  I live for metrics.  Every week and every quarter I drown my team in metrics reviews.  Why?  Because metrics are the instrumentation — the flight panel — of our business.   Good metrics provide clear insights.  They cut through politics, spin, and haze.  They spark amazing debates.   They help you understand your business and compare it to others.

I love metrics, but I’ll never be a slave to them.  Far too often in business I see people who are metrics slaves.  Instead of mastering metrics to optimize the business, the metrics become the master and the manager a slave.

I define metrics slavery as what happens when managers stop thinking and work blindly toward hitting a metric, regardless of whether they believe doing so is best for the business.

One great thing about sports analytics is that despite an amazing slew of metrics, everyone remembers it’s the team with the most goals that wins, not the one who took the most shots.  In business, we often get that wrong in both subtle and not-so-subtle ways.

Here are metrics mistakes that often lead to metrics slavery.

  1. Dysfunctional compensation plans, where managers actively and openly work on what they believe are the wrong priorities in response to a compensation plan that drives them to do so. The more coin-operated the type of people in a department, the more carefully you must define incentives.  While strategic marketers might challenge a poorly aligned compensation plan, most salespeople will simply behave exactly as dictated by the compensation plan.  Be careful what you ask for, because you will often get it.
  1. Poor metric selection. Marketers who count leads instead of opportunities are counting shots instead of goals.  I can’t stand to see tradeshow teams giving away valuable items so they can run the card of every passing attendee.  They might feel great about getting 500 leads by the end of the day, but if 200 are people who will never buy, then they are not only useless but actually have negative value because the company’s nurture machine is going to invest fruitless effort in converting them.
  1. Lack of leading indicators. Most managers are more comfortable with solid lagging indicators than they are with squishier leading indicators.  For example, you might argue that leads are a great leading indicator of sales, and you’d be right to the extent that they are good leads.  This then requires you to define “good,” which is typically done using some ABC-style scoring system.  But because the scoring system is complex, subjective, and requires iteration and regression to define, some managers find the whole thing too squishy and say “let’s just count leads.” That’s the equivalent of counting shots, including shots off-goal that never could have scored.  While leading indicators require a great deal of thought to get right, you must include them in your key metrics, lest you create a company of backwards-looking managers.
  1. Poorly-defined metrics. The plus/minus metric in hockey is one of my favorite sports metrics because it measures teamwork, something I’d argue is pretty hard to measure [1].  However, there is a known problem with the plus/minus rating.  It includes time spent on power plays [2] and penalty kills [3].  Among other problems, this unfairly penalizes defenders on the penalty-killing unit, diluting the value of the metric.  Yet, as far as I know, no one has fixed this problem.  So while it’s tracked, people don’t take it too seriously because of its known limitations.  Do you have metrics like this at your company?  If so, fix them.
  1. Self-fulfilling metrics. These are potential leading metrics where management loses sight of the point and accidentally makes their value a self-fulfilling prophecy.  Pipeline coverage (value of oppties in the pipeline / plan) is such a metric.  Long ago, it was a good leading indicator of plan attainment, but over the past decade literally every sales organization I know has institutionalized beating salespeople unless they have 3x coverage.  What’s happened?  Today, everyone has 3x coverage. It just doesn’t mean anything anymore.  See this post for a long rant on this topic.
  1. Ill-defined metrics, which happen a lot in benchmarking where we try to compare, for example, our churn rate to an industry average. If you are going to make such comparisons, you must begin with clear definitions or else you are simply counting angels on pinheads.   See this post where I give an example where, off the same data, I can calculate a renewals rate of 69%, 80%, 100%, 103%, 120%, 208%, or 310%, depending on how you choose to calculate.  If you want to do a meaningful benchmark, you better be comparing the 80% to the 80%, not the 208%.
  1. Blind benchmarking. The strategic mistake that managers make in benchmarking is that they try to converge blindly to the industry average.  This reminds me of the Vonnegut short story where ballerinas have to wear sash-weights and the intelligentsia have music blasted into their ears in order to make everyone equal.  Benchmarks should be tools of understanding, not instruments of oppression.  In addition, remember that benchmarks definitionally blend industry participants with different strategies.  One company may invest heavily in R&D as part of a product-leadership strategy.  Another may invest heavily in S&M as part of a market-share-leadership strategy.  A third may invest heavily in supply chain optimization as part of a cost-leadership strategy.  Aspiring to the average of these companies is a recipe for failure, not success, as you will end up in a strategic No Man’s Land.  In my opinion, this is the most dangerous form of metrics slavery because it happens at the boardroom level, and often with little debate.
  1. Conflicting metrics. Let’s take a concrete example here.  Imagine you are running a SaaS business that’s in a turnaround.  This year bookings growth was flat.  Next year you want to grow bookings 100%.  In addition, you want to converge your P&L over time to an industry average of S&M expenses at 50% of revenues, whereas today you are running at 90%.  While that may sound reasonable, it’s actually a mathematical impossibility.  Why?  Because the company is changing trajectories and in a SaaS business revenues lag bookings by a year.  So next year revenue will be growing slowly [4], which means you would need to grow S&M even more slowly to meet the P&L convergence goal.  But to meet the 100% bookings growth goal, even with improving efficiency, you’ll need to increase S&M cost by, say, 70%.  It’s impossible.  #QED.  There will always be a tendency to split the difference in such scenarios, but that is a mistake.  The real question is which metric to anchor to, and in a SaaS business the answer is bookings.  Ergo, the correct answer is not to split the difference (which will put the bookings goal at risk) but to recognize that bookings is the better metric and anchor S&M expense to bookings growth.  This requires a deep understanding of the metrics you use and the courage to confront two conflicting rules of conventional wisdom in so doing.  (A simple numeric sketch of this conflict follows the list.)
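To see the arithmetic, here is a minimal numeric sketch in Python.  All figures are hypothetical, and the 15% efficiency-gain assumption is mine; the point is the shape of the math, not the exact numbers.

```python
# Minimal sketch of the bookings-vs-P&L conflict described above (hypothetical figures).

this_year_bookings = 100.0                        # flat vs. last year
this_year_revenue  = 100.0                        # SaaS revenue roughly lags bookings by a year
this_year_sm       = 0.90 * this_year_revenue     # S&M running at 90% of revenue

# Goal 1: double bookings next year, with somewhat better S&M efficiency (assumed 15% gain).
next_year_bookings        = 2.0 * this_year_bookings
sm_per_dollar_of_bookings = this_year_sm / this_year_bookings
sm_needed_for_bookings    = next_year_bookings * sm_per_dollar_of_bookings * 0.85

# Goal 2: converge S&M toward 50% of revenue, but revenue still follows this year's flat bookings.
next_year_revenue  = this_year_bookings * 1.05    # revenue grows slowly next year
sm_allowed_by_pnl  = 0.70 * next_year_revenue     # even a partial step toward 50% caps S&M here

print(f"S&M needed to double bookings:  {sm_needed_for_bookings:.0f}")   # ~153 (about +70%)
print(f"S&M allowed by P&L convergence: {sm_allowed_by_pnl:.0f}")        # ~74
# The two goals cannot both hold, so anchor S&M to bookings growth.
```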

In the end, metrics slavery, while all too common, is more about the people than the metrics.  Managers need to be challenged to understand metrics.  Managers need to be empowered to define new and better metrics.  Managers must be told to use their brains at all times and never do something simply to move a metric.

If you’re always thinking critically, you’ll never be a metrics slave.  The day you stop, you’ll become one.

# # #

[1] The way it works is simple:  if you’re on the ice when your team scores, you get +1.  If you’re on the ice when the opponent scores you get -1.  When you look at someone’s plus/minus rating over time, you can see, for example, which forwards hustle back on defense and which don’t.

[2] When, thanks to an opponent’s penalty, you have more players on the ice than they do.

[3] When, thanks to your team’s penalty, your opponent has more players on the ice than you do.

[4] Because bookings grew slowly this year.

Churn:  Net-First or Sum-First?

While I’ve already done a comprehensive post on the subject of churn in SaaS companies and some perils in how companies analyze it, in talking with fellow SaaS metrics lovers of late, I’ve discovered a new problem that isn’t addressed by my posts.

The question?   When calculating churn, should you sum first (adding up all the shrinkage ARR) or net first (netting shrinkage against expansion ARR for each customer and then summing)?  It seems like a simple question, but like so many subtleties in SaaS metrics, whether you net-first or sum-first, and how you report in so doing, can make a big difference in how you see the business through the numbers.

Let’s see an example.

[Table: churn example showing starting ARR, expansion ARR, and shrinkage ARR by customer]

So what’s our churn rate:  a healthy -1% or a scary 15%?  The answer is both.  In my other post, I define about 5 churn rates, and when you sum first you get my “net ARR churn” rate [1], which comes in at a rather disturbing 15%.  When, however, you net first you end up with a healthy -1% (“gross ARR churn”) rate because expansion ARR has more than offset shrinkage.  At my company we track both rates because each tells you a different story.

Thanks to the wonders of math, both the net-first and sum-first calculations take you to the same ending ARR number.  That’s not the problem.

The problem is that many companies report churn not in a format like my table above, but in something simpler that looks like the table below [2].

[Table: simplified net-first churn report by customer]

As you can see, this net-first format doesn’t show expansion and shrinkage by customer.  I think this is dangerous because it can obscure real problems when shrinkage ARR is offset, or more than offset, by expansion ARR.

For example, customer 2 looks great in the second chart (“wow, $20K in negative churn!”).  In the first chart, however, you can see customer 2 dropped 4 seats of product A and more than offset that by buying 8 seats of product B.  In fact, in the first chart, you can see that everyone is dropping product A and buying product B, a trend hidden in the second chart, which neither breaks out shrinkage from expansion nor provides a comment as to what’s going on.  My advice is simple:  do sum-first churn and report both the “net ARR” and “gross ARR” renewal rates and you’ll get the whole picture.
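To make the sum-first vs. net-first distinction concrete, here is a minimal Python sketch.  The per-customer figures are hypothetical (not the ones from my table), chosen so the two rates land on the 15% and -1% discussed above.

```python
# Sum-first vs. net-first churn on hypothetical per-customer data.

customers = [
    # (customer, starting ARR, shrinkage ARR, expansion ARR)
    ("Customer 1", 400_000, 50_000, 40_000),
    ("Customer 2", 300_000, 40_000, 80_000),   # drops product A, buys more of product B
    ("Customer 3", 300_000, 60_000, 40_000),
]

starting_arr = sum(c[1] for c in customers)
total_shrink = sum(c[2] for c in customers)
total_expand = sum(c[3] for c in customers)

sum_first_churn = total_shrink / starting_arr                    # "net ARR churn" in the post's terms
net_first_churn = (total_shrink - total_expand) / starting_arr   # "gross ARR churn": netted before summing

print(f"Sum-first churn: {sum_first_churn:.0%}")    # 15% -- exposes the product A problem
print(f"Net-first churn: {net_first_churn:.0%}")    # -1% -- expansion hides the shrinkage
```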

Aside 1:  The Reclaimed ARR Issue
This debate prompted a second one with my Customers For Life (CFL) team who wanted to introduce a new metric called “reclaimed ARR,” the ARR that would have been lost on renewal but was saved by CFL through cross-sells, up-sells, and price increases.  Thus far, I’m not in love with the concept as it adds complexity, but I understand why they like it and you can see how I’d calculate it below.

[Table: reclaimed ARR calculation by customer]

Aside 2:  Saved ARR
The first aside was prompted by the fact that CFL/renewals teams primarily play defense, not offense.  Like goalies on a hockey team, they get measured by a negative metric (i.e., the churn ARR that got away).  Even when they deliver offsetting expansion ARR, there is still some ARR that gets away, and a lot of their work (in the customer support and customer success parts of CFL) is not about offsetting upsell, it’s about protecting the core of the renewal.  For that reason, so as to reflect that important work in our metrics, we’ve taken a lesson from baseball and the notion of a “save.”  Once the renewals come in, we add up all the ARR that came from customers who were, at any point in time since their last renewal, in our escalated accounts program and call that Saved ARR.  It’s the best metric we’ve found thus far to reflect that important work.
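For clarity, here is a minimal sketch of how Saved ARR could be tallied; the customers and field names below are hypothetical.

```python
# Saved ARR: sum the renewed ARR from customers who were, at any point since their last
# renewal, in the escalated-accounts program. Data and field names are hypothetical.

renewals = [
    {"customer": "A", "renewed_arr": 120_000, "was_escalated_since_last_renewal": True},
    {"customer": "B", "renewed_arr":  80_000, "was_escalated_since_last_renewal": False},
    {"customer": "C", "renewed_arr": 200_000, "was_escalated_since_last_renewal": True},
]

saved_arr = sum(r["renewed_arr"] for r in renewals if r["was_escalated_since_last_renewal"])
print(f"Saved ARR: ${saved_arr:,}")   # $320,000 credited to the CFL team's defensive work
```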

# # #

[1] I have backed into the rather unfortunate position of using the word “net” in two different ways.  When I say “net ARR churn” I mean churn ARR net of (i.e., exclusive of) expansion ARR.  When I say net-first churn, I mean netting out shrinkage against expansion first, before summing across customers to get total churn.

[2] Note that I properly inverted the sign because negative churn is good and positive churn is bad.

Average Contract Duration and SaaS Renewals: All Is Not As It Appears

Chatting with some SaaS buddies the other day, we ran into a fun — and fairly subtle — SaaS metrics question.  It went something like this:

VP of Customer Success:  “Our average contract duration (ACD) on renewals was 1.5 years last quarter and –“

VP of Sales:  “– Wait a minute, our ACD on new business is 2.0 years.  If customers are renewing for shorter terms than those of the initial sale, it  means they are less confident about future usage at renewals time than they are at the initial purchase. Holy Moly, that means we have a major problem with the product or with our customer success program.”

Or do we?  At first blush, the argument makes perfect sense.  If new customers sign two-year contracts and renewing ones sign 1.5-year contracts, it would seem to indicate that renewing customers are indeed less bullish on future usage than existing ones.  Having drawn that conclusion, you are instantly tempted to blame the product, the customer success team, technical support, or some other factor for the customers’ confidence reduction.

But is there a confidence reduction?  What does it actually mean when your renewals ACD is less than your new business ACD?

The short answer is no.  We’re seeing what I call the “why are there so many frequent flyers on airplanes” effect.  At first blush, you’d think that if ultra-frequent flyers (e.g., United 1K) represent the top 1%, then a 300-person flight might have three or four on board, while in reality it’s more like 20-30.  And that’s the point: frequent flyers are over-represented on airplanes because they fly more, just as one-year contracts are over-represented in renewals because they renew more often.

Let’s look at an example.  We have a company that signs one-year, two-year, and three-year deals.  Let’s assume customers renew for the same duration as their initial contract — so there is no actual confidence reduction in play.  Every deal is $100K in annual recurring revenue (ARR).  We’ll calculate ACD on an ARR-weighted basis.  Let’s assume zero churn.

If we sign five one-year, ten two-year, and fifteen three-year deals, we end up with $3M in new ARR and an ACD of 2.3 years.

[Table: new business and renewals ACD by year]

In year 1, only the one-year deals come up for renewal, and since we’ve assumed everyone renews for the same length as their initial term, the renewals ACD is one year.  The VP of Sales is probably panicking — “OMG, customers have cut their ACD from 2.3 to 1.0 years!  Who’s to blame?  What’s gone wrong?!”

Nothing.  Only the one-year contracts had a shot at renewing and they all renewed for one year.

In year 2, both the (re-renewing) one-year and the (initially renewing) two-year contracts come up for renewal.  The ACD is 1.7 — again lower than the 2.3-year new business ACD.  While, again, the decrease in ACD might lead you to suspect a problem, there is nothing wrong.  It’s just math: the shorter-duration contracts renew more often, which pulls down the renewals ACD.
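Here is the example above expressed as a short Python sketch, under the same assumptions (every deal is $100K of ARR, everyone renews for the original term, zero churn).

```python
# ARR-weighted average contract duration (ACD) for the example above.

def acd(deals):
    """ARR-weighted average contract duration; deals = list of (arr, duration_years)."""
    total_arr = sum(arr for arr, _ in deals)
    return sum(arr * dur for arr, dur in deals) / total_arr

new_business = [(100_000, 1)] * 5 + [(100_000, 2)] * 10 + [(100_000, 3)] * 15
print(f"New business ACD: {acd(new_business):.1f} years")       # 2.3

year_1_renewals = [(100_000, 1)] * 5                            # only 1Y deals come up
print(f"Year 1 renewals ACD: {acd(year_1_renewals):.1f} years") # 1.0

year_2_renewals = [(100_000, 1)] * 5 + [(100_000, 2)] * 10      # 1Y deals re-renew, 2Y deals renew
print(f"Year 2 renewals ACD: {acd(year_2_renewals):.1f} years") # 1.7
```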

What To Do About This?
First, understand it.  As with many SaaS metrics, it’s counter-intuitive.

As I’ve mentioned before, SaaS metrics and unit economics are often misunderstood.  While I remain a huge fan of using them to run the business, I strongly recommend taking the time to develop a deep understanding of them.  In addition, the more I see counter-intuitive examples, the more I believe in building full three- to five-year financial models of SaaS businesses in order to correctly see the complex interplay among drivers.

For example, if a company does one-year, two-year, and three-year deals, a good financial model should have drivers for both new business contract duration (i.e., percent of 1Y, 2Y, and 3Y deals) and a renewals duration matrix that has renewals rates for all nine combinations of {1Y, 2Y, 3Y} x {1Y, 2Y, 3Y} deals (e.g., a 3Y to 1Y renewal rate).  This will produce an overall renewals rate and an overall ACD for renewals.  (In a really good model, both the new business breakdown and the renewals matrix should vary by year.)
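As an illustration of that driver structure, here is a minimal sketch with a hypothetical 3x3 renewals matrix; the rates are made up and exist only to show the mechanics.

```python
# A minimal renewals-matrix driver. renewal_matrix[from_duration][to_duration] is the
# (hypothetical) ARR-weighted rate at which ARR on one duration renews onto another.

renewal_matrix = {
    1: {1: 0.70, 2: 0.10, 3: 0.05},
    2: {1: 0.10, 2: 0.65, 3: 0.10},
    3: {1: 0.05, 2: 0.10, 3: 0.70},
}

def renewals_summary(arr_up_for_renewal):
    """arr_up_for_renewal maps original duration (years) -> ARR up for renewal this year."""
    renewed_by_duration = {d: 0.0 for d in (1, 2, 3)}
    for frm, arr in arr_up_for_renewal.items():
        for to, rate in renewal_matrix[frm].items():
            renewed_by_duration[to] += arr * rate
    total_renewed = sum(renewed_by_duration.values())
    overall_rate = total_renewed / sum(arr_up_for_renewal.values())
    renewals_acd = sum(d * arr for d, arr in renewed_by_duration.items()) / total_renewed
    return overall_rate, renewals_acd

rate, renewals_acd = renewals_summary({1: 500_000, 2: 1_000_000, 3: 1_500_000})
print(f"Overall renewals rate: {rate:.0%}, renewals ACD: {renewals_acd:.1f} years")
```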

Armed with that model, built with assumptions based on both history and future goals for the new business breakdown and the renewals matrix, you can then have meaningful conversations about how ACD is varying on new and renewals business relative to plan.  Without that, by just looking at one number and not understanding how it’s produced, you run the very real risk of reacting to a math effect and setting off a false alarm on renewals.

The Ultimate SaaS Metric: The Customer Lifetime Value to Customer Acquisition Cost Ratio (LTV/CAC)

I’m a big fan of software-as-a-service (SaaS) metrics.  I’ve authored very deep posts on SaaS renewals rates and customer acquisition costs, and I routinely point readers to other great posts on the topic.

But in today’s post, I’m going to examine the question:  of the literally scores of SaaS metrics out there, if you could only pick one single metric, which one would it be?

Let’s consider some candidates:

  • Revenue is bad because it’s a lagging indicator in a SaaS business.
  • Bookings is good because it’s a leading indicator of both revenue and cash, but tells you nothing about the existing customer base.
  • ARR (annual recurring revenue) is good because it’s a leading indicator of revenue and includes the effects of both new sales and customer churn.  However, there are two ways to have slow ending ARR growth:  high sales and high churn or low sales and low churn — and they are very different.
  • Cashflow is good because it tends to net-out a lot of other effects, but can be misleading unless you understand the structure of a company’s bookings mix and payment terms.
  • Gross margin (GM) is nice because it gives you an indicator of how efficiently the service is run, but unfortunately tells you nothing else.
  • The churn rate is good because it helps you value the existing customer annuity, but tells you nothing about new sales.
  • Customer acquisition cost (CAC) is a great measure of sales and marketing efficiency, but by itself is not terribly meaningful because you don’t know what you’re buying:  are you paying, for example, $12K in sales and marketing (S&M) expense for a $1K/month customer who will renew for 3 months or 120?  There’s a big difference between the two.
  • Lifetime value (LTV) is a good measure of the annuity value of your customer base, but says nothing about new sales.

Before revealing my single best-choice metric, let me make what might be an unfashionable and counter-intuitive statement.  While I love SaaS “unit economics” as much as anybody, to me there is nothing better than a realistic, four-statement, three-year financial model that factors everything into the mix.  I say this not only because my company makes tools to create such models, but more importantly because unit economics can be misleading in a complicated world of varying contract duration (e.g., 1 to 3+ years), payment terms (e.g., quarterly, annual, prepaid, non-prepaid), long sales cycles (typical CAC calculations assume prior-quarter S&M drives current-quarter sales), and renewals which may differ from the original contract in both duration and terms.

Remember that SaaS unit economics were born in an era of monthly recurring revenue (MRR), so the more your business runs monthly, the better those metrics work — and conversely.  For example, consider two companies:

  • Company A does month-to-month contracts charging $100/month and has a CAC ratio of 1.0.
  • Company B does annual contracts, does three-year prepaid deals, and has a CAC ratio of 2.0.

If both companies have 80% subscription gross margins (GM), then the CAC payback period is 15 months for company A and 30 months for company B.  (CAC payback period is months of subscription gross margin to recover CAC.)

This implies company B is much riskier than company A because company B’s payback period is twice as long and company B’s money is at risk for a full 30 months until it recovers payback.

But it’s completely wrong.  Note that because company B does pre-paid deals its actual, cash payback period is not 30 months, but 1 day.  Despite ostensibly having half the CAC payback period, company A is far riskier because it has to wait 15 months until recovering its S&M investment and each month presents an opportunity for non-renewal.  (Or, as I like to say, “is exposed to the churn rate.”)  Thus, while company B will recoup its S&M investment (and then some) every time, company A will only recoup it some percentage of the time as a function of its monthly churn rate.
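As a sanity check on those numbers, here is the arithmetic in a few lines of Python, using the 80% gross margin and the 1.0 and 2.0 CAC ratios from the example above.

```python
# CAC payback: months of subscription gross margin needed to recover the CAC ratio.

def cac_payback_months(cac_ratio, gross_margin):
    return 12 * cac_ratio / gross_margin

gm = 0.80
print(f"Company A (CAC 1.0): {cac_payback_months(1.0, gm):.0f} months")  # 15
print(f"Company B (CAC 2.0): {cac_payback_months(2.0, gm):.0f} months")  # 30

# But company B does three-year prepaid deals, so its *cash* comes in on day one,
# while company A must survive 15 months of monthly renewals before recouping S&M.
```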

Now this is not to say that three-year prepaid deals are a panacea and that everyone should do them.  From the vendor perspective, they are good for year 1 cashflow, but bad in years 2 and 3.  From the customer perspective, three-year deals make plenty of sense for “high consideration” purchases (where once you have completed your evaluation, you are pretty sure of your selection), but make almost no sense in try-and-buy scenarios.  So the point is not “long live the three-year deal,” but instead “examine unit economics, but do so with an awareness of both their origins and limitations.”

This is why I think nothing tells the story better than a full four-statement, three-year financial model.  Now I’m sure there are plenty of badly-built over-optimistic models out there.  But don’t throw the baby out with the bathwater.   It is just not that hard to model:

  • The mix of the different types of deals your company does by duration and prepayment terms — and how that changes over time.
  • The existing renewals base and the matrix of deals of one duration that renew as another.
  • The cashflow ramifications of prepaid and non-prepaid multi-year contracts.
  • The impact on ARR and cashflow of churn rates and renewals bookings.
  • The impact of upsell to the existing customer base.

Now that I’ve disclaimed all that, let’s answer the central question posed by this post:  if you could know just one SaaS metric, which would it be?

The LTV/CAC ratio.

Why?  Because what you pay for something should be a function of what it’s worth.

Some people say, for example, that a CAC of 2.0 is bad.  Well, if you’re selling a month-to-month product where most customers discontinue by month 9, then a CAC of 2.0 is horrific.  However, if you’re selling sticky enterprise infrastructure, replacing systems that have been in place for a decade with applications that might well be in place for another decade, then a CAC of 2.0 is probably fine.  That’s the point:  there is no absolute right or wrong answer to what a company should be willing to pay for a customer.  What you are willing to pay for a customer should be a function of what they are worth.

The CAC ratio captures the cost of acquiring customers.  In plain English, the CAC ratio is the multiple you are willing to pay for $1 of annual recurring revenue (ARR).  With a CAC ratio of 1.5, you are paying $1.50 for $1 of ARR, implying an 18-month payback period on a revenue basis and 18 months divided by subscription GM on a gross-margin basis.

Lifetime value (LTV) attempts to calculate what a customer is worth and is typically calculated using gross margin (the profit from a customer after paying the cost of operating the service) as opposed to simply revenue.  LTV is calculated first by inverting the annual churn rate (to get the average customer lifetime in years) and then multiplying by subscription-GM.

For example, with a churn rate of 10%, subscription GM of 75%, and a CAC ratio of 1.5, the LTV/CAC ratio is (1/10%) * 0.75 / 1.5 = 5.0.

The general rule of thumb is that LTV/CAC should be 3.0 or higher, with of course, the higher the better.
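Putting the pieces together, here is that example calculation as a small Python sketch.

```python
# The LTV/CAC arithmetic from the example above.

churn_rate = 0.10      # annual ARR churn
gross_margin = 0.75    # subscription gross margin
cac_ratio = 1.5        # S&M spent per $1 of new ARR

lifetime_years = 1 / churn_rate                     # 10 years
ltv_per_arr_dollar = lifetime_years * gross_margin  # $7.50 of gross margin per $1 of ARR
ltv_to_cac = ltv_per_arr_dollar / cac_ratio

print(f"LTV/CAC = {ltv_to_cac:.1f}")   # 5.0 -- comfortably above the 3.0 rule of thumb
```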

There are three limitations I am aware of in working with LTV/CAC as a metric.

  • Churn rate.  Picking the right churn rate isn’t easy and is made complicated in the presence of a mix of single- and multi-year deals.  All in, I think simple churn is the best rate to use as it reflects the “auto-renewal” of multi-year deals as well as the very real negative churn generated by upsell.
  • Statistics and distributions.  I’m not a hardcore stats geek, but I secretly worry that many different distributions can produce an average of 10%, and thus inverting a 10% churn rate to produce an average 10-year customer lifetime scares me a bit.  It’s the standard way to do things, but I do worry late at night that averages can be misleading.
  • Light from a distant star.  Remember that today’s churn rate is a function of yesterday’s deals.  The more you change who you sell to and how, the less reflective yesterday’s churn is of tomorrow’s.  It’s like light arriving from a star that’s three light-years away:  what you see today happened three years ago.  To the extent that LTV is a forward-looking metric, beware that it’s based on churn, which is backward-looking.  In a perfect world, you’d use predicted churn in an LTV calculation but since calculating that would be difficult and controversial, we take the next best thing:  past churn.  But remember that the future doesn’t always look like the past.

 

You Can’t Analyze Churn by Analyzing Churn

One thing that amazes me is when I hear people talk about how they analyze churn in a cloud, software as a service (SaaS), or other recurring revenue business.

You hear things like:

  • “17% of our churn comes from emerging small business (ESB) segment, which is normal because small businesses are inherently unstable.”
  • “22% of our churn comes from companies in the $1B+ revenue range, indicating that we may have a problem meeting enterprise needs.”
  • “40% of the customers in the residential mortgage business churned, indicating there is something wrong with our product for that vertical.”

There are three fallacies at work here.

The first is assumed causes.  If you know that 17% of your churn comes from the ESB segment, you know one and only one thing:  that 17% of your churn comes from the ESB segment.  Asserting small business instability as the cause is pure speculation.  Maybe they did go out of business or get bought.  Or maybe they didn’t like your product.  Or maybe they did like your product, but decided it was overkill for their needs.  If you want to know how much of your churn came from a given segment, ask a finance person.  If you want to know why a customer churned, ask them.  Companies with relatively small customer bases can do it by phone.  Companies with big bases can use an online survey.  It’s not hard.  Use metrics to figure out where your churn comes from.  Use surveys to figure out why.

The second is not looking at propensities and the broader customer base. If I said that 22% of your annual recurring revenue (ARR) comes from $1B+ companies, then you shouldn’t be surprised that 22% of your churn comes from them as well.  If I said that 50% of your ARR comes from $1B+ companies (and they were your core target market), then you’d be thrilled that only 22% of your churn comes from them.  The point isn’t how much of your churn comes from a given segment:  it’s how much of your churn comes from a given segment relative to how much of your overall business comes from that segment.  Put differently, what is the propensity to churn in one segment versus another?

And you can’t perform that analysis without getting a full data set — of both customers who did churn and customers who didn’t.  That’s why I say you can’t analyze churn by analyzing churn.  Too many people, when tasked with churn analysis, say, “Quick, get me a list of all the customers who churned in the past 6 months and we’ll look for patterns.”  At that instant you are doomed.  All you can do is decompose churn into buckets; you learn nothing about propensities.

For example, if you noticed that in one country a stunning 99% of churn came from customers with blue eyes, you might be prompted to launch an immediate inquiry into how your product UI somehow fails for blue-eyed customers.  Unless, of course, the country was Estonia where 99% of the population has blue eyes, and ergo 99% of your customers do.  Bucketing churn buys you nothing without knowing propensities.
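Here is a minimal sketch of the propensity view with hypothetical segment data; a segment's share of churn only means something next to its share of the base.

```python
# Compare each segment's share of churn to its share of the overall base.
# All figures are hypothetical.

segments = {
    # segment: (total ARR, churned ARR)
    "ESB":        (2_000_000, 340_000),
    "Mid-market": (5_000_000, 500_000),
    "$1B+":       (3_000_000, 160_000),
}

total_arr   = sum(arr for arr, _ in segments.values())
total_churn = sum(churn for _, churn in segments.values())

for name, (arr, churn) in segments.items():
    share_of_base  = arr / total_arr
    share_of_churn = churn / total_churn
    propensity     = churn / arr            # the number that actually matters
    print(f"{name:>10}: {share_of_churn:.0%} of churn, {share_of_base:.0%} of base, "
          f"churn rate {propensity:.0%}")
```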

The last is correlation vs. causation.  Knowing that a large percentage of customers in the residential mortgage segment churned (or even have higher propensity to churn) doesn’t tell you why they are churning.  Perhaps your product does lack functionality that is important in that segment.  Or perhaps it’s 2008, the real estate crisis is in full bloom, and those customers aren’t buying anything from anybody.  The root cause is the mortgage crisis, not your product.   Yes, there is a high correlation between customers in that vertical and their churn rate.  But the cause isn’t a poor product fit for that vertical, it’s that the vertical itself is imploding.

A better, and more fun, example comes from The Halo Effect, which tells the story that a famous statistician once showed a precise correlation between the increase in the number of Baptist preachers and the increase in arrests for public drunkenness during the 19th Century.  Do we assume that one caused the other?  No.  In fact, the underlying driver was the general increase in the population — with which both were correlated.

So, remember these two things before starting your next churn analysis:

  • If you want to know why someone churned, ask them.
  • If you want to analyze churn, don’t just look at who churned — compare who churned to who didn’t.

The Box S-1, Delayed IPO, and the Genius of Tien Tzuo

While I did my own post on the Box S-1, I also noticed that fellow CEO blogger, Tien Tzuo of Zuora, had done a post of his own with the catchy title These Numbers Show That Box CEO Aaron Levie is a Genius.  I saw the post, clipped it to Evernote, and I decided to read it on my next flight.

That trip was a few days ago and at 35,000 feet I decided that Tien Tzuo was also a genius.  Not because he did a nice post on Box, but because he is devising a new accounting for SaaS companies which reflects them more accurately than current GAAP, and, rather amazingly, I’m guessing he came up with this more than 5 years ago.

You see, being a natural cynic, I had tended to dismiss Zuora’s “subscription economy” mantra as part Silicon Valley narcissism (lots of businesses have been selling subscriptions for a long time —  just because it’s new to us doesn’t mean it’s new to the world) and part marketing pitch.  In hindsight, I think I dismissed it too quickly.

While I’d seen one of Tien’s presentations, the concepts didn’t resonate with me until I read his post on the Box S-1.

I’ve always believed two things about SaaS companies and GAAP:

  •  GAAP P&Ls are not particularly reflective of the state of a SaaS business.  (Because expenses are taken now, but revenue is amortized going forward.)
  • The faster a SaaS company is growing, the less reflective the GAAP P&L is.

Box provides an extreme example of the second point, so it’s a good one to study.

However, with the exception of the CAC ratio, I’d defaulted to using other existing metrics that I thought captured things better, such as bookings and cashflow.  What I’d never tried to do was invent a new set of metrics that actually capture a SaaS business better – and that’s exactly what Tien has done.

Here are Tien’s core SaaS metrics:

  •  ARR (annual recurring revenue).  Everybody uses this one.  Tien however makes the clever and basic observation that current quarter subscription revenues * 4 is a good proxy for starting-quarter ARR.
  •  Gross recurring margin (GRM).  ARR minus annualized COGS, expressed as a percentage of ARR.  Tien argues this is the true gross margin on the business, and is equivalent to the steady-state gross margin if the business shut down all sales and marketing and stopped growing.  By Tien’s math, Box has a GRM of 79%, Workday 83%, ServiceNow 78%, and Salesforce 85%.
  • Recurring revenue margin (RRM).  ARR minus annualized (COGS + R&D + G&A), expressed as a percentage of ARR.  Tien argues this is the margin on the recurring part of the business, including the recurring costs of delivering the service, enhancing it (as SaaS customers expect), and operating the business.  It notably excludes S&M, which is seen as a discretionary expense driven by how fast you want to grow.  By Tien’s math, Box has an RRM of 20%, Workday 28%, ServiceNow 40%, and Salesforce 57%.  (A small sketch of the GRM and RRM calculations follows this list.)
  •  Customer acquisition cost (CAC) ratio.   I’ve covered this ratio extensively already, so I won’t redefine it.  I will note that Tien calculates Box’s CAC at around 2.0, which is higher than my estimate of 1.6.  However, we define CAC slightly differently (mine is based on new ARR, his on net new ARR) so I would expect mine to be lower since it’s not offset by churn.
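To make the GRM and RRM definitions concrete, here is a small sketch with made-up annualized figures (not Box's actuals; they merely land near the percentages cited above).

```python
# Tien's margin metrics as I read them (annualized, hypothetical dollars).

arr  = 200_000_000
cogs =  42_000_000      # annualized cost of running the service
rnd  =  80_000_000      # annualized R&D
gna  =  38_000_000      # annualized G&A

grm = (arr - cogs) / arr                     # gross recurring margin
rrm = (arr - (cogs + rnd + gna)) / arr       # recurring revenue margin (excludes S&M)

print(f"GRM: {grm:.0%}, RRM: {rrm:.0%}")     # 79% and 20% on these made-up numbers
```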

And when you look at Tien’s metrics, Box looks pretty good.

If Tien’s Right, Why has the Box IPO Been Delayed?

Because Wall Street doesn’t care right now.  I think there are a number of reasons for that:

  • The general shellacking that SaaS stocks have taken in the past few months.  Many are off around 50%.
  • The unsustainable cash burn.  You might think it’s easy to back off growth, but it’s not. Growing fast means hiring like crazy and hiring like crazy adds the annualized cost of the new staff to your run rate.  Last I checked, Box was burning $20M+ per quarter and unless cash comes from somewhere that hiring party will end abruptly and unpleasantly — in the short-term at least.
  • Lifetime value concerns.  Tien’s math is silently predicated on a 100% renewal rate, and thus a high customer lifetime value (LTV).

Let’s look at this in more detail.

Tien’s metrics assume that if you have $150M in ARR and you turn off sales and marketing, you stay at $150M forever.  That’s not true.  You actually enter a decay curve where you shrink by your churn rate each year.

Upsell and price increases can more than offset churn resulting in the hallowed negative churn rate, in which case you would actually grow every year, even without sales and marketing.  This appears to be the case at Box which claims a 123% net customer expansion rate.
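Here is a quick sketch of the difference between those two trajectories, using the $150M figure above, a hypothetical 10% churn rate for the decay case, and Box's claimed 123% net expansion rate for the other.

```python
# ARR trajectories with sales and marketing turned off: decay by churn vs. net expansion.

arr = 150_000_000

for label, multiplier in [("10% annual churn, no expansion", 0.90),
                          ("123% net expansion rate",        1.23)]:
    trajectory = ", ".join(f"${arr * multiplier ** year / 1e6:.0f}M" for year in range(4))
    print(f"{label}: {trajectory}")

# 10% annual churn, no expansion: $150M, $135M, $122M, $109M  -- a decay curve
# 123% net expansion rate:        $150M, $184M, $227M, $279M  -- growth with zero new logos
```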

So if the future looks like the past, things look pretty good for most SaaS companies and for Box in particular.  But what driver underlies that assumption?

Switching costs:  the cost of switching from offering A to offering B.  High switching costs ensure a high renewal rate regardless of whether you are delighting customers.  (Think of all those folks who write big maintenance checks to SAP or Oracle; they’re usually not “delighted” in my experience.)

And low switching costs, in my opinion, are Box’s potential Achilles’ Heel.  As a customer, and a happy one, I intend to renew for a while.  But if something better came along, well, it’s just not that hard to switch.

Put differently, Box’s file sharing isn’t that “sticky” — compared to a CRM or ERP system (and all the work you do to configure it, write reports, et cetera).

Put differently once more, what Box sells is much more of a commodity than other enterprise software offerings.

That said, the issue isn’t clear-cut, in my opinion:

  • Switching costs can take subtle forms.  You can argue that part of Amazon’s success has been due to the switching costs associated with account setup.  It’s not a huge cost, per se, but seemingly enough to cause me to just buy off Amazon instead of using Google Shopping or another price comparison engine.  Electronic wallets were supposed to fix this, but they didn’t.
  • Brand/trust.  Switching costs can also include what you lose in brand/trust by moving off an existing known supplier.  Box will certainly try to argue that leadership, trust, and brand are a big part of their value, and a cost to those who move away from them.
  • Entry barriers.  Box and Dropbox have both raised huge amounts of money and will work hard to create barriers to entry.  Switching costs to new entrants are only relevant to the extent there actually are new entrants.  The fundraising Box and Dropbox have done has basically scared, for the time being, everybody else out of the category.

So is Box theoretically very sticky?  In my opinion, no.

Might Box end up sticky in practice?  Quite possibly yes.

In which case Tien is right, and he’s a genius.  Which in turn makes Aaron Levie one, too.

Woe is Media: Lessons from Tidemark’s PR

[Major revision 5/11/14 5:10 PM]

  • “All media exist to invest our lives with artificial perceptions and arbitrary values.”  — Marshall McLuhan, philosopher of communications theory and coiner of the phrase “the medium is the message.”
  • “Modern business must have its finger continuously on the public pulse. It must understand the changes in the public mind and be prepared to interpret itself fairly and eloquently to changing opinion.”  — Edward Bernays, widely known as the Father of Public Relations and author of Propaganda [1].
  • “No one ever went broke underestimating the taste of the American public.”  — H.L. Mencken
  • “Don’t hate the media, become the media.”  — Jello Biafra, spoken word artist, producer, and formerly lead singer of the Dead Kennedys.

In this post, I’ll take some inspiration from Jello Biafra, “become the media,” and do some analysis of Tidemark’s most recent PR hit, a story in Business Insider entitled This Guy Arrived in the US with $26, Sold a Startup for Half a Billion, and is Working on Another Cool Company.  Since Host Analytics competes with Tidemark, see the footer for a disclaimer [2].

I’m doing this mostly because I’m tired of seeing stories like this one, where it’s my perception that a publication takes a story wholesale, spin and all, from a skilled PR firm and sends it down the line, unchallenged, to us readers.  I’m going to challenge the story, piece by piece, and try not to throw too many competitive jabs in the process.

Let’s start by analyzing the headline.

“$26”

While this may be true, it strikes me as exactly the kind of specific that PR people know journalists love and a number that actually sounds better than, say, $30 or $25.  Perhaps CG (see footnote [3]) actually had $26 exactly in his pocket on arrival, but did he really have no other resources whatsoever on which to rely?  Let us beware that it is not only the specificity of the $26 that makes the claim interesting, but also — and more importantly — the implication that he had nothing or no one else on which to rely.  Arriving with $26, not knowing the language, and having no friends/relatives is certainly much tougher than showing up with $26, a brother in Brooklyn, and $2,000 in the bank.  Which was the case?  I don’t know.  Given the overall quality of the story, and the author’s general susceptibility to spin (which we will show), I’d certainly wonder.

“Sold a startup for Half a Billion.”

To me, this clearly implies that CG was either:

  • Founder/CEO of a startup that sold for half a billion dollars, or
  • CEO of a startup that sold for half a billion dollars (while he was CEO)

He was neither.

CG was not a founder of OutlookSoft, nor was he ever CEO. He was CTO.  CTOs don’t sell startups; CEOs do.  Phil Wilmington was OutlookSoft’s CEO.

CG had founded a company called Tian Software which, per CG’s own LinkedIn profile, was acquired (not “merged” as the story later says) by OutlookSoft in 2005.

Now let’s challenge the half-a-billion.

My sources say SAP acquired OutlookSoft for $350M plus a $50M earn-out, making the deal worth $400M — not $500M.  This is sort of confirmed in another Tidemark PR marvel, here, which says “short of $500M,” a very nicely PR-packaged way of saying $400M.  A few phone calls to SAP alums and deal-makers in the valley might well have confirmed the lower price.

Net/net:  we have blown the headline to bits.  The $26 claim is suspect (if quite possibly true) while the very impressive “sold a startup for half a billion” is simply false.  It wasn’t half a billion.  It wasn’t his startup.  He didn’t sell it.  QED.

I know that neither CG nor Tidemark wrote this headline.  Someone at Business Insider did — and quite possibly not the journalist who wrote the article.

So perhaps we’re just caught up in headline sensationalism.  The Horatio Alger message still sells well in America and the SEO people at Business Insider know it — the URL for the story is:    www.businessinsider.com/christian-gheoghre-rags-to-riches-story.

Before digging into the story itself, we should observe that this is basically the same story as this one that ran on CNET over a year ago:  Escaping the Iron Curtain for Silicon Valley.  This raises a question that is difficult for me to answer.  It’s a cool story, no doubt, but the tech blogs are news blogs and old stories aren’t news.  So why even write the same story that CNET did 15 months earlier?  Is it possible they didn’t even fact check enough to know?

Let’s dig into some of the lines from the story.

“Today’s he working on his fourth successful startup, having sold all of his previous ones, including his third one, OutlookSoft, to SAP for $500M.”

I count two:  Tian Software and Tidemark.

The story itself contradicts the idea that Saxe Marketing “was CG’s” in saying, “[Andrew] Saxe hired CG” — i.e., if CG was “hired” he was not a founder and ergo the company was not “his.”   The name of company itself — Saxe Marketing, as opposed to Saxe & CG Marketing — additionally reinforces that.

As discussed above, you can’t call OutlookSoft “his,” nor can you say he sold it.

If we said, “CG spent 10 years toiling on two startups, one that got sold to Experian for $32M and one that was acquired by a private company at an undisclosed valuation” — would it have the same impact?  Methinks not.

“Taught himself English by listening to Pink Floyd.”  

I have no doubt that CG listened to Pink Floyd in his home country and that he learned (probably quite strange) words from so doing.  From my experience with second-language songs, it’s actually quite difficult to learn words and much easier to learn pronunciation.  Many of my French friends can sing English songs, but only in a phonetic way.

So, to me, this rings partially true but it also rings as something a PR person would grab onto faster than swimming across the border.  “Wait, you learned English listening to Pink Floyd.  Oh!  We’ve got to use that.”

So, to have some fun with this one, let me imagine the conversation he had with the immigration officer on arriving at JFK:

INS:  “So why are you entering America?”

CG:  “We don’t need no education.”

INS:  “So you’re not on a student visa?”

CG:  “We’re just two lost souls swimming in a fish bowl, year after year.”

INS:  “So you’re coming to get married, then?”

CG:  “You raise the blade, you make the change, you re-arrange me ’till I’m sane.”

INS:  “Ah, a medical visa, excellent.”

This spin-taking was harmless.

“He taught himself to code by hacking into video games on [a Commodore 64] machine.”  

Frankly, I’m not sure you could “hack into” video games on a Commodore 64, but I guess that sounds better than saying “wrote BASIC programs on a Commodore 64” like the rest of us.  If I had to guess, you probably got the source code since BASIC wasn’t a compiled language so there was no “hacking” to get in.  You were in if you wanted to be.

The CNET story somewhat contradicts this account saying CG “played games on the C64” but he later bought a “Sinclair ZX and taught himself some programming.”

Details, yes, but somehow programming a C64 or ZX isn’t good enough for the narrative:  he had to “hack into” them.  All part of the journalist embellishing the (probably already embellished) details in order to make CG larger than life and get a lot of hits on the story.

“[He got] a masters [sic] degree in Romania in mechanical engineering with a minor in computer science. But the degree wasn’t recognized and accepted once he got here.”  

If there were ever a field in which people care about what you can do as opposed to your degree, it’s programming.

Recognized (by whom?) or not, CG was not a limo driver who knew nothing about programming and miraculously started a software company.  He had a master’s degree in engineering and computer science.

“Immigrant with master’s in computer science founds software company” would probably describe about half of all Silicon Valley companies.

Business Insider insists on the Man Bites Dog approach of “Limo Driver Founds Software Company” to the point of explaining away the master’s degree because it interferes with the narrative.

“He launched a second startup, TIAN, and merged it with a company called OutlookSoft.”

Tian was not “merged” with OutlookSoft; it was acquired by them, per CG’s own LinkedIn.  Why the spin?

“OutlookSoft did a form of big data known as business analytics.”

There was nothing whatsoever “big data” about OutlookSoft, which was a business performance management company that did planning, budgeting, consolidation, and analytics.  Gratuitous buzzword inclusion, and nothing more.  Presumably inserted by the PR firm and swallowed whole by the journalist.

“Tidemark also does business analytics/big data, but it’s designed for the modern age: it works on a tablet and runs in the cloud.”  

The Holy Grail of PR these days is social, mobile, cloud.  This sentence scores a 2 out of 3.  For what it’s worth, I actually think this is part of their strategy, so in this case it’s not buzz-wordy journalism, it’s the clear communication of a buzz-wordy strategy.

“More importantly, it is designed to be what CG calls a ‘revolution at the edge’ with a ‘Siri-like interface.'”  

Revolution at the edge is both buzz-wordy and meaningless.  Siri is definitionally not revolutionary because it was launched 4 years ago in 2010 and based upon natural language and speech recognition technology that was more than a decade old.  What was revolutionary about Siri was its inclusion in a mass-market, consumer product.

I’d say a Siri-like interface for BI has been discussed since Natural Language Inc. (NLI) was acquired by Microsoft in the late 1980s.  If nobody’s noticed, it hasn’t worked.  Turns out human language is not precise enough to map directly to a database query — even with a semantic layer.  But, hey, let’s go pitch the idea because it sounds cool; the journalist probably has no idea of the history and doesn’t realize that no CFO wants to say “Hey Tiri, I want to hire 3 people next quarter and increase average salaries 3.5%.”

“It’s like Google mixed with Wolfram|Alpha.” 

That’s like saying it’s nuclear fusion mixed with a perpetual motion machine.

While it may indeed do voice recognition like Siri, I can assure you it is not like Wolfram|Alpha (press the link to see just one example).   This seems an easily challenged assertion, but it gets repeated as a sexy soundbite.  Great packaging of the message to just flow through the media channel.

The first rule of PR is to have good metaphors, and that’s certainly a good one.  The first rule of journalism, however, should be to challenge what’s said.  How is it like Wolfram|Alpha, exactly?  (And there’s a lot, lot more to Wolfram|Alpha than a question-style interface.)

“In the first 18 months since his product became available, his company is on track to hit $45 million in revenue, CG told us, growing 300% year over year. It has about 45 customers so far, with, on average, 180 business people at each customer using the product.”

We’re going to need to analyze this last set of claims one at a time.

  • “In the first 18 months.”  Tidemark was founded in 2009, so it’s about 5 years old.  While PR is cleverly trying to reframe the age issue around product availability, you’d think a journalist would want to know what happened during the other 3.5 years.  As it turns out, a lot.  The company was originally founded as Proferi, with an integrated GRC and EPM vision.  When that failed, the company “pivoted” (a euphemism for re-started with a new strategy) to a new vision which I’ve frankly never quite understood because of the buzzword-Cuisinart messaging strategy they employ.
  • “On track to hit $45M in revenue.”  Frankly, I have a lot of trouble believing this, but it’s happily stated without a timeframe and thus impossible to analyze.   Normally, when you say $45M, it implies “this fiscal year.”  But it could be anything.   Is it simply “on track” for doing $45M in, say, 2016?  Or, maybe it’s a really misleading answer like $45M in cumulative revenue since inception?   To paraphrase an old friend, saying $45M without a timeframe is like offering a salary of 100,000 but not mentioning the currency.
  • “Growing 300% year over year.”  Most journalists and some PR people confuse tripling with growing 300%, which is actually quadrupling.  But let’s assume both that the math is right and we are talking annual revenues:  this means they did $11.25M in 2013 and are on track to do $45M in 2014.  To do this in revenues means an even bigger number in bookings (due to amortization of SaaS revenues).  I banged out a quick model to show my point (a simplified version appears after this list).

[Table: quick quarterly model of the Tidemark revenue and bookings claims]

  • “Growing 300% a year.”  The far easier way to grow 300% a year, of course, is to do so off a small base.  If you do some basic math on private company numbers and it doesn’t make sense, you probably shouldn’t repeat them.  Net/net:  a journalist who hears 200% or 300% growth claims should first make sure the math is right, and second default-conclude it’s off a small base until proven otherwise.
  • “It has 45 customers so far with 180 [users at each customer].”  Some quick math says $45M/45 = $1M/customer, which is Workday-class large and ergo highly suspect.  Slightly better math (using my quarterly model) suggests $800K/customer in ARR, which is still huge — by my estimates $100-$200K ARR is a nice deal in EPM.   Combining this with 180 users/customer implies an average price of $4.5K/user/year — 150% of the list price of the most expensive edition of Salesforce.com.  ERP-sized deals, deals 4-10x the industry average, deals done at 150% of Salesforce’s list.  It doesn’t add up.
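Since I can't reproduce the full model here, below is a deliberately simplified stand-in, under stated assumptions: annual contracts booked evenly through the year and revenue recognized ratably, so in-year revenue is roughly half of this year's bookings plus half of last year's.  The bookings ramp is hypothetical, chosen only to land on the claimed revenue figures.

```python
# Simplified revenue-vs-bookings sketch for an annual-contract SaaS business with
# even in-year booking and ratable recognition (assumptions stated above).

def revenue(bookings_this_year, bookings_last_year):
    return 0.5 * bookings_this_year + 0.5 * bookings_last_year

# Hypothetical bookings ramp (in $M) chosen to land near the claimed revenue figures.
b_2012, b_2013, b_2014 = 4.5, 18.0, 72.0

print(f"2013 revenue: ~${revenue(b_2013, b_2012):.2f}M on ${b_2013:.0f}M of bookings")
print(f"2014 revenue: ~${revenue(b_2014, b_2013):.2f}M on ${b_2014:.0f}M of bookings")
# ~$11.25M and ~$45.00M of revenue would imply roughly $18M and $72M of bookings,
# i.e., the bookings number behind a 300%-growth revenue claim is bigger still.
```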

I should also note that LinkedIn says Tidemark has 51-200 employees, which is generally not consistent with the numbers in my model.  Moreover, searching for words like “account” [executive] or “sales” [executive], I can find fewer than 10 people who appear to be in sales at Tidemark.

Overall, I conclude that the $45M is more like 2014 bookings or maybe cumulative bookings since inception than any annual revenue figure.  The numbers just don’t hang together.  If I had to pick a figure, I’d guess they are closer to $10M in revenues in 2014 than $45M.

But what is a journalist supposed to do in this situation?  I’d argue:  fact check.  Call VCs and get company size estimates.  Use Google to find similar/alternative stories. See Crunchbase for history. Do some basic triangulation off LinkedIn both in terms of numbers of sales reps and size of company.   Ask industry execs for industry averages.  And if the numbers don’t hang together, don’t publish them.

To wrap this up, yes, I dislike this kind of puff-piece, softball story.  Not because it’s friendly — not all news has to be challenging and analytical and the raw material of CG’s story is indeed impressive — but because it seems to take the PR-enhanced version of it, and swallow it hook, line, and sinker.

The media should do better.  The trade press was crushed by the tech blogs for lack of sufficient value add.  The tech blogs are quickly falling into the same trap.

Disclaimer / Footnotes
[1]  I’m told Autonomy’s Mike Lynch was a big fan of this book.

[2] Host Analytics theoretically competes with Tidemark.  Since we rarely see them in deals, I feel comfortable editorializing about their PR as I might not with a more direct competitor.  Nevertheless, I can certainly be said to have a horse in this race.

[3] I refer to Christian Gheorghe as CG partly because his name is notoriously hard to spell, but more importantly because this post is not supposed to be an attack on him — to my knowledge he is a delightful and inspiring person — but rather a call-out of the publication that wrote this story and the system of which it is a part.

Insight Ventures’ Periodic Tables of SaaS Sales and Marketing Metrics


I just ran into these two tables of SaaS metrics published by Insight Venture Partners (or, more precisely, the Insight Onsite team) and they are too good not to share.

Along with Bessemer’s awkwardly titled 30 Questions and Answers That Every SaaS Revenue Leader Needs to Know, financial metrics from Opex Engine, and the wonderful Pacific Crest Annual SaaS Survey, SaaS leaders now have a great set of reference documents to benchmark their firms.

(And that’s not to mention David Skok’s great post on SaaS metrics or, for that matter, my own posts on the customer acquisition cost (CAC) ratio and renewals rates / churn.)

Here is Insight’s SaaS sales periodic table:

[Image: Insight’s SaaS sales periodic table]

And here is Insight’s B2B digital marketing periodic table:

[Image: Insight’s B2B digital marketing periodic table]

The Customer Acquisition Cost (CAC) Ratio: Another Subtle SaaS Metric

The software-as-a-service (SaaS) space is full of seemingly simple metrics that can quickly slip through your fingers when you try to grasp them.  For example, see Measuring SaaS Renewals Rates:  Way More Than Meets the Eye for a two-thousand-word post examining the many possible answers to the seemingly simple question, “what’s your renewal rate?”

In this post, I’ll do a similar examination to the slightly simpler question, “what’s your customer acquisition cost (CAC) ratio?”

I write these posts, by the way, not because I revel in the detail of calculating SaaS / cloud metrics, but rather because I cannot stand when groups of otherwise very intelligent people have long discussions based on ill-defined metrics.  The first rule of metrics is to understand what they are and what they mean before entertaining long discussions and/or making important decisions about them.  Otherwise you’re just counting angels on pinheads.

The intent of the CAC ratio is to determine the cost associated with acquiring a customer in a subscription business.  When trying to calculate it, however, there are six key issues to consider:

  • Months vs. years
  • Customers vs. dollars
  • Revenue on top vs. bottom
  • Revenue vs. gross margin
  • The cost of customer success
  • Time periods of S&M

Months vs. Years

The first question, which relates not only to CAC but also to many other SaaS metrics, is whether your business is inherently monthly or annual.

Since the SaaS movement started out with monthly pricing and monthly payments, many SaaS businesses conceptualized themselves as monthly and thus many of the early SaaS metrics were defined in monthly terms (e.g., monthly recurring revenue, or MRR).

While for some businesses this undoubtedly remains true, for many others – particularly in the enterprise space – the real rhythm of the business is annual.  Salesforce.com, the enterprise SaaS pioneer, figured this out early on as customers actually encouraged the company to move to an annual rhythm, for among other reasons, to avoid the hassle associated with monthly billing.

Hence, many SaaS companies today view themselves as in the business of selling annual subscriptions and talk not about MRR, but ARR (annual recurring revenue).

Customers vs. Dollars

If you ask some cloud companies their CAC ratio, they will respond with a dollar figure – e.g., “it costs us $12,500 to acquire a customer.”  Technically speaking, I’d call this customer acquisition cost, and not a cost ratio.

There is nothing wrong with using customer acquisition cost as a metric and, in fact, the more your business is generally consistent and the more your customers resemble each other, the more logical it is to say things like, “our average customer costs $2,400 to acquire and pays us $400/month, so we recoup our customer acquisition cost in six months.”

However, I believe that in most SaaS businesses:

  • The company is trying to run a “velocity” and an “enterprise” model in parallel.
  • The company may also be trying to run a freemium model (e.g., with a free and/or a low-price individual subscription) as well.

Ergo, your typical SaaS company might be running three business models in parallel, so wherever possible, I’d argue that you want to segment your CAC (and other metric) analysis.

In so doing, I offer a few generic cautions:

  • Remember to avoid the easy mistake of taking “averages of averages,” which is incorrect because it does not reflect weighting the size of the various businesses.
  • Remember that in a bi-modal business, the average of the two real businesses represents a fictional mathematical middle.

[Table: averages-of-averages example with SMB and enterprise segments and a weighted-average column]

For example, the “weighted avg” column above is mathematically correct, but it contains relatively little information.  In the same sense that you’ll never find a family with 1.8 children, you won’t find a customer with $12.7K in revenue/month.  The reality is not that the company’s average months to recoup CAC is a seemingly healthy 10.8 – the reality is that the company has one very nice business (SMB) where it takes only 6 months to recoup CAC and one very expensive one where it takes 30.  How you address the 30-month CAC recovery is quite different from how you might try to squeeze a month or two out of the 10.8.
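To make the trap concrete, here is a sketch with made-up segment numbers (deliberately not the figures from the table above) showing how a blended payback number describes a customer that doesn’t exist:

```python
# Hypothetical segment numbers (illustrative only -- not the figures from the table above).
segments = {
    "SMB":        {"customers": 800, "cac_per_customer": 12_000,    "mrr_per_customer": 2_000},
    "Enterprise": {"customers": 50,  "cac_per_customer": 1_500_000, "mrr_per_customer": 50_000},
}

# Per-segment payback: SMB recoups in 6 months, Enterprise in 30.
for name, s in segments.items():
    print(name, s["cac_per_customer"] / s["mrr_per_customer"])

# The naive "average of averages", (6 + 30) / 2 = 18, ignores segment size entirely.
# Even a properly customer-weighted blend describes a customer that doesn't exist:
total_customers = sum(s["customers"] for s in segments.values())
weighted_payback = sum(
    s["customers"] * s["cac_per_customer"] / s["mrr_per_customer"]
    for s in segments.values()
) / total_customers
print(round(weighted_payback, 1))  # ~7.4 -- a blended number that matches neither business
```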

Because customers come in so many different sizes, I dislike presenting CAC as an average cost to acquire a customer and prefer to define CAC as an average cost to acquire a dollar of annual recurring revenue.

Revenue on Top vs. Bottom

When I first encountered the CAC ratio, it was in a Bessemer white paper, and it looked like this:

[Figure: Bessemer’s CAC ratio formula]

In English, Bessemer defined the 3Q08 CAC as the annualized amount of incremental gross margin in 3Q08 divided by total S&M expense in 2Q08 (the prior quarter).

Let’s put aside (for a while) the choice to use gross margin as opposed to revenue (e.g., ARR) in the numerator.  Instead let’s focus on whether revenue makes more sense in the numerator or the denominator.  Should we think of the CAC ratio as:

  • The amount of S&M we spend to generate $1 of revenue
  • The amount of revenue we get per $1 of S&M cost

To me, Bessemer defined the ratio upside down.  The customer acquisition cost ratio should be the amount of S&M spent to acquire a dollar of (annual recurring) revenue.

Scale Venture Partners evidently agreed  and published a metric they called the Magic Number:

Take the change in subscription revenue between two quarters, annualize it (multiply by four), and divide the result by the sales and marketing spend for the earlier of the two quarters.

This changes the Bessemer CAC to use subscription revenue instead of gross margin, and it inverts the ratio.  I think this is very close to how the CAC should be calculated.  See below for more.
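As a sketch, here is the Magic Number per the Scale Venture Partners definition quoted above (the variable names and sample figures are mine):

```python
def magic_number(subscription_rev_q2, subscription_rev_q1, sales_marketing_q1):
    """Scale Venture Partners' Magic Number: the annualized change in subscription
    revenue between two consecutive quarters, divided by S&M spend in the
    earlier quarter."""
    return 4 * (subscription_rev_q2 - subscription_rev_q1) / sales_marketing_q1

# Hypothetical quarter: revenue grows from $10.0M to $11.2M on $4.0M of prior-quarter S&M.
print(round(magic_number(11.2e6, 10.0e6, 4.0e6), 2))  # 1.2
```

Because it is the inverse of a revenue-based CAC ratio, a higher Magic Number is better, whereas a lower CAC ratio is better.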

Bessemer later (kind of) conceded the inversion — while they side-stepped redefining the CAC, per se, they now emphasize a new metric called “CAC payback period” which puts S&M in the numerator.

Revenue vs. Gross Margin

While Bessemer has written some great papers on Cloud Computing (including their Top Ten Laws of Cloud Computing and Thirty Q&A that Every SaaS Revenue Leader Needs to Know) I think they have a tendency to over-think things and try to extract too much from a single metric in defining their CAC.  For example, I think their choice to use gross margin, as opposed to ARR, is a mistake.

One metric should be focused on measuring one specific item. To measure the overall business, you should create a great set of metrics that work together to show the overall state of affairs.

[Figure: a SaaS company as a leaky bucket]

I think of a SaaS company as a leaky bucket.  The existing water level is a company’s starting ARR.  During a time period the company adds water to the bucket in the form of sales (new ARR), and water leaks out of the bucket in the form of churn.

  • If you want to know how efficient a company is at adding water to the bucket, look at the CAC ratio.
  • If you want to know what happens to water once in the bucket, look at the renewal rates.
  • If you want to know how efficiently a company runs its SaaS service, look at the subscription gross margins.

There is no need to blend the efficiency of operating the SaaS service with the efficiency of customer acquisition into a single metric.  First, they are driven by different levers.  Second, to do so invariably means that being good at one of them can mask being bad at the other.  You are far better off, in my opinion, looking at these three important efficiencies independently.

The Cost of Customer Success

Most SaaS companies have “customer success” departments that are distinct from their customer support departments (which are accounted for in COGS).  The mission of the customer success team is to maximize the renewals rate – i.e., to prevent water from leaking out of the bucket – and towards this end they typically offer a form of proactive support and adoption monitoring to ferret out problems early, fix them, and keep customers happy so they will renew their subscriptions.

In addition, the customer success team often handles basic upsell and cross-sell, selling customers additional seats or complementary products.  Typically, when a sale to an existing customer crosses some size or difficulty threshold, it will be kicked back to sales.  For this reason, I think of customer success as handling incidental upsell and cross-sell.

The question with respect to the CAC is what to do with the customer success team.  They are “sales” to the extent that they are renewing, upselling, and cross-selling customers.  However, they are primarily about ARR preservation as opposed to new ARR.

My preferred solution is to exclude both the results from and the cost of the customer success team in calculating the CAC.  That is, my definition of the CAC is:

[Figure: my CAC ratio formula: S&M expense, excluding customer success, divided by new ARR added in the quarter]

I explicitly exclude the cost of customer success in the numerator and exclude the effects of churn in the denominator by looking only at the new ARR added during the quarter.  This formula works on the assumption that the customer success team is selling a relatively immaterial amount of new ARR (and that its primary mission instead is ARR preservation).  If that is not true, then you will need to exclude both the new ARR from customer success as well as its cost.
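Here is my reading of that definition as a sketch; treating customer success as a separately identifiable cost line is an assumption you would adapt to your own chart of accounts:

```python
def cac_ratio(sm_expense_prior_q, customer_success_cost_prior_q, new_arr_current_q):
    """CAC ratio per the definition above: prior-quarter S&M spend, excluding the
    customer success organization, per dollar of new ARR added in the current quarter.
    Churn is excluded by construction because only *new* ARR is in the denominator."""
    return (sm_expense_prior_q - customer_success_cost_prior_q) / new_arr_current_q

# Hypothetical quarter: $5.0M of S&M last quarter, of which $0.8M was customer success,
# producing $4.2M of new ARR this quarter.
print(round(cac_ratio(5.0e6, 0.8e6, 4.2e6), 2))  # 1.0
```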

I like this formula because it keeps you focused on what the ratio is called:  customer acquisition cost.  We use revenue instead of gross margin and we exclude the cost of customer success because we are trying to build a ratio to examine one thing:  how efficiently do I add new ARR to the bucket?  My CAC deliberately says nothing about:

  • What happens to the water once S&M pours it in the bucket.  A company might be tremendous at acquiring customers, but terrible at keeping them (e.g., because it offers a poor-quality service).  If you look at the net change in ARR across two periods, then you are including the effects of both new sales and churn.  That is why I look only at new ARR.
  • The profitability of operating the service.  A company might be great at acquiring customers but unable to operate its service at a profit.  You can see that easily in subscription gross margins and don’t need to embed it in the CAC.

There is a problem, of course.  For public companies you will not be able to calculate my CAC: in all likelihood customer success is buried in S&M expense and not broken out, and you can typically only determine the net change in subscription revenue, not the split between new ARR and churn.  Hence, for public companies, the Magic Number is probably your best metric, but I’d just call it 1/CAC.

My definition is pretty close to that used by Pacific Crest in their annual survey, which uses yet another slightly different definition of the CAC:  how much do you spend in S&M for a dollar of annual contract value (ACV) from a new customer?

(Note that many vendors include first-year professional services in their definition of ACV which is why I prefer ARR.  Pacific Crest, however, defines ACV so it is equivalent to ARR.)

I think Pacific Crest’s definition has very much the same spirit as my own.  I am, by comparison, deliberately simpler (and sloppier) in assuming that customer success is not providing a lot of new ARR (which is not to say that a company is not making significant sales to its customer base, but is to say that those larger opportunities are handed back to the sales function).

Let’s see the distribution of CAC ratios reported in Pacific Crest’s recent, wonderful survey:

[Chart: distribution of CAC ratios from the Pacific Crest survey]

Wow.  It seems like a whole lot of math and analysis to come back and say:  “the answer is 1.”

But that’s what it is.  A healthy CAC ratio is around 1, which means that a company’s S&M investment in acquiring a new customer is repaid in about a year.  Given the COGS associated with running the service and the company’s operating expenses, this implies that the company is not making money on that customer until at least year 3.  This is why higher CACs are undesirable and why SaaS businesses care so much about renewals.

Technically speaking, there is no absolute “right” answer to the CAC question in my mind.  Ultimately the amount you spend on anything should be related to what it’s worth, which means we need to relate customer acquisition cost to customer lifetime value (LTV).

For example, a company whose typical customer lifetime is 3 years needs to have a CAC well under 1, whereas a company with a 10-year typical customer lifetime can probably afford a CAC of more than 2.  (The NPV of a 10-year subscription with prices increasing at 3%, a 90% renewal rate, and an 8% discount rate is nearly $7 per dollar of starting ARR.)
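If you want to play with that LTV intuition, here is one way to sketch the NPV of a dollar of starting ARR.  The result is quite sensitive to assumptions (when payments land, whether churn compounds, whether year one is discounted), so treat it as illustrative rather than a derivation of the figure above:

```python
def npv_per_dollar_arr(years=10, price_uplift=0.03, renewal_rate=0.90, discount_rate=0.08):
    """Expected NPV of $1 of starting ARR: each year's payment grows with the
    price uplift, is weighted by the cumulative probability the customer is
    still around, and is discounted back to today. Payments are assumed at the
    start of each year; year one is certain and undiscounted."""
    total = 0.0
    for t in range(years):
        payment = (1 + price_uplift) ** t
        survival = renewal_rate ** t  # probability the customer has renewed t times
        total += payment * survival / (1 + discount_rate) ** t
    return total

print(round(npv_per_dollar_arr(), 2))  # roughly 5.5 under these particular assumptions
```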

Time Periods of S&M Expense

Let me end by taking a practical position on what could be a huge rat-hole if examined from first principles.  The one part of the CAC we’ve not yet challenged is the use of the prior quarter’s sales and marketing expense.  That basically assumes a 90-day sales cycle – i.e., that total S&M expense from the prior quarter is what creates ARR in the current quarter.  In most enterprise SaaS companies this isn’t true.  Customers may engage with a vendor over a period of a year before signing up.  Rather than creating some overlapped ramp to try and better model how S&M expense turns into ARR, I generally recommend simply using the prior quarter for two reasons:

  • Some blind faith in the theory of offsetting errors (e.g., if 10% of this quarter’s S&M won’t benefit us for a year, then 10% of the spend from a year ago behaved the same way, so unless we are growing very quickly the two roughly cancel out).
  • Comparability.  Regardless of its fundamental correctness, you will have nothing to compare to if you create your own “more accurate” ramp.

I hope you’ve enjoyed this journey of CAC discovery.  Please let me know if you have questions or comments.

Measuring SaaS Renewal Rates: Way More Than Meets the Eye

I love cloud computing. I love metrics. And I love renewals. So when I went looking on the Web for a great discussion of SaaS renewals and metrics I was surprised not to find much. Certainly, I found the two classics on SaaS metrics:

  • The Bessemer Venture Partners 10 Laws of Cloud Computing white paper, which I highly recommend despite its increasing pollution with portfolio-company marketing.

The Four Factors
While the above articles are all great, I was surprised that no one really dug into the nitty-gritty of renewals at an enterprise SaaS company, where I believe there are four independent factors at work:

  • Timing. When a contract is renewed. For example, how to handle when a contract is renewed early or late.
  • Duration. The length of the renewed contract. For example, how to handle when a one-year customer renews for three years, and receives a multi-year discount in the process (for either pre-payment or the contractual commitment itself). [1]
  • Expansion/shrinkage. The expansion or shrinkage of the contract’s value compared to the original contract. For example, how to handle customers adding or dropping seats or products, and/or price increases or decreases.
  • The count metric. What do we wish to count (e.g., bookings, ARR, seats, or customers) and what does it mean when we count one thing versus another.

Particularly in a world where companies are increasingly marketing “negative churn” rates and renewal rates well in excess of 100%, I think it’s worth digging into this and offering some rigor.

A Simple Example
Let’s take a concrete example. Imagine a customer who buys 100 seats of product A at $1,200/seat/year on 7/30/12, with a contractual provision that says the price cannot increase by more than 3% per year [1a].

Imagine that customer renews on 6/30/13, buying 80 seats of product A at $1,225/seat/year, adding 40 seats of product B at $1,200/seat/year, and receiving a 15% discount for making a prepaid three-year commitment.

Hang on. While I know you want to run away right now, don’t. This is all real-life stuff in a SaaS company. Bear with me, and download the spreadsheet here (as an Excel file, not a PDF) that shows the supporting math.

A few questions are easy:

  • What were the bookings on the initial order? Answer: $120,000.
  • What was the annual recurring revenue (ARR) of the initial order? Answer: $120,000.
  • What were the bookings on the renewal order? Answer: $372,300.
  • What was the ARR of the renewal order? Answer: $124,100. [2]
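If you would rather not open the spreadsheet, here is a sketch that reproduces those four numbers (assuming, per the example, that the 80 renewed product A seats are at $1,225/seat/year and that the 15% discount applies to the whole renewal order):

```python
# Initial order: 100 seats of product A at $1,200/seat/year on a one-year term.
initial_bookings = 100 * 1_200        # 120,000
initial_arr = initial_bookings        # one-year contract, so bookings == ARR

# Renewal order: 80 seats of A at $1,225 plus 40 seats of B at $1,200,
# with a 15% discount for the prepaid three-year commitment.
renewal_annual_list = 80 * 1_225 + 40 * 1_200   # 146,000 before the discount
renewal_arr = renewal_annual_list * 0.85        # 124,100
renewal_bookings = renewal_arr * 3              # 372,300 (three prepaid years)

print(initial_bookings, initial_arr, renewal_bookings, renewal_arr)
# 120000 120000 372300.0 124100.0
```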

Calculating Churn: Leaky Bucket Analysis
So far, so good. Now let’s talk about churn. Because, as you will see, renewal rates alone are complicated enough, I have adopted a convention where:

  • When it comes to renewals, I look only at rates
  • When it comes to churn, I look only at dollars/values

I know this is a completely arbitrary decision, but doing this lets me remember one set of formulas instead of two, reduces rat-hole conversations about definitions, and — most importantly – lets me look at one area in percentages and the other in dollars, helping me to avoid the “percent trap” where you can lose all perspective of absolute scale. [3]

I define churn with an equation that I call “leaky bucket analysis.” [4]

Starting ARR + new ARR – churn ARR = ending ARR

So, some questions:

  • Was there any churn associated with this renewal? Answer: Yes.
  • Why? Answer: Despite a small price increase on product A, there was a 15% multi-year discount and a loss of 20 seats which more than offset it.
  • How much ARR churned? Answer: $36,700. [5]
  • How much new ARR was added? Answer: $40,800. The after-discount value of the product B subscriptions.
  • What is ending ARR? 124,100 = 120,000 + 40,800 – 36,700.
  • How many customers churned? Answer: 0.
  • How many seats churned? Answer: 20.
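And here are the leaky bucket quantities from the same example, under the same assumptions:

```python
starting_arr = 120_000

# Product A shrank: 100 seats at $1,200 became 80 seats at $1,225, all with a 15% discount.
renewed_a_arr = 80 * 1_225 * 0.85          # 83,300
churn_arr = starting_arr - renewed_a_arr   # 36,700

# Product B is net-new ARR, also carrying the 15% discount.
new_arr = 40 * 1_200 * 0.85                # 40,800

ending_arr = starting_arr + new_arr - churn_arr
print(churn_arr, new_arr, ending_arr)      # 36700.0 40800.0 124100.0
```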

Note that ARR, seats, and customers are all snapshot (or point-in-time) metrics that lend themselves to leaky bucket analysis. Period metrics, like bookings, do not. Bookings happen within a period. There is no concept of starting bookings + new bookings – churn bookings = ending bookings. That’s not how it works. So, when you define churn through leaky bucket analysis, measuring bookings churn doesn’t work.

We can, however, calculate bookings churn as the difference between what was up for renewal and what we renewed. In this case, $120,000 – $372,300 = ($252,300), showing one way to generate a negative churn number. The example makes somewhat more sense in the other direction: if we had a three-year $372,300 contract up for renewal and only renewed $120,000, then we might argue that $252,300 in bookings were churned. From a cash collections perspective, this makes sense [6].

But from a customer value perspective it does not. Unless the customer plans to discontinue using the service, dropping from a three-year to a one-year contract means we will actually collect more money from them over the next three years, provided they keep renewing ($438,000 vs. $372,300) [7]. So the bookings churn that looks bad for year-one cash actually results in superior ARR and three-year cash collections.
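As a quick check on that three-year comparison (ignoring, per footnote [7], the price uplift and the renewal probability):

```python
annual_list_price = 80 * 1_225 + 40 * 1_200              # 146,000/year with no multi-year discount
three_single_year_renewals = 3 * annual_list_price       # 438,000 collected over three years
prepaid_three_year_deal = 3 * annual_list_price * 0.85   # 372,300 collected up front
print(three_single_year_renewals, prepaid_three_year_deal)  # 438000 372300.0
```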

The lesson here is that different metrics are suited for measuring different things. In this case, we can see that bookings churn is useful primarily for analyzing short-term cash collections and not, say, for customer lifetime value or customer satisfaction.

Renewal Rates and Timing
Now that we’re warmed up let’s have some fun. Let’s answer some questions on renewals:

  • From a bookings perspective, when should we count the renewal order? Answer: the order was received on 6/30/13 so it’s a 2Q13 booking.
  • From a renewal rate perspective, when should we count this order? Answer: while debatable, to me it’s a renewal of a 3Q contract, so I would count it in 3Q from a renewal rate perspective. [8]
  • When would we count the booking if it were late and arrived on 10/30/13? Answer: From a bookings perspective, it would be a 4Q13 booking. From a renewal rate perspective, it’s the renewal of a 3Q contract, so I would count it in 3Q. [9]
  • On a customer-count basis, how do we count this renewal? Answer: 100%. We had one logo before and we have one logo after, so 100%. [10]

Here it’s going to get a little dicey.

On an ARR basis, how do we measure this renewal? Answer: this begs the question of whether we should include expansion ARR due to new seats, new products, and price increases. Since I am worried that expansion may hide shrinkage, I want to see this both ways. Hence, I will define “gross” to mean including expansion and “net” to mean excluding expansion.

  • What is the gross ARR-based renewal rate? Answer: 103%. [11]
  • What is the net ARR-based renewal rate? Answer: 69%. Now you understand why I want to see it both ways. The net rate is showing that we lost real ARR on product A due to reduced seats and the multi-year discount. The upsell of product B hides shrinkage, producing an innocuous 103% number that might evoke a very different scenario in the mind’s eye (e.g., renewing the original deal for one year with a 3% price hike).
  • What is the gross bookings-based renewal rate? Answer: 310%. We took a $120,000 order and renewed it at $372,300. (But we transformed it greatly in the process.)
  • What is the net bookings-based renewal rate? 208%. We took a $120,000 order for product A and turned it into a $249,900 order for product A. But we dropped product A’s ARR by about 31% in the process (from $120,000 to $83,300) through lost seats and the multi-year discount.
  • What is the gross seat-count renewal rate? 120%
  • What is the net seat-count renewal rate? 80%
  • What is the customer-count renewal rate? 100%
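For completeness, here is a sketch that reproduces all of those rates from the example, using the gross/net convention defined above (gross includes expansion, net excludes it):

```python
original_arr = original_bookings = 120_000
original_seats = 100

renewed_a_arr = 80 * 1_225 * 0.85                     # 83,300: product A only, after discount
total_renewal_arr = (80 * 1_225 + 40 * 1_200) * 0.85  # 124,100: A + B, after discount

rates = {
    "gross ARR":      total_renewal_arr / original_arr,           # ~1.03
    "net ARR":        renewed_a_arr / original_arr,               # ~0.69
    "gross bookings": total_renewal_arr * 3 / original_bookings,  # ~3.10
    "net bookings":   renewed_a_arr * 3 / original_bookings,      # ~2.08
    "gross seats":    (80 + 40) / original_seats,                 # 1.20
    "net seats":      80 / original_seats,                        # 0.80
    "customer count": 1 / 1,                                      # 1.00
}
for name, rate in rates.items():
    print(f"{name}: {rate:.0%}")
```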

Identifying the Best Renewal-Related Metrics
So, what is the renewal rate then anyway?  69%, 80%, 100%, 103%, 120%, 208%, or 310%?

I’d say the answer depends on what you want to measure. Having nearly drowned you in the renewal-rate swamp, let me now drain it. Here are the metrics that I think matter most:

[Table: the key renewal-related metrics]

Here’s why:

  • Leaky bucket analysis is important because ARR growth is the single most important driver of value for a SaaS company.
  • Churn ARR shows you, viscerally, how much extra you had to sell just to make up for leaks [12].  Rates seem sterile by comparison.
  • The customer count-based renewal rate is the best indicator of overall customer satisfaction: what percent of your customers want to keep doing business with you, regardless of whether they change their configuration, product mix, seat mix, contract duration, etc.
  • The gross seat-based renewal rate shows you how effective you are at driving adoption of your services. Think: land and expand (in terms of seats).
  • The gross ARR-based renewal rate shows you, overall, how effective you are at increasing your customers’ annual commitment. However, it says nothing about how you do that (i.e., which type of expansion ARR) or the extent to which expansion ARR in one area is offsetting shrinkage in another.
  • The net ARR-based renewal rate shows you how much of ARR you renew without relying on expansion. This is a very conservative metric designed to unmask problems that can be hidden by expansion ARR.
  • The gross bookings-based renewal rate is the best predictor of future cashflows. If we know that, on average, we take an order of 100 units and turn it into an order of 175 units – through whatever means – then we should use this metric to predict cashflows. Note that, as we’ve seen, there are trade-offs between ARR and bookings, but the consequences of those can be revealed by other metrics.

Revision 6/25/14, New Definition of Simple Churn, Timing Issues on Gross ARR Renewal Rate
While I generally like and stick with my “show churn in dollars and renewal rates in percents” mentality, I have found that a lot of people still ask about churn as a rate.

To answer, I use one of two different metrics:

  • “Simple churn” which = (net change in ARR from existing customers  / starting-period-ARR) * 4.  This is, I believe, what most companies present as their churn rate, includes the effects of both shrinkage and expansion ARR, and is arguably optimistic because it implicitly includes multi-year deals in the starting ARR.
  • “Simple net churn” which = (churn ARR / starting-period-ARR) * 4.  This presents churn net-of (i.e., exclusive of) expansion ARR.
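As a sketch of both rates (the multiply-by-four annualizes a quarterly figure; the sign convention and sample numbers are my own):

```python
def simple_churn_rate(starting_arr, existing_customer_arr_now):
    """'Simple churn': the quarterly decline in ARR from existing customers
    (shrinkage net of any expansion), annualized as a fraction of starting ARR.
    A negative result means expansion outran shrinkage ('negative churn')."""
    net_decline = starting_arr - existing_customer_arr_now
    return 4 * net_decline / starting_arr

def simple_net_churn_rate(starting_arr, churn_arr):
    """'Simple net churn': churned ARR only (expansion excluded), annualized."""
    return 4 * churn_arr / starting_arr

# Hypothetical quarter: $10.0M starting ARR; existing customers end at $9.9M
# after churning $0.4M and expanding by $0.3M.
print(simple_churn_rate(10.0e6, 9.9e6))      # 0.04 -> 4% annualized
print(simple_net_churn_rate(10.0e6, 0.4e6))  # 0.16 -> 16% annualized
```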

I have discovered that there are timing issues with the gross ARR renewal rate, defined above.  For companies that do multi-year deals, you will end up including expansion ARR in your ARR base as it is sold along the way, but only reflecting it in the renewal rate when the contract renews, in effect deferring good news until renewal time, and seemingly failing to take credit along the way.

Footnotes
[1] Note that in a multi-year prepaid contract, bookings (order value) equal total contract value (TCV). When multi-year contracts are not prepaid, bookings are only the first-year portion of TCV.

[1a] Some purists would argue that having the right to raise the price 3% should set the denominator of subsequent renewal rate calculations to 1.03 * original-value.  While I get the idea, I nevertheless disagree.

[2] The renewal order is for three years, so to calculate the ARR we need to divide the bookings value by three.

[3] Saying our “churn rate was 10%” makes things sound OK, but saying we churned $2M in ARR is, to me, somehow more visceral. That is, we had to sell an extra $2M in ARR just to make up for existing business that we lost.

[4] A leaky bucket starts at one water level; during a period new water is added, some water leaks out, and the net change establishes the ending water level. (Note that in leaky bucket analysis, definitionally, leaks are never negative.)

[5] Now might be a good time to download the spreadsheet accompanying this post so you can see my calculations. In this case, the churn is the difference between the total value from product A on the original order versus the renewals order.

[6] Subscription bookings typically turn into cash within 90 days.

[7] In reality, we should both uplift the price in years 2 and 3 and discount by the renewal rate to get a better expected cash collections figure. (There is nearly endless detail in analyzing this subject but I will make simplifying assumptions at times.)

[8] Otherwise, it would juice 2Q renewal rates and depress 3Q renewal rates, making both less meaningful.

[9] Bonus question:  how would you handle the late-renewal scenario at the 7/20/13 board meeting? Answer: I would publish provisional renewal rates that exclude the transaction, letting the board know we have an outstanding renewal in process. Then once it closed, I would revise the 3Q renewal rates accordingly.

[10] Which then begs the question of how you count customers. For example, while GE has one logo, they have numerous very independent divisions in a large number of countries.

[11] Note that a purist might argue that since we had the right to raise prices up to 3%, we should put 103% of the ARR in the denominator in this and all similar calculations, thus dropping the resulting renewal rate here to 100%.  While I believe annual increases are important, I still believe renewing someone to $103K in ARR who was at $100K in ARR is a 103% renewal.  Tab 3 of the supporting spreadsheet plays with some numbers in this regard.

[12] It is a good idea to divide churn into 3 buckets to describe the reason: owner change (including bankruptcy), leadership change, and customer dissatisfaction.