
What Marketing Costs Should be Included in CAC Calculations?

Dear Kellblog:

I’m working on my CAC calculations and I’m trying to determine whether I should include all marketing costs or just my direct demand generation costs.  I’ve talked to many of my CMO peers and can’t get a consistent answer to the question.

Thanks / Bewildered CMO

Dear Bewildered CMO:

My gut reaction is that you should include all marketing costs.  Don’t try to argue that PR and product marketing don’t work on customer acquisition.  Don’t try to argue that people aren’t programs as a way of excluding the cost of your demandgen team.

Why?  Three reasons:

  • Demandgen people and programs dollars should be fungible.  PR and product marketing better be doing things that help acquire customers, even if indirectly.
  • Playing counting games can hurt your credibility.  VCs aren’t just trying to compare metrics, they’re trying to get to know you by seeing how you think about and/or calculate them.  I’d think you were a weasel if I found you excluding these costs without a really good reason.
  • To the extent that people try to compare these things between private and public companies, remember that there is no way to split marketing apart (or split customer success from sales) with public companies, which suggests that by default you should include everything.

Best / Kellblog

For fun, let’s quickly look at some sources for CAC definitions and see what we find regarding this issue:

Kellblog defines the CAC as:


S&M, by default, needs to include all S&M costs, so you can’t cut anything out.

(Side note:  to the extent you amortize commissions, I would prefer to say cash sales expense as opposed to GAAP sales expense, because the latter will hide some costs — but that has nothing to do with marketing.)
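
To make the “include everything” point concrete, here’s a minimal sketch of a CAC ratio calculation, assuming the common fully-loaded form (all S&M spend per $1 of new ARR); the figures are illustrative:

```python
# Minimal sketch of a CAC ratio calculation (illustrative figures).
# Assumes the common form: fully-loaded S&M expense / new ARR generated.

sales_expense = 4_000_000      # all sales costs, incl. allocations ($)
marketing_expense = 2_000_000  # ALL marketing: demandgen, PR, product marketing ($)
new_arr = 5_000_000            # new ARR generated in the period ($)

cac_ratio = (sales_expense + marketing_expense) / new_arr
print(f"CAC ratio: ${cac_ratio:.2f} spent per $1 of new ARR")  # -> $1.20
```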

The 2015 Pacific Crest Private SaaS Company Survey defines the CAC as:

How much do you spend on a fully-loaded sales & marketing cost basis to acquire $1 of new ACV from a new customer.

This seems to close one door (i.e., you better include IT and facilities allocations to your sales costs — as GAAP would require anyway), but open another because it defines the CAC not in terms of total new ACV, but new ACV from new customers.  So if, for example, you had installed base upsell marketing programs, then I would not count those costs in the CAC calculation because they are not marketing costs spent to win new ARR from new customers.  Is PR?  Is product marketing?  It’s a slippery slope.  I’m not in love with this definition for that reason.  You could never do it for public companies.

David Skok defines the CAC as:

Note that while Skok is calculating a cost to acquire a new customer as opposed to $1 of new ARR, his definition is clear when it comes to splitting marketing costs:  include all S&M costs.

Bessemer prefers talking about a CAC payback period and defines it as:

[image: Bessemer’s CAC payback period definition]

Again, this definition is clear — include all S&M costs.
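
To make the payback idea concrete, here’s a minimal sketch assuming the usual construction (months of gross-margin-adjusted recurring revenue needed to recover fully-loaded S&M spend); the figures are illustrative:

```python
# Sketch of a CAC payback period in months (illustrative figures).
# Assumes the usual construction: months of gross-margin-adjusted
# recurring revenue needed to recover fully-loaded S&M spend.

sm_expense = 6_000_000   # fully-loaded S&M for the period ($)
new_arr = 5_000_000      # new ARR acquired by that spend ($)
gross_margin = 0.75      # subscription gross margin

payback_months = 12 * sm_expense / (new_arr * gross_margin)
print(f"CAC payback: {payback_months:.1f} months")  # -> 19.2 months
```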

The Perils of Measuring a SaaS Business on Total Contract Value (TCV)

It’s a frothy time and during such times people can develop a tendency to get sloppy about their numbers.  The first sign of froth is when people routinely discuss company size using market capitalization instead of revenue.  This happened constantly during Bubble 1.0 and started again several years ago – e.g., all the talk of unicorns, private companies with $1B+ valuations.

One-upmanship becomes the name of the game in frothy times.  If your competitor’s site had 1M pageviews to your own site’s 750K, marketing quickly came up with a new metric on which you could win:  “we had 1.5M eyeballs.”  This kind of gaming, pardon the pun, is seen through rather easily.

The more disturbing distortions are those intended to impress industry influencers to validate strategy.  Analysts – whose job is supposedly to analyze – have a troubling tendency to not judge strategies on their logical merits but on their results.  So if a vendor has a silly, unfocused, or simply bad strategy, the vendor doesn’t need to argue that it actually makes sense; they just need to find a way to show that it is producing results – and the ensuing Halo Effects will serve as validation.

Public companies try to demonstrate results through revenue allocation games, robbing from non-strategic SKUs to pump up strategic ones (e.g., the “cloudwashing” of which the megavendors are now often accused).   Private companies have free rein and can either point to unverifiable lofty financing valuations as supposed proof that their strategy is working, or to unverifiable sales growth figures where sales is typically defined as the metric that looked best last quarter.

Most people would quickly agree that at a SaaS business, the best metric for measuring sales is growth in new annual recurring revenue (ARR).  They’d also agree that the best metric for valuing the business is ending ARR and its growth.  (LTV/CAC would come in right behind.)  Using my leaky bucket analogy, the best way to measure sales is by how fast they pour water in the bucket.  The best way to measure the value of the business is the water level of the bucket and how fast it is going up.

But it’s a frothy time, and sometimes the correct SaaS measures don’t produce numbers that, well, sufficiently impress.  So what’s a poor CEO to do?  Embellish.  The Wall Street Journal recently ran a piece that compared company claims about size/growth made while the company was still private to those later revealed in the S-1.  The results were disappointing, if perhaps not surprising.

Put differently, what’s the SaaS equivalent of “eyeballs”?

The answer is simple:  bookings or, more precisely, total contract value (TCV) bookings.  To show this, we’ll need to define some terms.

  • ARR = annual recurring revenue, the annual subscription fee
  • NSB = new subscription bookings, the prepaid (and – no gaming – quickly collectible) portion of the contract. Since enterprise SaaS contracts are often multi-year and can be fully, partially, or only first-year prepaid, we need a metric to understand the cash implications of the deal.
  • TCV = total contract value, including both prepaid and non-prepaid subscription as well as services. TCV is the largest metric because it includes everything.  Some people exclude services but, to me, total means total.

Now, let’s look at several ways to transform a simple $100K ARR deal in the following spreadsheet:


Note that in each case, the ARR is $100K.  But by varying deal terms the TCV can vary from $150K to $750K.  Now in the real world, if someone was going to pay you $100K for a one-year deal, they are unlikely to pay $300K for a three-year prepay or contractual commitment.  They will want something in return, typically a discount.
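
To make the mechanics concrete, here’s a quick sketch; the term structures are illustrative, not the exact rows of the original spreadsheet:

```python
# Same $100K ARR deal, different terms (term structures are illustrative).
ARR = 100_000

def tcv(years, services=0):
    """Total contract value: every subscription year plus services."""
    return years * ARR + services

print(f"1-year + $50K services:  ${tcv(1, 50_000):,}")   # $150,000
print(f"3-year, no services:     ${tcv(3):,}")           # $300,000
print(f"5-year + $250K services: ${tcv(5, 250_000):,}")  # $750,000
```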

Let’s combine these ideas in one more example.  Say you run a SaaS company and want to impress everyone that you’re doing really well.  The trouble is you’re not.  You sold $10M in new ARR in 2014 (all one-year, prepaid) and think you can sell $10M again in 2015 on those same terms.   If you measure yourself on new ARR growth, that’s 0% and no one is going to think you are cool or write you up on the tech blogs.  But if you switch to TCV and increase your contract duration, you get a lot more flexibility:


If you switch to TCV, the good news is you can grow literally as fast as you want just by playing with contract terms.  Want to grow at 60%?  Switch to 2-year prepaids and give a 20% discount.  That’s not fast enough and you want to grow at 101%?  Move to 3-year prepaids by effectively doing a year-long “buy 2 get 1 free” promotion.   That’s not good enough?  Move to 5-year non-prepaids and you can grow at a dazzling 235% and get nice TechCrunch articles about your strategic vision, your hypergrowth, and your unique culture (that is, most probably, just like everyone else’s unique culture).
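
Here’s that arithmetic as a sketch; the discounts are my guesses, backed out to roughly match the growth rates above:

```python
# The TCV growth game: the same ~$10M of demand, different contract terms.
# Discounts are assumptions, reverse-engineered to roughly match the
# growth rates quoted in the text.

base_tcv = 10_000_000  # 2014: $10M new ARR, all one-year prepaid

for label, years, discount in [
    ("2-year prepaid, 20% off", 2, 0.20),
    ("3-year prepaid, buy-2-get-1", 3, 1 / 3),
    ("5-year non-prepaid, 33% off", 5, 0.33),
]:
    new_arr = 10_000_000 * (1 - discount)  # the discount crushes ARR...
    tcv = new_arr * years                  # ...but duration inflates TCV
    growth = tcv / base_tcv - 1
    print(f"{label}: ARR ${new_arr/1e6:.2f}M, TCV ${tcv/1e6:.1f}M, growth {growth:.0%}")
```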

This is great.  Why doesn’t everybody do it?  Because you’re mortgaging the future:

  • The discounts you’re giving to get multi-year deals are crushing ARR; new ARR is shrinking in all cases.
  • You are therefore crushing both revenue and cash collections over the period.
  • The prepaid deals create a drug addiction problem because you’re not collecting cash in the out years. So you build a dependency either on lots of capital or lots more prepaid deals.
  • Worse yet, on the non-prepaid deals you may not ever collect the money at all.

Wait, what did he say?

In my opinion, non-prepaid multi-year deals are often not worth the paper they are written on.  Why?  Just look at it from the customer’s perspective.  Say you sign a $100K five-year deal with only the first year paid up-front.  And say the software’s not delivering.  It took more work to implement than you thought.  It’s fallen short on the requirements.  It’s not performing very well.  You’ve called for help but the company can’t fix it because they’re too busy doing other 5-year non-prepaid deals with other customers.

What do you do?  Simple:  you don’t pay the invoice when it comes.  Technically,  yes, you are very much breaking the contract that you signed — but if the software really isn’t delivering, when the vendor calls you say:  “sue me.”

Since software companies generally don’t like suing customers, the vendor – especially if they know the implementation failed – will generally walk away and write off your receivable as bad debt.   If they are particularly devious (and incorrect) they might not even take it as churn until the end of the five-year period when the contract is supposed to renew.   I wouldn’t be shocked if you could find a company that did it this way.

Most sophisticated SaaS people know that SaaS companies shouldn’t be run on TCV or bookings and are well aware of the problems doing so creates with ARR, revenue, and cash.

However, I have never heard anyone make the simple additional point I’m making here:  in a frothy environment dubious companies can create a fictitious bubble around themselves using TCV.  However, because non-prepaid multi-year deals only work when the customers are happy, if the company is out over its skis on promises and implementations, then many of the customers will not end up happy, and the company will never collect much of that TCV.  Meaning that it was never really “value” in the first place.

Beware Greeks bearing gifts and SaaS vendors talking TCV.

Survivor Bias in Churn Calculations: Say It’s Not So!

I was chatting with a fellow SaaS executive the other day and the conversation turned to churn and renewal rates.  I asked how he calculated them and he said:

Well, we take every customer who was also a customer 12 months ago and then add up their ARR 12 months ago and add up their ARR today, and then divide today’s ARR by year-ago ARR to get an overall retention or expansion rate.

Well, that sounds dandy until you think for a minute about survivor bias, the often inadvertent logical error in analyzing data from only the survivors of a given experiment or situation.  Survivor bias is subtle, but here are some common examples:

  • I first encountered survivor bias in mutual funds when I realized that look-back studies of prior 5- or 10-year performance include only the funds still in existence today.  (If you eliminate my bogeys, I’m actually a below-par golfer.)
  • My favorite example comes from World War II, when analysts examined the pattern of anti-aircraft fire on returning bombers and argued for strengthening them in the places that were most often hit.  This was exactly wrong — the places where returning bombers were hit were already strong enough.  You needed to reinforce the places where the downed bombers were hit.

So let’s turn back to churn rates.  If you’re going to calculate an overall expansion or retention rate, which way should you approach it?

  1. Start with a list of customers today, look at their total ARR, and then go compare that to their ARR one year ago, or
  2. Start with a list of customers from one year ago and look at their ARR today.

Number 2 is the obvious answer.  You should include the ARR from customers who choose to stop being customers in calculating an overall churn or expansion rate.  Calculating it the first way can be misleading because you are looking at the ARR expansion only from customers who chose to continue being customers.

Let’s make this real via an example.

[table: survivor bias example, showing year-ago and current ARR by customer, with churned customers highlighted in orange]

The ARR today is contained in the boxed area.  The survivor bias question comes down to whether you include or exclude the orange rows from year-ago ARR.  The difference can be profound.  In this simple example, the survivor-biased expansion rate is a nice 111%.  However, the non-biased rate is only 71% which will get you a quick “don’t let the door hit your ass on the way out” at most VCs.  And while the example is contrived, the difference is simply one of calculation off identical data.
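
Here’s a sketch of the two calculations on toy data (my own numbers, contrived like the original’s so the two methods land on those same 111% and 71% rates):

```python
# Survivor bias in an expansion-rate calculation (contrived toy data, $K).
# year_ago maps customer -> ARR 12 months ago; today maps only survivors.

year_ago = {"A": 150, "B": 150, "C": 150, "D": 125, "E": 125}
today    = {"A": 180, "B": 170, "C": 150}  # D and E churned

# Survivor-biased: start from today's customers and look back.
biased = sum(today.values()) / sum(year_ago[c] for c in today)

# Unbiased: start from the year-ago cohort and look forward (churned = $0).
unbiased = sum(today.get(c, 0) for c in year_ago) / sum(year_ago.values())

print(f"biased: {biased:.0%}, unbiased: {unbiased:.0%}")  # -> 111%, 71%
```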

Do companies use survivor-biased calculations in real life?  Let’s look at my post on the Hortonworks S-1 where I quote how they calculate their net expansion rate:

We calculate dollar-based net expansion rate as of a given date as the aggregate annualized subscription contract value as of that date from those customers that were also customers as of the date 12 months prior, divided by the aggregate annualized subscription contract value from all customers as of the date 12 months prior.

When I did my original post on this, I didn’t even catch it.  But therein lies the subtle head of survivor bias.

# # #


  • I have not tracked Hortonworks in the meantime, so I don’t know if they still report this metric, at what frequency, how they currently calculate it, etc.
  • To the extent that “everyone calculates it this way” is true, then companies might report it this way for comparability, but people should be aware of the bias.  One approach is to calculate two metrics, one looking back from today’s customers and one looking forward from the year-ago cohort, and show both.
  • See my FAQ for additional disclaimers, including that I am not a financial analyst and do not make recommendations on stocks.

Joining the Granular Board of Directors

I’m very happy to say that I’ve joined the Board of Directors of Granular.  In this post, I’ll provide some commentary that goes beyond the formal announcement.

I think all CEOs should sit on boards because it makes you a better CEO.  You get to remove the blinders that come from your own (generally all-consuming) company, you build a network of people you can rely upon for answering typical CEO questions, and most importantly, you get to turn the tables and better understand how things might look from the board’s perspective at your own company.

Let’s share a bit about Granular.

  • Granular is a cloud computing company, specifically a vertical SaaS company, aimed at improving the efficiency of farms.
  • They have a world-class team with the usual assortment of highly intelligent overachievers and with an unusual number of physicists on the executive team, which is always a good thing in a big data company.  (While you might think data scientists are computer science or stats majors, a large number of them seem to come from physics.)

To get a sense of the team’s DNA, here’s a word cloud of the leadership page.

[word cloud of the Granular leadership page]

Finally, let’s share a bit about why I decided to join the board.

  • As mentioned, they have a world-class team and I love working with supersmart people.
  • I like vertical strategies.  At MarkLogic, we built the company using a highly vertical strategy.  At Versant, a decade earlier, we turned the company around with a vertical strategy.  At BusinessObjects, while we grew to $1B largely horizontally, as we began to hit scale we used verticals as “+1” kickers to sustain growth.  As a marketeer by trade, I love getting into the mind of and focusing on the needs of the customer, and verticals are a great way to do that.
  • I love the transformational power of the cloud. (Wait, do I sound too much like @Benioff?)  While cloud computing has many benefits, one of my favorites is that the cloud can bring software to markets and businesses where the technology was previously inaccessible.  This is particularly true with farming, which is a remote, fragmented, and “non-sexy” industry by Silicon Valley standards.
  • I like their angle.  While a lot of farming technology thus far has been focused on precision ag, Granular is taking more of a financial and operations platform approach that is a layer up the stack.  Granular helps farmers make better operational decisions (e.g., which field to harvest when), tracks those decisions, and then as a by-product produces a bevy of data that can be used for big data analysis.
  • I love their opportunity.  Not only is this a huge, untapped market, but there is a two-fer opportunity:  [1] a software service that helps automate operations and [2] an information service opportunity derived from the collected big data.
  • Social good.  The best part is that all these amazing people and great technology come packaged with built-in social good.  Helping farmers be more productive not only helps feed the world but helps us maximize planetary resource efficiency in so doing.

I thank the Granular team for having me on the board, and look forward to a bright, transformational future.

Don’t Be a Metrics Slave

I love metrics.  I live for metrics.  Every week and every quarter I drown my team in metrics reviews.  Why?  Because metrics are the instrumentation — the flight panel — of our business.   Good metrics provide clear insights.  They cut through politics, spin, and haze.  They spark amazing debates.   They help you understand your business and compare it to others.

I love metrics, but I’ll never be a slave to them.  Far too often in business I see people who are metrics slaves.  Instead of mastering metrics to optimize the business, the metrics become the master and the manager a slave.

I define metrics slavery as what happens when managers stop thinking and work blindly towards achieving a metric, regardless of whether they believe doing so is best for the business.

One great thing about sports analytics is that despite an amazing slew of metrics, everyone remembers it’s the team with the most goals that wins, not the one who took the most shots.  In business, we often get that wrong in both subtle and not-so-subtle ways.

Here are metrics mistakes that often lead to metrics slavery.

  1. Dysfunctional compensation plans, where managers actively and openly work on what they believe are the wrong priorities in response to a compensation plan that drives them to do so. The more coin-operated the type of people in a department, the more carefully you must define incentives.  While strategic marketers might challenge a poorly aligned compensation plan, most salespeople will simply behave exactly as dictated by the compensation plan.  Be careful what you ask for, because you will often get it.
  2. Poor metric selection. Marketers who count leads instead of opportunities are counting shots instead of goals.  I can’t stand to see tradeshow teams giving away valuable items so they can run the card of every passing attendee.  They might feel great about getting 500 leads by the end of the day, but if 200 are people who will never buy, then those leads are not only useless but actually have negative value because the company’s nurture machine is going to invest fruitless effort in converting them.
  3. Lack of leading indicators. Most managers are more comfortable with solid lagging indicators than they are with squishier leading indicators.  For example, you might argue that leads are a great leading indicator of sales, and you’d be right to the extent that they are good leads.  This then requires you to define “good,” which is typically done using some ABC-style scoring system.  But because the scoring system is complex, subjective, and requires iteration and regression to define, some managers find the whole thing too squishy and say “let’s just count leads.”  That’s the equivalent of counting shots, including shots off-goal that never could have scored.  While leading indicators require a great deal of thought to get right, you must include them in your key metrics, lest you create a company of backwards-looking managers.
  4. Poorly-defined metrics. The plus/minus metric in hockey is one of my favorite sports metrics because it measures teamwork, something I’d argue is pretty hard to measure [1].  However, there is a known problem with the plus/minus rating:  it includes time spent on power plays [2] and penalty kills [3].  Among other problems, this unfairly penalizes defenders on the penalty-killing unit, diluting the value of the metric.  Yet, as far as I know, no one has fixed this problem.   So while it’s tracked, people don’t take it too seriously because of its known limitations.  Do you have metrics like this at your company?  If so, fix them.
  5. Self-fulfilling metrics. These are potential leading metrics where management loses sight of the point and accidentally makes their value a self-fulfilling prophecy.  Pipeline coverage (value of oppties in the pipeline / plan) is such a metric.  Long ago, it was a good leading indicator of plan attainment, but over the past decade literally every sales organization I know has institutionalized beating salespeople unless they have 3x coverage.  What’s happened?  Today, everyone has 3x coverage.  It just doesn’t mean anything anymore.  See this post for a long rant on this topic.
  6. Ill-defined metrics, which happen a lot in benchmarking where we try to compare, for example, our churn rate to an industry average. If you are going to make such comparisons, you must begin with clear definitions or else you are simply counting angels on pinheads.   See this post where I give an example where, off the same data, I can calculate a renewals rate of 69%, 80%, 100%, 103%, 120%, 208%, or 310%, depending on how you choose to calculate.  If you want to do a meaningful benchmark, you had better be comparing the 80% to the 80%, not the 208%.
  7. Blind benchmarking. The strategic mistake that managers make in benchmarking is that they try to converge blindly to the industry average.  This reminds me of the Vonnegut short story where ballerinas have to wear sash-weights and the intelligentsia have music blasted into their ears in order to make everyone equal.  Benchmarks should be tools of understanding, not instruments of oppression.   In addition, remember that benchmarks definitionally blend industry participants with different strategies.  One company may heavily invest in R&D as part of a product-leadership strategy.  Another may heavily invest in S&M as part of a market-share-leadership strategy.  A third may invest heavily in supply chain optimization as part of a cost-leadership strategy.  Aspiring to the average of these companies is a recipe for failure, not success, as you will end up in a strategic No Man’s Land.  In my opinion, this is the most dangerous form of metrics slavery because it happens at the boardroom level, and often with little debate.
  8. Conflicting metrics. Let’s take a concrete example here (see the sketch after this list).  Imagine you are running a SaaS business that’s in a turnaround.  This year bookings growth was flat.  Next year you want to grow bookings 100%.  In addition, you want to converge your P&L over time to an industry average of S&M expenses at 50% of revenues, whereas today you are running at 90%.  While that may sound reasonable, it’s actually a mathematical impossibility.   Why?  Because the company is changing trajectories and in a SaaS business revenues lag bookings by a year.   So next year revenue will be growing slowly [4], and to meet the P&L convergence goal you would need to grow S&M even more slowly than that.  But if you want to meet the 100% bookings growth goal, even with improving efficiency, you’ll need to increase S&M cost by, say, 70%.  It’s impossible.  #QED.  There will always be a tendency to split the difference in such scenarios but that is a mistake.  The question is which is the better metric off which to anchor.  The answer, in a SaaS business, is bookings.  Ergo, the correct answer is not to split the difference (which will put the bookings goal at risk) but to recognize that bookings is the better metric and anchor S&M expense to bookings growth.  This requires a deep understanding of the metrics you use and the courage to confront two conflicting rules of conventional wisdom in so doing.
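
To make the impossibility in item 8 concrete, here’s a rough sketch with illustrative numbers (mine, not from any real plan):

```python
# Why 100% bookings growth and fast S&M-ratio convergence can't coexist
# in a SaaS turnaround. All numbers are illustrative assumptions.

revenue = 10_000_000           # this year's revenue ($)
sm = 0.90 * revenue            # S&M running at 90% of revenue
revenue_next = 1.15 * revenue  # revenue lags bookings, so it grows slowly

# Goal 1: converge S&M toward 50% of revenue.
sm_for_convergence = 0.50 * revenue_next  # $5.75M -> a 36% cut

# Goal 2: double bookings. Even with improving sales efficiency,
# assume that takes roughly 1.7x the S&M spend.
sm_for_bookings = 1.70 * sm               # $15.3M -> a 70% increase

print(f"S&M to converge the P&L:  ${sm_for_convergence/1e6:.2f}M")
print(f"S&M to double bookings:   ${sm_for_bookings/1e6:.2f}M")
# You cannot spend $5.75M and $15.3M at the same time. #QED.
```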

In the end, metrics slavery, while all too common, is more about the people than the metrics.  Managers need to be challenged to understand metrics.  Managers need to be empowered to define new and better metrics.  Managers must be told to use their brains at all times and never do something simply to move a metric.

If you’re always thinking critically, you’ll never be a metrics slave.  The day you stop, you’ll become one.

# # #

[1] The way it works is simple:  if you’re on the ice when your team scores, you get +1.  If you’re on the ice when the opponent scores you get -1.  When you look at someone’s plus/minus rating over time, you can see, for example, which forwards hustle back on defense and which don’t.

[2] When, thanks to an opponent’s penalty, you have more players on the ice than they do.

[3] When, thanks to your team’s penalty, your opponent has more players on the ice than you do.

[4] Because bookings grew slowly this year.

Churn:  Net-First or Sum-First?

While I’ve already done a comprehensive post on the subject of churn in SaaS companies and some perils in how companies analyze it, in talking with fellow SaaS metrics lovers of late, I’ve discovered a new problem that isn’t addressed by my posts.

The question?   When calculating churn, should you sum first (adding up all the shrinkage ARR) or net first (netting shrinkage against expansion ARR by customer and then summing)?  It seems like a simple question, but like so many subtleties in SaaS metrics, whether you net first or sum first, and how you report in so doing, can make a big difference in how you see the business through the numbers.

Let’s see an example.


So what’s our churn rate:  a healthy -1% or a scary 15%?  The answer is both.  In my other post, I define about 5 churn rates, and when you sum first you get my “net ARR churn” rate [1], which comes in at a rather disturbing 15%.  When, however, you net first, you end up with a healthy -1% (“gross ARR churn”) rate because expansion ARR has more than offset shrinkage.  At my company we track both rates because each tells you a different story.

Thanks to the wonders of math, both the net-first and sum-first calculations take you to the same ending ARR number.  That’s not the problem.

The problem is that many companies report churn not in a format like my table above, but in something simpler that looks like this below [2].


As you can see, this net-first format doesn’t show expansion and shrinkage by customer.  I think this is dangerous because it can obscure real problems when shrinkage ARR is offset, or more than offset, by expansion ARR.

For example, customer 2 looks great in the second chart (“wow, $20K in negative churn!”).  In the first chart, however, you can see customer 2 dropped 4 seats of product A and more than offset that by buying 8 seats of product B.  In fact, in the first chart, you can see that everyone is dropping product A and buying product B, a story hidden in the second chart, which neither breaks out shrinkage from expansion nor provides a comment as to what’s going on.  My advice is simple:  do sum-first churn and report both the “net ARR” and “gross ARR” renewal rates and you’ll get the whole picture.
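
Here’s a sketch of both calculations on toy data (my own numbers; the shape mirrors the tables described above, with product A shrinkage partly masked by product B expansion):

```python
# Sum-first vs. net-first churn on toy data (illustrative numbers, $K).
# Each tuple: (starting ARR, shrinkage, expansion) per customer.

customers = {
    1: (250, -40, 40),   # dropped product A seats, added product B
    2: (250, -40, 60),   # "negative churn" that hides a product A problem
    3: (250, -40, 30),
    4: (250, -30, 30),
}
start = sum(s for s, _, _ in customers.values())  # 1000

# Sum-first: add up all shrinkage, report expansion separately -> 15%
sum_first = -sum(shr for _, shr, _ in customers.values()) / start

# Net-first: net shrinkage against expansion per customer, then sum -> -1%
net_first = -sum(shr + exp for _, shr, exp in customers.values()) / start

print(f"sum-first churn: {sum_first:.0%}, net-first churn: {net_first:.0%}")
```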

Aside 1:  The Reclaimed ARR Issue
This debate prompted a second one with my Customers For Life (CFL) team, who wanted to introduce a new metric called “reclaimed ARR,” the ARR that would have been lost on renewal but was saved by CFL through cross-sells, up-sells, and price increases.  Thus far, I’m not in love with the concept as it adds complexity, but I understand why they like it and you can see how I’d calculate it below.


Aside 2:  Saved ARR
The first aside was prompted by the fact that CFL/renewals teams primarily play defense, not offense.  Like goalies on a hockey team, they get measured by a negative metric (i.e., the churn ARR that got away).   Even when they deliver offsetting expansion ARR, there is still some ARR that gets away, and a lot of their work (in the customer support and customer success parts of CFL) is not about offsetting upsell, it’s about protecting the core of the renewal.  For that reason, so as to reflect that important work in our metrics, we’ve taken a lesson from baseball and the notion of a “save.”  Once the renewals come in, we add up all the ARR that came from customers who were, at any point in time since their last renewal, in our escalated accounts program and call that Saved ARR.    It’s the best metric we’ve found thus far to reflect that important work.

# # #

[1] I have backed into the rather unfortunate position of using the word “net” in two different ways.  When I say “net ARR churn” I mean churn ARR net of (i.e., exclusive of) expansion ARR.  When I say net-first churn, I mean netting out shrinkage vs. expansion first, before summing across customers to get total churn.

[2] Note that I properly inverted the sign because negative churn is good and positive churn is bad.

Average Contract Duration and SaaS Renewals: All Is Not As It Appears

Chatting with some SaaS buddies the other day, we ran into a fun — and fairly subtle — SaaS metrics question.  It went something like this:

VP of Customer Success:  “Our average contract duration (ACD) on renewals was 1.5 years last quarter and –”

VP of Sales:  “– Wait a minute, our ACD on new business is 2.0 years.  If customers are renewing for shorter terms than those of the initial sale, it  means they are less confident about future usage at renewals time than they are at the initial purchase. Holy Moly, that means we have a major problem with the product or with our customer success program.”

Or do we?  At first blush, the argument makes perfect sense.  If new customers sign two-year contracts and renewing ones sign 1.5-year contracts, it would seem to indicate that renewing customers are indeed less bullish on future usage than existing ones.  Having drawn that conclusion, you are instantly tempted to blame the product, the customer success team, technical support, or some other factor for the customers’ confidence reduction.

But is there a confidence reduction?  What does it actually mean when your renewals ACD is less than your new business ACD?

The short answer is no.  We’re seeing what I call the “why are there so many frequent flyers on airplanes” effect.  At first blush, you’d think that if ultra-frequent flyers (e.g., United 1K) represent the top 1%, then a 300-person flight might have three or four on board, while in reality it’s more like 20-30.  But that’s all that’s going on here:  frequent flyers are over-represented on airplanes because they fly more, just like one-year contracts are over-represented in renewals because they renew more.

Let’s look at an example.  We have a company that signs one-year, two-year, and three-year deals.  Let’s assume customers renew for the same duration as their initial contract — so there is no actual confidence reduction in play.  Every deal is $100K in annual recurring revenue (ARR).  We’ll calculate ACD on an ARR-weighted basis.  Let’s assume zero churn.

If we sign five one-year, ten two-year, and fifteen three-year deals, we end up with $3M in new ARR and an ACD of 2.3 years.


In year 1, only the one-year deals come up for renewal and (since we’ve assumed everyone renews for the same length as their initial term) we have an ACD of one year.  The VP of Sales is probably panicking — “OMG, customers have cut their ACD from 2.3 to 1.0 years!  Who’s to blame?  What’s gone wrong?!”

Nothing.  Only the one-year contracts had a shot at renewing and they all renewed for one year.

In year 2, both the (re-renewing) one-year and the (initially renewing) two-year contracts come up for renewal.  The ACD is 1.7 — again lower than the 2.3-year new business ACD.  While, again, the decrease in ACD might lead you to suspect a problem, there is nothing wrong.  It’s just math and the fact that shorter-duration contracts renew more often, which pulls down the renewals ACD.
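
Here’s that arithmetic as a minimal sketch, using the same assumptions as the example above:

```python
# ACD math for the example above: every deal is $100K ARR, zero churn,
# and everyone renews for the same duration as their initial term.

new_deals = {1: 5, 2: 10, 3: 15}  # term (years) -> deal count, all $100K ARR

def acd(deals):
    """ARR-weighted ACD; with equal $100K deals this is a simple average."""
    return sum(term * n for term, n in deals.items()) / sum(deals.values())

print(f"new business ACD:    {acd(new_deals):.1f} years")        # 2.3

# Year 1: only the 1-year deals come up for renewal.
print(f"year-1 renewals ACD: {acd({1: 5}):.1f} years")            # 1.0

# Year 2: 1-year deals re-renew and 2-year deals renew for the first time.
print(f"year-2 renewals ACD: {acd({1: 5, 2: 10}):.1f} years")     # 1.7
```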

What To Do About This?
First, understand it.  As with many SaaS metrics, it’s counter-intuitive.

As I’ve mentioned before, SaaS metrics and unit economics are often misunderstood.  While I remain a huge fan of using them to run the business, I strongly recommend taking the time to develop a deep understanding of them.  In addition, the more I see counter-intuitive examples, the more I believe in building full three- to five-year financial models of SaaS businesses in order to correctly see the complex interplay among drivers.

For example, if a company does one-year, two-year, and three-year deals, a good financial model should have drivers for both new business contract duration (i.e., percent of 1Y, 2Y, and 3Y deals) and a renewals duration matrix that has renewal rates for all nine combinations of {1Y, 2Y, 3Y} x {1Y, 2Y, 3Y} deals (e.g., a 3Y-to-1Y renewal rate).  This will produce an overall renewals rate and an overall ACD for renewals.  (In a really good model, both the new business breakdown and the renewals matrix should vary by year.)
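
As a sketch of what those model drivers might look like in code — the renewal rates and expiring ARR below are hypothetical, not a recommendation:

```python
# Sketch of a renewals duration matrix: a renewal rate for each
# {initial term} x {renewal term} combination. Rates are hypothetical;
# each row's shortfall from 100% is churn.

renewal_matrix = {
    1: {1: 0.70, 2: 0.10, 3: 0.05},
    2: {1: 0.10, 2: 0.70, 3: 0.05},
    3: {1: 0.05, 2: 0.10, 3: 0.75},
}

def renewals_summary(expiring_arr):
    """expiring_arr: initial term (years) -> ARR up for renewal ($K)."""
    renewed = {(i, r): arr * rate
               for i, arr in expiring_arr.items()
               for r, rate in renewal_matrix[i].items()}
    rate = sum(renewed.values()) / sum(expiring_arr.values())
    acd = sum(r * v for (_, r), v in renewed.items()) / sum(renewed.values())
    return rate, acd

rate, acd = renewals_summary({1: 500, 2: 1000, 3: 1500})
print(f"overall renewal rate: {rate:.0%}, renewals ACD: {acd:.2f} years")
```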

Armed with that model, built with assumptions based on both history and future goals for the new business breakdown and the renewals matrix, you can then have meaningful conversations about how ACD is varying on new and renewals business relative to plan.  Without that, by just looking at one number and not understanding how it’s produced, you run the very real risk of reacting to a math effect and setting off a false alarm on renewals.