Category Archives: Cloud

The New Gartner 2018 Magic Quadrants for Cloud Financial Planning & Analysis and Cloud Financial Close Solutions

If all you’re looking for is the free download link, let’s cut to the chase:  here’s where you can download the new 2018 Gartner Magic Quadrant for Cloud Financial Planning and Analysis Solutions and the new 2018 Gartner Magic Quadrant for Cloud Financial Close Solutions.  These MQs are written jointly by John Van Decker and Chris Iervolino (with Chris as primary author on the first and John as primary author on the second).  Both are deep experts in the category with decades of experience.

Overall, I can say that at Host Analytics, we are honored to be a leader in both MQs again this year.  We are also honored to be the only cloud pure-play vendor to be a leader in both, and we believe that speaks volumes about the depth and breadth of EPM functionality that we bring to the cloud.

So, if all you wanted was the links, thanks for visiting.  If, however, you’re looking for some Kellblog editorial on these MQs, then please continue on.

Whither CPM?
The first thing the astute reader will notice is that the category name, which Gartner formerly referred to as corporate performance management (CPM), and which others often referred to as enterprise performance management (EPM), is entirely missing from these MQs.  That’s no accident.  Gartner decided last fall to move away from CPM as an uber-category descriptor in favor of referring more directly to the two related, but pretty different, categories beneath it.  Thus, in the future you won’t be hearing “CPM” from Gartner anymore, though I know that some vendors — including Host Analytics — will continue to use EPM/CPM until we can find a more suitable capstone name for the category.

Personally, I’m in favor of this move for two simple reasons.

  • CPM was a forced, analyst-driven category in the first place, dating back to Howard Dresner’s predictions that financial planning/budgeting would converge with business intelligence.  While Howard published the research that launched a thousand ships in terms of BI and financial planning industry consolidation (e.g., Cognos/Adaytum, BusinessObjects/SRC/Cartesis, Hyperion/Brio), the actual software itself never converged.  CPM never became like CRM — a true convergence of sales force automation (SFA) and contact center.  In each case, the two companies could be put under one roof, but they sold fundamentally different value propositions to very different buyers and thus never came together as one.
  • In accordance with the prior point, few customers actually refer to the category by CPM/EPM.  They say things much more akin to “financial planning” and “consolidation and close management.”  Since I like referring to things in the words that customers use, I am again in favor of this change.

It does, however, create one problem — Gartner has basically punted on trying to name a capstone category to include vendors who sell both financial planning and financial consolidation software.  Since we at Host Analytics think that’s important, and since we believe there are key advantages to buying both from the same vendor, we’d prefer if there were a single, standard capstone term.  If it were easy, I suppose a name would have already emerged [1].

How Not To Use Magic Quadrants
While they are Gartner’s flagship deliverable, magic quadrants (MQs) can generate a lot of confusion.  MQs don’t tell you which vendor is “best” because there is no universal best in any category.  MQs don’t tell you which vendor to pick to solve your problem because different solutions are designed around meeting different requirements.  MQs don’t predict the future of vendors — last year’s movement vectors rarely predict this year’s positions.  And the folks I know at Gartner generally strongly dislike vector analysis of MQs because they view vendor placement as relative to each other at any moment in time [2].

Many things that customers seem to want from Gartner MQs are actually delivered by Gartner’s Critical Capabilities reports, which get less attention because they don’t produce a simple, dramatic 2×2 output, but which are far better suited for determining the suitability of different products for different use-cases.

How To Use A Gartner Magic Quadrant?
In my experience after 25+ years in enterprise software, I would use MQs for their overall purpose:  to group vendors into four different buckets — leaders, challengers, visionaries, and niche players.  That’s it.  If you want to know who the leaders are in a category, look top right.  If you want to know who the visionaries are, look bottom right.  If you want to know which big companies are putting resources into the category but thus far lack strategy/vision, look top left at the challengers quadrant.

But should you, in my humble opinion, get particularly excited about millimeter differences on either axis?  No.  Why?  Because what drives those deltas may have little correlation, no correlation, or even a negative correlation to your situation.  In my experience, the analysts pay a lot of attention to which quadrant a vendor ends up in [2], so quadrant placement is quite closely watched.  Dot placement, while closely watched by vendors, doesn’t change much in the real world save for dramatic differences [3].  After all, they are called the magic quadrants, not the magic dots.

All that said, let me wind up with some observations on the MQs themselves.

Quick Thoughts on the 2018 Cloud FP&A Solutions MQ
While the MQs were published at the end of July 2018, they were based on information about the vendors gathered in, and largely about, 2017.  While there is always some phase lag between the end of data collection and the publication date, this year it was unusually long — meaning that a lot may have changed in the market in the first half of 2018 that customers should be aware of.  For that reason, if you’re a Gartner customer using either the MQs or the critical capabilities reports that accompany them, you should probably set up an appointment to call the analysts to ensure you’re working off the latest data.

Here are some of my quick thoughts on the Cloud FP&A Solutions magic quadrant:

  • Gartner says the FP&A market is accelerating its shift from on-premises to cloud.  I agree.
  • Gartner allows three types of “cloud” vendors into this (and the other) MQ:  cloud-only vendors, on-premises vendors with new built-for-the-cloud solutions, and on-premises vendors who allow their software to be hosted on a third-party cloud platform.  While I understand their need to be inclusive, I think this is pretty broad — the total cost of ownership, cash flows, and incentives are quite different between pure cloud vendors and hosted on-premises solutions.  Caveat emptor.
  • To qualify for the MQ, vendors must support at least two of the following four components of FP&A:  planning/budgeting, integrated financial planning, forecasting/modeling, and management/performance reporting.  Thus the MQ is not terribly homogeneous in terms of vendor profile and use-cases.
  • For the second year in a row, (1) Host is a leader in this MQ and (2) is the only cloud pure-play vendor who is a leader in both.  We think this says a lot about the breadth and depth of our product line.
  • Customer references for Host cited ease of use, price, and solution flexibility as top three purchasing criteria.  We think this very much represents our philosophy of complex EPM made easy.

Quick Thoughts on the 2018 Cloud Financial Close Solutions MQ
Here are some of my quick thoughts on the Cloud Financial Close Solutions magic quadrant:

  • Gartner says that in the past two years the financial close market has shifted from mature on-premises to cloud solutions.  I agree.
  • While Gartner again allowed all three types of cloud vendors into this MQ, I believe some of the vendors here do just enough, just-cloud-enough business to clear the bar, but are fundamentally still offering on-premises wolves in cloud sheep’s clothing.  Customers should look at things like total cost of ownership, upgrade frequency, and upgrade phase lags to separate real from fake cloud offerings.
  • This MQ is more of a mixed bag than the FP&A MQ or, for that matter, most Gartner MQs.  In general, MQs plot substitutes against each other — the dots on an MQ usually represent vendors who do basically the same thing.  This is not true for the Cloud Financial Close (CFC) MQ — e.g., Workiva is a disclosure management vendor (and a partner of Host Analytics), but it does not offer financial consolidation software, as do, say, Host Analytics or Oracle.
  • Because the scope of this MQ is broad and both general and specialist vendors are included, customers should either call Gartner for help (if they are Gartner customers) or just be mindful of the mixing and segmentation — e.g., FloQast (in SMB and MM) and BlackLine (in enterprise) both do account reconciliation, but they are naturally segmented by customer size (and both are partners of Host, which does financial consolidation but not account reconciliation).
  • Net:  while I love that the analysts are willing to put different types of close-related, office-of-the-CFO-oriented vendors on the same MQ, it does require more than the usual amount of mindfulness in interpreting it.

Conclusion
Finally, if you want to analyze the source documents yourself, you can use the following link to download both the 2018 Gartner Magic Quadrant for Cloud Financial Planning and Analysis Solutions and the 2018 Gartner Magic Quadrant for Cloud Financial Close Solutions.

# # #

Notes

[1] For Gartner, this is likely more than a semantic issue.  They are pretty strong believers in a “post-modern” ERP vision which eschews the idea of a monolithic application that includes all services, in favor of using and integrating a series of cloud-based services.  Since we are also huge believers in integrating best-of-breed cloud services, it’s hard for us to take too much issue with that.  So we’ll simply have to clearly articulate the advantages of using Host Planning and Host Consolidations together — from our viewpoint, two best-of-breed cloud services that happen to come from a single vendor.

[2] And not something done against absolute scales where you can track movement over time.  See, for example, the two explicit disclaimers in the FP&A MQ.


[3] I’m also a believer in a slightly more esoteric theory which says:  given that the Gartner dot-placement algorithm seems to try very hard to layout dots in a 45-degree-tilted football shaped pattern, it is always interesting to examine who, how, and why someone ends up outside that football.

My Appearance on DisrupTV Episode 100

Last week I sat down with interviewers Doug Henschen, Vala Afshar, and a bit of Ray Wang (live from a 777 taxiing en route to Tokyo) to participate in Episode 100 of DisrupTV along with fellow guests DataStax CEO Billy Bosworth and big data / science recruiter Virginia Backaitis.

We covered a full gamut of topics, including:

  • The impact of artificial intelligence (AI) and machine learning (ML) on the enterprise performance management (EPM) market.
  • Why I joined Host Analytics some 5 years ago.
  • What it’s like competing with Oracle … for basically your entire career.
  • What it’s like selling enterprise software both upwind and downwind.
  • How I ended up on the board of Alation and what I like about data catalogs.
  • What I learned working at Salesforce (hint:  shoshin)
  • Other lessons from BusinessObjects, MarkLogic, and even Ingres.

DisrupTV Episode 100, Featuring Dave Kellogg, Billy Bosworth, Virginia Backaitis from Constellation Research on Vimeo.

 

Why has Standalone Cloud BI been such a Tough Slog?

I remember that when I left Business Objects back in 2004, it was early days in the cloud.  We were using Salesforce internally (and were one of their larger customers at the time), so I was familiar with, and a proponent of, cloud-based applications, but I never felt great about BI in the cloud.  Despite that, Business Objects and others aggressively ramped on-demand offerings, all of which amounted to pretty much nothing a few years later.

Startups were launched, too.  Specifically, I remember:

  • Birst, née Success Metrics, founded in 2004 by Siebel BI veterans Brad Peters and Paul Staelin, which was originally supposed to sell vertical-industry analytic applications.
  • LucidEra, founded in 2005 by Salesforce and Siebel veteran Ken Rudin (et alia) whose original mission was to be to BI what Salesforce was to CRM.
  • PivotLink, which did their series A in 2007 (but was founded in 1998), positioned as on-demand BI and later moved into more vertically focused apps in retail.
  • GoodData, founded in 2007 by serial entrepreneur Roman Stanek, which early on focused on SaaS embedded BI and later moved to more of a high-end enterprise positioning.

These were great people — Brad, Ken, Roman, and others were brilliant, well educated veterans who knew the software business and their market space.

These were great investors — names like Andreessen Horowitz, Benchmark, Emergence, Matrix, Sequoia, StarVest, and Tenaya invested over $300M in those four companies alone.

This was theoretically a great, straightforward cloud-transformation play of a $10B+ market, a la Siebel to Salesforce.

But of the four companies named above, only GoodData is doing well and still in the fight (with a high-end enterprise platform strategy that bears little resemblance to a straight cloud-transformation play); the three others all came to uneventful exits.

So, what the hell happened?

Meanwhile, recall that Tableau, founded in 2003 and armed in its early years with a measly $15M in venture capital and an exclusively on-premises business model, blew right by all the cloud BI vendors, going public in May 2013.  Despite the stock being cut by more than half since its July 2015 peak, Tableau is still worth $4.2B today.

I can’t claim to have the definitive answer to the question I’ve posed in the title.  In the early days I thought it was related to technical issues like trust/security, trust/scale, and the complexities of cloud-based data integration.  But those aren’t issues today.  For a while back in the day I thought maybe the cloud was great for applications, but perhaps not for platforms or infrastructure.  While SaaS was the first cloud category to take off, we’ve obviously seen enormous success with both platforms (PaaS) and infrastructure (IaaS) in the cloud, so that can’t be it.

While some analysts lump EPM under BI, cloud-based EPM has not had similar troubles.  Neither Host nor our top competitors have ever struggled with focus or positioning, and we are all basically running slightly different variations on the standard cloud-transformation play.  I’ve always believed that lumping EPM under BI is a mistake because, while they use similar technologies, they are sold to different buyers (IT vs. finance) and the value proposition is totally different (tool vs. application).  While there’s plenty of technology in EPM, it is an applications play — you can’t sell it or implement it without domain knowledge in finance, sales, marketing, or whatever domain you’re building the planning system for.  So I’m not troubled to explain why cloud EPM hasn’t been a slog while cloud BI absolutely has been.

My latest belief is that the business model wasn’t the problem in BI.  The technology was.  Cloud transformation plays are all about business model transformation.  On-premises applications business models were badly broken:  the software cost $10s of millions to buy and $10s of millions more to implement (for large customers).  SMBs were often locked out of the market because they couldn’t afford the ante.  ERP and CRM were exposed because of this and the market wanted and needed a business model transformation.

With BI, I believe, the business model just wasn’t the problem.  By comparison to ERP and CRM, it was a fraction of the cost to buy and implement.  A modest BusinessObjects license might have cost $150K, with implementation costing less than that.  The problem was not that the BI business model was broken; it was that the technology never delivered on the democratization promise it made.  Despite the industry shouting “BI for the masses” in 1995, BI never really made it beyond the analyst’s desk.

Just as RDBMS themselves failed to deliver information democracy with SQL (which, believe it or not, was part of the original pitch — end users could write SQL to answer their own queries!), BI tools — while they helped enable analysts — largely failed to help Joe User.  They weren’t easy enough to use.  They lacked information discovery.  They lacked, importantly, easy-yet-powerful visualization.

That’s why Tableau, and to a lesser extent Qlik, prospered while the cloud BI vendors struggled.  (It’s also why I find it profoundly ironic that Tableau is now in a massive rush to “go cloud” today.)  It’s also one reason why the world now needs companies like Alation — the information democracy brought by Tableau has turned into information anarchy and companies like Alation help rein that back in (see disclaimers).

So, I think that cloud BI proved to be such a slog because the cloud BI vendors solved the wrong problem. They fixed a business model that wasn’t fundamentally broken, all while missing the ease of use, data discovery, and visualization power that both required the horsepower of on-premises software and solved the real problems the users faced.

I suspect it’s simply another great, if simple, lesson in solving your customer’s problem.

Feel free to weigh in on this one as I know we have a lot of BI experts in the readership.

Host Analytics World: Some Key Takeaways

We are having an amazing time at Host Analytics World this week in San Francisco.  I’m thrilled with the size (over 700 people), the positive energy, and the learning/sharing that’s taking place at this event.


Probably the single best thing I’ve heard from customers at the conference is this:

“I use a lot of cloud software and … the relationship you have with your customers is unique.”

The reason this makes me so happy is that it’s exactly what our strategy is all about.  We are a 100% customer-focused SaaS vendor, and a huge part of my strategy here is to build a real, deep, sincere customer-success culture.  So any time I hear our customers echo back that this is what they are seeing and feeling, it makes me very happy.  And I’ve heard plenty of those echoes this week.

The other big things I’ve seen thus far:

  • Tremendous interest in modeling and our new Modeling Cloud offering.  Organizations are doing more modeling than ever before and they want a modeling solution that leverages Excel and ties together disparate departmental models into a single enterprise model.
  • Huge support for our intelligent leverage of Excel strategy.  AirLiftXL, SpotLightXL, and our web-based Excel grid allow customers to leverage their existing models and, more importantly, skills / human capital in the context of a proper planning system.
  • Major interest in tying together sales and financial planning.  This is a real hot button in finance right now, as sales planning is increasingly done by sales ops and/or sales strategy groups outside of finance, and in software not linked to the central planning system.
  • Big interest in our new Aviso partnership as part of our strategy to better link sales and finance.  Aviso delivers predictive analytics that not only help forecast sales but actually guide sales management to the most important opportunities in the pipeline.  In general, customers seem to support our strategy to stay focused on EPM and not extend ourselves into adjacent fields where best-of-breed players already exist.
  • And finally, I’d be remiss if I didn’t introduce our new mascots, Tick and Tie.


Survivor Bias in Churn Calculations: Say It’s Not So!

I was chatting with a fellow SaaS executive the other day and the conversation turned to churn and renewal rates.  I asked how he calculated them and he said:

Well, we take every customer who was also a customer 12 months ago and then add up their ARR 12 months ago and add up their ARR today, and then divide today’s ARR by year-ago ARR to get an overall retention or expansion rate.

Well, that sounds dandy until you think for a minute about survivor bias, the often inadvertent logical error in analyzing data from only the survivors of a given experiment or situation.  Survivor bias is subtle, but here are some common examples:

  • I first encountered survivor bias in mutual funds, when I realized that look-back studies of prior 5- or 10-year performance include only the funds still in existence today.  It’s the logic of “if you eliminate my bogeys, I’m actually a below-par golfer.”
  • My favorite example comes from World War II, when analysts examined the pattern of anti-aircraft fire on returning bombers and argued for strengthening them in the places that were most often hit.  This was exactly wrong — the places where returning bombers were hit were already strong enough.  You needed to reinforce the places where the downed bombers were hit.

So let’s turn back to churn rates.  If you’re going to calculate an overall expansion or retention rate, which way should you approach it?

  1. Start with a list of customers today, look at their total ARR, and then go compare that to their ARR one year ago, or
  2. Start with a list of customers from one year ago and look at their ARR today.

Number 2 is the obvious answer.  You should include the ARR from customers who choose to stop being customers in calculating an overall churn or expansion rate.  Calculating it the first way can be misleading because you are looking at the ARR expansion only from customers who chose to continue being customers.

Let’s make this real via an example.

[Table omitted: a contrived example comparing today’s ARR against year-ago ARR, with and without the customers who churned during the year.]

The survivor-bias question comes down to whether you include or exclude the churned customers from year-ago ARR.  The difference can be profound.  In this simple example, the survivor-biased expansion rate is a nice 111%.  However, the non-biased rate is only 71%, which will get you a quick “don’t let the door hit your ass on the way out” at most VCs.  And while the example is contrived, the difference is simply one of calculation off identical data.
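For readers who like to see the mechanics, here is a minimal Python sketch of the two calculations.  The customer names and ARR figures are made up, chosen only to mirror the contrived example above:

```python
# Hypothetical customer data: ARR a year ago vs. today (0 = churned).
# All names and numbers are illustrative assumptions.
customers = {
    "alpha":   {"year_ago": 100, "today": 150},
    "bravo":   {"year_ago": 300, "today": 350},
    "charlie": {"year_ago": 500, "today": 500},
    "delta":   {"year_ago": 200, "today": 0},   # churned
    "echo":    {"year_ago": 300, "today": 0},   # churned
}

# Survivor-biased: start from customers who are still customers today.
survivors = [c for c in customers.values() if c["today"] > 0]
biased = sum(c["today"] for c in survivors) / sum(c["year_ago"] for c in survivors)

# Non-biased: start from everyone who was a customer a year ago.
cohort = [c for c in customers.values() if c["year_ago"] > 0]
unbiased = sum(c["today"] for c in cohort) / sum(c["year_ago"] for c in cohort)

print(f"survivor-biased expansion: {biased:.0%}")   # 111%
print(f"non-biased expansion:      {unbiased:.0%}") # 71%
```

The only difference between the two rates is which cohort you start from; the underlying data is identical.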

Do companies use survivor-biased calculations in real life?  Let’s look at my post on the Hortonworks S-1 where I quote how they calculate their net expansion rate:

We calculate dollar-based net expansion rate as of a given date as the aggregate annualized subscription contract value as of that date from those customers that were also customers as of the date 12 months prior, divided by the aggregate annualized subscription contract value from all customers as of the date 12 months prior.

When I did my original post on this, I didn’t even catch it.  But that’s just how subtle survivor bias can be.

# # #

Disclaimers:

  • I have not tracked Hortonworks in the meantime, so I don’t know if they still report this metric, at what frequency, how they currently calculate it, etc.
  • To the extent that “everyone calculates it this way” is true, companies might report it this way for comparability, but people should be aware of the bias.  One approach is to calculate the metric both ways — looking back from today’s customer base and looking forward from the year-ago base — and show both.
  • See my FAQ for additional disclaimers, including that I am not a financial analyst and do not make recommendations on stocks.

Don’t Be a Metrics Slave

I love metrics.  I live for metrics.  Every week and every quarter I drown my team in metrics reviews.  Why?  Because metrics are the instrumentation — the flight panel — of our business.   Good metrics provide clear insights.  They cut through politics, spin, and haze.  They spark amazing debates.   They help you understand your business and compare it to others.

I love metrics, but I’ll never be a slave to them.  Far too often in business I see people who are metrics slaves.  Instead of mastering metrics to optimize the business, the metrics become the master and the manager a slave.

I define metrics slavery as what happens when managers stop thinking and work blindly towards achieving a metric, regardless of whether they believe doing so is best for the business.

One great thing about sports analytics is that despite an amazing slew of metrics, everyone remembers it’s the team with the most goals that wins, not the one who took the most shots.  In business, we often get that wrong in both subtle and not-so-subtle ways.

Here are metrics mistakes that often lead to metrics slavery.

  1. Dysfunctional compensation plans, where managers actively and openly work on what they believe are the wrong priorities in response to a compensation plan that drives them to do so.  The more coin-operated the people in a department, the more carefully you must define incentives.  While strategic marketers might challenge a poorly aligned compensation plan, most salespeople will simply behave exactly as the plan dictates.  Be careful what you ask for, because you will often get it.
  2. Poor metric selection.  Marketers who count leads instead of opportunities are counting shots instead of goals.  I can’t stand to see tradeshow teams giving away valuable items so they can run the card of every passing attendee.  They might feel great about getting 500 leads by the end of the day, but if 200 are people who will never buy, then those 200 are not only useless but actually have negative value, because the company’s nurture machine is going to invest fruitless effort in converting them.
  3. Lack of leading indicators.  Most managers are more comfortable with solid lagging indicators than with squishier leading indicators.  For example, you might argue that leads are a great leading indicator of sales, and you’d be right to the extent that they are good leads.  That requires you to define “good,” which is typically done using some ABC-style scoring system.  But because the scoring system is complex, subjective, and requires iteration and regression to define, some managers find the whole thing too squishy and say “let’s just count leads.”  That’s the equivalent of counting shots, including shots off-goal that never could have scored.  While leading indicators require a great deal of thought to get right, you must include them in your key metrics, lest you create a company of backwards-looking managers.
  4. Poorly defined metrics.  The plus/minus metric in hockey is one of my favorite sports metrics because it measures teamwork, something I’d argue is pretty hard to measure [1].  However, there is a known problem with the plus/minus rating:  it includes time spent on power plays [2] and penalty kills [3].  Among other problems, this unfairly penalizes defenders on the penalty-killing unit, diluting the value of the metric.  Yet, as far as I know, no one has fixed this problem.  So while it’s tracked, people don’t take it too seriously because of its known limitations.  Do you have metrics like this at your company?  If so, fix them.
  5. Self-fulfilling metrics.  These are potential leading metrics where management loses sight of the point and accidentally makes their value a self-fulfilling prophecy.  Pipeline coverage (value of opportunities in the pipeline / plan) is such a metric.  Long ago, it was a good leading indicator of plan attainment, but over the past decade literally every sales organization I know has institutionalized beating salespeople unless they have 3x coverage.  What’s happened?  Today, everyone has 3x coverage.  It just doesn’t mean anything anymore.  See this post for a long rant on this topic.
  6. Ill-defined metrics, which happen a lot in benchmarking, where we try to compare, for example, our churn rate to an industry average.  If you are going to make such comparisons, you must begin with clear definitions or else you are simply counting angels on pinheads.  See this post, where I give an example where, off the same data, I can calculate a renewal rate of 69%, 80%, 100%, 103%, 120%, 208%, or 310%, depending on how you choose to calculate.  If you want a meaningful benchmark, you had better be comparing the 80% to the 80%, not to the 208%.
  7. Blind benchmarking.  The strategic mistake managers make in benchmarking is trying to converge blindly to the industry average.  This reminds me of the Vonnegut short story where ballerinas have to wear sash-weights and the intelligentsia have music blasted into their ears in order to make everyone equal.  Benchmarks should be tools of understanding, not instruments of oppression.  In addition, remember that benchmarks definitionally blend industry participants with different strategies.  One company may invest heavily in R&D as part of a product-leadership strategy.  Another may invest heavily in S&M as part of a market-share-leadership strategy.  A third may invest heavily in supply-chain optimization as part of a cost-leadership strategy.  Aspiring to the average of these companies is a recipe for failure, not success, because you will end up in a strategic no man’s land.  In my opinion, this is the most dangerous form of metrics slavery because it happens at the boardroom level, and often with little debate.
  8. Conflicting metrics.  Let’s take a concrete example.  Imagine you are running a SaaS business that’s in a turnaround.  This year bookings growth was flat.  Next year you want to grow bookings 100%.  In addition, you want to converge your P&L over time to an industry average of S&M expense at 50% of revenues, whereas today you are running at 90%.  While that may sound reasonable, it’s actually a mathematical impossibility.  Why?  Because the company is changing trajectories, and in a SaaS business revenues lag bookings by a year.  So next year revenue will be growing slowly [4], which means you need to grow S&M even more slowly to meet the P&L convergence goal.  But to meet the 100% bookings growth goal, even with improving efficiency, you’ll need to increase S&M cost by say 70%.  It’s impossible.  #QED.  There will always be a tendency to split the difference in such scenarios, but that is a mistake.  The question is:  which is the better metric to anchor off of?  The answer, in a SaaS business, is bookings.  Ergo, the correct answer is not to split the difference (which would put the bookings goal at risk) but to recognize that bookings is the better metric and anchor S&M expense to bookings growth.  This requires a deep understanding of the metrics you use, and the courage to confront two conflicting rules of conventional wisdom in doing so.
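To see why the two goals in the conflicting-metrics example can’t coexist, here is a back-of-the-envelope sketch in Python.  The dollar figures are illustrative assumptions of mine, not from any real company:

```python
# Illustrative turnaround scenario (all figures are assumptions, in $M).
bookings_this_year = 10.0   # flat bookings growth this year
bookings_next_year = 20.0   # the 100% bookings growth goal
revenue_next_year = 11.0    # revenue lags bookings ~a year, so it grows slowly
sm_this_year = 9.0          # S&M at 90% of this year's ~$10M revenue

# Goal 1: converge S&M toward 50% of next year's revenue.
sm_cap_from_pl_goal = 0.50 * revenue_next_year

# Goal 2: double bookings with (optimistically) improving sales efficiency,
# so S&M "only" needs to grow ~70% rather than the full 100%.
sm_needed_for_bookings = sm_this_year * 1.7

print(round(sm_cap_from_pl_goal, 1))     # 5.5
print(round(sm_needed_for_bookings, 1))  # 15.3
# 15.3 >> 5.5: the P&L convergence goal and the bookings goal conflict.
```

No amount of difference-splitting reconciles a $5.5M cap with a $15.3M requirement, which is why you have to pick the anchor metric (bookings) rather than average the two goals.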

In the end, metrics slavery, while all too common, is more about the people than the metrics.  Managers need to be challenged to understand metrics.  Managers need to be empowered to define new and better metrics.  Managers must be told to use their brains at all times and never do something simply to move a metric.

If you’re always thinking critically, you’ll never be a metrics slave.  The day you stop, you’ll become one.

# # #

[1] The way it works is simple:  if you’re on the ice when your team scores, you get +1.  If you’re on the ice when the opponent scores you get -1.  When you look at someone’s plus/minus rating over time, you can see, for example, which forwards hustle back on defense and which don’t.

[2] When, thanks to an opponent’s penalty, you have more players on the ice than they do.

[3] When, thanks to your team’s penalty, your opponent has more players on the ice than you do.

[4] Because bookings grew slowly this year.

Average Contract Duration and SaaS Renewals: All Is Not As It Appears

Chatting with some SaaS buddies the other day, we ran into a fun — and fairly subtle — SaaS metrics question.  It went something like this:

VP of Customer Success:  “Our average contract duration (ACD) on renewals was 1.5 years last quarter and –“

VP of Sales:  “– Wait a minute, our ACD on new business is 2.0 years.  If customers are renewing for shorter terms than those of the initial sale, it means they are less confident about future usage at renewals time than they are at the initial purchase.  Holy Moly, that means we have a major problem with the product or with our customer success program.”

Or do we?  At first blush, the argument makes perfect sense.  If new customers sign two-year contracts and renewing ones sign 1.5-year contracts, it would seem to indicate that renewing customers are indeed less bullish on future usage than new ones.  Having drawn that conclusion, you are instantly tempted to blame the product, the customer success team, technical support, or some other factor for the customers’ reduced confidence.

But is there a confidence reduction?  What does it actually mean when your renewals ACD is less than your new business ACD?

The short answer is no.  We’re seeing what I call the “why are there so many frequent flyers on airplanes” effect.  At first blush, you’d think that if ultra-frequent flyers (e.g., United 1K) represent the top 1%, then a 300-person flight might have three or four on board, while in reality it’s more like 20-30.  And that’s the point — frequent flyers are over-represented on airplanes because they fly more, just as one-year contracts are over-represented in renewals because they renew more.

Let’s look at an example.  We have a company that signs one-year, two-year, and three-year deals.  Let’s assume customers renew for the same duration as their initial contract — so there is no actual confidence reduction in play.  Every deal is $100K in annual recurring revenue (ARR).  We’ll calculate ACD on an ARR-weighted basis.  Let’s assume zero churn.

If we sign five one-year, ten two-year, and fifteen three-year deals, we end up with $3M in new ARR and an ACD of 2.3 years.

[Figure: renewals and ACD]

In year 1, only the one-year deals come up for renewal, and since we’ve assumed everyone renews for the same length as their initial term, the renewals ACD is one year.  The VP of Sales is probably panicking — “OMG, customers have cut their ACD from 2.3 to 1.0 years!  Who’s to blame?  What’s gone wrong?!”

Nothing.  Only the one-year contracts had a shot at renewing and they all renewed for one year.

In year 2, both the (re-renewing) one-year and the (initially renewing) two-year contracts come up for renewal.  The ACD is 1.7 — again lower than the 2.3-year new business ACD.  While, again, the decrease in ACD might lead you to suspect a problem, there is nothing wrong.  It’s just math and the fact that the shorter-duration contracts renew more often, which pulls down the renewals ACD.
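The numbers above can be reproduced with a few lines of code.  This is just a sketch of the example as stated — each deal is $100K of ARR, everyone renews for their original term, and zero churn:

```python
# Sketch of the example: deals = [(count, duration_years), ...].
# Every deal carries equal ARR ($100K), so the ARR-weighted ACD
# reduces to a deal-count-weighted average of durations.

def arr_weighted_acd(deals):
    total_arr_years = sum(count * years for count, years in deals)
    total_deals = sum(count for count, _ in deals)
    return total_arr_years / total_deals

new_business = [(5, 1), (10, 2), (15, 3)]          # $3M in new ARR
print(round(arr_weighted_acd(new_business), 1))    # 2.3

# Year 1: only the one-year deals come up for renewal.
print(round(arr_weighted_acd([(5, 1)]), 1))        # 1.0

# Year 2: one-year deals re-renew and two-year deals renew for the first time.
print(round(arr_weighted_acd([(5, 1), (10, 2)]), 1))   # 1.7
```

Note that nobody shortened anything — the renewals ACD falls below 2.3 purely because the one-year cohort shows up in the renewal pool every year.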

What To Do About This?
First, understand it.  As with many SaaS metrics, it’s counter-intuitive.

As I’ve mentioned before, SaaS metrics and unit economics are often misunderstood.  While I remain a huge fan of using them to run the business, I strongly recommend taking the time to develop a deep understanding of them.  In addition, the more I see counter-intuitive examples, the more I believe in building full three- to five-year financial models of SaaS businesses in order to correctly see the complex interplay among drivers.

For example, if a company does one-year, two-year, and three-year deals, a good financial model should have drivers for both new business contract duration (i.e., percent of 1Y, 2Y, and 3Y deals) and a renewals duration matrix that has renewals rates for all nine combinations of {1Y, 2Y, 3Y} x {1Y, 2Y, 3Y} deals (e.g., a 3Y to 1Y renewal rate).  This will produce an overall renewals rate and an overall ACD for renewals.  (In a really good model, both the new business breakdown and the renewals matrix should vary by year.)
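A minimal sketch of that renewals matrix might look like the following.  All of the rates and the expiring ARR are illustrative numbers of my own invention, not figures from the post:

```python
# renewal_matrix[from_term][to_term] = fraction of expiring ARR on a
# from_term-year deal that renews into a to_term-year deal.
# (Illustrative rates only; each row sums to less than 1, the gap being churn.)
renewal_matrix = {
    1: {1: 0.70, 2: 0.10, 3: 0.05},
    2: {1: 0.15, 2: 0.60, 3: 0.10},
    3: {1: 0.10, 2: 0.15, 3: 0.65},
}

expiring_arr = {1: 500, 2: 0, 3: 0}   # $K of ARR up for renewal, by term

# Push expiring ARR through the matrix to get renewed ARR by new term.
renewed = {to: 0.0 for to in (1, 2, 3)}
for frm, arr in expiring_arr.items():
    for to, rate in renewal_matrix[frm].items():
        renewed[to] += arr * rate

total_expiring = sum(expiring_arr.values())
total_renewed = sum(renewed.values())
renewal_rate = total_renewed / total_expiring
renewals_acd = sum(yrs * arr for yrs, arr in renewed.items()) / total_renewed

print(f"Overall renewal rate: {renewal_rate:.0%}")      # 85%
print(f"Renewals ACD: {renewals_acd:.2f} years")        # 1.24 years
```

In a fuller model, `renewal_matrix` and the new business term mix would each be year-indexed, and the expiring pool each year would be fed by prior years’ new business and renewals.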

Armed with that model, built with assumptions based on both history and future goals for the new business breakdown and the renewals matrix, you can then have meaningful conversations about how ACD is varying on new and renewals business relative to plan.  Without that — by looking at one number and not understanding how it’s produced — you run the very real risk of reacting to a math effect and setting off a false alarm on renewals.