The New Gartner 2018 Magic Quadrants for Cloud Financial Planning & Analysis and Cloud Financial Close Solutions

If all you’re looking for is the free download link, let’s cut to the chase:  here’s where you can download the new 2018 Gartner Magic Quadrant for Financial Planning and Analysis Solutions and the new 2018 Gartner Magic Quadrant for Cloud Financial Close Solutions.  These MQs are written jointly by John Van Decker and Chris Iervolino (with Chris as primary author on the first and John as primary author on the second).  Both are deep experts in the category with decades of experience.

Overall, I can say that at Host Analytics, we are honored to be a leader in both MQs again this year.  We are also honored to be the only cloud pure-play vendor to be a leader in both MQs, and we believe that speaks volumes about the depth and breadth of EPM functionality that we bring to the cloud.

So, if all you wanted was the links, thanks for visiting.  If, however, you’re looking for some Kellblog editorial on these MQs, then please continue on.

Whither CPM?
The first thing the astute reader will notice is that the category name, which Gartner formerly referred to as corporate performance management (CPM), and which others often referred to as enterprise performance management (EPM), is entirely missing from these MQs.  That’s no accident.  Gartner decided last fall to move away from CPM as an uber-category descriptor in favor of referring more directly to the two related, but pretty different, categories beneath it.  Thus, in the future you won’t be hearing “CPM” from Gartner anymore, though I know that some vendors — including Host Analytics — will continue to use EPM/CPM until we can find a more suitable capstone name for the category.

Personally, I’m in favor of this move for two simple reasons.

  • CPM was a forced, analyst-driven category in the first place, dating back to Howard Dresner’s predictions that financial planning/budgeting would converge with business intelligence.  While Howard published the research that launched a thousand ships in terms of BI and financial planning industry consolidation (e.g., Cognos/Adaytum, BusinessObjects/SRC/Cartesis, Hyperion/Brio), the actual software itself never converged.  CPM never became like CRM — a true convergence of sales force automation (SFA) and contact center.  In each case, the two companies could be put under one roof, but they sold fundamentally different value propositions to very different buyers and thus never came together as one.
  • In accordance with the prior point, few customers actually refer to the category by CPM/EPM.  They say things much more akin to “financial planning” and “consolidation and close management.”  Since I like referring to things in the words that customers use, I am again in favor of this change.

It does, however, create one problem — Gartner has basically punted on trying to name a capstone category to include vendors who sell both financial planning and financial consolidation software.  Since we at Host Analytics think that’s important, and since we believe there are key advantages to buying both from the same vendor, we’d prefer if there were a single, standard capstone term.  If it were easy, I suppose a name would have already emerged [1].

How Not To Use Magic Quadrants
While they are Gartner’s flagship deliverable, magic quadrants (MQs) can generate a lot of confusion.  MQs don’t tell you which vendor is “best” because there is no universal best in any category.  MQs don’t tell you which vendor to pick to solve your problem because different solutions are designed around meeting different requirements.  MQs don’t predict the future of vendors — last year’s movement vectors rarely predict this year’s positions.  And the folks I know at Gartner generally strongly dislike vector analysis of MQs because they view vendor placement as relative to each other at any moment in time [2].

Many things that customers seem to want from Gartner MQs are actually delivered by Gartner’s Critical Capabilities reports, which get less attention because they don’t produce a simple, dramatic 2×2 output, but which are far better suited for determining the suitability of different products to different use-cases.

How To Use A Gartner Magic Quadrant?
In my experience after 25+ years in enterprise software, I would use MQs for their overall purpose:  to group vendors into four different buckets — leaders, challengers, visionaries, and niche players.  That’s it.  If you want to know who the leaders are in a category, look top right.  If you want to know who the visionaries are, look bottom right.  If you want to know which big companies are putting resources into the category but thus far lack strategy/vision, look top left at the challengers quadrant.

But should you, in my humble opinion, get particularly excited about millimeter differences on either axis?  No.  Why?  Because what drives those deltas may correlate little, not at all, or even negatively with your situation.  In my experience, the analysts pay a lot of attention to the quadrants in which vendors end up, so quadrant placement, I’d say, is quite closely watched by the analysts.  Dot placement, while closely watched by vendors [3], save for dramatic differences, doesn’t change much in the real world.  After all, they are called the magic quadrants, not the magic dots.

All that said, let me wind up with some observations on the MQs themselves.

Quick Thoughts on the 2018 Cloud FP&A Solutions MQ
While the MQs were published at the end of July 2018, they were based on information about the vendors gathered in, and largely about, 2017.  While there is always some phase lag between the end of data collection and the publication date, this year it was unusually long — meaning that a lot may have changed in the market in the first half of 2018 that customers should be aware of.  For that reason, if you’re a Gartner customer and using either the MQs or the critical capabilities reports that accompany them, you should probably set up an appointment to call the analysts to ensure you’re working off the latest data.

Here are some of my quick thoughts on the Cloud FP&A Solutions magic quadrant:

  • Gartner says the FP&A market is accelerating its shift from on-premises to cloud.  I agree.
  • Gartner allows three types of “cloud” vendors into this (and the other) MQ:  cloud-only vendors, on-premises vendors with new built-for-the-cloud solutions, and on-premises vendors who allow their software to be run hosted on a third-party cloud platform.  While I understand their need to be inclusive, I think this is pretty broad — the total cost of ownership, cash flows, and incentives are quite different between pure cloud vendors and hosted on-premises solutions.  Caveat emptor.
  • To qualify for the MQ vendors must support at least two of the four following components of FP&A:  planning/budgeting, integrated financial planning, forecasting/modeling, management/performance reporting.  Thus the MQ is not terribly homogeneous in terms of vendor profile and use-cases.
  • For the second year in a row, (1) Host is a leader in this MQ and (2) is the only cloud pure-play vendor who is a leader in both.  We think this says a lot about the breadth and depth of our product line.
  • Customer references for Host cited ease of use, price, and solution flexibility as top three purchasing criteria.  We think this very much represents our philosophy of complex EPM made easy.

Quick Thoughts on the 2018 Cloud Financial Close Solutions MQ
Here are some of my quick thoughts on the Cloud Financial Close Solutions magic quadrant:

  • Gartner says that in the past two years the financial close market has shifted from mature on-premises to cloud solutions.  I agree.
  • While Gartner again allowed all three types of cloud vendors in this MQ, I believe some of the vendors in this MQ do just-enough, just-cloud-enough business to clear the bar, but are fundamentally still offering on-premises wolves in cloud sheep’s clothing.  Customers should look to things like total cost of ownership, upgrade frequency, and upgrade phase lags in order to separate real from fake cloud offerings.
  • This MQ is more of a mixed bag than the FP&A MQ or, for that matter, most Gartner MQs.  In general, MQs plot substitutes against each other — each dot on an MQ usually represents a vendor who does basically the same thing.  This is not true for the Cloud Financial Close (CFC) MQ — e.g., Workiva is a disclosure management vendor (and a partner of Host Analytics), but it does not offer financial consolidation software, as do, say, Host Analytics or Oracle.
  • Because the scope of this MQ is broad and both general and specialist vendors are included, customers should either call Gartner for help (if they are Gartner customers) or just be mindful of the mixing and segmentation — e.g., Floqast (in SMB and MM) and Blackline (in enterprise) both do account reconciliation, but they are naturally segmented by customer size (and both are partners of Host, which does financial consolidation but not account reconciliation).
  • Net:  while I love that the analysts are willing to put different types of close-related, office-of-the-CFO-oriented vendors on the same MQ, it does require more than the usual amount of mindfulness in interpreting it.

Conclusion
Finally, if you want to analyze the source documents yourself, you can use the following link to download both the 2018 Gartner Magic Quadrant for Cloud Financial Planning and Analysis Solutions and the 2018 Gartner Magic Quadrant for Cloud Financial Close Solutions.

# # #

Notes

[1] For Gartner, this is likely more than a semantic issue.  They are pretty strong believers in a “post-modern” ERP vision which eschews the idea of a monolithic application that includes all services, in favor of using and integrating a series of cloud-based services.  Since we are also huge believers in integrating best-of-breed cloud services, it’s hard for us to take too much issue with that.  So we’ll simply have to clearly articulate the advantages of using Host Planning and Host Consolidations together — from our viewpoint, two best-of-breed cloud services that happen to come from a single vendor.

[2] And not something done against absolute scales where you can track movement over time.  See, for example, the two explicit disclaimers in the FP&A MQ:

[screenshot: the two disclaimers from the FP&A MQ]

[3] I’m also a believer in a slightly more esoteric theory which says:  given that the Gartner dot-placement algorithm seems to try very hard to layout dots in a 45-degree-tilted football shaped pattern, it is always interesting to examine who, how, and why someone ends up outside that football.

The Big Mistake You Might Be Making In Calculating Churn: Failing to Annualize Multi-Year ATR Churn Rates

Most of the thinking, definitions, and formulas regarding SaaS unit economics are based on assumptions that no longer reflect the reality of the enterprise SaaS environment.  For example, thinking in terms of MRR (monthly recurring revenue) is outdated because most enterprise SaaS companies run on annual contracts and thus we should think in terms of ARR (annual recurring revenue) instead.

Most enterprise SaaS companies today do a minimum one-year contract and many do either prepaid or non-prepaid multi-year contracts beyond that.  In the case of prepaid multi-year contracts, metrics like the CAC payback period break (or at the very least, get difficult to interpret).  In the case of multi-year contracts, calculating churn correctly gets a lot more complicated – and most people aren’t even aware of the issue, let alone analyze it correctly.

If your company does multi-year contracts and you are not either sidestepping this issue (by using only ARR-pool-based rates) or correcting for it in your available-to-renew (ATR) churn calculations, keep reading.  You are possibly making a mistake and overstating your churn rate.

A Multi-Year Churn Example
Let’s demonstrate my point with an example where Company A does 100% one-year deals and Company B does 100% three-year deals.  For simplicity’s sake, we are going to ignore price increases and upsell [1].  We’re also not going to argue the merits of one- vs. three-year contracts; our focus is simply how to calculate churn in a world of them.

In the example below, you can see that Company A has an available-to-renew-based (ATR-based) [2] churn rate of 10%.  Company B has a 27% ATR-based churn rate.  So we can quickly conclude that Company A’s a winner, and Company B is a loser, right?

[table: ATR churn example, Company A (one-year deals, 10% ATR churn) vs. Company B (three-year deals, 27% ATR churn)]

Not so fast.

At the start of year 4, a cohort of Company A customers is worth 72.9 units, the exact same as a cohort of Company B customers.  In fact, if you look at lifetime value (LTV), the Company B cohort is worth nearly 10% more than the Company A cohort [3].

[chart: cohort ARR decay, Company A vs. Company B]

Wait a minute!  How can a company with a 27% churn rate be “better” than a company with a 10% churn rate?

It’s All About Exposure:  How Often are Deals Exposed to the Churn Rate?
One big benefit of multi-year deals is that they are exposed to the churn rate less frequently than one-year deals.  When you exclude the noise (e.g., upsell, discounts, and price increases), and look at churn solely as a decay function, you see that the N-year retention rate [4] is (1-churn rate)^N.  With 10% churn, your 2-year retention rate is (1-0.1)^2 = 0.9^2 = 0.81.  Your 3-year retention rate is (1-0.1)^3 = 0.9^3 = 0.729, or a retention rate of 73%, equivalent to a churn rate of 27%.

Simply put, churn compounds, so exposing a contract to the churn rate less often is a good thing:  multi-year deals do this by excluding contracts from the ATR pool, typically for one or two years, before they come up for renewal [5].  This also means that you cannot validly compare churn rates on contracts of different durations.

This is huge.  As we have just shown, a 10% churn rate on one-year deals is equivalent to a 27% churn rate on three-year deals, but few people I know recognize this fact.
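A few lines of arithmetic confirm the equivalence, using the example’s numbers (a sketch treating churn purely as a decay function, as above):

```python
# Churn as pure decay: N-year retention = (1 - churn rate)^N.
annual_churn = 0.10                               # Company A: one-year deals
three_year_retention = (1 - annual_churn) ** 3    # 0.9^3 = 0.729
three_year_churn = 1 - three_year_retention       # ~27%: Company B's nominal rate

# Value of a 100-unit cohort at the start of year 4:
company_a = 100 * (1 - annual_churn) ** 3         # renews (and churns) annually
company_b = 100 * (1 - three_year_churn)          # renews once, at the 3-year mark

print(round(three_year_churn, 3), round(company_a, 1), round(company_b, 1))
# 0.271 72.9 72.9
```

Both cohorts land at 72.9 units, which is exactly the point:  27% churn on a three-year renewal cycle is the same decay as 10% churn on an annual one.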

I can imagine two VCs talking:

“Yo, Trey.”

“Yes.”

“You’re not going to believe it, I saw a company today with a 27% churn rate.”

“No way.”

“Yep, and it crushed their LTV/CAC — it was only 1.6.”

“Melting ice cube.  Run away.”

“I did.”

Quite sad, in fact, because with a correct (annualized) churn rate of 10% and holding the other assumptions constant [6], the LTV/CAC jumps to a healthy 4.4.  But any attempt to explain a 27% churn rate is as likely to be seen as a lame excuse for a bad number as it is to be seen as valid analysis.
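The 1.6 and 4.4 figures fall out of a common simplification of LTV/CAC, where LTV = ARR × gross margin / churn rate and CAC = CAC ratio × ARR, so the ARR terms cancel (a sketch using footnote [6]’s assumptions):

```python
def ltv_to_cac(gross_margin, churn_rate, cac_ratio):
    """LTV/CAC under a common simplification:
    LTV = ARR * gross_margin / churn_rate and CAC = cac_ratio * ARR,
    so the ARR terms cancel out."""
    return gross_margin / (churn_rate * cac_ratio)

# Footnote [6]'s assumptions: CAC ratio of 1.8, 80% subscription gross margins.
print(round(ltv_to_cac(0.80, 0.27, 1.8), 1))  # 1.6 -- using the nominal 3Y rate
print(round(ltv_to_cac(0.80, 0.10, 1.8), 1))  # 4.4 -- using the annualized rate
```

Same company, same economics; only the churn rate’s annualization changes.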

Best Alternative Option:  Calculate Churn Rates off the Entire ARR Pool
I’m going to define the 27% figure as the nominal ATR-based churn rate.  It’s what you get when you take churn ARR / ATR in any given period.  I call it a nominal rate because it’s not annualized and it doesn’t reflect the varying distribution of 1Y, 2Y, and 3Y deals that are mixed in the ATR pool in any given quarter.  I call it nominal because you can’t validly compare it to anything [7].

Because correcting this to a more meaningful rate is going to involve a lot of brute force math, I’ll first advise you to do two things:

  • Banish any notion from your mind that ATR rates are somehow “more real” than churn rates calculated against the entire ARR pool [8].
  • Then use churn rates calculated against the entire ARR pool and sidestep the mess we’re about to enter in the next section [9] where we correct ATR-based churn rates.

In a world of mixed-duration contracts calculating churn rates off the entire ARR pool effectively auto-corrects for the inability of some contracts to churn.  I have always believed that if you were going to use the churn rate in a math function (e.g., as the discount rate in an NPV calculation) that you should only use churn rates calculated against the entire ARR pool because, in a mixed multi-year contract world, only some of the contracts come up for renewal in any given period.  In one sense you can think of some contracts as “excluded from the available-to-churn (ATC) pool.”  In another, you can think of them as auto-renewing.  Either way, it doesn’t make sense in a mixed pool to apply the churn rate of those contracts up for renewal against the entire pool which includes contracts that are not.

If you want to persist in using ATR-based churn rates, then we must correct for two problems:  we need to annualize the multi-year rates, and we then need to calculate ATR churn using an ATR-weighted average of the annualized churn rates by contract duration.

Turning Nominal ATR Churn into Effective, Annualized ATR Churn
Here’s how to turn nominal ATR churn into an effective, annualized ATR churn rate [10] [11]:

Step 1:  categorize your ATR and churn ARR by contract duration.  Calculate a 1Y churn rate and nominal 2Y and 3Y ATR churn rates.

Step 2:  annualize the nominal multi-year (N-year) churn rates by flipping to retention rates and taking the Nth root of the retention rate.  For example, our 27% 3-year churn rate is equivalent to a 73% 3-year retention rate, so take the cube root of 0.73 to get 0.9.  Then flip back to churn rates and get 10%.

Step 3:  do an ATR-weighted average of the 1Y and annualized 2Y and 3Y churn rates.  Say your ATR was 50% 1Y, 25% 2Y, and 25% 3Y contracts and your annualized churn rates were 10%, 12%, and 9%.  Then the weighted average would be (0.5*0.10) + (0.25*0.12) + (0.25*0.09) = 10.25%, as your annualized, effective ATR churn rate.
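The three steps can be sketched in a few lines of code.  The ATR mix and the annualized rates are the ones from step 3; the nominal multi-year rates are chosen so they annualize to 10%, 12%, and 9%:

```python
def annualize_churn(nominal_churn, years):
    """Step 2: flip to a retention rate, take the Nth root, flip back."""
    return 1 - (1 - nominal_churn) ** (1 / years)

def effective_atr_churn(atr_by_duration, nominal_churn_by_duration):
    """Steps 1-3: ATR-weighted average of annualized churn rates.
    Both arguments map contract duration (years) to ATR dollars and to
    the nominal churn rate for that duration bucket."""
    total_atr = sum(atr_by_duration.values())
    return sum(
        (atr / total_atr) * annualize_churn(nominal_churn_by_duration[years], years)
        for years, atr in atr_by_duration.items()
    )

# Step 3's example: ATR is 50% 1Y, 25% 2Y, 25% 3Y.  The 2Y and 3Y nominal
# rates below annualize to 12% and 9% respectively.
atr = {1: 500, 2: 250, 3: 250}
nominal = {1: 0.10, 2: 1 - 0.88 ** 2, 3: 1 - 0.91 ** 3}
print(round(effective_atr_churn(atr, nominal), 4))  # 0.1025
```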

That’s it.  You’ve now produced an ATR churn rate that is comparable to one at a company that does only one-year contracts.

Conclusion
If nothing else, I hope I have convinced you that it is invalid to compare churn rates on contracts of different durations and, ergo, that it is generally simpler to calculate churn rates off the entire ARR pool.  If, however, you still want to see ATR-based churn rates, then I hope I’ve convinced you that you must do the math and calculate ATR churn as a weighted average of annualized one-, two-, and three-year ATR churn rates.

# # #

Notes
[1] In a world of zero upsell there is no difference between gross and net churn rates, thus I will simply say “churn rate” in this post.

[2] As soon as you start doing multi-year contracts then the entire ARR base is no longer up for renewal each year.  You therefore need a new concept, available to renew (ATR), which reflects only that ARR up for renewal in a given period.

[3] Thanks to its relatively flatter step-wise decay compared to Company A’s more linear decay.

[4] Retention rate = 1 – churn rate.

[5] If it helps, you can think of the ATR pool in a glass half-empty way as the available-to-churn pool.

[6] Assuming CAC ratio of 1.8 and subscription gross margins of 80%.

[7] Unless your company has a fixed distribution of deals by contract duration – e.g., a degenerate case being 100% 3Y deals.  For most companies the average contract duration in the inbound ATR pool is going to vary each quarter.  Ergo, you can’t even validly compare this rate to itself over time without factoring in the blending.

[8] Most people I meet seem to think ATR rates are more real than rates based on the entire ARR pool.  Sample conversation — “What’s your churn rate?”  “6%.”  “Gross or net?”  “Gross.”  “No, I mean your real churn rate – what got churned divided only by what was up for renewal.”  The mistake here is in thinking that using ATR makes the rate comparable to a pure one-year churn rate – and it doesn’t.

[9] Gross churn = churn / starting-period ARR.  Net churn = (churn – upsell) / starting-period ARR.

[10] I thought about trying a less brute-force way using average contract duration (ACD) of the ATR pool, but decided against it because this method, while less elegant, is more systematic.

[11] Note that this method will still understate the LTV advantage of the more step-wise multi-year contract decay because it’s not integrating the area under the curve, but instead looking only at what’s left of the cohort after N years.  In our first example, the 1Y and 3Y cohorts both had 73 units of ARR, but because the multi-year cohort decayed more slowly its LTV to that point was about 10% higher.

The Use of Ramped Rep Equivalents (RREs) in Sales Analytics and Modeling

[Editor’s note:  revised 7/18, 6:00 PM to fix spreadsheet error and change numbers to make example easier to follow, if less realistic in terms of hiring patterns.]

How many times have you heard this conversation?

VC:  how many sales reps do you have? 

CEO:  Uh, 25.  But not really.

VC:  What do you mean, not really?

CEO:  Well, some of them are new and not fully productive yet.

VC:  How long does it take for them to fully ramp?

CEO:  Well, to full productivity, four quarters.

VC:  So how many fully-ramped reps do you have?

CEO:  9 fully ramped, but we have 15 in various stages of ramping, and 1 who’s brand new …

There’s a better way to have this conversation, to perform your sales analytics, and to build your bookings capacity waterfall model.  That better way involves creating a new metric called ramped rep equivalents (RREs).  Let’s build up to talking about RREs by first looking at a classical sales bookings waterfall model.

[spreadsheet: classical sales bookings capacity waterfall model]

I love building these models and they’re a lot of fun to play with, doing what-if analysis, varying the drivers (which are in the orange cells) and looking at the results.  This is a simplified version of what most sales VPs look at when trying to decide next year’s hiring, next year’s quotas [1], and next year’s targets.  This model assumes:

  • one type of salesrep [2];
  • a distribution of existing reps by tenure of 1 first-quarter, 3 second-quarter, 5 third-quarter, 7 fourth-quarter, and 9 steady-state reps;
  • a hiring pattern of 1, 2, 4, and 6 reps across the four quarters of 2019; and
  • a salesrep productivity ramp whereby reps are expected to sell at 0% of steady-state productivity in their first quarter with the company, then at 25%, 50%, and 75% in quarters 2 through 4, before becoming fully productive in quarter 5 at the steady-state level of $1,000K in new ARR per year [3].

Using this model, a typical sales VP — provided they believed the productivity assumptions [4] and that they could realistically set quotas about 20% above the target productivity — would typically sign up for around a $22M new ARR bookings target for the coming year.

While these models work just fine, I have always felt that the second block (bookings capacity by tenure), though needed for intermediate calculations, is not terribly meaningful by itself.  The lost opportunity here is that we’re not creating any concept to more easily think about, discuss, and analyze the productivity we get from reps as they ramp.
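For readers who want to reproduce the waterfall, here is a minimal sketch under the assumptions listed above.  The quarter-shifting logic is my reconstruction, not the actual spreadsheet; it treats the 1 first-quarter rep in the starting tenure mix as the 1Q19 hire:

```python
# Ramp schedule by quarter of tenure: 0%, 25%, 50%, 75%, then steady state.
RAMP = [0.00, 0.25, 0.50, 0.75, 1.00]
STEADY_STATE_QTR = 250  # $1,000K/year steady-state productivity, in $K/quarter

# Tenure buckets at the start of 1Q19: [1st-qtr, 2nd, 3rd, 4th, steady-state].
tenure = [1, 3, 5, 7, 9]
hires = [None, 2, 4, 6]  # the 1Q19 hire is already counted in `tenure`

total_capacity = 0
for new_hires in hires:
    if new_hires is not None:
        # Advance everyone one quarter of tenure, then add the new hires.
        tenure = [new_hires, tenure[0], tenure[1], tenure[2], tenure[3] + tenure[4]]
    rre = sum(n * r for n, r in zip(tenure, RAMP))  # 17.5, 21.5, 24.25, 26.75
    total_capacity += rre * STEADY_STATE_QTR

print(total_capacity)  # 22500.0 ($K) -- the "around $22M" annual target
```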

Enter the Ramped Rep Equivalent (RRE)
Rather than thinking about the partial productivity of whole reps, we can think about partial reps against whole productivity — and build the model that way, instead.  This has the by-product of creating a very useful number, the RRE.  Then, to get bookings capacity just multiply the number of RREs times the steady-state productivity.  Let’s see an example below:

[spreadsheet: bookings capacity waterfall recast in ramped rep equivalents]

This provides a far more intuitive way of thinking about salesrep ramping.  In 1Q19, the company has 25 reps, only 9 of whom are fully ramped, and the rest combine to give the productivity of 8.5 additional reps, resulting in an RRE total of 17.5.

“We have 25 reps on board, but thanks to ramping, we only have the capacity equivalent to 17.5 fully-ramped reps at this time.”

This also spits out three interesting metrics:

  • RRE/QCR ratio:  an effective vs. nominal capacity ratio — in 1Q19, nominally we have 25 reps, but we have only the effective capacity of 17.5 reps.  17.5/25 = 70%.
  • Capacity lost to ramping (dollars):  to make the prior figure more visceral, think of the sales capacity lost due to ramping (i.e., the delta between your nominal and effective capacity) expressed in dollars.  In this case, in 1Q19 we’re losing $1,875K of our bookings capacity due to ramping.
  • Capacity lost to ramping (percent):  the same concept as the prior metric, simply expressed in percentage terms.  In this case, in 1Q19 we’re losing 30% of our bookings capacity due to ramping [5].
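Using the 1Q19 numbers from the example, all three metrics can be computed directly (a sketch, not the actual spreadsheet):

```python
# Ramp schedule by quarter of tenure and steady-state quarterly productivity.
RAMP = [0.00, 0.25, 0.50, 0.75, 1.00]
STEADY_STATE_QTR = 250  # $K per rep per quarter ($1,000K/year)

tenure_1q19 = [1, 3, 5, 7, 9]  # 25 reps by tenure bucket at the start of 1Q19
reps = sum(tenure_1q19)
rre = sum(n * r for n, r in zip(tenure_1q19, RAMP))  # ramped rep equivalents

rre_qcr_ratio = rre / reps                       # effective vs. nominal capacity
lost_dollars = (reps - rre) * STEADY_STATE_QTR   # capacity lost to ramping ($K)
lost_pct = (reps - rre) / reps                   # capacity lost to ramping (%)

print(rre, rre_qcr_ratio, lost_dollars, lost_pct)  # 17.5 0.7 1875.0 0.3
```

This reproduces the figures above:  17.5 RREs, a 70% RRE/QCR ratio, $1,875K and 30% of capacity lost to ramping.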

Impacts and Cautions
If you want to move to an RRE mindset, here are a few tips:

  • RREs are useful for analytics, like sales productivity.  When looking at actuals you can measure sales productivity not just by starting-period or average-period reps, but by RRE.  It will provide a much more meaningful metric.
  • You can use RREs to measure sales effectiveness.  At the start of each quarter recalculate your theoretical capacity based on your actual staffing.  Then divide your actuals by that start-of-quarter theoretical capacity and you will get a measure of how well you are performing, i.e., the utilization of the quarterly starting capacity in your sales force.  When you’re missing sales targets it is typically for one of two reasons:  you don’t have enough capacity or you’re not making use of the capacity you have.  This helps you determine which.
  • Beware that if you have multiple types of reps (e.g., corporate and field), you may be tempted to blend them in the same way you do whole reps today — i.e., when asked “how many reps do you have?” most people say “15” and not “9 enterprise plus 6 corporate.”  You have the same problem with RREs.  While it’s OK to present a blended RRE figure, just remember that it’s blended, and if you want to calculate capacity from it, you should calculate RREs by rep type and then get capacity by multiplying the RRE for each rep type by their respective steady-state productivity [6].

I recommend moving to an RRE mindset for modeling and analyzing sales capacity.  If you want to play with the spreadsheet I made for this post, you can find it here.

Thanks to my friend Paul Albright for being the first person to introduce me to this idea.

# # #

Notes
[1] This is actually a productivity model, based on actual sales productivity — how much people have historically sold (and ergo should require little/no cushion before sales signs up for it).  Most people I know work with a productivity model and then uplift the desired productivity by 15 to 25% to set quotas.

[2] Most companies have two or three types (e.g., corporate vs. field), so you typically need to build a waterfall for each type of rep.

[3] To build this model, you also need to know the aging of your existing salesreps — i.e., how many second-, third-, fourth-, and steady-state-quarter reps you have at the start of the year.

[4] The glaring omission from this model is sales turnover.  In order to keep it simple, it’s not factored in here. While some people try to factor in sales turnover by using reduced sales productivity figures, I greatly prefer to model realistic sales productivity and explicitly model sales turnover in creating a sales bookings capacity model.

[5] This is one reason it’s so expensive to build an enterprise software sales force.  For several quarters you often get 100% of the cost and 50% of the sales capacity.

[6] Which should be a weighted-average productivity across rep types, weighted by the number of reps of each type.

How To Sales Manage Upside and Unlikely Deals

If your sales organization is like most, you classify sales opportunities in about four categories, such as:

  • Commit, which are 90% likely to close
  • Forecast, which are 70% likely to close
  • Upside, which are 33% likely to close
  • Unlikely, which are 5% likely to close
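Those close probabilities are what make a weighted-pipeline view possible.  A tiny sketch, with entirely hypothetical deal sizes:

```python
# Category close probabilities from the list above.
PROBS = {"commit": 0.90, "forecast": 0.70, "upside": 0.33, "unlikely": 0.05}

# Hypothetical open pipeline: (category, deal size in $K).
pipeline = [("commit", 200), ("forecast", 300), ("upside", 400), ("unlikely", 500)]

# Expected (probability-weighted) value of the pipeline.
weighted = sum(PROBS[cat] * amount for cat, amount in pipeline)
print(round(weighted))  # 547 ($K expected)
```

Note how much of the expected value sits in the upside and unlikely deals — which is exactly why ignoring them is a waste.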

And then, provided you have sufficient pipeline, your sales management team basically puts all of its effort into and attention on the commit and forecast deals.  They’re the ones that get deal reviews.  They’re the ones where the team does multiple dry runs before big demos and presentations.  They’re the deals that get discussed every week on the forecast call.

The other ones?  Not so much.  Sure, the salesreps who own them will continue to toil away.  But they won’t get much, if any, management attention.  You’ll probably lose 75% of them and it won’t actually matter much, provided you have enough high-probability deals to make your forecast and plan.

But, what a waste.  Those opportunities probably each cost the company $2500 to $5000 to generate and many multiples of that to pursue.  But they’re basically ignored by most sales management teams.

The classical solution to this problem is to tell the sales managers to focus on everything.  But it doesn’t work.   A smart sales manager knows the only thing that really matters is making his/her number and doing that typically involves closing almost all the committed and most of the forecast deals.  So that is where their energy goes.

The better way to handle these deals is to recognize they’re more likely to be lost than won (e.g., calling them jump-balls, 50/50 balls, or face-offs, depending on your favorite sport), find the most creative non-quota-carrying manager in the sales organization (e.g., the VP of salesops), and have him/her manage these low-probability, high-risk deals in the last month of the quarter using non-traditional (i.e., Crazy Ivan) tactics.

This only works if you happen to have a VP of salesops, enablement, alliances, etc., who has the experience, passion, and creativity to pull it off.  But if you do, it’s a simply fantastic way to allow core sales management to focus on the core deals that will make or break the quarter, while still applying attention and creativity to the lower-probability deals that can drive you well over your targets.

This is not as crazy as it might sound, because those in sales ops or productivity positions typically do have prior sales management experience.  Thus, this becomes a great way to keep their saw sharp and keep them close/relevant to the reality of the field in performing their regular job.  What could be better than a VP of sales productivity who works on closing deals 4 months/year?

If your VP of sales ops or sales enablement doesn’t have the background or interest to do this, maybe they should.  If not, and/or you are operating at bigger scale, why not promote a salesperson with management potential into jump-ball, overlay deal management as their first move into sales management?

Important Subtleties in Calculating Quarterly, Annual, and ATR-based Churn Rates

This post won’t save your life, or your company.  But it might save you a few precious hours at 2:00 AM if you’re working on your company’s SaaS metrics and can’t foot your quarterly and annual churn rates while preparing a board or investor deck.

The generic issue is that a lot of SaaS metrics gurus define metrics generically, using “periods,” without paying attention to some subtleties that can arise in calculating these metrics for a quarter vs. a year.  The specific issue is that, if you do what many people do, your quarterly and annual churn rates won’t foot — i.e., the sum of your quarterly churn rates won’t equal your annual churn rate.

Here’s an example to show why.

[table: quarterly and annual churn example, 10,000 of starting ARR and 1,250 of churn in 2018]

If I asked you to calculate the annual churn rate in the above example, virtually everyone would get it correct.  You’d look at the rightmost column, see that 2018 started with 10,000 in ARR, see that there were 1,250 dollars of churn on the year, divide 1,250 by 10,000 and get 12.5%.  Simple, huh?

However, if I hid the last column, and then asked you to calculate quarterly churn rates, you might come up with churn rate 1, thinking churn rate = period churn / starting period ARR.  You might then multiply by 4 to annualize the quarterly rates and make them more meaningful.  Then, if I asked you to add an annual column, you’d sum the quarterly (non-annualized) rates for the annual churn and either average the annualized quarterly rates or simply gray-out the box as I did because it’s redundant [1].

You’d then pause, swear, and double-check the sheet for errors because the sum of your quarterly rates (10.2%) doesn’t equal your annual rate (12.5%).

What’s going on?  The trap is thinking churn rate = period churn / starting period ARR.

That works in a world of one-year contracts when you look at churn on an annual basis (every contract in the starting ARR base of 10,000 faces renewal at some point during the year), but it breaks on a quarterly basis.  Why?  Because starting ARR is increasing every quarter due to new sales that aren’t in the renewal base for the year.  This depresses your churn rates relative to churn rate 2, which defines quarterly churn as churn in the quarter divided by starting-year ARR.  When you use churn rate 2, the sum of the quarterly rates equals the annual rate, so you can mail out that board deck and go back to bed [2].
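To make the footing problem concrete, here is a minimal Python sketch with hypothetical dollar figures (not the spreadsheet above): churn rate 1 divides each quarter's churn by starting-quarter ARR, which grows with new sales and so depresses the rates, while churn rate 2 divides by starting-year ARR and foots to the annual rate by construction.

```python
# Hypothetical numbers: $10,000 starting-year ARR, one-year contracts,
# with starting ARR growing each quarter due to new sales.
starting_year_arr = 10_000
quarterly_churn = [200, 250, 300, 500]               # dollars churned per quarter
starting_qtr_arr = [10_000, 10_800, 11_700, 12_900]  # grows from new sales

# Churn rate 1: period churn / starting *period* ARR.
rate1 = [c / s for c, s in zip(quarterly_churn, starting_qtr_arr)]

# Churn rate 2: period churn / starting *year* ARR.
rate2 = [c / starting_year_arr for c in quarterly_churn]

annual_rate = sum(quarterly_churn) / starting_year_arr  # 1,250 / 10,000 = 12.5%

assert abs(sum(rate2) - annual_rate) < 1e-9  # rate 2 foots to the annual rate
assert sum(rate1) < annual_rate              # rate 1 comes up short
```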

Available to Renew (ATR-based) Churn Rates

While we’re warmed up, let’s have some more fun.  If you’ve worked in enterprise software for more than a year, you’ll know that the 10,000 dollars of starting ARR is most certainly not distributed evenly across quarters:  enterprise software sales are almost always backloaded, ergo enterprise software renewals follow the same pattern.

So if we want more accurate [3] quarterly churn rates, shouldn’t we do the extra work, figure out how much ARR we have available to renew (ATR) in each quarter, and then measure churn rates on an ATR basis?  Why not!

Let’s first look at an example, that shows available to renew (ATR) split in a realistic, backloaded way across quarters [4].

[Image:  ATR churn example with a realistic, backloaded quarterly split]

In some sense, ATR churn rates are cleaner because you’re making fewer implicit assumptions:  here’s what was up for renewal and here’s what we got (or lost).  While ATR rates get complicated fast in a world of multi-year deals, for today, we’ll stay in a world of purely one-year contracts.

Even in that world, however, a potential footing issue emerges.  If I calculate annual ATR churn by looking at annual churn vs. starting ARR, I get the correct answer of 12.5%.  However, if I try to average my quarterly rates, I get a different answer of 13.7%, which I put in red because it’s incorrect.

Quiz:  what’s going on?

Hint:  let me show the ATR distributed in a crazy way to demonstrate the problem more clearly.

[Image:  ATR churn example with a deliberately crazy ATR distribution]

The issue is you can’t get the annual rate by averaging the quarterly ATR rates because the ATR is not evenly distributed.  By using the crazy distribution above, you can see this more clearly because the (unweighted) average of the four quarterly rates is 53.6%, pulled way up by the two quarters with 100% churn rates.  The correct way to foot this is to instead use a weighted average, weighting on an ATR basis.  When you do that (supporting calculations in grey), the average then foots to the correct annual number.
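The same point in a short Python sketch, using the backloaded ATR split from note [4] (15% / 17.5% / 25% / 42.5%) and hypothetical churn dollars: the unweighted average of the quarterly ATR churn rates misses the annual rate, while the ATR-weighted average foots exactly.

```python
# ATR split per note [4]: 15% / 17.5% / 25% / 42.5% of a $10,000 base.
# Churn dollars are hypothetical.
atr = [1_500, 1_750, 2_500, 4_250]
churn = [150, 200, 300, 600]          # sums to 1,250

qtr_rates = [c / a for c, a in zip(churn, atr)]

simple_avg = sum(qtr_rates) / len(qtr_rates)                          # wrong in general
weighted_avg = sum(r * a for r, a in zip(qtr_rates, atr)) / sum(atr)  # ATR-weighted
annual_rate = sum(churn) / sum(atr)                                   # 12.5%

assert abs(weighted_avg - annual_rate) < 1e-9  # weighted average foots
assert abs(simple_avg - annual_rate) > 0.001   # simple average does not
```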

# # #

Notes:

[1] The sum of the quarterly rates (A, B, C, D) will always equal the average of the annualized quarterly rates because (4A+4B+4C+4D)/4 = A+B+C+D.

[2] I won’t go so far as to say that churn rate 1 is “incorrect” while churn rate 2 is “correct.”  Churn rate 1 is simple and gives you exactly what you asked for:  period churn / starting period ARR.  (You just need to realize that your quarterly rates will only sum to your annual rate if you have zero new sales, and ergo you should calculate the annual rate off the yearly churn and starting ARR.)  Churn rate 2 is somewhat more complicated.  If you live in a world of purely one-year contracts, I’d recommend churn rate 2.  But in a world of mixed one- and multi-year contracts, lots of contracts in starting-period ARR aren’t in the renewal base for the year, so why would I exclude only some of them (i.e., those signed in the year) as opposed to others?

[3] Dividing by the whole ARR base basically assumes that the base renews evenly across quarters.  Showing churn rates based on available-to-renew (ATR) is more accurate but becomes complicated quickly in a world of mixed, multi-year contracts of different duration (where you will need to annualize the rates on multi-year contracts and then blend the average to get a single, meaningful, annualized rate).  In this post, we’ll assume a world of exclusively one-year contracts, which sidesteps that issue.

[4] ATR is normally backloaded because enterprise sales are normally backloaded.  Here the linearity is 15%, 17.5%, 25%, 42.5% or a 32.5/67.5 split across the first vs. second half of the year (which is pretty backloaded even for enterprise software).

[5] The spreadsheet I used is available here if you want to play with it.

The Two Archetypal Marketing Messages: “Bags Fly Free” and “Soup is Good Food.”

There are only two archetypal marketing messages, exemplified by:

  • Bags Fly Free, a current advertising slogan used by Southwest airlines.
  • Soup is Good Food, a 1970s campaign slogan used by Campbell’s soup [1].

[Image:  Southwest “Bags Fly Free” advertisement]

[Image:  Campbell’s “Soup is Good Food” advertisement]

Quick, what’s the difference between these two messages?

Soup is Good Food answers the question “why buy one (at all)?” while Bags Fly Free answers the question “why buy mine?”  Soup is Good Food markets the category while Bags Fly Free markets one vendor’s product/service within it.  In short, Soup is Good Food is about value.  Bags Fly Free is about differentiation.

Once you see things through this lens, you will be shocked at how many marketers confuse one with the other.  Some never get the difference sorted out in the first place.  Others mix up value and differentiation messages because they are bowing to adages or dictums [2] (e.g., “always sell value” or “benefits, not features”) instead of acting based on the company’s business situation.

The simple fact is that some situations call for messaging value and others call for messaging differentiation. Somewhat perversely, the hotter your market, the less you need to message around value.  The cooler your market, the less you need to message around differentiation.

Why?  Hot markets definitionally have lots of buyers.  Those buyers already understand the value of the category and are trying to figure out which product to buy within it.  That’s why in hot markets you need a strong differentiation message.

During our hypergrowth phase at BusinessObjects nobody called up saying “why should I buy a BI tool?”   Everybody called up saying, “I’m going to buy a BI tool, my boss said to evaluate three, and Gartner said to look at BusinessObjects, Cognos, and Brio.”

When that buyer asks “why should I buy BusinessObjects?” think about how stupid you’ll look if you answer like this (thinking you need to sell value):

“Whoa, slow down there.  First, let’s talk about the business benefits of using BI in general.  We’ve found that, compared to writing your own SQL queries and doing centralized report generation, you can lower IT support costs, reduce the backlog of requested reports, and empower end users to do their own query and reporting.  This is why someone should buy a BI solution.”

The whole time you’re blabbering, the customer is wondering if Cognos or Brio can do a better job of answering their question.  In a hot category, you better be darn good at answering “why buy mine?” in a clear and compelling way.

Similarly, in hot categories, people don’t typically ask about return on investment (ROI) [3]:  they already know they want to buy one.  Ironically — and this surprises some — when you have a lot of people asking about ROI, you are probably in a cold category, not a hot one [4].

This is why some salespeople have such a hard time when they move from hypergrowth market leaders to early-stage startups.  In their prior job, all they had to sell was differentiation — “let me explain why mine’s better.”  In the new job, they can’t survive without selling value — “wait, before you hang up, please give me a second to explain why to buy one at all.”

If you’re not sure whether you’re in a hot or a cold category, I will refer you to the official Kellblog Simple, Definitive, One-Step Hot Category Test:

If you have to ask whether you’re in a hot category, you’re not in one.

If you were, you’d be too busy to ask.  You’d be growing too fast.  In too many deals.  Running around with your hair on fire.  If you have time to sit around in meetings debating whether you’re in a hot category, I can assure you that you’re not in one.

Let’s look at cold markets for a bit.  I’ll pick the early days at MarkLogic when we were selling an XML database system.  There were two not-so-subtle indicators that it was not a hot market:  first, we had the time to ask and second, Gartner had literally published a note declaring that it wasn’t (“XML Database:  The Market That Never Was”).

The value of our system (to the information industry) was that we could help companies build new, powerful information products faster.  The differentiation was that we used a unique termlist-based indexing mechanism that allowed us to process essentially any XQuery statement with constraints on both structure and text at extremely high performance.

Imagine calling the SVP of Digital Strategy at McGraw-Hill and delivering the differentiation, instead of the value, message.

Sales:  Hi, I’m from MarkLogic and we have the world’s best XML database system.

Customer (if they didn’t hang up already):  I thought XML databases were, like Snake Plissken, dead.  Gartner said so.  Nobody’s using them, I need to —

Sales:  — Wait, don’t worry about that.  Let me explain for a minute why we have the best XML database, because of how we use termlists instead of traditional b-tree indices to process queries.

Customer: [dial tone]

You’re telling the customer why something she doesn’t want to buy is different from something else she doesn’t want to buy.  Instead, imagine delivering the value message, telling her why she should want to buy one:

Sales:  Hi, I’m from MarkLogic and we help media companies quickly build powerful information products.

Customer:  I’m in charge of our strategy for doing that.  Who uses you and what are they doing?

Ah.  Much better.

Another way to look at this is from a Geoffrey Moore lifecycle perspective:

[Image:  value vs. differentiation messaging across the Geoffrey Moore lifecycle]

Early on, you need to message value:  why do you want to buy one?  Once you cross the chasm into the high-growth “tornado,” you need to message differentiation:  why buy mine?  Once the market cools down, you need to start working to expand it by once again messaging value.  In three phases:  Soup is Good Food, then My Soup’s Better, then Soup is Good Food.

All marketers should be able to answer both questions (e.g., why buy yours, why buy one at all) [5] about their product.  But which one you develop most deeply and push most in the market should be a function of your business situation.

Think value:  Soup is Good Food
Think differentiation:  Bags Fly Free

# # #

Notes
[1] And, in my humble opinion, much better than the current messaging:  “Discover Flavor.  Convenient tasty solutions for everyone and every occasion.  Campbell’s soups are made for real, real life (TM).”  First, let me save Campbell’s $50K in legal fees:  don’t bother registering that trademark; nobody’s ever going to steal it.  Presumably Discover Flavor is an attempt at differentiation, but … do the other guys’ soups really lack flavor?  I thought Campbell’s was getting hit at the high end by tasty premium soups, not at the low end by cheap, flavorless ones.  Seen in that light, Discover Flavor seems more a defensive message than either a differentiation or value message.  (“I know you may not think it, but our soups have flavor, too!”)  Finally, I can’t even classify “made for real, real life” as a message (other than as puffery) because it doesn’t mean anything.  Are other soups made for “fake, real life” or “real, fake life”?  Drivel, but I’m sure it somehow “tested well” in focus groups.

[Image:  Campbell’s “Discover Flavor” advertisement]

[2] Apologies to my high school Latin teacher, Mr. Maddaloni, for not using the more proper, dicta.

[3] As I often said when I lived in France, “ROI is King” (in cold categories, at least).

[4] The exception would be in a hot category where the ROI is quite different among competing solutions.  Usually, this is not the case — the return is generally more a property of the category than any given product.  When there is a difference, it’s typically due not to return, but investment — i.e., the total cost of ownership (TCO) can often vary significantly among different systems.

[5] We’ll leave the next logical question (“why buy now?”) for another post.

The Domo S-1: Does the Emperor Have Clothes?

I preferred Silicon Valley [1] back in the day when companies raised modest amounts of capital (e.g., $30M) prior to an IPO that came 4-6 years after inception, when burn rates of $10M/year looked high, and when a $100M raise was the IPO, not one or more rounds prior to it.  When cap tables had 1x, non-participating preferred that all converted to a single class of common stock in the IPO. [2]

How quaint!

These days, companies increasingly raise $200M to $300M prior to an IPO that comes 10-12 years after inception, the burn might look more like $10M/quarter than $10M/year, and the cap table is loaded up with “structure” (e.g., ratchets, multiple liquidation preferences).  And at IPO time you might end up with two classes of common stock:  one for the founder with super-voting rights, and one for everybody else.

I think these changes are in general bad:

  • Employees get more diluted; can end up alternative minimum tax (AMT) prisoners, unable to leave jobs they may be unhappy in; have options they are restricted from selling entirely or must sell into opaque secondary markets with high legal and transaction fees; and/or even face option expiration at 10 years.  (I paid a $2,500 “administrative fee” plus thousands in legal fees to sell shares in one startup in a private transaction.)
  • John Q. Public is unable to buy technology companies at $30M in revenue with a $20/trade commission.  Instead, they either have to wait until $100M to $200M in revenue or buy in opaque secondary markets with limited information and high fees.
  • Governance can be weak, particularly in cases where a founder exercises total control over a company, whether directly or via a nuclear option.

Moreover, the Silicon Valley game changes from “who’s smartest and does the best job serving customers” on relatively equivalent funding to “who can raise the most capital, generate the most hype, and buy the most customers.”  In the old game, the customers decide the winners; in the new one, Sand Hill Road tries to, picking them in a somewhat self-fulfilling prophecy.

The Hype Factor
In terms of hype, one metric I use is what I call the hype ratio = VC / ARR.  On the theory that SaaS startups take in venture capital (VC) and output two things, annual recurring revenue (ARR) and hype (by analogy, heat and light), this ratio is a good way to measure how efficiently they generate ARR.

The higher the ratio, the more light and the less heat.  For example, Adaptive Insights raised $175M and did $106M in revenue [3] in its most recent fiscal year, for a ratio of 1.6.  Zuora raised $250M to get to $138M in ARR, for a ratio of 1.8.  Avalara raised $340M to get to $213M in revenue, for a ratio of 1.6.

By comparison, Domo’s hype ratio is 6.4.  Put the other way, Domo converts VC into ARR at a 15% rate.  The other 85% is, per my theory, hype.  You give them $1 and you get $0.15 of heat and $0.85 of light.  It’s one of the most hyped companies I’ve ever seen.
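For what it's worth, the hype ratio math can be sketched in a few lines of Python.  The figures (in $M) are those cited in this post, with Domo's revenue approximated from its circa-$100M business, so expect small rounding differences from the one-decimal ratios quoted above.

```python
# Hype ratio = VC raised / ARR, using revenue as a rough ARR proxy (per
# note [3]).  Figures in $M are from the post; Domo's revenue is approximate.
raised_and_revenue = {
    "Adaptive Insights": (175, 106),
    "Zuora": (250, 138),
    "Avalara": (340, 213),
    "Domo": (700, 109),  # approximate
}

hype_ratio = {name: vc / rev for name, (vc, rev) in raised_and_revenue.items()}
for name, ratio in sorted(hype_ratio.items(), key=lambda kv: kv[1]):
    print(f"{name}: hype ratio {ratio:.2f}")
```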

As I often say, behind every “marketing genius” is a giant budget, and Domo is no exception [4].

Sometimes things go awry despite the most blue-blooded of investors and the greenest of venture money.  Even with funding from the likes of NEA and Lightspeed, Tintri ended up a down-round IPO of last resort and now appears to be singing its swan song.  In the EPM space, Tidemark was the poster child for more light than heat and was sold in what was rumored to be a fire sale [5] after raising over $100M in venture capital and turning it into what was supposedly less than $10M in ARR, for an implied hype ratio of over 10.

The Top-Level View on Domo
Let’s come back and look at the company.  Roughly speaking [6], Domo:

  • Has nearly $700M in VC invested (plus nearly $100M in long-term debt).
  • Created a circa $100M business, growing at 45% (and decelerating).
  • Burns about $150M per year in operating cash flow.
  • Will have a two-class common stock system where class A shares have 40x the voting rights of class B, with class A totally controlled by the founder. That is, weak governance.

Oh, and we’ve got a highly unprofitable, venture-backed startup using a private jet for a bit less than $1M a year [7].  Did I mention that it’s leased back from the founder?  Or the $300K in catering from a company owned by the founder and his brother?  (Can’t you order lunch from a non-related party?)

As one friend put it, “the Domo S-1 is everything that’s wrong with Silicon Valley in one place:  huge losses, weak governance, and now modest growth.”

Personally, I view Domo as the Kardashians of business intelligence – famous for being famous.  While the S-1 says they have 85 issued patents (and 45 applications in process), does anyone know what they actually do or what their technology advantage is?  I’ve worked in and around BI for nearly two decades – and I have no idea.

Maybe this picture will help.

[Image:  Domo solution diagram]

Uh, not so much.

The company itself admits the current financial situation is unsustainable.

If other equity or debt financing is not available by August 2018, management will then begin to implement plans to significantly reduce operating expenses. These plans primarily consist of significant reductions to marketing costs, including reducing the size and scope of our annual user conference, lowering hiring goals and reducing or eliminating certain discretionary spending as necessary.

A Top-to-Bottom Skim of the S-1
So, with that as an introduction, let’s do a quick dig through the S-1, starting with the income statement:

[Image:  Domo income statement]

Of note:

  • 45% YoY revenue growth, slow for the burn rate.
  • 58% blended gross margins, 63% subscription gross margins, low.
  • S&M expense of 121% of revenue, massive.
  • R&D expense of 72% of revenue, huge.
  • G&A expense of 29% of revenue, not even efficient there.
  • Operating margin of -162%, huge.

Other highlights:

  • $803M accumulated deficit.  Stop, read that number again and then continue.
  • Decelerating revenue growth, 45% year over year, but only 32% Q1 over Q1.
  • Cashflow from operations around -$150M/year for the past two years.  Stunning.
  • 38% of customers did multi-year contracts during FY18.  Up from prior year.
  • Don’t see any classical SaaS unit economics, though they do a 2016 cohort analysis arguing contribution margin from that cohort of -196%, 52%, and 56% over the past three years.  This seems to imply a CAC ratio of nearly 4, twice what is normally considered the high side.
  • Cumulative R&D investment from inception of $333.9M in the platform.
  • 82% revenues from USA in FY18.
  • 1,500 customers, with 385 having revenues of $1B+.
  • Believe they are <4% penetrated into existing customers, based on Domo users / total headcount of top 20 penetrated customers.
  • 14% of revenue from top 20 customers.
  • Three-year retention rate of 186% in enterprise customers (see below).  Very good.
  • Three-year retention rate of 59% in non-enterprise customers.  Horrific.  Pay a huge CAC to buy a melting ice cube.  (Only the 1-year cohort is more than 100%.)

As of January 31, 2018, for the cohort of enterprise customers that licensed our product in the fiscal year ended January 31, 2015, the current ACV is 186% of the original license value, compared to 129% and 160% for the cohorts of enterprise customers that subscribed to our platform in the fiscal years ended January 31, 2016 and 2017, respectively. For the cohort of non-enterprise customers that licensed our product in the fiscal year ended January 31, 2015, the current ACV as of January 31, 2018 was 59% of the original license value, compared to 86% and 111% for the cohorts of non-enterprise customers that subscribed to our platform in the fiscal years ended January 31, 2016 and 2017, respectively.

  • $12.4M in churn ARR in FY18, which strikes me as quite high coming off subscription revenues of $58.6M in the prior year (21%).  See below.

Our gross subscription dollars churned is equal to the amount of subscription revenue we lost in the current period from the cohort of customers who generated subscription revenue in the prior year period. In the fiscal year ended January 31, 2018, we lost $12.4 million of subscription revenue generated by the cohort in the prior year period, $5.0 million of which was lost from our cohort of enterprise customers and $7.4 million of which was lost from our cohort of non-enterprise customers.

  • What appear to be reasonable net revenue retention rates in the 105% to 110% range overall, though they don’t seem to foot to the churn figure above.  See below:

For our enterprise customers, our quarterly subscription net revenue retention rate was 108%, 122%, 116%, 122% and 115% for each of the quarters during the fiscal year ended January 31, 2018 and the three months ended April 30, 2018, respectively. For our non-enterprise customers, our quarterly subscription net revenue retention rate was 95%, 95%, 99%, 102% and 98% for each of the quarters during the fiscal year ended January 31, 2018 and the three months ended April 30, 2018, respectively. For all customers, our quarterly subscription net revenue retention rate was 101%, 107%, 107%, 111% and 105% for each of the quarters during the fiscal year ended January 31, 2018 and the three months ended April 30, 2018, respectively.

  • Another fun quote.  And, well, they did raise about the cash it takes to build seven startups.

Historically, given building Domo was like building seven start-ups in one, we had to make significant investments in research and development to build a platform that powers a business and provides enterprises with features and functionality that they require.

  • Most customers invoiced on annual basis.
  • Quarterly income statements, below.

[Image:  Domo quarterly income statements]

  • $72M in cash as of 4/30/18, about 6 months worth at current burn.
  • $71M in “backlog,” i.e., multi-year contractual commitments that are not prepaid and ergo not in deferred revenue.  Of that, $41M is not expected to be invoiced in FY19.
  • Business description, below.  Everything a VC could want in one paragraph.

Domo is an operating system that powers a business, enabling all employees to access real-time data and insights and take action from their smartphone. We believe digitally connected companies will increasingly be best positioned to manage their business by leveraging artificial intelligence, machine learning, correlations, alerts and indices. We bring massive amounts of data from all departments of a business together to empower employees with real-time data insights, accessible on any device, that invite action. Accordingly, Domo enables CEOs to manage their entire company from their phone, including one Fortune 50 CEO who logs into Domo almost every day and over 10 times on some days.

  • Let’s see if a computer could read it any better than I could.  Not really.

[Image:  readability analysis of Domo’s business description]

  • They even have Mr. Roboto to help with data analysis.

Through Mr. Roboto, which leverages machine learning algorithms, artificial intelligence and predictive analytics, Domo creates alerts, detects anomalies, optimizes queries, and suggests areas of interest to help people focus on what matters most. We are also developing additional artificial intelligence capabilities to enable users to develop benchmarks and indexes based on data in the Domo platform, as well as automatic write back to other systems.

  • 796 employees as of 4/30/18, of which 698 are in the USA.
  • Cash comp of $525K for the CEO, $450K for the CFO, and $800K for the chief product officer.
  • Pre-offering it looks like founder Josh James owns 48.9M shares of class A and 8.9M shares of class B, or about 30% of the shares.  With the 40x voting rights, he has 91.7% of the voting power.

Does the Emperor Have Any Clothes?
One thing is clear.  Domo is not “hot” because they have some huge business blossoming out from underneath them.  They are “hot” because they have raised and spent an enormous amount of money to get on your radar.

Will they pull off the IPO?  There’s a lot not to like:  the huge losses, the relatively slow growth, the non-enterprise retention rates, the presumably high CAC, the $12M in FY18 churn, and the 40x voting rights, just for starters.

However, on the flip side, they’ve got a proven charismatic entrepreneur / founder in Josh James, an argument about their enterprise customer success, growth, and penetration (which I’ve not had time to crunch the numbers on), and an overall story that has worked very well with investors thus far.

While the Emperor’s definitely not fully dressed, he’s not quite naked either.  I’d say the Domo Emperor’s donning a Speedo — and will somehow probably pull off the IPO parade.

###

Notes

[1] Yes, I know they’re in Utah, but this is still about Silicon Valley culture and investors.

[2] For definitions and frequency of use of various VC terms, go to the Fenwick and West VC survey.

[3] I’ll use revenue rather than trying to get implied ARR to keep the math simple.  In a more perfect world, I’d use ARR itself and/or impute it.  I’d also correct for debt and cash, but I don’t have any MBAs working for me to do that, so we’ll keep it back of the envelope.

[4] You can argue that part of the “genius” is allocating the budget, and it probably is.  Sometimes that money is well spent cultivating a great image of a company people want to buy from and work at (e.g., Salesforce).  Sometimes, it all goes up in smoke.

[5] Always somewhat truth-challenged, Tidemark couldn’t admit it was sold.  Instead, it announced funding from a control-oriented private equity firm, Marlin Equity Partners, as a growth investment, only to be merged a year later into existing Marlin platform investment Longview Solutions.

[6] I am not a financial analyst, I do not give buy/sell guidance, and I do not have a staff working with me to ensure I don’t make transcription or other errors in quickly analyzing a long and complex document.  Readers are encouraged to go to the S-1 directly.  Like my wife, I assume that my conclusions are not always correct; readers are encouraged to draw their own conclusions.  See my FAQ for a complete disclaimer.

[7] $900K, $700K, and $800K run-rate for FY17, FY18, and 1Q19 respectively.