CFOs: More Strategic Than Ever

I was digging through my reading pile and found this roughly nine-month-old report by Accenture and Oracle entitled The CFO as Corporate Strategist by Donniel Schulman and David Axson of Accenture.  Those who follow Host Analytics might remember David Axson as he’s spoken at several of our user conferences.  (Note:  the 2015 conference is May 18-21 — save the date!)

The overall theme of the paper is that the traditional “bean counter” positioning of CFOs is as outdated as the hula hoop, with CFOs becoming more strategic over time, and partnering with the CEO to run the company.

Here’s one chart from the report that shows just that:

[Chart: CFO influence over time]

We are definitely seeing this trend with our customers at Host Analytics.

As I’ve always said, “CEOs live in the future,” so if CFOs want to partner with them, they are going to have to de-emphasize a lot of their backwards-looking role and join their CEOs in the future.  This means automating and delegating backwards-looking functions like consolidations and reporting.  And it means getting more involved with both financial planning & analysis (FP&A) and their cousins in the various “ops” teams springing up around the organization — e.g., salesops — who also do a lot of planning, modeling, and scenario building.

It Ain’t Easy Making Money in Open Source:  Thoughts on the Hortonworks S-1

It took me a week or so to get to it, but in this post I’ll take a dive into the Hortonworks S-1 filing in support of a proposed initial public offering (IPO) of their stock.

While Hadoop and big data are unarguably huge trends driving the industry and while the future of Hadoop looks very bright indeed, on reading the Hortonworks S-1, the reader is drawn to the inexorable conclusion that  it’s hard to make money in open source, or more crassly, it’s hard to make money when you give the shit away.

This is a company that,  in the past three quarters, lost $54M on $33M of support/services revenue and threw in $26M in non-recoverable (i.e., donated) R&D atop that for good measure.

Let’s take it top to bottom:

  • They have solid bankers: Goldman Sachs, Credit Suisse, and RBC are leading the underwriting with specialist support from Pacific Crest, Wells Fargo, and Blackstone.
  • They have an awkward, jargon-y, and arguably imprecise marketing slogan: “Enabling the Data-First Enterprise.”  I hate to be negative, but if you’re going to lose $10M a month, the least you can do is to invest in a proper agency to make a good slogan.
  • Their mission is clear: “to establish Hadoop as the foundational technology of the modern enterprise data architecture.”
  • Here’s their solution description: “our solution is an enterprise-grade data management platform built on a unique distribution of Apache Hadoop and powered by YARN, the next generation computing and resource management framework.”
  • They were founded in 2011, making them the youngest company I’ve seen file in quite some years. Back in the day (e.g., the 1990s) you might go public at age 3-5, but these days it’s more like age 10.
  • Their strategic partners include Hewlett-Packard, Microsoft, Rackspace, Red Hat, SAP, Teradata, and Yahoo.
  • Business model:  “consistent with our open source approach, we generally make the Hortonworks Data Platform available free of charge and derive the predominant amount of our revenue from customer fees from support subscription offerings and professional services.”  (Note to self:  if you’re going to do this, perhaps you shouldn’t have -35% services margins, but we’ll get to that later.)
  • Huge market opportunity: “According to Allied Market Research, the global Hadoop market spanning hardware, software and services is expected to grow from $2.0 billion in 2013 to $50.2 billion by 2020, representing a compound annual growth rate, or CAGR, of 58%.”  This vastness of the market opportunity is unquestioned.
  • Open source purists: “We are committed to serving the Apache Software Foundation open source ecosystem and to sharing all of our product developments with the open source community.”  This one’s big because while it’s certainly strategic and it certainly earns them points within the Hadoop community, it chucks out one of the better ways to make money in open source:  proprietary versions / extensions.  So, right or wrong, it’s big.
  • Headcount:  The company has increased the number of full-time employees from 171 at December 31, 2012 to 524 at September 30, 2014.

Before diving into the financials, let me give readers a chance to review open source business models (Wikipedia, Kellblog) if they so desire, before making the (generally true but probably slightly inaccurate) assertion:  the only open source company that’s ever made money (at scale) is Red Hat.

Sure, there have been a few great exits.  Who can forget MySQL selling to Sun for $1B?  Or VMware buying SpringSource for $420M?  Or Red Hat buying JBoss for $350M+?  (Hortonworks CEO Rob Bearden was involved in both of the latter two deals.)   Or Citrix buying XenSource for $500M?

But after those deals, I can’t name too many others.  And I doubt any of those companies was making money.

In my mind there are two common things that go wrong in open source:

  • The market is too small. In my estimation open source compresses the market size by 10-20x.  So if you want to compress the $30B DBMS market 10x, you can still build several nice companies.  However, if you want to compress the $1B enterprise search market by 10x, there’s not much room to build anything.  That’s why there is no Red Hat of Lucene or Solr, despite their enormous popularity in search.    For open source to work, you need to be in a huge market.
  • People don’t renew. No matter which specific open source business model you’re using, the general play is to sell a subscription to <something> that complements your offering.  It might be a hardened/certified version of the open source product.  It might be additions to it that you keep proprietary forever or, in a hardcover/paperback analogy, roll back into the core open source projects with a 24-month lag.  It might be simply technical support.  Or, it might be “admission to the club,” as one open source CEO friend of mine used to say:  you get to use our extensions, our support, our community, etc.  But no matter what you’re selling, the key is to get renewals.  The risk is that the value of your extensions decreases over time and/or customers become self-sufficient.  This was another problem with Lucene.  It was so good that folks just didn’t need much help, and if they did, it was only for a year or so.

So Why Does Red Hat Work?

Red Hat applies a professional open source business model primarily to two low-level infrastructure categories:  operating systems and, later, middleware.   As general rules:

  • The lower-level the category the more customers want support on it.
  • The more you can commoditize the layers below you, the more the market likes it. Red Hat does this for servers.
  • The lower-level the category the more the market actually “wants” it standardized in order to minimize entropy. This is why low-level infrastructure categories become natural monopolies or oligopolies.

And Red Hat set the right price point and cost structure.  In their most recent 10-Q, you can see they have 85% gross margins and about a 10% return on sales.  Red Hat nailed it.

But, if you believe this excellent post by Andreessen Horowitz partner Peter Levine, There Will Never Be Another Red Hat.  As part of his argument, Levine reminds us that while Red Hat may be a giant among open source vendors, among general technology vendors it is relatively small.  See the chart below comparing its market capitalization to some megavendors.

[Chart: Red Hat market capitalization vs. megavendors]

Now this might give pause to the Hadoop crowd with so many firms vying to be the Red Hat of Hadoop.  But that hasn’t stopped the money from flying in.  Per Crunchbase, Cloudera has raised a stunning $1.2B in venture capital, Hortonworks has raised $248M, and MapR has raised $178M.  In the related Cassandra market, DataStax has raised $190M.  MongoDB (with its own open source DBMS) has raised $231M.  That’s about $2B invested in next-generation open source database venture capital.

While I’m all for open source, disruption, and next-generation databases (recall I ran MarkLogic for six years), I do find the raw amount of capital invested pretty crazy.   Yes, it’s a huge market today.  Yes, it’s exploding as data volumes grow and unstructured data is newly incorporated.  But we will be compressing it 10-20x as part of open-source-ization.  And, given all the capital these guys are raising — and presumably burning (after all, why else would you raise it) — I can assure you that no one’s making money.

Hortonworks certainly isn’t — which serves as a good segue to dive into the financials.  Here’s the P&L, which I’ve cleaned up from the S-1 and color-annotated.

[Chart: Hortonworks P&L, cleaned up from the S-1 and color-annotated]

  •  $33M in trailing three quarter (T3Q) revenues ($41.5M in TTM, though not on this chart)
  • 109% growth in T3Q revenues
  • 85% gross margins on support
  • Horrific -35% gross margins on services, which, given the large relative size of the services business (43% of revenues), crush overall gross margins down to 34%
  • More scarily, this calls into question the veracity of the 85% subscription gross margins — I recall reading in the S-1 that they currently lack VSOE for subscription support, which means they’ve not yet clearly demonstrated what is really support revenue vs. professional services revenue.  [See footnote 1]
  • $26M in T3Q R&D expense.  Per their policy, all that value goes straight back to the open source project, which raises the question:  will they ever see a return on it?
  • Net loss of $86.7M in T3Q, or nearly $10M per month
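As a sanity check, the 34% blended gross margin follows directly from the revenue mix and per-line margins above. A quick sketch in Python (the 57/43 split and per-line margins are approximations read off the S-1 figures, so treat the arithmetic as illustrative):

```python
# Back-of-the-envelope check on Hortonworks' blended gross margin (T3Q, approx.)
services_share = 0.43         # services ~43% of revenue per the S-1
support_share = 1 - services_share

support_gm = 0.85             # 85% gross margin on support subscriptions
services_gm = -0.35           # -35% gross margin on professional services

blended_gm = support_share * support_gm + services_share * services_gm
print(f"Blended gross margin: {blended_gm:.1%}")  # ~33-34%, consistent with the filing
```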

Here are some other interesting tidbits from the S-1:

  • Of the 524 full-time employees as of 9/30/14, 56 are non-USA-based
  • CEO makes $250K/year in base salary cash compensation with no bonus in FY13 (maybe they missed plan despite strong growth?)
  • Prior to the offering, the CEO owns 6.8% of the stock, a pretty nice percentage, but he was kind of a founder
  • Benchmark owns 18.7%
  • Yahoo owns 19.6%
  • Index owns 9.5%
  • $54.9M cash burn from operations in T3Q, $6.1M per month
  • Number of support subscription customers has grown from 54 to 233 over the year from 9/30/13 to 9/30/14
  • A single customer went from representing 47% of revenues for the T3Q ending 9/30/13 down to 22% for the T3Q ending 9/30/14.  That’s a lot of revenue concentration in one customer (who is identified as “Customer A,” but who I believe is Microsoft based on some text in the risk factors.)

Here’s a chart I made of the increase in value in the preferred stock.  A ten-bagger in 3 years.

[Chart: increase in value of Hortonworks preferred stock]

One interesting thing about the prospectus is they show “gross billings,” which is an interesting derived metric that financial analysts use to try and determine bookings in a subscription company.  Here’s what they present:

[Chart: Hortonworks gross billings]

While gross billings is not a bad stab at bookings, the two metrics can diverge — primarily when the duration of prepaid contracts changes.  Deferred revenue can shoot up when sales sells longer prepaid contracts to a given number of customers, as opposed to same-length contracts to more of them.  Conversely, if happy customers reduce prepaid contract duration to save cash in a downturn, it can actually help the vendor’s financial performance (they will get the renewals because the customer is happy, and not discount in return for multi-year), but deferred revenue will drop, as will gross billings.  In some ways, unless prepaid contract duration is held equal, gross billings is more of a dangerous metric than anything else.  Nevertheless, Hortonworks is showing it as an implied metric of bookings or orders, and the growth is quite impressive.
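For readers who want the mechanics, gross billings is typically derived as recognized revenue plus the change in deferred revenue over the period. A minimal sketch (the quarter's numbers are invented for illustration, not Hortonworks figures):

```python
def gross_billings(revenue, deferred_start, deferred_end):
    """Derived bookings proxy: recognized revenue plus the change in deferred revenue."""
    return revenue + (deferred_end - deferred_start)

# Invented quarter: $12M recognized revenue, deferred revenue grows from $40M to $47M
print(gross_billings(12.0, 40.0, 47.0))  # 19.0 -- billings running ahead of revenue
```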

Sales and Marketing Efficiency

Let’s now look at sales and marketing efficiency, not using the CAC (which is too hard to calculate for public companies) but using JMP’s sales and marketing efficiency metric = (gross profit [current] – gross profit [prior]) / S&M expense [prior].

On this metric Hortonworks scores a 41% for the T3Q ended 9/30/14 compared to the same period in 2013.  JMP considers anything above 50% efficient, so they are coming in low on this metric.  However, JMP also makes a nice chart that correlates S&M efficiency to growth and I’ve roughly hacked Hortonworks onto it here:
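Spelled out as code, the metric is just incremental gross profit per prior-period S&M dollar. The sample inputs below are invented to land near the 41% result, not taken from the filing:

```python
def sm_efficiency(gp_current, gp_prior, sm_prior):
    """JMP S&M efficiency: (gross profit[current] - gross profit[prior]) / S&M expense[prior]."""
    return (gp_current - gp_prior) / sm_prior

# Hypothetical $M figures shaped like Hortonworks' T3Q comparison
print(f"{sm_efficiency(11.0, 5.0, 14.6):.0%}")  # ~41%, below JMP's 50% efficiency bar
```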

[Chart: JMP S&M efficiency vs. growth, with Hortonworks roughly added]

I’ll conclude the main body of the post by looking at their dollar-based expansion rate.  Here’s a long quote from the S-1:

Dollar-Based Net Expansion Rate.    We believe that our ability to retain our customers and expand their support subscription revenue over time will be an indicator of the stability of our revenue base and the long-term value of our customer relationships. Maintaining customer relationships allows us to sustain and increase revenue to the extent customers maintain or increase the number of nodes, data under management and/or the scope of the support subscription agreements. To date, only a small percentage of our customer agreements has reached the end of their original terms and, as a result, we have not observed a large enough sample of renewals to derive meaningful conclusions. Based on our limited experience, we observed a dollar-based net expansion rate of 125% as of September 30, 2014. We calculate dollar-based net expansion rate as of a given date as the aggregate annualized subscription contract value as of that date from those customers that were also customers as of the date 12 months prior, divided by the aggregate annualized subscription contract value from all customers as of the date 12 months prior. We calculate annualized support subscription contract value for each support subscription customer as the total subscription contract value as of the reporting date divided by the number of years for which the support subscription customer is under contract as of such date.
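The calculation the S-1 describes can be sketched as follows. The customer data is invented for illustration; the point is that new customers are excluded from the numerator, while churned customers drag the rate down:

```python
def net_expansion_rate(acv_now, acv_year_ago):
    """Dollar-based net expansion rate per the S-1's definition:
    current annualized contract value (ACV) from customers who were customers
    12 months ago, divided by ALL customers' ACV 12 months ago."""
    cohort = set(acv_year_ago)
    numerator = sum(v for cust, v in acv_now.items() if cust in cohort)
    denominator = sum(acv_year_ago.values())
    return numerator / denominator

# Hypothetical: A expanded, B shrank, C churned, D is new (excluded from numerator)
year_ago = {"A": 100, "B": 50, "C": 50}
now = {"A": 180, "B": 40, "D": 70}
print(net_expansion_rate(now, year_ago))  # 1.1 -> 110% net expansion
```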

This is probably the most critical section of the prospectus.  We know Hortonworks can grow.  We know they have a huge market.  We know that market is huge enough to be compressed 10-20x and still have room to create a great company.  What we don’t know is:  will people renew?   As we discussed above, we know it’s one of the great risks of open source.

Hortonworks pretty clearly answers the question with “we don’t know” in the above quote.  There is simply not enough data:  not enough contracts have come up for renewal to get a meaningful renewal rate.  I view the early 125% calculation as a very good sign.  And intuition suggests that, if their offering is quality, people will renew, because we are talking low-level, critical infrastructure and we know that enterprises are willing to pay to have that supported.

# # #

Appendix

In the appendix below, I’ll include a few interesting sections of the S-1 without any editorial comments.

A significant portion of our revenue has been concentrated among a relatively small number of large customers. For example, Microsoft Corporation historically accounted for 55.3% of our total revenue for the year ended April 30, 2013, 37.8% of our total revenue for the eight months ended December 31, 2013 and 22.4% of our total revenue for the nine months ended September 30, 2014. The revenue from our three largest customers as a group accounted for 71.0% of our total revenue for the year ended April 30, 2013, 50.5% of our total revenue for the eight months ended December 31, 2013 and 37.4% of our total revenue for the nine months ended September 30, 2014. While we expect that the revenue from our largest customers will decrease over time as a percentage of our total revenue as we generate more revenue from other customers, we expect that revenue from a relatively small group of customers will continue to account for a significant portion of our revenue, at least in the near term. Our customer agreements generally do not contain long-term commitments from our customers, and our customers may be able to terminate their agreements with us prior to expiration of the term. For example, the current term of our agreement with Microsoft expires in July 2015, and automatically renews thereafter for two successive twelve-month periods unless terminated earlier. The agreement may be terminated by Microsoft prior to the end of its term. Accordingly, the agreement with Microsoft may not continue for any specific period of time.

# # #

We do not currently have vendor-specific objective evidence of fair value for support subscription offerings, and we may offer certain contractual provisions to our customers that result in delayed recognition of revenue under GAAP, which could cause our results of operations to fluctuate significantly from period-to-period in ways that do not correlate with our underlying business performance.

In the course of our selling efforts, we typically enter into sales arrangements pursuant to which we provide support subscription offerings and professional services. We refer to each individual product or service as an “element” of the overall sales arrangement. These arrangements typically require us to deliver particular elements in a future period. We apply software revenue recognition rules under U.S. generally accepted accounting principles, or GAAP. In certain cases, when we enter into more than one contract with a single customer, the group of contracts may be so closely related that they are viewed under GAAP as one multiple-element arrangement for purposes of determining the appropriate amount and timing of revenue recognition. As we discuss further in “Management’s Discussion and Analysis of Financial Condition and Results of Operations—Critical Accounting Policies and Estimates—Revenue Recognition,” because we do not have VSOE for our support subscription offerings, and because we may offer certain contractual provisions to our customers, such as delivery of support subscription offerings and professional services, or specified functionality, or because multiple contracts signed in different periods may be viewed as giving rise to multiple elements of a single arrangement, we may be required under GAAP to defer revenue to future periods. Typically, for arrangements providing for support subscription offerings and professional services, we have recognized as revenue the entire arrangement fee ratably over the subscription period, although the appropriate timing of revenue recognition must be evaluated on an arrangement-by-arrangement basis and may differ from arrangement to arrangement. If we are unexpectedly required to defer revenue to future periods for a significant portion of our sales, our revenue for a particular period could fall below  our expectations or those of securities analysts and investors, resulting in a decline in our stock price

 # # #

We generate revenue by selling support subscription offerings and professional services. Our support subscription agreements are typically annual arrangements. We price our support subscription offerings based on the number of servers in a cluster, or nodes, data under management and/or the scope of support provided. Accordingly, our support subscription revenue varies depending on the scale of our customers’ deployments and the scope of the support agreement.

 Our early growth strategy has been aimed at acquiring customers for our support subscription offerings via a direct sales force and delivering consulting services. As we grow our business, our longer-term strategy will be to expand our partner network and leverage our partners to deliver a larger proportion of professional services to our customers on our behalf. The implementation of this strategy is expected to result in an increase in upfront costs in order to establish and further cultivate such strategic partnerships, but we expect that it will increase gross margins in the long term as the percentage of our revenue derived from professional services, which has a lower gross margin than our support subscriptions, decreases.

 # # #

Deferred Revenue and Backlog

Our deferred revenue, which consists of billed but unrecognized revenue, was $47.7 million as of September 30, 2014.

Our total backlog, which we define as including both cancellable and non-cancellable portions of our customer agreements that we have not yet billed, was $17.3 million as of September 30, 2014. The timing of our invoices to our customers is a negotiated term and thus varies among our support subscription agreements. For multiple-year agreements, it is common for us to invoice an initial amount at contract signing followed by subsequent annual invoices. At any point in the contract term, there can be amounts that we have not yet been contractually able to invoice. Until such time as these amounts are invoiced, we do not recognize them as revenue, deferred revenue or elsewhere in our consolidated financial statements. The change in backlog that results from changes in the average non-cancelable term of our support subscription arrangements may not be an indicator of the likelihood of renewal or expected future revenue, and therefore we do not utilize backlog as a key management metric internally and do not believe that it is a meaningful measurement of our future revenue.

 # # #

We employ a differentiated approach in that we are committed to serving the Apache Software Foundation open source ecosystem and to sharing all of our product developments with the open source community. We support the community for open source Hadoop, and employ a large number of core committers to the various Enterprise Grade Hadoop projects. We believe that keeping our business model free from architecture design conflicts that could limit the ultimate success of our customers in leveraging the benefits of Hadoop at scale is a significant competitive advantage.

 # # #

International Data Corporation, or IDC, estimates that data will grow exponentially in the next decade, from 2.8 zettabytes, or ZB, of data in 2012 to 40 ZBs by 2020. This increase in data volume is forcing enterprises to upgrade their data center architecture and better equip themselves both to store and to extract value from vast amounts of data. According to IDG Enterprise’s Big Data Survey, by late 2014, 31% of enterprises with annual revenues of $1 billion or more expect to manage more than one PB of data. In comparison, as of March 2014 the Library of Congress had collected only 525 TBs of web archive data, equal to approximately half a petabyte and two million times smaller than a zettabyte.

# # #

Footnotes:

[1]  Thinking more about this, while I’m not an accountant, I think the lack of VSOE has the following P&L impact:  it means that in contracts that mix professional services and support, they must recognize all the revenue ratably over the contract.  That’s fine for the support revenue, but it should have the effect of pushing out services revenue, artificially depressing services gross margins.  Say, for example, you did a $240K contract that was $120K of each.  The support should be recognized at $30K/quarter.  However, if the consulting is delivered in the first six months, it should be recognized at $60K/quarter for the first and second quarters and $0 in the third and fourth.  Since, normally, accountants will take the services costs up-front, this should have the effect of hurting services margins by taking the costs as delivered but spreading the revenue over a longer period.
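To make the footnote's $240K example concrete, here is the timing difference in code. The quarterly splits are my arithmetic on the footnote's numbers, not figures from the S-1:

```python
# $240K contract: $120K support (ratable) + $120K services (delivered in Q1-Q2)

# With VSOE: support recognized ratably, services recognized as delivered
with_vsoe = [30 + 60, 30 + 60, 30 + 0, 30 + 0]    # quarterly revenue: [90, 90, 30, 30]

# Without VSOE: the entire $240K arrangement fee is recognized ratably over the year
without_vsoe = [240 / 4] * 4                      # quarterly revenue: [60, 60, 60, 60]

# Same total, different timing: services revenue is pushed into later quarters
# while services costs are typically taken up-front, depressing services margins
assert sum(with_vsoe) == sum(without_vsoe) == 240
print(with_vsoe, without_vsoe)
```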

[2] See here for generic disclaimers and please note that in the past I have served as an advisor to MongoDB

Don’t Be a Metrics Slave

I love metrics.  I live for metrics.  Every week and every quarter I drown my team in metrics reviews.  Why?  Because metrics are the instrumentation — the flight panel — of our business.   Good metrics provide clear insights.  They cut through politics, spin, and haze.  They spark amazing debates.   They help you understand your business and compare it to others.

I love metrics, but I’ll never be a slave to them.  Far too often in business I see people who are metrics slaves.  Instead of mastering metrics to optimize the business, the metrics become the master and the manager a slave.

I define metrics slavery as the case when managers stop thinking and work blindly towards achieving a metric regardless of whether they believe doing so leads to what they consider is best for the business.

One great thing about sports analytics is that despite an amazing slew of metrics, everyone remembers it’s the team with the most goals that wins, not the one who took the most shots.  In business, we often get that wrong in both subtle and not-so-subtle ways.

Here are metrics mistakes that often lead to metrics slavery.

  1. Dysfunctional compensation plans, where managers actively and openly work on what they believe are the wrong priorities in response to a compensation plan that drives them to do so. The more coin-operated the type of people in a department, the more carefully you must define incentives.  While strategic marketers might challenge a poorly aligned compensation plan, most salespeople will simply behave exactly as dictated by the compensation plan.  Be careful what you ask for, because you will often get it.
  2. Poor metric selection. Marketers who count leads instead of opportunities are counting shots instead of goals.  I can’t stand to see tradeshow teams giving away valuable items so they can run the card of every passing attendee.  They might feel great about getting 500 leads by the end of the day, but if 200 are people who will never buy, then they are not only useless but actually have negative value because the company’s nurture machine is going to invest fruitless effort in converting them.
  3. Lack of leading indicators. Most managers are more comfortable with solid lagging indicators than they are with squishier leading indicators.  For example, you might argue that leads are a great leading indicator of sales, and you’d be right to the extent that they are good leads.  This then requires you to define “good,” which is typically done using some ABC-style scoring system.  But because the scoring system is complex, subjective, and requires iteration and regression to define, some managers find the whole thing too squishy and say “let’s just count leads.” That’s the equivalent of counting shots, including shots off-goal that never could have scored.  While leading indicators require a great deal of thought to get right, you must include them in your key metrics, lest you create a company of backwards-looking managers.
  4. Poorly-defined metrics. The plus/minus metric in hockey is one of my favorite sports metrics because it measures teamwork, something I’d argue is pretty hard to measure [1].  However, there is a known problem with the plus/minus rating:  it includes time spent on power plays [2] and penalty kills [3].  Among other problems, this unfairly penalizes defenders on the penalty-killing unit, diluting the value of the metric.  Yet, as far as I know, no one has fixed this problem.   So while it’s tracked, people don’t take it too seriously because of its known limitations.  Do you have metrics like this at your company?  If so, fix them.
  5. Self-fulfilling metrics. These are potential leading metrics where management loses sight of the point and accidentally makes their value a self-fulfilling prophecy.  Pipeline coverage (value of oppties in the pipeline / plan) is such a metric.  Long ago, it was a good leading indicator of plan attainment, but over the past decade literally every sales organization I know has institutionalized beating salespeople unless they have 3x coverage.  What’s happened?  Today, everyone has 3x coverage. It just doesn’t mean anything anymore.  See this post for a long rant on this topic.
  6. Ill-defined metrics, which happen a lot in benchmarking, where we try to compare, for example, our churn rate to an industry average. If you are going to make such comparisons, you must begin with clear definitions or else you are simply counting angels on pinheads.   See this post where I give an example where, off the same data, I can calculate a renewals rate of 69%, 80%, 100%, 103%, 120%, 208%, or 310%, depending on how you choose to calculate.  If you want to do a meaningful benchmark, you had better be comparing the 80% to the 80%, not the 208%.
  7. Blind benchmarking. The strategic mistake that managers make in benchmarking is that they try to converge blindly to the industry average.  This reminds me of the Vonnegut short story where ballerinas have to wear sash-weights and the intelligentsia have music blasted into their ears in order to make everyone equal.  Benchmarks should be tools of understanding, not instruments of oppression.   In addition, remember that benchmarks definitionally blend industry participants with different strategies.  One company may invest heavily in R&D as part of a product-leadership strategy.  Another may invest heavily in S&M as part of a market-share leadership strategy.  A third may invest heavily in supply chain optimization as part of a cost-leadership strategy.  Aspiring to the average of these companies is a recipe for failure, not success, as you will end up in a strategic No Man’s Land.  In my opinion, this is the most dangerous form of metrics slavery because it happens at the boardroom level, and often with little debate.
  8. Conflicting metrics. Let’s take a concrete example here.  Imagine you are running a SaaS business that’s in a turnaround.  This year bookings growth was flat.  Next year you want to grow bookings 100%.  In addition, you want to converge your P&L over time to an industry average of S&M expenses at 50% of revenues, whereas today you are running at 90%.  While that may sound reasonable, it’s actually a mathematical impossibility.   Why?  Because the company is changing trajectories, and in a SaaS business revenues lag bookings by a year.   So next year revenue will be growing slowly [4], which means you need to grow S&M even more slowly if you want to meet the P&L convergence goal.  But if you want to meet the 100% bookings growth goal, even with improving efficiency, you’ll need to increase S&M cost by say 70%.  It’s impossible.  #QED.  There will always be a tendency to split the difference in such scenarios, but that is a mistake.  The question is:  which is the better metric to anchor on?   The answer, in a SaaS business, is bookings.  Ergo, the correct answer is not to split the difference (which will put the bookings goal at risk) but to recognize that bookings is the better metric and anchor S&M expense to bookings growth.  This requires a deep understanding of the metrics you use and the courage to confront two conflicting rules of conventional wisdom in so doing.
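The impossibility in the last item is easy to see with numbers. Every figure below is hypothetical, chosen only to match the shape of the scenario (flat year, 100% bookings growth goal, S&M at 90% of revenue converging toward 50%):

```python
# Hypothetical $M figures illustrating the conflicting-metrics scenario
bookings_y1 = 100.0
bookings_y2 = 200.0                 # the 100% bookings-growth goal
revenue_y2 = 110.0                  # revenue lags bookings, so it grows slowly next year

sm_y1 = 90.0                        # S&M currently ~90% of revenue
sm_needed_y2 = sm_y1 * 1.7          # ~70% more S&M to double bookings, even with better efficiency
sm_allowed_y2 = 0.5 * revenue_y2    # the P&L convergence goal: S&M at 50% of revenue

# The S&M the bookings goal requires vastly exceeds what the P&L goal allows
print(sm_needed_y2, sm_allowed_y2)
```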

In the end, metrics slavery, while all too common, is more about the people than the metrics.  Managers need to be challenged to understand metrics.  Managers need to be empowered to define new and better metrics.  Managers must be told to use their brains at all times and never do something simply to move a metric.

If you’re always thinking critically, you’ll never be a metrics slave.  The day you stop, you’ll become one.

# # #

[1] The way it works is simple:  if you’re on the ice when your team scores, you get +1.  If you’re on the ice when the opponent scores you get -1.  When you look at someone’s plus/minus rating over time, you can see, for example, which forwards hustle back on defense and which don’t.

[2] When, thanks to an opponent’s penalty, you have more players on the ice than they do.

[3] When, thanks to your team’s penalty, your opponent has more players on the ice than you do.

[4] Because bookings grew slowly this year

Churn:  Net-First or Sum-First?

While I’ve already done a comprehensive post on the subject of churn in SaaS companies and some perils in how companies analyze it, in talking with fellow SaaS metrics lovers of late, I’ve discovered a new problem that isn’t addressed by my posts.

The question?   When calculating churn, should you sum first (adding up all the shrinkage ARR) or net first (netting shrinkage against expansion ARR by customer, and then summing)?  It seems like a simple question but, like so many subtleties in SaaS metrics, whether you net-first or sum-first, and how you report in so doing, can make a big difference in how you see the business through the numbers.

Let’s see an example.

net1

So what’s our churn rate:  a healthy -1% or a scary 15%?  The answer is both.  In my other post, I define about five churn rates, and when you sum first you get my “net ARR churn” rate [1], which comes in at a rather disturbing 15%.  When, however, you net first, you end up with a healthy -1% (“gross ARR churn”) rate because expansion ARR has more than offset shrinkage.  At my company we track both rates because each tells you a different story.

Thanks to the wonders of math, both the net-first and sum-first calculations take you to the same ending ARR number.  That’s not the problem.

The problem is that many companies report churn not in a format like my table above, but in something simpler that looks like this below [2].

net2

As you can see, this net-first format doesn’t show expansion and shrinkage by customer.  I think this is dangerous because it can obscure real problems when shrinkage ARR is offset, or more than offset, by expansion ARR.

For example, customer 2 looks great in the second chart (“wow, $20K in negative churn!”).  In the first chart, however, you can see that customer 2 dropped 4 seats of product A and more than offset that by buying 8 seats of product B.  In fact, in the first chart you can see that everyone is dropping product A and buying product B, a pattern hidden in the second chart, which neither breaks out shrinkage from expansion nor provides a comment on what’s going on.  My advice is simple:  do sum-first churn and report both the “net ARR” and “gross ARR” renewal rates and you’ll get the whole picture.
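The difference between the two calculations is easy to sketch in code.  The per-customer figures below are hypothetical (the actual table above is an image), but they are chosen to reproduce the 15% sum-first and -1% net-first rates:

```python
# Sum-first vs. net-first churn on hypothetical per-customer ARR figures.
customers = {
    # name: (starting_arr, shrinkage_arr, expansion_arr)
    "cust1": (40_000, 5_000, 0),
    "cust2": (30_000, 4_000, 10_000),
    "cust3": (30_000, 6_000, 6_000),
}

starting_arr = sum(start for start, _, _ in customers.values())

# Sum-first: add up all shrinkage ARR before netting anything
# (the "net ARR churn" rate, in this post's terminology).
sum_first_churn = sum(shrink for _, shrink, _ in customers.values()) / starting_arr

# Net-first: net shrinkage against expansion per customer, then sum
# (the "gross ARR churn" rate).
net_first_churn = sum(shrink - expand
                      for _, shrink, expand in customers.values()) / starting_arr

print(f"{sum_first_churn:.0%}")  # 15% -- scary
print(f"{net_first_churn:.0%}")  # -1% -- healthy-looking; expansion masks the shrinkage
```

Both calculations reconcile to the same ending ARR; the point is that only the sum-first number surfaces the shrinkage.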

Aside 1:  The Reclaimed ARR Issue
This debate prompted a second one with my Customers For Life (CFL) team who wanted to introduce a new metric called “reclaimed ARR,” the ARR that would have been lost on renewal but was saved by CFL through cross-sells, up-sells, and price increases.  Thus far, I’m not in love with the concept as it adds complexity, but I understand why they like it and you can see how I’d calculate it below.

net3

Aside 2:  Saved ARR
The first aside was prompted by the fact that CFL/renewals teams primarily play defense, not offense.  Like goalies on a hockey team, they get measured by a negative metric (i.e., the churn ARR that got away).   Even when they deliver offsetting expansion ARR, there is still some ARR that gets away, and a lot of their work (in the customer support and customer success parts of CFL) is not about offsetting upsell; it’s about protecting the core of the renewal.  For that reason, so as to reflect that important work in our metrics, we’ve taken a lesson from baseball and the notion of a “save.”  Once the renewals come in, we add up all the ARR that came from customers who were, at any point in time since their last renewal, in our escalated accounts program and call that Saved ARR.    It’s the best metric we’ve found thus far to reflect that important work.

# # #

[1] I have backed into the rather unfortunate position of using the word “net” in two different ways.  When I say “net ARR churn” I mean churn ARR net of (i.e., exclusive of) expansion ARR.  When I say net-first churn, I mean netting out shrinkage vs. expansion first, before summing across customers to get total churn.

[2] Note that I properly inverted the sign because negative churn is good and positive churn is bad.

Make a Plan That You Can Beat

Seven words that changed the world:  “make a plan that you can beat.”

This pithy piece of wisdom was first passed on to me by the sage of Sequoia Capital, Mike Moritz, on the first day of my six-year journey at MarkLogic, during which time we grew the company from effectively zero to an $80M run-rate.  Thanks to Mike’s advice, we made plan in about 90% of those 24 quarters.

What’s so important about making a plan that you can beat?

  • For starters, it helps keep you employed. Few CEOs get axed when they are making plan.  (It can be done, but takes real skill at board alienation.)
  • It forces you to make a balanced plan: sufficiently realistic and sufficiently aggressive.  (“Can beat” means neither “will certainly beat” nor “can achieve if a miracle occurs.”)
  • It means you can predictably manage your cash – the oxygen of any startup. As another quotable Sequoia partner, Don Valentine, used to say:  “all companies go out of business for the same reason; they run out of money.”
  • It forces you to debate important issues up front. To the extent the board wants 80% growth  in 2015 and you believe that you can only deliver 30%, it is far better to have that uncomfortable conversation during the planning process in November 2014 (while you are still achieving this year’s plan) than in July 2015, after you’ve missed Q1 and Q2.   (In July, the uncomfortable conversation is more likely to be about your severance package than the aggressiveness of the approved plan.)
  • It says that you are in control of your business. Whether or not the board loves the plan they eventually approve, the first step in running any business is to be in control of it.  That means being able to predict with reasonable accuracy the results you can achieve.
  • It reduces the tendency to sign up for too much bookings/revenue to “get” more expense. Often managers somewhat arbitrarily decide what expenses they need to be successful, anchor emotionally to that number, and then get “talked up” on the bookings/revenue side in order to hit a given cash flow or EBITDA goal.  This is exactly backwards.  You should put a huge amount of energy into your bookings/revenue plan and work from that to set expense targets.  If you can’t find a workable solution, then argue you have the wrong EBITDA or cashflow goal.  Don’t get talked up on revenue because it’s unpleasant to ask your passionate and anchor-biased managers to cut expenses.
  • It is philosophically aligned with most executive compensation plans. Most boards like gated compensation plans where, for example, executives get 0% of their target bonus below 80% of plan performance (the “gate”), a payout of 50% of target at 80% of plan, rising linearly to a 100% payout at 100% of plan, with accelerators beyond that.  These plans reward above-plan performance and severely punish below-plan performance.  As such, any executive who looks at his/her compensation plan should understand the not-so-subtle message it sends:  beat plan [1] (which is, of course, most easily achieved by making a plan that you can beat).
  • You can always speed up later. If you’re ahead of plan after Q1 and your leading indicators look solid for Q2, every board on Earth will approve a revision to the plan that accelerates growth.   Think of your plan growth rate not as what you aspire to achieve, but rather as what you are willing to be fired for not achieving.  It takes real skill to grow a company at 100% and get fired for missing plan, but I’ve seen that done, too [2].
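As a sketch, here is what such a gated plan looks like as a payout function.  I’ve assumed one reading of the numbers: a gate at 80% of plan paying 50% of target, linear to a 100% payout at plan; the 2x accelerator slope above plan is also my assumption:

```python
def bonus_payout(attainment: float) -> float:
    """Bonus as a fraction of target, given plan attainment (1.0 = 100% of plan)."""
    gate = 0.80
    if attainment < gate:
        return 0.0                                    # below the gate: nothing
    if attainment <= 1.0:
        # linear from a 50% payout at the gate to a 100% payout at plan
        return 0.5 + 0.5 * (attainment - gate) / (1.0 - gate)
    return 1.0 + 2.0 * (attainment - 1.0)             # assumed 2x accelerator above plan

print(bonus_payout(0.79))            # 0.0  -- just miss the gate, get nothing
print(bonus_payout(0.90))            # 0.75 -- halfway between gate and plan
print(round(bonus_payout(1.10), 2))  # 1.2  -- beating plan pays disproportionately
```

The cliff at the gate and the accelerator above plan are exactly what make beating plan, rather than stretching for it, the rational play.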

Some of you may be thinking:  isn’t this all a fancy way of saying “sandbag” [3]?  I think not.  Even if you reject every other argument above, you cannot deny that cash is oxygen to startups, that startups running on fumes get crushed by dilution when they need to raise money, and thus that making a plan that you can beat is critical to managing cash and, indirectly, to the eventual value of the company’s common stock.

Make a plan that you can beat.  Seven words to understand.  Seven words to internalize.  Seven words to live by.

 # # #

Footnotes

[1] Whether boards should like this style of compensation plan is debatable because such plans arguably do not incent risk-taking.  That debate aside, the fact remains that most boards do like this style of plan, so managers should listen to the message that is very clearly sent.

[2] The real way to know if 100% is good enough should be to look at the market.  If you’re gaining share when growing 100% but missing plan of 120% then in my book you are planning poorly, but executing well.  However, if you are losing share when growing at 100%, you are in a hot market but not executing aggressively enough to win it.  Performance measures should always be normalized to the market, otherwise target-setting and plan-performance ratings are more about negotiating skills than actual performance in the market.  (I’ve seen this one done wrong many times, too.)

[3] Aside:  I believe there are two different types of sandbagging:  (1) consistently under-forecasting – i.e., landing at a result significantly higher than you forecast early in the quarter, and (2) consistently overachieving plan – i.e., landing well above operating plan targets.  Type 1 is bad because it leads either to needlessly cutting quarterly expenses in response to a weak early-quarter forecast (if you believe it) or to simply ignoring the forecast (if you don’t) – in which case what good is it?  Type 2 means either the company is performing tremendously or they are too good at negotiating targets.  Looking at whether you’re gaining or losing market share (or grabbing a greenfield opportunity fast enough) will tell you which.

SAP Rumored to Launch Hana-Based Cloud Planning Solution Next Week

dwtweet1

Well, it’s pretty clear that SAP will be announcing something in Chicago on Monday, judging by the above re-tweet from SAP head of EPM product management David Williams, and by the content in the SAP Conference for EPM program, itself:

SAP Keynote and Panel Discussion: Next-Generation EPM in the Cloud

In fact, SAP’s vision is to be “The Cloud Company, powered by SAP HANA.” What does this mean for SAP? What does it mean for SAP’s EPM portfolio? And what does it mean for you as a customer? In this interactive keynote, SAP executives, partners, and customers will dive into how the cloud and software as a service (SaaS) are impacting finance transformation and how the next generation of EPM in the cloud has arrived!

If it looks like a product launch, walks like a product launch, and quacks like a product launch, then it’s probably a product launch.  Judging by the description, SAP will be launching a new, cloud-based, Hana-based EPM solution.  We’ve heard elsewhere that it will be focused specifically on planning.

A Brief Rant on Hana
Before diving more into EPM, let me comment a bit on SAP’s Hana strategy, which I find quite confusing.  In my estimation, SAP has two problems with Hana.

  • An unhealthy obsession with the database market, born of their historical dependence on Oracle.  Instead of letting time make databases irrelevant (as cloud services do), SAP chose to enter the database fray, both through the acquisition of Sybase and the development of Hana.  In my mind, SAP would have been better off simply letting databases commoditize (and putting their apps on Postgres or a NoSQL system).  Instead, they did the exact opposite, and they market Hana heavily with an ingredient-branding strategy.  The message isn’t “our apps are better.”  It’s “our apps are better because they’re on Hana,” which is both only partially true today and requires the logical leap of faith that being Hana-based would invariably make one app better than another.  (My cake is better because I use cane sugar.  Maybe.)
  • An illogical desire to conflate Hana and cloud.  The two concepts are orthogonal.  Hana is a column-oriented, in-memory relational database system.  Cloud computing is a delivery (and business) model for software.  You can build cloud services on whatever kind of database you want — the world’s most successful cloud company, Salesforce.com, builds atop Oracle.   A key idea of cloud computing is that infrastructure becomes irrelevant.  Just as you don’t know where Salesforce’s data centers are, what brand of servers they run, and what operating system runs on those servers, you don’t need to know what database system they run.  That’s the point.  So to conflate cloud and Hana is illogical and confusing.  SAP badly wants Hana to mean “cloud” and, if they keep pushing, it eventually will, but then it won’t mean column-oriented, in-memory database.  Because the concepts are so different, Hana can mean one or the other, but it can’t simultaneously mean both.  (Heinz used to mean pickles.  Now it means ketchup.)

While I am on the record saying that SAP’s Hana strategy will “work,” I believe that is not because it is a good strategy, but rather because it is a strategy to which they are highly committed.  And with enough financial might,  you can drive almost any marketing message home (e.g., “with a name like Smuckers, it has to be good.”)

Back to EPM
Nevertheless, the new EPM solution will invariably be Hana-based, and a good deal of the launch presentation will therefore describe the supposed benefits it inherits from so being.  But is being Hana-based a good thing for an EPM system?

Hana is an in-memory relational database.  Because multi-dimensional analysis is so critical in EPM, virtually every commercial EPM system in the market runs atop a (typically in-memory) multi-dimensional database.  (In fact, in-memory multidimensional databases predate in-memory relational databases by a decade or two, dating all the way back to TM1.)

So, SAP’s new EPM solution will not only be the world’s only EPM solution atop Hana, it will be the world’s only EPM solution atop a column-oriented relational database.  Other than BPC, of course, where Hana is positioned as an accelerator.  But if being Hana-based made an EPM system both great and cloud, then why do we need the new offering?  Because Hana has nothing to do with cloud.  (See the rant above.)

Net/net:  While SAP will invariably position multi-dimensional databases as evil (“rogue cubes,” as Sanjay Poonen called them when announcing BPC on Hana), I’m not convinced that running an EPM system on Hana is a great idea, as opposed to running on a multi-dimensional database. (But SAP doesn’t have one of those).  Performance and scalability will be the test here and time will tell.

Keep It Simple
The other big message coming out of SAP of late is “simple.”  (So the company that stands for applications and complexity talks most about databases and simplicity.)   Nevertheless, the simplicity message may provide a clue about what’s coming, so let’s take a look at SAP’s 21-page statement of direction on simplification of their business intelligence solutions.

SIMPLIFYING THE END-USER EXPERIENCE
As business intelligence (BI) solutions from SAP have evolved over the years to address new customer needs, introduce new innovations, and take advantage of emerging technologies, the portfolio has grown to encompass a large number of client tools. While these tools provide best-of-breed experiences in specific use cases and together provide a comprehensive BI suite, it can be difficult and confusing today to choose which tool to use.

From this you might conclude product convergence on the EPM side, but the introduction of a new cloud-based solution actually increases product line complexity as SAP will need to differentiate, among other things, when to use BPC, when to use BPC on Hana, and when to use the new offering.  This is common in large vendors with complex product lines and is sometimes called “having to sell against yourself.”

The only EPM-specific information I could find in the 21-page statement of direction on simplification was this:

Rounding out these two central tools, we plan to offer a single interface for Microsoft Office integration, based around the edition for Microsoft Office of SAP BusinessObjects Analysis software. Our intention is to address use cases covered today with SAP BusinessObjects Analysis, SAP solutions for enterprise performance management (EPM), add-in for Microsoft Excel, and SAP BusinessObjects Live Office software through a single add-on that is planned to provide access to any data, analysis, and planning.  We also anticipate embedding live visualizations and dashboards created by SAP Lumira and SAP BusinessObjects Design Studio within Microsoft Office documents.

This means a single new Office interface is coming that includes EPM and goes beyond it.  Complex in the sense that it will be yet another, new interface, but simple in that it should replace several different interfaces over time.  I’d guess this is all about BPC and not the new cloud offering, but I can’t be sure at this time.

Deja Vu All Over Again
To be a little snarky, I feel compelled to remind readers that SAP already announced a cloud performance management solution, SAP EPM on Demand, a little more than two years ago:

sap epm 1

And which they subsequently shut down:

sap epm 2

So What Does This All Mean?
We’ll obviously know more after the announcement, but I drew several conclusions from recent history and anticipated moves:

  • Cloud computing is continuing to transform enterprise IT, and the megavendors increasingly realize it.  They are squarely out of the “denial” phase of the market transformation and are working actively, through both M&A and in-house development, to offer cloud services.
  • The Innovator’s Dilemma is a very, very hard problem for businesses to manage when dealing with disruptive change.  Not only do megavendor incumbents have legacy products that add complexity and risk cannibalization, they have legacy business practices as well.  I’ll be very interested to see SAP’s pricing and hope that they’re not copying Oracle’s strategy of “any color you want as long as it’s the same number as on-premises” (to paraphrase Henry Ford).
  • SAP believes that finance is increasingly ready to go cloud.  Because finance has generally been slow to “go cloud,” I view this as yet another sign of increasing cloud-readiness in finance.
  • While SAP remains highly committed to Hana, they are “out on their own” in running an EPM system on a column-oriented, as opposed to multi-dimensional, database system.
  • SAP’s new cloud EPM offering will definitionally be a v1 product and likely take a few years to reach maturity and functional completeness.

Why I’m Against Succession Planning at Startups

I have to admit I’m not a fan of succession planning in general, at startups in particular, and especially when the successee is involved in the process. Why? Because the process quickly ends up presumptuous and political.

In my experience, the successee is more concerned with being a “good guy” on the way out than with what’s best for the business. Consider the retiring CFO of a $500M company. Eighteen months before he wants to retire, he starts succession planning, picks his favorite division-level finance chief, anoints her the chosen one, and starts the grooming process (“one day all this will be yours”). The chosen one starts showing up at meetings to which she’s not usually invited and demonstrates some new swagger with peers.

The CFO eventually retires and the CEO and board replace him not with the chosen one, but with an experienced CFO coming from a $2B company. Feelings are hurt, strong performers are demotivated, and hubbub is generated — all for nothing. The chosen one didn’t even make the first cut of requirements in the job spec. The retiring CFO didn’t (and shouldn’t) get a vote.

The thing to remember with startups (and high-growth companies in general) is that you don’t want to hire the person you need now; you want to hire the person you need three years from now. And the odds that the person you need three years from now is working for the current boss today are pretty low. Put differently (and most certainly when going outside for a hire), the job should grow into the person; the person shouldn’t grow into the job.

The default succession plan for almost any startup executive – including the CEO – is therefore to go hire someone from outside who’s overqualified for the current job. If you wonder why someone overqualified would take the job … well, that’s why the Gods created stock options.

Before you think I’m an anti-career-development cretin, this is not to say that companies should always go outside to backfill key roles. Sometimes people are able to grow within fast-growing organizations. I myself did this as I rose from technical support engineer to director of product marketing over 7 years at a company that grew from $30M to $240M along the way. So I’m all in favor of it; it just doesn’t happen very often. And more often than not, managers who consistently only want to promote from within are actually saying they’re afraid to go outside and find strong direct reports who will challenge them. Remember, I’m talking about patterns and rules here; there will always be exceptions.

The reality is that in high-growth startups, just “holding on” to your current management or executive job is both hard enough and a big growth opportunity. Running product management, sales, or HR at $10M is quite different from running it at $300M. During my tenure at Business Objects, as we grew from $30M to over $1B in revenues, only one other team member and I “held on” during that growth. Out of the 15-20 people that made up the broadly defined leadership team, every other person got replaced, sometimes two or three times, along the way.

That’s why I think succession planning – making plans for how to replace Jane when Jane is healthy, happy, and doing a great job for the company – is a waste of time. Let’s keep Jane focused on growing the business, which is hard enough. If she gets hit by the proverbial bus, well, let’s just deal with that when it happens. We pretty much know what we’re going to do anyway (i.e., call a recruiter).

The best argument against my viewpoint is the case we’ll call Marty. Let’s say Marty would be a great candidate for the CFO job. He’s a great controller, has great leadership skills, and strong business sense — but hasn’t spent much time in FP&A. After Jane gets hit by the bus, we might think “darn, Marty would have been great if we’d moved him into FP&A last year to develop him.”

My two-part response to this is:

  • Yes, sometimes it makes sense, and if Marty’s got his act together he’ll be pushing for the FP&A job if it opens up along the way, thereby developing himself and positioning himself for any eventual CFO opportunity. Since there is always risk associated with any outside hire, Marty should pitch that the risks associated with him learning the job are less than those associated with bringing a new person into the organization.
  •  The decision whether to give Marty the job will come down to how fast the company’s growing and whether the company is better off with a talented-but-rookie FP&A head, an internally promoted FP&A manager, or a veteran outsider. Yes, we want to help develop Marty, but if the company’s growing super-fast, then just “hanging on” should provide plenty of development and financial benefit (i.e., stock option appreciation) for him along the way.

Some would note that if we turn down Marty for the FP&A job, he may quit because he feels he has no opportunity for career growth. I understand; I quit a job myself once for that very reason. But I did so in an environment where company growth had stalled and I wasn’t going to get either financial reward or career development for sticking around. If the company is growing fast, then Marty will get both. If it’s not, most of the principles I describe here don’t apply because this post is about succession planning at startups and high-growth companies.

In fact, succession planning makes a lot of sense at low-growth companies, where the organization is static and people move through it. If you want to retain your people over time, you had better think about those career paths, and rotate your Martys through FP&A to keep them having fun and learning. And, in those environments, the best person to take over for the retiring CFO might well be one of his/her direct reports (and dangling that opportunity might well help retain a few of them along the way).

The real problem is when big company types come to a high-growth company and say “let’s do succession planning (because we did it at my last company and it’s just something that one does)” – and nobody asks why.

Most of the time, in a high-growth startup, it won’t make sense. Or, if you make a succession plan, it will simply be 1-800-HEIDRICK, 1-800-DAVERSA, or 1-800-SCHWEICHLER.

Average Contract Duration and SaaS Renewals: All Is Not As It Appears

Chatting with some SaaS buddies the other day, we ran into a fun — and fairly subtle — SaaS metrics question.  It went something like this:

VP of Customer Success:  “Our average contract duration (ACD) on renewals was 1.5 years last quarter and –”

VP of Sales:  “– Wait a minute, our ACD on new business is 2.0 years.  If customers are renewing for shorter terms than those of the initial sale, it  means they are less confident about future usage at renewals time than they are at the initial purchase. Holy Moly, that means we have a major problem with the product or with our customer success program.”

Or do we?  At first blush, the argument makes perfect sense.  If new customers sign two-year contracts and renewing ones sign 1.5-year contracts, it would seem to indicate that renewing customers are indeed less bullish on future usage than new ones.  Having drawn that conclusion, you are instantly tempted to blame the product, the customer success team, technical support, or some other factor for the customers’ confidence reduction.

But is there a confidence reduction?  What does it actually mean when your renewals ACD is less than your new business ACD?

The short answer is no.  We’re seeing what I call the “why are there so many frequent flyers on airplanes” effect.  At first blush, you’d think that if ultra-frequent flyers (e.g., United 1K) represent the top 1% of flyers, then a 300-person flight might have three or four on board, while in reality it’s more like 20-30.  That’s the effect:  frequent flyers are over-represented on airplanes because they fly more, just as one-year contracts are over-represented in renewals because they renew more.

Let’s look at an example.  We have a company that signs one-year, two-year, and three-year deals.  Let’s assume customers renew for the same duration as their initial contract — so there is no actual confidence reduction in play.  Every deal is $100K in annual recurring revenue (ARR).  We’ll calculate ACD on an ARR-weighted basis.  Let’s assume zero churn.

If we sign five one-year, ten two-year, and fifteen three-year deals, we end up with $3M in new ARR and an ACD of 2.3 years.

renewals and acd

In year 1, only the one-year deals come up for renewal and, since we’ve assumed everyone renews for the same length as their initial term, we have a renewals ACD of one year.  The VP of Sales is probably panicking:  “OMG, customers have cut their ACD from 2.3 to 1.0 years!  Who’s to blame?  What’s gone wrong?!”

Nothing.  Only the one-year contracts had a shot at renewing and they all renewed for one year.

In year 2, both the (re-renewing) one-year and the (initially renewing) two-year contracts come up for renewal.  The ACD is 1.7, again lower than the 2.3-year new business ACD.  While, again, the decrease in ACD might lead you to suspect a problem, there is nothing wrong.  It’s just math and the fact that shorter-duration contracts renew more often, which pulls down the renewals ACD.
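The whole example takes a few lines to verify; this sketch just reproduces the numbers above (ARR-weighted ACD, zero churn, like-for-like renewal terms):

```python
def acd(deals):
    """ARR-weighted average contract duration for (count, years, arr_per_deal) tuples."""
    total_arr = sum(n * arr for n, _, arr in deals)
    return sum(n * years * arr for n, years, arr in deals) / total_arr

# 5 one-year, 10 two-year, and 15 three-year deals at $100K ARR each.
new_business = [(5, 1, 100_000), (10, 2, 100_000), (15, 3, 100_000)]
print(round(acd(new_business), 1))  # 2.3 -- new business ACD

# Year 1: only the one-year deals come up for renewal.
print(round(acd([(5, 1, 100_000)]), 1))  # 1.0

# Year 2: one-year deals re-renew, two-year deals renew for the first time.
print(round(acd([(5, 1, 100_000), (10, 2, 100_000)]), 1))  # 1.7
```

Nobody’s confidence changed between the 2.3 and the 1.7; the mix of contracts eligible to renew did.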

What To Do About This?
First, understand it.  As with many SaaS metrics, it’s counter-intuitive.

As I’ve mentioned before, SaaS metrics and unit economics are often misunderstood.  While I remain a huge fan of using them to run the business, I strongly recommend taking the time to develop a deep understanding of them.  In addition, the more I see counter-intuitive examples, the more I believe in building full three- to five-year financial models of SaaS businesses in order to correctly see the complex interplay among drivers.

For example, if a company does one-year, two-year, and three-year deals, a good financial model should have drivers for both new business contract duration (i.e., percent of 1Y, 2Y, and 3Y deals) and a renewals duration matrix that has renewal rates for all nine combinations of {1Y, 2Y, 3Y} x {1Y, 2Y, 3Y} deals (e.g., a 3Y-to-1Y renewal rate).  This will produce an overall renewals rate and an overall ACD for renewals.  (In a really good model, both the new business breakdown and the renewals matrix should vary by year.)
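Here is a minimal sketch of such a renewals matrix.  The renewal rates and expiring-ARR figures are hypothetical, purely to show how the overall renewal rate and renewals ACD fall out of the matrix:

```python
durations = [1, 2, 3]  # contract terms in years

# renewal_matrix[from_term][to_term]: fraction of expiring ARR on a from_term
# contract that renews onto a to_term contract (rows need not sum to 1 --
# the remainder is churn).  All rates here are hypothetical.
renewal_matrix = {
    1: {1: 0.70, 2: 0.10, 3: 0.05},
    2: {1: 0.15, 2: 0.65, 3: 0.05},
    3: {1: 0.10, 2: 0.10, 3: 0.70},
}

# Hypothetical ARR coming up for renewal this year, by contract term.
expiring_arr = {1: 500_000, 2: 1_000_000, 3: 1_500_000}

renewed_by_term = {to: sum(expiring_arr[frm] * renewal_matrix[frm][to]
                           for frm in durations)
                   for to in durations}
total_renewed = sum(renewed_by_term.values())

overall_renewal_rate = total_renewed / sum(expiring_arr.values())
renewals_acd = sum(term * arr for term, arr in renewed_by_term.items()) / total_renewed

print(round(overall_renewal_rate, 3))  # 0.875 -- overall ARR renewal rate
print(round(renewals_acd, 2))          # 2.18  -- ARR-weighted renewals ACD
```

With the matrix in hand, a plan-versus-actual conversation can pinpoint which cell moved (say, 3Y-to-1Y renewals spiking) instead of arguing about one blended ACD number.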

Armed with that model, built with assumptions based on both history and future goals for the new business breakdown and the renewals matrix, you can then have meaningful conversations about how ACD is varying on new and renewals business relative to plan.  Without that, by just looking at one number and not understanding how it’s produced, you run the very real risk of reacting to math effects that set off a false alarm on renewals.

Host Analytics Rocks the New Ovum Decision Matrix for EPM

Every leading industry analyst firm has their own 2×2 matrix — Gartner has the magic quadrant, Forrester has the wave, and Ovum has the decision matrix.

The intent of each of these graphical devices is the same:  to provide a simple picture that selects the top vendors in a category and positions them by (1) a rating on the quality of their strategy and (2) a rating on the quality of the execution of their strategy.

While the ratings are inherently subjective, each customer has his/her own unique requirements, and “your mileage may vary,” these matrices are useful tools in helping customers make IT supplier decisions.

To start with a brief word from our sponsor, I’m pleased to note that:

  • Host Analytics is the best-positioned cloud EPM vendor on Gartner’s magic quadrant for what they call CPM (corporate performance management).
  • Host Analytics is the only cloud vendor in the leaders segment on Forrester’s wave for what they call FPM (financial performance management).

While the temptation is to immediately examine small positioning deltas of the charted vendors (as I just did above), I’d note that one of the best uses of these diagrams is to instead look at who’s not there.  For example,

  • Anaplan is omitted from Gartner’s MQ, Forrester’s Wave, and Ovum’s DM.  I believe this is because they come to market with a value proposition more around platform than app, and that most analysts and customers define EPM as an applications market.  In plain English:  there is a difference between saying “you can build an X using our stuff” and “we have built an X and can sell you one.”
  • Tidemark is present on Forrester’s wave, but omitted from both the Gartner MQ and the Ovum DM.  I believe this is because of what I’d characterize as “strategic schizophrenia” on Tidemark’s part:  an initial message (back in the Proferi era) around EPM/GRC convergence, followed by an enterprise analytics message (e.g., infographics, visualization) with a strong dose of SoLoMo, which bowed to Sand Hill Road sexiness if not actual financial customer demand.  Lost in the shuffle for many years was EPM (and, along with it, much of their Workday partnership).

I’m pleased to announce that Host Analytics has once again received an excellent rating on one of these matrices, the Ovum Decision Matrix for EPM 2014-15.

dm14

  • The only cloud vendors on the matrix are Host Analytics and Adaptive Insights (fka, Adaptive Planning).
  • Host Analytics is shown edging out Adaptive Insights on overall technology assessment.
  • Adaptive Insights is shown edging out Host Analytics on execution, which is quite ironic given that Adaptive recently ousted its CEO, something, shall we say, that typically doesn’t happen when execution is going well.

Thoughts on Hiring:  Working for TBH

One of the most awkward situations in business is trying to recruit someone who will work for to-be-hired (TBH).   For example, say you’ve started a search for a director of product marketing, have a few great candidates in play, only to have your marketing VP suddenly quit the company to take care of a sick parent.   Boom, you’re in a working-for-TBH situation.

These are hard for many reasons:

  • Unknown boss effect. While your product marketing candidate may love the company, the market space, the would-be direct reports, and the rest of the marketing team, the fact is (as a good friend says) your boss is the company.  That is, 80% of your work experience is driven by your boss, and only 20% by the company.
  • Entourage effect. Your top product marketing candidate is probably worried that the new marketing VP has a favorite product marketing director, and that they’ve worked together through the past 10 years and 3 startups.  In which case, if there is an entourage effect in play, the candidate sees himself as having basically no chance of surviving it.
  • False veto effect. You may have tried to reassure product marketing candidates by telling them that they will “be part of the process” in recruiting the new boss, but the smart candidate knows that if everybody else says yes, the real odds of stopping the train are zero.

So who takes jobs working for TBH?  Someone who sees the net gain of taking the job as exceeding the risk imposed by the unknown boss, entourage, and false veto effects.

That net gain might be:

  • The rare chance to switch industries. Switching industries is hard, as most companies want to hire not only from within their industry (e.g., enterprise software) but ideally from within their category (e.g., BI).  For example, Adaptive Insights recently hired president and CRO Keith Nealon (announced via what is generally regarded as among the most bizarre press releases in recent history) despite an open CEO position and ongoing CEO search.   Nealon joined from Shoretel, a telecommunications company, and the move offered him the chance to switch (back) into enterprise SaaS and into the hot categories of BI and EPM.
  • The rare chance to get a cross-company promotion. Most companies promote from within, but when they go outside for talent, they want to hire veterans who have done the job before.  For example, when LinkedIn needed a new CEO they promoted Jeff Weiner from within.  When ServiceNow needed a new CEO and didn’t find anyone internally who fit the bill, they didn’t hire a first-timer; they hired Frank Slootman, who had been CEO at Data Domain for six years and led a spectacular exit to EMC.  By contrast, when Nealon joined Adaptive Insights, it offered him the chance to get promoted from the GM level to the CXO level, something not generally seen in a cross-company move, but likely enabled by the working-for-TBH situation.
  • The rare chance to get promoted into the TBH job. Sometimes this is explicitly pitched as a benefit to the person working for TBH.  While this rarely happens in reality, it’s always possible that the new hire does so well in the job – and it takes so long to hire TBH – that the person gets promoted up into the bigger job.  This is generally not a great sign for the company because it’s a straight-up admission that they viewed the working-for-TBH hire as not heavy enough for the TBH job, but eventually gave up because they were unable to attract someone in line with their original goals.

Who doesn’t take jobs working for TBH?  Veterans, who, by the way, are precisely the kind of people you want building your startup.  So, in general, I advise companies to avoid the working-for-TBH situation by stalling the next-level search and hiring the boss first.

Making the working-for-TBH hire is particularly difficult when the CEO slot is open for two reasons:

  • E-staff direct reports are among the most sophisticated hires you will make, so they will be keenly aware of the risks associated with the unknown-boss, entourage, and false-veto effects. Thus the “win” for them personally needs to offset some serious downside risk.  And since that win generally means giving them opportunities they might not otherwise have, it means an almost certain downgrading in the talent that you can attract for any given position.
  • New CEO hires fail a large percentage of the time, particularly when they are “rock star” hires. For every Frank Slootman who has lined up consecutive major wins, there are about a dozen one-hit wonders, suggesting that CEO success is often as much about circumstance as it is about talent.  You need look no further than Carly Fiorina at HP, or any of the last 5 or so CEOs of Yahoo, for some poignant examples.  Enduring a failed new-hire CEO is painful for everyone — the company, the board — but no group feels the pain more than the e-staff.  Frequently, they are terminated due to the entourage effect, but even if they survive, their “prize” for doing so is to pull the slot-machine arm one more time and endure a second, new CEO.