Average Contract Duration and SaaS Renewals: All Is Not As It Appears

Chatting with some SaaS buddies the other day, we ran into a fun — and fairly subtle — SaaS metrics question.  It went something like this:

VP of Customer Success:  “Our average contract duration (ACD) on renewals was 1.5 years last quarter and –“

VP of Sales:  “– Wait a minute, our ACD on new business is 2.0 years.  If customers are renewing for shorter terms than those of the initial sale, it means they are less confident about future usage at renewal time than they were at the initial purchase. Holy Moly, that means we have a major problem with the product or with our customer success program.”

Or do we?  At first blush, the argument makes perfect sense.  If new customers sign two-year contracts and renewing ones sign 1.5-year contracts, it would seem to indicate that renewing customers are indeed less bullish on future usage than new ones.  Having drawn that conclusion, you are instantly tempted to blame the product, the customer success team, technical support, or some other factor for the customers’ confidence reduction.

But is there a confidence reduction?  What does it actually mean when your renewals ACD is less than your new business ACD?

The short answer is no.  We’re seeing what I call the “why are there so many frequent flyers on airplanes” effect.  At first blush, you’d think that if ultra-frequent flyers (e.g., United 1K) represent the top 1%, then a 300-person flight might have three or four on board, while in reality it’s more like 20-30.  But that’s just it — frequent flyers are over-represented on airplanes because they fly more, just like one-year contracts are over-represented in renewals because they renew more often.

Let’s look at an example.  We have a company that signs one-year, two-year, and three-year deals.  Let’s assume customers renew for the same duration as their initial contract — so there is no actual confidence reduction in play.  Every deal is $100K in annual recurring revenue (ARR).  We’ll calculate ACD on an ARR-weighted basis.  Let’s assume zero churn.

If we sign five one-year, ten two-year, and fifteen three-year deals, we end up with $3M in new ARR and an ACD of 2.3 years.

[Table: renewals and ACD by year]

In year 1, only the one-year deals come up for renewal and (since we’ve assumed everyone renews for the same length as their initial term) we have an ACD of one year.  The VP of Sales is probably panicking — “OMG, customers have cut their ACD from 2.3 to 1.0 years!  Who’s to blame?  What’s gone wrong?!”

Nothing.  Only the one-year contracts had a shot at renewing and they all renewed for one year.

In year 2, both the (re-renewing) one-year and the (initially renewing) two-year contracts come up for renewal.  The ACD is 1.7 — again lower than the 2.3-year new business ACD.  While, again, the decrease in ACD might lead you to suspect a problem, there is nothing wrong.  It’s just math and the fact that shorter-duration contracts renew more often, which pulls down the renewals ACD.
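The whole worked example takes only a few lines to reproduce — here is a minimal sketch in Python, using the cohort sizes from the example above (every deal at equal ARR, so ARR-weighting reduces to deal-weighting):

```python
# ARR-weighted average contract duration (ACD).
# Cohorts from the example: 5 one-year, 10 two-year, 15 three-year deals,
# each $100K ARR, zero churn, each renewing for its original duration.
new_deals = {1: 5, 2: 10, 3: 15}  # duration in years -> number of deals

def acd(deals):
    """Return the ARR-weighted ACD; with equal ARR per deal this is a
    simple deal-weighted average of durations."""
    total = sum(deals.values())
    return sum(duration * n for duration, n in deals.items()) / total

print(round(acd(new_deals), 1))      # new business ACD: 2.3
print(round(acd({1: 5}), 1))         # year-1 renewals (only 1Y deals expire): 1.0
print(round(acd({1: 5, 2: 10}), 1))  # year-2 renewals (1Y + 2Y deals expire): 1.7
```

Even with zero confidence reduction baked into the assumptions, the renewals ACD (1.0, then 1.7) sits below the new-business ACD (2.3) purely because short contracts come up for renewal more often.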

What To Do About This?
First, understand it.  As with many SaaS metrics, it’s counter-intuitive.

As I’ve mentioned before, SaaS metrics and unit economics are often misunderstood.  While I remain a huge fan of using them to run the business, I strongly recommend taking the time to develop a deep understanding of them.  In addition, the more I see counter-intuitive examples, the more I believe in building full three- to five-year financial models of SaaS businesses in order to correctly see the complex interplay among drivers.

For example, if a company does one-year, two-year, and three-year deals, a good financial model should have drivers for both new business contract duration (i.e., percent of 1Y, 2Y, and 3Y deals) and a renewals duration matrix that has renewals rates for all nine combinations of {1Y, 2Y, 3Y} x {1Y, 2Y, 3Y} deals (e.g., a 3Y-to-1Y renewal rate).  This will produce an overall renewals rate and an overall ACD for renewals.  (In a really good model, both the new business breakdown and the renewals matrix should vary by year.)
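The renewals piece of such a model can be sketched in a few lines of Python.  All the matrix rates and expiring-ARR figures below are purely illustrative assumptions, not benchmarks:

```python
# Toy renewals model: renewal_matrix[frm][to] is the fraction of expiring
# ARR of duration `frm` (years) that renews as duration `to`.
# All rates and ARR figures here are invented for illustration.
renewal_matrix = {
    1: {1: 0.70, 2: 0.10, 3: 0.05},  # e.g., 1Y deals renew as 1Y at 70%
    2: {1: 0.10, 2: 0.65, 3: 0.10},
    3: {1: 0.10, 2: 0.10, 3: 0.70},
}
expiring_arr = {1: 500_000, 2: 1_000_000, 3: 1_500_000}  # ARR up for renewal

# Distribute each expiring cohort across its nine possible outcomes.
renewed = {to: 0.0 for to in (1, 2, 3)}
for frm, arr in expiring_arr.items():
    for to, rate in renewal_matrix[frm].items():
        renewed[to] += arr * rate

# The two outputs the model should produce: an overall renewals rate
# and an ARR-weighted ACD for renewals.
overall_renewal_rate = sum(renewed.values()) / sum(expiring_arr.values())
renewals_acd = (sum(dur * arr for dur, arr in renewed.items())
                / sum(renewed.values()))
print(round(overall_renewal_rate, 3), round(renewals_acd, 2))
```

In a fuller model, both the expiring-ARR mix and the matrix itself would vary by year, as noted above.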

Armed with that model, built with assumptions based on both history and future goals for the new business breakdown and the renewals matrix, you can then have meaningful conversations about how ACD is varying on new and renewals business relative to plan.  Without that — just looking at one number without understanding how it’s produced — you run the very real risk of reacting to math effects and setting off a false alarm on renewals.

Host Analytics Rocks the New Ovum Decision Matrix for EPM

Every leading industry analyst firm has its own 2×2 matrix — Gartner has the magic quadrant, Forrester has the wave, and Ovum has the decision matrix.

The intent of each of these graphical devices is the same:  to provide a simple picture that selects the top vendors in a category and positions them by (1) a rating on the quality of their strategy and (2) a rating on the quality of the execution of their strategy.

While the ratings are inherently subjective, each customer has his/her own unique requirements, and “your mileage may vary,” these matrices are useful tools in helping customers make IT supplier decisions.

To start with a brief word from our sponsor, I’m pleased to note that:

  • Host Analytics is the best-positioned cloud EPM vendor on Gartner’s magic quadrant for what they call CPM (corporate performance management).
  • Host Analytics is the only cloud vendor in the leaders segment on Forrester’s wave for what they call FPM (financial performance management).

While the temptation is to immediately examine small positioning deltas of the charted vendors (as I just did above), I’d note that one of the best uses of these diagrams is to instead look at who’s not there.  For example:

  • Anaplan is omitted from Gartner’s MQ, Forrester’s Wave, and Ovum’s DM.  I believe this is because they come to market with a value proposition more around platform than app, and that most analysts and customers define EPM as an applications market.  In plain English:  there is a difference between saying “you can build an X using our stuff” and “we have built an X and can sell you one.”
  • Tidemark is present on Forrester’s wave, but omitted from both the Gartner MQ and the Ovum DM.  I believe this is because of what I’d characterize as “strategic schizophrenia” on Tidemark’s part, with an initial message (back in the Proferi era) around EPM/GRC convergence, followed by an enterprise analytics message (e.g., infographics, visualization) with a strong dose of SoLoMo, which bowed to Sand Hill Road sexiness if not actual financial customer demand.  Lost in the shuffle for many years was EPM (and along with it, much of their Workday partnership).

I’m pleased to announce that Host Analytics has once again received an excellent rating on one of these matrices, the Ovum Decision Matrix for EPM 2014-15.

[Figure: Ovum Decision Matrix for EPM 2014-15]

  • The only cloud vendors on the matrix are Host Analytics and Adaptive Insights (fka, Adaptive Planning).
  • Host Analytics is shown edging out Adaptive Insights on overall technology assessment.
  • Adaptive Insights is shown edging out Host Analytics on execution, which is quite ironic given that Adaptive recently ousted its CEO, something, shall we say, that typically doesn’t happen when execution is going well.

Thoughts on Hiring:  Working for TBH

One of the most awkward situations in business is trying to recruit someone who will work for to-be-hired (TBH).   For example, say you’ve started a search for a director of product marketing, have a few great candidates in play, only to have your marketing VP suddenly quit the company to take care of a sick parent.   Boom, you’re in a working-for-TBH situation.

These are hard for many reasons:

  • Unknown boss effect. While your product marketing candidate may love the company, the market space, the would-be direct reports, and the rest of the marketing team, the fact is (as a good friend says) your boss is the company.  That is, 80% of your work experience is driven by your boss, and only 20% by the company.
  • Entourage effect. Your top product marketing candidate is probably worried that the new marketing VP has a favorite product marketing director, one they’ve worked with through the past 10 years and 3 startups.  If such an entourage effect is in play, the candidate sees himself as having basically no chance of surviving it.
  • False veto effect. You may have tried to reassure product marketing candidates by telling them that they will “be part of the process” in recruiting the new boss, but the smart candidate will know that if everybody else says yes, then the real odds of stopping the train will be zero.

So who takes jobs working for TBH?  Someone who sees the net gain of taking the job as exceeding the risk imposed by the unknown boss, entourage, and false veto effects.

That net gain might be:

  • The rare chance to switch industries. Switching industries is hard as most companies want to hire not only from within their industry (e.g., enterprise software) but ideally from within their category (e.g., BI).  For example, Adaptive Insights recently hired president and CRO Keith Nealon (announced via what is generally regarded as among the most bizarre press releases in recent history) despite an open CEO position and ongoing CEO search.  Nealon joined from Shoretel, a telecommunications company, and the job offered him the chance to switch (back) into enterprise SaaS and into the hot categories of BI and EPM.
  • The rare chance to get a cross-company promotion. Most companies promote from within, but when they go outside for talent, they want to hire veterans who have done the job before.  For example, when LinkedIn needed a new CEO they promoted Jeff Weiner from within.  When ServiceNow needed a new CEO and didn’t find anyone internally who fit the bill, they didn’t hire a first-timer, they hired Frank Slootman, who had been CEO at Data Domain for six years and led a spectacular exit to EMC.  By contrast, when Nealon joined Adaptive Insights, it offered him the chance to get promoted from the GM level to the CXO level, something not generally seen in a cross-company move, but likely enabled by the working-for-TBH situation.
  • The rare chance to get promoted into the TBH job. Sometimes this is explicitly pitched as a benefit to the person working for TBH.  While this rarely happens, it’s always possible that the new hire does so well in the job – and it takes so long to hire TBH – that the person gets promoted up into the bigger job.  This is generally not a great sign for the company because it’s a straight-up admission that they viewed the working-for-TBH hire as not heavy enough for the TBH job, but eventually gave up because they were unable to attract someone in line with their original goals.

Who doesn’t take jobs working for TBH?  Veterans — who, by the way, are precisely the kind of people you want building your startup.  So, in general, I advise companies to avoid the working-for-TBH situation by stalling the next-level search and hiring the boss first.

Making the working-for-TBH hire is particularly difficult when the CEO slot is open for two reasons:

  • E-staff direct reports are among the most sophisticated hires you will make, so they will be keenly aware of the risks associated with the unknown-boss, entourage, and false-veto effects. Thus the “win” for them personally needs to offset some serious downside risk.  And since that win generally means giving them opportunities they might not otherwise have, it means an almost certain downgrading in the talent that you can attract for any given position.
  • New CEO hires fail a large percentage of the time, particularly when they are “rock star” hires. For every Frank Slootman who has lined up consecutive major wins, there are about a dozen one-hit wonders, suggesting that CEO success is often as much about circumstance as it is about talent.  You need look no farther than Carly Fiorina at HP, or any of the last 5 or so CEOs of Yahoo, for some poignant examples.  Enduring a failed new-hire CEO is painful for everyone — the company, the board — but no group feels the pain more than the e-staff.  Frequently, they are terminated due to the entourage effect, but even if they survive, their “prize” for doing so is to pull the slot-machine arm one more time and endure a second, new CEO.

A Missive to Human Resources (HR)

I built a very successful marketing career based on a three-word mission statement that I picked up from Chris Greendale, back when I was a product marketer at Ingres.  Chris always said that marketing exists to make sales easier.

I loved those three words.  I embraced that simple concept.  I made it my reductionist mission statement.  I taught it to every marketer I knew.  I loved it because it just made so much sense — if you built a startup from scratch you’d first hire a developer and then a salesperson.  Only after you had a bunch of salespeople would you then hire marketing, with the purpose of making the salespeople more effective.

Across my career, many people — ironically often from sales — challenged my “make sales easier” mission statement.  “It’s too tactical,” I’d hear.  Or, it “completely overlooks the strategic value of marketing.”  Not so, I’d counter.

  • Does picking a corporate strategy where we can win key market segments help make sales easier?  You bet it does.
  • Does designing better products for the target customer make sales easier?  You bet it does.

Simply put, while “make sales easier” might at first blush sound tactical in nature, the clever marketer can make sales easier in both tactical (e.g., lead generation) and very strategic ways.

Once, when I was thinking about human resources (HR), I wondered if I could come up with a similarly effective, reductionist mission statement.  I landed upon HR exists to help managers manage.

Like “make sales easier,” “help managers manage” often generates instant push-back.

  • Shouldn’t HR be focused on employee experience?  Yes, but don’t all employees work for managers, and if all our managers are doing a better job of managing, won’t our employees then have a better experience?
  • Doesn’t HR have an important legal and compliance role?  Yes, but don’t our managers want us in compliance?  I suppose it’s a bit like a police force motto of “to protect and serve,” but frankly I’d rather have the police define themselves as protecting and serving the community than any likely alternative.
  • Shouldn’t HR be focused on organizational development or talent management?  Yes and yes.  And helping managers to develop their teams and/or recruit talented new members is all part of helping managers manage.

Finally, this raises the question:  shouldn’t HR represent the employee point of view, the vox populi, to the company?  My answer is no.  If I want to know what the average employee thinks about the company, I can run a survey.  In fact, I’ll probably ask HR to run that survey.  But I do not view chatting, jawboning, or gossiping with employees as a core HR function.  Why?

  • Because when HR people enter the gossip chain, they are no longer observers of the story; they are now part of the story.  Their opinion is just one in a sea of opinions, and to assume that, simply by virtue of having HR printed on their business card, they can somehow be impartial aggregators of truth is not realistic.
  • Because I do not want to pay people to stir the pot.  Every company has issues, problems, and challenges.  If you allow HR to define themselves as employee advocates or the keeper of the public-voice flame, you are, in effect, asking them to go stir the pot.  I greatly prefer chartering HR with a “help managers manage” mission, which often translates to “help managers get stuff done,” and then, when people, conflict, cultural, or managerial issues come up, they come up not in a vacuum, but in the specific context of what’s blocking progress on key organizational goals.

As I told marketers, “the more time a salesperson has to spend with you, the less you should care about his/her opinion” because the best people want to be out selling, not chatting with marketing.  I’d argue the same logic holds true for employees in general with HR.  As CEO, I want people focused on getting stuff done.  I care enormously about “soft issues” when they impede the organization’s progress on key goals.  When framed, however, against one individual employee’s views about how a company theoretically should work, well, I care less.

One sometimes difficult concept for support staff to grasp is that help is defined in the mind of the recipient.  HR staff, particularly those who come with a legal/compliance bent, may think that rejecting a poorly done performance improvement plan (PIP) is helping the company.  From the manager’s perspective, that’s not help:  showing them an example of a good one would be.

Alternatively, telling someone 10 reasons why they can’t terminate someone in the short term isn’t help.  Sitting down with them, helping them understand the correct process, and helping them build a PIP would be.

The more you are a cop who says no, the less you’re helping.  The more you are asking managers what they are trying to accomplish, devil’s advocating their viewpoints, and then helping them accomplish what they want to do, the more you are helping.

In the end, help is a way of approaching things.  So, HR, stop thinking about what people can’t or shouldn’t do and start thinking about:

  • How can I help managers hire great employees?
  • How can I help managers understand how their employees are doing?
  • How can I help managers — often in a very applied way — execute the annual review process and decide on annual raises for their team?
  • How can I help managers manage-out employees who aren’t succeeding?
  • How can I help managers develop their talent so they can move up within the organization?
  • How can I help managers become better managers overall?  Better interviewers?  Better feedback givers?  Better prioritizers?  Better communicators?
  • How can I help managers live and communicate the core values?

Remember what’s sometimes called The Great Lie:  “we’re from corporate, we’re here to help.”  How can you change this so that “we’re from HR, we’re here to help” instead becomes The Great Truth?

The Ultimate SaaS Metric: The Customer Lifetime Value to Customer Acquisition Cost Ratio (LTV/CAC)

I’m a big fan of software-as-a-service (SaaS) metrics.  I’ve authored very deep posts on SaaS renewals rates and customer acquisition costs, and I routinely point readers to other great posts on the topic.

But in today’s post, I’m going to examine the question:  of the literally scores of SaaS metrics out there, if you could only pick one single metric, which one would it be?

Let’s consider some candidates:

  • Revenue is bad because it’s a lagging indicator in a SaaS business.
  • Bookings is good because it’s a leading indicator of both revenue and cash, but tells you nothing about the existing customer base.
  • ARR (annual recurring revenue) is good because it’s a leading indicator of revenue and includes the effects of both new sales and customer churn.  However, there are two ways to have slow ending ARR growth:  high sales and high churn or low sales and low churn — and they are very different.
  • Cashflow is good because it tends to net-out a lot of other effects, but can be misleading unless you understand the structure of a company’s bookings mix and payment terms.
  • Gross margin (GM) is nice because it gives you an indicator of how efficiently the service is run, but unfortunately tells you nothing else.
  • The churn rate is good because it helps you value the existing customer annuity, but tells you nothing about new sales.
  • Customer acquisition cost (CAC) is a great measure of sales and marketing efficiency, but by itself is not terribly meaningful because you don’t know what you’re buying:  are you paying, for example, $12K in sales and marketing (S&M) expense for a $1K/month customer who will renew for 3 months or 120?  There’s a big difference between the two.
  • Lifetime value (LTV) is a good measure of the annuity value of your customer base, but says nothing about new sales.

Before revealing my single best-choice metric, let me make what might be an unfashionable and counter-intuitive statement.  While I love SaaS “unit economics” as much as anybody, to me there is nothing better than a realistic, four-statement, three-year financial model that factors everything into the mix.  I say this not only because my company makes tools to create such models, but more importantly because unit economics can be misleading in a complicated world of varying contract duration (e.g., 1 to 3+ years), payment terms (e.g., quarterly, annual, prepaid, non-prepaid), long sales cycles (typical CAC calculations assume prior-quarter S&M drives current-quarter sales), and renewals which may differ from the original contract in both duration and terms.

Remember that SaaS unit economics were born in an era of monthly recurring revenue (MRR), so the more your business runs monthly, the better those metrics work — and conversely.  For example, consider two companies:

  • Company A does month-to-month contracts charging $100/month and has a CAC ratio of 1.0.
  • Company B does three-year prepaid deals and has a CAC ratio of 2.0.

If both companies have 80% subscription gross margins (GM), then the CAC payback period is 15 months for company A and 30 months for company B.  (CAC payback period is months of subscription gross margin to recover CAC.)

This implies company B is much riskier than company A because company B’s payback period is twice as long and company B’s money is at risk for a full 30 months until it recovers payback.

But that conclusion is completely wrong.  Because company B does prepaid deals, its actual cash payback period is not 30 months, but 1 day.  Despite ostensibly having half the CAC payback period, company A is far riskier because it has to wait 15 months to recover its S&M investment, and each month presents an opportunity for non-renewal.  (Or, as I like to say, “is exposed to the churn rate.”)  Thus, while company B will recoup its S&M investment (and then some) every time, company A will only recoup it some percentage of the time as a function of its monthly churn rate.
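The payback arithmetic above is simple enough to sketch in a couple of lines of Python (the inputs are the example’s: CAC ratios of 1.0 and 2.0, 80% subscription gross margin):

```python
# CAC payback period in months: the CAC ratio is dollars of S&M spent
# per dollar of ARR acquired, so payback = 12 * CAC ratio / gross margin.
def cac_payback_months(cac_ratio, subscription_gm):
    return 12 * cac_ratio / subscription_gm

print(round(cac_payback_months(1.0, 0.80), 1))  # company A: 15.0 months
print(round(cac_payback_months(2.0, 0.80), 1))  # company B: 30.0 months
```

The formula makes B look twice as risky as A — but, as argued above, B’s three-year prepaid deal collects its cash up front, so the gross-margin payback period and the cash-at-risk window are very different things.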

Now this is not to say that three-year prepaid deals are a panacea and that everyone should do them.  From the vendor perspective, they are good for year 1 cashflow, but bad in years 2 and 3.  From the customer perspective, three-year deals make plenty of sense for “high consideration” purchases (where once you have completed your evaluation, you are pretty sure of your selection), but make almost no sense in try-and-buy scenarios.  So the point is not “long live the three-year deal,” but instead “examine unit economics, but do so with an awareness of both their origins and limitations.”

This is why I think nothing tells the story better than a full four-statement, three-year financial model.  Now I’m sure there are plenty of badly-built over-optimistic models out there.  But don’t throw the baby out with the bathwater.   It is just not that hard to model:

  • The mix of the different types of deals your company does by duration and prepayment terms — and how that changes over time.
  • The existing renewals base and the matrix of deals of one duration that renew as another.
  • The cashflow ramifications of prepaid and non-prepaid multi-year contracts.
  • The impact on ARR and cashflow of churn rates and renewals bookings.
  • The impact of upsell to the existing customer base.

Now that I’ve disclaimed all that, let’s answer the central question posed by this post:  if you could know just one SaaS metric, which would it be?

The LTV/CAC ratio.

Why?  Because what you pay for something should be a function of what it’s worth.

Some people say, for example, that a CAC of 2.0 is bad.  Well, if you’re selling a month-to-month product where most customers discontinue by month 9, then a CAC of 2.0 is horrific.  However, if you’re selling sticky enterprise infrastructure, replacing systems that have been in place for a decade with applications that might well be in place for another decade, then a CAC of 2.0 is probably fine.  That’s the point:  there is no absolute right or wrong answer to what a company should be willing to pay for a customer.  What you are willing to pay for a customer should be a function of what they are worth.

The CAC ratio captures the cost of acquiring customers.  In plain English, the CAC ratio is the multiple you are willing to pay for $1 of annual recurring revenue (ARR).  With a CAC ratio of 1.5, you are paying $1.50 for $1 of ARR, implying an 18-month payback period on a revenue basis and 18 months divided by subscription GM on a gross-margin basis.

Lifetime value (LTV) attempts to calculate what a customer is worth and is typically calculated using gross margin (the profit from a customer after paying the cost of operating the service) as opposed to simply revenue.  LTV is calculated first by inverting the annual churn rate (to get the average customer lifetime in years) and then multiplying by subscription-GM.

For example, with a churn rate of 10%, subscription GM of 75%, and a CAC ratio of 1.5, the LTV/CAC ratio is (1/10%) * 0.75 / 1.5 = 5.0.

The general rule of thumb is that LTV/CAC should be 3.0 or higher and, of course, the higher the better.
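The calculation from the example above fits in a short Python sketch, working per $1 of ARR:

```python
# LTV/CAC per $1 of ARR:
#   average customer lifetime (years) = 1 / annual churn rate
#   LTV = lifetime * subscription gross margin (gross-margin dollars)
#   CAC = the CAC ratio (S&M dollars spent per $1 of ARR)
def ltv_to_cac(annual_churn, subscription_gm, cac_ratio):
    lifetime_years = 1 / annual_churn
    ltv = lifetime_years * subscription_gm
    return ltv / cac_ratio

print(round(ltv_to_cac(0.10, 0.75, 1.5), 1))  # 5.0, as in the example above
```

Plugging in the rule of thumb, a company at the 3.0 threshold with 75% GM and 10% churn could afford a CAC ratio of up to 2.5.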

There are three limitations I am aware of in working with LTV/CAC as a metric.

  • Churn rate.  Picking the right churn rate isn’t easy and is made complicated in the presence of a mix of single- and multi-year deals.  All in, I think simple churn is the best rate to use as it reflects the “auto-renewal” of multi-year deals as well as the very real negative churn generated by upsell.
  • Statistics and distributions.  I’m not a hardcore stats geek, but I secretly worry that many different distributions can produce an average of 10%, and thus inverting a 10% churn rate to produce an average 10-year customer lifetime scares me a bit.  It’s the standard way to do things, but I do worry late at night that averages can be misleading.
  • Light from a distant star.  Remember that today’s churn rate is a function of yesterday’s deals.  The more you change who you sell to and how, the less reflective yesterday’s churn is of tomorrow’s.  It’s like light arriving from a star that’s three light-years away:  what you see today happened three years ago.  To the extent that LTV is a forward-looking metric, beware that it’s based on churn, which is backward-looking.  In a perfect world, you’d use predicted churn in an LTV calculation, but since calculating that would be difficult and controversial, we take the next best thing:  past churn.  But remember that the future doesn’t always look like the past.


You Can’t Analyze Churn by Analyzing Churn

One thing that amazes me is when I hear people talk about how they analyze churn in a cloud, software as a service (SaaS), or other recurring revenue business.

You hear things like:

  • “17% of our churn comes from the emerging small business (ESB) segment, which is normal because small businesses are inherently unstable.”
  • “22% of our churn comes from companies in the $1B+ revenue range, indicating that we may have a problem meeting enterprise needs.”
  • “40% of the customers in the residential mortgage business churned, indicating there is something wrong with our product for that vertical.”

There are three fallacies at work here.

The first is assumed causes.  If you know that 17% of your churn comes from the ESB segment, you know one and only one thing:  that 17% of your churn comes from the ESB segment.  Asserting small-business instability as the cause is pure speculation.  Maybe they did go out of business or get bought.  Or maybe they didn’t like your product.  Or maybe they did like your product, but decided it was overkill for their needs.  If you want to know how much of your churn came from a given segment, ask a finance person.  If you want to know why a customer churned, ask them.  Companies with relatively small customer bases can do it by phone.  Companies with big bases can use an online survey.  It’s not hard.  Use metrics to figure out where your churn comes from.  Use surveys to figure out why.

The second is not looking at propensities and the broader customer base. If I said that 22% of your annual recurring revenue (ARR) comes from $1B+ companies, then you shouldn’t be surprised that 22% of your churn comes from them as well.  If I said that 50% of your ARR comes from $1B+ companies (and they were your core target market), then you’d be thrilled that only 22% of your churn comes from them.  The point isn’t how much of your churn comes from a given segment:  it’s how much of your churn comes from a given segment relative to how much of your overall business comes from that segment.  Put differently, what is the propensity to churn in one segment versus another?

And you can’t perform that analysis without getting a full data set — of both customers who did churn and customers who didn’t.  That’s why I say you can’t analyze churn by analyzing churn.  Too many people, when tasked with churn analysis, say:  “quick, get me a list of all the customers who churned in the past 6 months and we’ll look for patterns.”  At that instant you are doomed.  All you can do is decompose churn into buckets; you know nothing of propensities.

For example, if you noticed that in one country a stunning 99% of churn came from customers with blue eyes, you might be prompted to launch an immediate inquiry into how your product UI somehow fails for blue-eyed customers.  Unless, of course, the country was Estonia, where 99% of the population has blue eyes, and ergo 99% of your customers do.  Bucketing churn buys you nothing without knowing propensities.
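A minimal propensity calculation might look like this in Python — the segment names echo the examples above, but every ARR figure is invented for illustration:

```python
# Churn propensity by segment: the segment's share of churned ARR divided
# by its share of total ARR.  A ratio near 1.0 means the segment churns
# in line with its weight in the base; well above 1.0 flags a real issue.
# All figures below are illustrative assumptions.
total_arr = {"ESB": 1_000_000, "Mid-market": 2_000_000, "$1B+": 2_000_000}
churned_arr = {"ESB": 170_000, "Mid-market": 150_000, "$1B+": 110_000}

arr_sum = sum(total_arr.values())
churn_sum = sum(churned_arr.values())

propensity = {}
for segment in total_arr:
    arr_share = total_arr[segment] / arr_sum        # share of the base
    churn_share = churned_arr[segment] / churn_sum  # share of the churn
    propensity[segment] = churn_share / arr_share

print({seg: round(p, 2) for seg, p in propensity.items()})
```

With these invented figures, ESB churns at roughly twice its ARR weight (propensity ≈ 2.0) while $1B+ churns at well under its weight (≈ 0.6) — even though the two segments contribute similar absolute churn dollars, which is exactly what bucketing alone would have missed.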

The last is correlation vs. causation.  Knowing that a large percentage of customers in the residential mortgage segment churned (or even have higher propensity to churn) doesn’t tell you why they are churning.  Perhaps your product does lack functionality that is important in that segment.  Or perhaps it’s 2008, the real estate crisis is in full bloom, and those customers aren’t buying anything from anybody.  The root cause is the mortgage crisis, not your product.   Yes, there is a high correlation between customers in that vertical and their churn rate.  But the cause isn’t a poor product fit for that vertical, it’s that the vertical itself is imploding.

A better, and more fun, example comes from The Halo Effect, which tells the story that a famous statistician once showed a precise correlation between the increase in the number of Baptist preachers and the increase in arrests for public drunkenness during the 19th Century.  Do we assume that one caused the other?  No.  In fact, the underlying driver was the general increase in the population — with which both were correlated.

So, remember these two things before starting your next churn analysis:

  • If you want to know why someone churned, ask them.
  • If you want to analyze churn, don’t just look at who churned — compare who churned to who didn’t.

CEO Out at Adaptive Planning / Adaptive Insights

[See bottom for update / new information as well as disclaimer]

Although I don’t know the circumstances of the seemingly sudden CEO change at Adaptive Insights (formerly known as Adaptive Planning), I can share what appears to be known at this point, along with a few observations.

Adaptive Insights CEO John Herr, appointed on 10/31/2011, is no longer listed on the management section of the company’s web page or as a member of the company’s board of directors, and is instead listed as a company advisor.  In his biography as advisor, he is explicitly referred to as “former CEO.”


While I don’t have much to work with, I can make the following observations:

  • This appears to have happened rather hastily as there is no new CEO listed on the management page.  Had the board been working on an organized plan to replace the CEO (whose tenure was about 2.5 years), they would have executed this as an “Adaptive Planning Appoints New CEO” announcement as opposed to simply removing the existing one.
  • Time will tell if this is part of an organized replacement that we are catching in the middle.  If this is the result of a planned CEO replacement, then we should expect to see a new CEO appointed next week.
  • Otherwise, in the absence of an imminent new CEO announcement, I would conclude that the separation decision was made suddenly, perhaps in response to operational challenges (see disclaimer), a board dispute, or a personal issue.

The fact is that, barring personal issues, the majority of all Silicon Valley startup CEOs — particularly those hired once a company already has some scale — stay on until one of three things happens:

  1. The company gets to a “liquidity event.”
  2. The CEO is asked to leave because, from the board’s perspective, things are not going well for some reason.
  3. The CEO and the board hit “irreconcilable differences” and are able to work out an amicable agree-to-disagree transition.

Note that the notion of “just quitting because you are unhappy” basically doesn’t exist for a CEO because the CEO is the captain of the ship and few future investors will invest in a CEO who has previously abandoned ship.  This is why I say the CEO job is unique: you are truly marrying the company (and in a country where only the spouse’s parents can ask for a divorce).

In this situation, we are not in case 1, as there has been no liquidity announcement.  This is probably not case 3, as the whole point of case 3 is to deliver a smooth transition despite a major disagreement.  Ergo, I’d say we are in case 2, though one can never be sure of either the case or the reason for it.

I suppose it could also be the curse of the new building striking again, since they recently announced a new headquarters in Palo Alto.

Whatever happened, I can say that I’ve met John Herr a few times, found him smart and personable, and wish him (if not his former company) all the best going forward.

Update 7/21, 2:07 PM

Adaptive Insights has made its short-term plans clear with this press release announcing:

  • The appointment of Keith Nealon, formerly of phone supplier ShoreTel, to a new position as president and chief revenue officer (CRO).  Nealon is based in Austin, Texas.
  • The re-appointment of founder Rob Hull to chairman of the board, but not to CEO.
  • The appointment of a new Audit Committee Chair and board member, Jim Kelliher, CFO of LogMeIn.

Note that the company did not appoint a CEO and is thus going CEO-less at this time.  Time will tell, but this implies the company was caught somewhat flat-footed and is quite possibly launching a CEO search as we speak.

###

Disclaimer:  Host Analytics competes with Adaptive Insights, primarily at the low-end of the market.  In our competition with them, we have sensed recent operational challenges on their part, but we are certainly not unbiased observers.

Why, as CEO, I Love Driver-Based Planning

While driver-based planning is a bit of an old buzzword (the first two Google hits date to 2009 and 2011 respectively), I am nevertheless a huge fan of driver-based planning not because the concept was sexy back in the day, but because it’s incredibly useful.  In this post, I’ll explain why.

When I talk to finance people, I tend to see two different definitions of driver-based planning:

  • Heavy in detail:  one where you build a pretty complete bottom-up budget for an organization and play around with certain drivers, typically with a strong bias towards what they have historically been.  I would call this driver-based budgeting.
  • Light in detail:  one where you struggle to find the minimum set of key drivers around which you can pretty accurately model the business, and where the drivers tend to be figures you can benchmark against the industry.  I call this driver-based modeling.

While driver-based budgeting can be an important step in building an operating plan, I am actually a bigger fan of driver-based modeling.  Budgets are very important, no doubt.  We need them to plan our business, align our team, hold ourselves accountable for spending, drive compensation, and make our targets for the year.  Yes, a good CEO cares about all that as a sine qua non.

But a great CEO is really all about two things:

  • Financial outcomes (and how they create shareholder value)
  • The future (and not just next year, but the next few)

The ultimate purpose of driver-based models is to be able to answer questions like:  what happens to key financial outcomes such as revenue growth, operating margins, and cashflow, given a set of driver values?

I believe some CEOs are disappointed with driver-based planning because their finance teams have been showing them driver-based budgets when they should have been showing them driver-based models.

The fun part of driver-based modeling is trying to figure out the minimum set of drivers you need to successfully build a complete P&L for a business.  As a concrete example, I can build a complete, useful model of a SaaS software company off the following minimum set of drivers:

  • Number and type of salesreps
  • Quota/productivity for each type
  • Hiring plans for each type
  • Deal bookings mix for each (e.g., duration, prepayments, services)
  • Intra-quarter bookings linearity
  • Services margins
  • Subscription margins
  • Sales employee types and ratios (e.g., 1 SE per 2 salesreps)
  • Marketing as % of sales or via a set of funnel conversion assumptions (e.g., responses, MQLs, oppties, win rate, ASP)
  • R&D as % of sales
  • G&A as % of sales
  • Renewal rate
  • AR and AP terms

With just those drivers, I believe I can model almost any SaaS company.  In fact, without the more detailed assumptions (rep types, marketing funnel), I can pretty accurately model most.
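To make the idea tangible, here is a minimal sketch of such a model using just a handful of the drivers listed above (reps, quota, renewal rate, subscription margin, and expense ratios).  Every driver value and the simplifications (e.g., revenue approximated by ending ARR, a single rep type) are illustrative assumptions, not the author’s actual model:

```python
# Minimal driver-based model of a SaaS P&L -- a sketch, not a budget.
# Every driver value below is an illustrative assumption.

def saas_model(reps, quota, renewal_rate, subscription_margin,
               marketing_pct, rd_pct, ga_pct, starting_arr, years=3):
    """Project ARR and operating margin from a handful of drivers."""
    results = []
    arr = starting_arr
    for year in range(1, years + 1):
        new_arr = reps * quota                 # bookings from sales capacity
        arr = arr * renewal_rate + new_arr     # churned ARR rolls off, new ARR adds
        revenue = arr                          # simplification: revenue ~= ending ARR
        gross_profit = revenue * subscription_margin
        opex = revenue * (marketing_pct + rd_pct + ga_pct)
        operating_margin = (gross_profit - opex) / revenue
        results.append((year, round(arr), round(operating_margin, 2)))
    return results

for year, arr, margin in saas_model(
        reps=10, quota=1_000_000, renewal_rate=0.90,
        subscription_margin=0.75, marketing_pct=0.30,
        rd_pct=0.20, ga_pct=0.10, starting_arr=20_000_000):
    print(f"Year {year}: ARR ${arr:,}  operating margin {margin:.0%}")
```

Even a toy model like this lets you ask the questions that matter: what happens to growth and margins if the renewal rate moves from 90% to 95%, or if quota productivity drops 10%?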

Finance types sometimes forget that the point of driver-based modeling is not to build a budget, so it doesn’t have to be perfect.  In fact, the more perfect you make it, the heavier and more complex it gets.  For example, intra-quarter bookings linearity (i.e., % of quarterly bookings by month) makes a model more accurate in terms of cash collections and monthly cash balances, but it also makes it heavier and more complex.

Like each link in Marley’s chains, each driver adds to the weight of the model, making it less suited to its ultimate purpose.  Thus, with the addition of each driver, you need to ask yourself — for the purposes of this model, does it add value?  If not, throw it out.

One of the most useful models I ever built assumed that all orders came in on the last day of the quarter.  That made building the model much simpler, and any sales before the last day of the quarter — of which we hope there are many — became upside to the conservative model.

Often you don’t know in advance how much impact a given driver will make.  For example, sticking with intra-quarter bookings linearity, it doesn’t actually change much when you’re looking at quarterly granularity a few years out.  However, if your company has a low cash balance and you need to model months, then you should probably keep it in.  If not, throw it out.

This process makes model-building highly iterative.  Because the quest is not to build the most accurate model but the simplest, you should start out with a broad set of drivers, build the model, and then play with it.  If the financial outcomes with which you’re concerned (and it’s always a good idea to check with the CEO on which these are — you can be surprised) are relatively insensitive to a given driver, throw it out.
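The pruning loop itself can be mechanized.  Below is a sketch: perturb each driver by 10% and see whether the outcome you care about moves.  The toy outcome function and all driver values are illustrative assumptions (the tiny coefficient on `linearity` stands in for a driver that barely matters at annual granularity):

```python
# Sketch of the pruning loop: perturb each driver and see whether the
# outcome you care about moves.  Model and numbers are illustrative.

def outcome(drivers):
    """Toy outcome: year-one revenue from a few illustrative drivers."""
    new_arr = drivers["reps"] * drivers["quota"]
    retained = drivers["starting_arr"] * drivers["renewal_rate"]
    # 'linearity' barely matters at annual granularity in this toy model
    timing_adjustment = 1 + 0.001 * drivers["linearity"]
    return (retained + new_arr) * timing_adjustment

base = {"reps": 10, "quota": 1_000_000, "renewal_rate": 0.90,
        "starting_arr": 20_000_000, "linearity": 0.5}

baseline = outcome(base)
for name in base:
    bumped = dict(base, **{name: base[name] * 1.10})   # +10% perturbation
    delta = abs(outcome(bumped) - baseline) / baseline
    verdict = "keep" if delta > 0.01 else "consider dropping"
    print(f"{name:12s} +10% moves outcome {delta:.2%} -> {verdict}")
```

Running this, the sales-capacity drivers move the outcome by several percent while the linearity driver moves it by a rounding error — exactly the kind of evidence that lets you throw a driver out with a clear conscience.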

Finance people often hate this both because they tend to have “precision DNA” which runs counter to simplicity, and because they have to first write and then discard pieces of their model, which feels wasteful.  But if you remember the point — to find the minimum set of drivers that matter and to build the simplest possible model to show how those key drivers affect financial outcomes — then you should discard pieces of the model with joy, not regret.

The best driver-based models end up with drivers that are easily benchmarked in the industry.  Thus, the exercise becomes:  if we can converge to a value of X on industry benchmark Y over the next 3 years, what will it do to growth and margins?  And then you need to think about how realistic converging to X is — what about your specific business means you should converge to a value above or below the benchmark?

At Host Analytics we do a lot of driver-based modeling and planning internally.  I can say it helps me enormously as CEO to think about industry benchmarks, future scenarios, and how we create value for shareholders.  In fact, my models don’t stop at the P&L; they go on to implied valuation given growth/profit and ultimately calculate a range of share prices on the bottom line.

The other reason I love driver-based planning is more subtle.  Much as number theory helps you understand the guts of numbers in mathematics, so does driver-based modeling help you understand the guts of your business — which levers really matter, and how much.

And that knowledge is invaluable.

Ten Classic Business Books for Entrepreneurs / Startup Founders

I often get asked by technical founders what business / marketing / strategy books they should read.  While there are many excellent relatively new books (e.g., The Lean Startup), the primary purpose of this post is to list a set of classic business books that most (older) business people have read — and that I think every budding entrepreneur should read as part of their basic business education.

  • Ogilvy on Advertising by David Ogilvy.  It’s getting a bit dated at this point, but still well worth the read.  The media have changed, but the core ideas remain the same.
  • Positioning by Al Ries and Jack Trout.  They, well, wrote the book on positioning.  Very focused on the mind of the customer.
  • Public Relations by Edward Bernays.  Another classic which studies PR in both history and application.  (I’m told Autonomy’s Mike Lynch swore by Bernays and Propaganda.)
  • The Innovator’s Dilemma by Clayton Christensen.  A newer book than many of the above, but an instant classic on the theory of disruptive innovation.
  • Guerrilla Marketing by Jay Conrad Levinson.  Oldie but goodie reinforcing the important idea that marketing doesn’t have to be expensive.
  • Blue Ocean Strategy by Renee Mauborgne and W. Chan Kim.  Again, a newer book than many of those on the list, but still an instant classic in my mind.  I particularly like their strategic levers analysis as shown in, e.g., the Cirque du Soleil case study.
  • Solution Selling by Michael Bosworth.  There are almost as many books on sales as there are salespeople.  I’ve read dozens, and this, while superseded by Bosworth himself, remains the classic in my mind.
  • The Art of War by Sun Tzu.  The oldest book on the list by a few thousand years, so you want to find a version that is adapted to business.  While I like military-business analogies, On War remains on my to-read list.

Note that I have deliberately omitted Good to Great for three reasons:  (1) its case-study companies have largely under-performed, undermining the book’s core thesis, (2) the book has generally been discredited, and (3) in my experience it is the most abused business book I have seen in terms of misapplication.  Despite reasons 1 and 2, it nevertheless remains a top-seller; so much for rationality in business.

As a supplement, here are some newer books of which I’m a big fan:

  • The Halo Effect by Phil Rosenzweig.  A must read for anyone who wants to understand the weaknesses of business books and the business press.
  • Trust Me, I’m Lying by Ryan Holiday.  A simply amazing book by a self-confessed media manipulator and how he worked the top blogs.
  • The Lean Startup by Eric Ries.  Quickly becoming a new classic, on the art of iterative, innovative (and frugal) strategy.
  • Thinking, Fast and Slow by Daniel Kahneman.  Amazing book by a psychologist who won the  Nobel prize in economics on human rationality and irrationality.

And finally, here are some near classics that didn’t quite make my top ten list.

  • The Wisdom of Crowds by James Surowiecki.  A great book on groups and their functions and dysfunctions.
  • Permission Marketing by Seth Godin.  Godin is an amazing speaker and thinker, but I have trouble identifying his one classic; he’s written too many books so it’s hard to find one to recommend.  This is my best shot.
  • The Five Dysfunctions of a Team by Patrick Lencioni.  Lencioni has also written numerous strong books on leadership, teamwork, and organizational dynamics, but I think this was his best.

Product is Not a Four-Letter Word

“Customers buy 1/4″ holes, not 1/4″ bits.”
Theodore Levitt, Harvard Business School

At some point in every marketer’s career they produce a data sheet that looks like this:

Our product uses state-of-the-art technology including a MapReduce distributed backend processing engine with predictive analytics including multivariate adaptive regression splines, support vector machine classification, and naive Bayesian machine learning.

When the draft review comes back someone invariably says “Yo! We sell solutions to problems here, not products.”  The author then revises the copy to:

Our solution uses state-of-the-art technology including a MapReduce distributed backend processing engine with predictive analytics including multivariate adaptive regression splines, support vector machine classification, and naive Bayesian machine learning.

And then, in most companies, everyone would be happy.  “Way to sell solutions!”

This, of course, would be called missing the point.  Completely.

Nothing drives me crazier than marketers who “sell solutions” by doing a global replacement of “product” with “solution” in their work.

While I am a big believer in Theodore Levitt’s quote, it is not tantamount to saying you should never discuss product.  If I run a machine shop, while I am indeed “buying holes” at the macro level, I might nevertheless care very much about your drill bits:  are they carbon or titanium?  What is their useful life?  Can they drill into concrete?

Saying don’t lose sight of the fact that customers buy solutions to problems is not equivalent to declaring product a four-letter word.  There are both appropriate and inappropriate times to talk about features or “feeds and speeds” when discussing your product.  The problem in high technology is that many marketers are so in love with the technology that all they talk about is features and technology, at the cost of discussing benefits.

That is, they are so in love with the bit that they forget people are buying it to drill holes.

There are two basic frameworks for doing product marketing:  FFB and FAB.

  • Feature/function/benefit (FFB).  Discuss the feature, describe how it works, and state the first-order positive result of using it.
  • Feature/advantage/benefit (FAB).  Discuss the feature, the first-order positive result of using it, and the second-order results that follow from the first-order result.

Here is an example showing elements from both frameworks.

  • Feature:  the green spots in Cheer laundry detergent.
  • Function:  some amazing chemical process that removes stains.
  • Benefit 1:  whiter towels (and if you like puffery, towels that are whiter than white).
  • Benefit 2:  you receive compliments on your towels’ whiteness at your pool party.
  • Benefit 3:  you receive a kiss from your spouse for getting complimented by the neighbors.

You can see that the benefits are in effect a stack that you can climb arbitrarily high.  Here’s a business example:

  • New programming tool.
  • Makes your programmers more productive.
  • Means you output more product than your predecessor.
  • Means you get promoted.
  • Means you get a nicer office.
  • Means you get a raise.
  • Means you get a bigger house.

Benefit-oriented marketers spend their time talking about this stack.  They talk about positive consequences both for you personally (cited above) and for your company (imagine forking a separate, company-oriented benefits stack off the more productive programmers).  There’s nothing wrong with this.

Since most tech marketers tend to forget it, a lot of sales and business people spend a lot of time telling marketing “stop talking feeds and speeds,” “stop all the bits and bytes,” “don’t forget the benefits,” and “remember, we sell solutions to problems.”

But that is not to say that product is a four-letter word.  There is a time and a place to talk about product and marketers who answer clear product-oriented questions with benefits-stack answers will be seen as stupid and quite possibly evasive.

Think:  “yes, I know if it goes faster I can buy fewer computers, which will save my company money, but what I’m asking is:  what makes it go faster?”

This means three things for product marketers:

  • Never, ever do the product/solution global substitution as it accomplishes nothing.
  • Always know whether you are working on a primarily feature/benefit piece (e.g., a data sheet) or a feature/function piece (e.g., a white paper).
  • Get very, very good at clearly articulating the function of a feature.

Here’s a concrete example from my past of the FFB and the before/after of the “function” description, for a database feature called group commit.

  • Feature:  group commit
  • Function:  groups the commit records from different users into a single I/O to the transaction log file.
  • Benefit:  enables system performance in the 100 TPS range by eliminating a potential logging system bottleneck at around 30 TPS.
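The benefit claim can be sanity-checked with back-of-envelope arithmetic.  The ~30 I/Os-per-second disk limit is implied by the text (one flushed transaction per I/O caps you at ~30 TPS); the number of commit records batched per grouped I/O is an illustrative assumption:

```python
# Back-of-envelope check of the group commit benefit described above.
# The ~30 I/Os-per-second figure is implied by the text; the batch size
# of commit records per grouped I/O is an illustrative assumption.

log_ios_per_second = 30        # then-current single-disk flush limit
commits_per_io_grouped = 4     # assumed batch size under load

# One commit per I/O: the log flush is the bottleneck at ~30 TPS.
tps_without_group_commit = log_ios_per_second * 1
# Group commit: the ceiling scales with commits batched per I/O.
tps_with_group_commit = log_ios_per_second * commits_per_io_grouped

print(f"Without group commit: ~{tps_without_group_commit} TPS ceiling")
print(f"With group commit:    ~{tps_with_group_commit} TPS ceiling")
```

A batch of four commit records per flush lifts the ceiling from ~30 TPS into the 100+ TPS range, consistent with the benefit statement above.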

I spent hours talking with the engineers trying to understand the function of group commit.  I heard all kinds of stuff that I needed to filter before I finally could distill it:

Well you know when we commit a transaction we have to flush a record to the transaction log file in case the system crashes so we can guarantee the atomicity of transactions, you know so that we can either rollback or commit the transaction to the system and, as you know, those same transactions logs can be used in the roll-forward process in recovery, where we restore the entire database from a checkpoint and then systematically roll-forward the transactions applied to it up to some point in time.

Well in order to make all that stuff happen we need to flush records at commit time into the transaction logs and — this is important — it’s not enough to write them to some cache because if there’s a power failure and we lose that cache then we’ll lose the commit records and particularly because we now also have fast commit, we are not guaranteed to write all the database changes to the database at commit time, so it’s absolutely critical that we write the log records and then flush them to the disk.

Now the trick with flushing log records is that there is only one current logfile in the system and that can live on only one disk at a time.  And since then-current technology meant the most I/Os per second you could do to a disk was about 30, you’ve got a built-in bottleneck that will prevent the system from going faster than 30 TPS.  Now that’s not to say that if you eliminate that specific bottleneck we won’t find other bottlenecks that limit system performance, or — heck — there may be other bottlenecks in the system that cause us not to even get up to this 30 TPS limit, but as long as you are flushing one transaction per I/O then you are about 30 TPS-limited.

Now, in a high-transaction environment, if you could make a few transactions wait just a bit before flushing them, you could probably pick up a few more transactions seeking to commit in the same timeframe and then group those commit records together and write them all out in a single I/O.  Thus your new bottleneck becomes 30 times the number of commit records flushable in a single I/O …

That is the kind of stream of consciousness you sometimes get from an engineer when discussing product details.  Sometimes you’re lucky and get handed a very precise, terse definition.  Sometimes you get the rambling stuff above and it’s up to you to distill it.

The great product marketer, both because they want to be articulate and because they want to free up time to talk about benefits, thus seeks to describe the function of the feature as clearly and succinctly as possible.

Remember, product and feature are not four-letter words.  But you do need to be careful about when to talk product, when to talk function, and when to talk benefits.