Survivor Bias in Churn Calculations: Say It’s Not So!

I was chatting with a fellow SaaS executive the other day and the conversation turned to churn and renewal rates.  I asked how he calculated them and he said:

Well, we take every customer who was also a customer 12 months ago and then add up their ARR 12 months ago and add up their ARR today, and then divide today’s ARR by year-ago ARR to get an overall retention or expansion rate.

Well, that sounds dandy until you think for a minute about survivor bias, the often inadvertent logical error in analyzing data from only the survivors of a given experiment or situation.  Survivor bias is subtle, but here are some common examples:

  • I first encountered survivor bias in mutual funds when I realized that look-back studies of prior 5- or 10-year performance include only the funds still in existence today.
  • If you eliminate my bogeys, I’m actually a below-par golfer.
  • My favorite example comes from World War II, when analysts examined the pattern of anti-aircraft fire on returning bombers and argued for reinforcing them in the places that were most often hit.  This was exactly wrong — the places where returning bombers were hit were already strong enough.  You needed to reinforce the places where the downed bombers were hit.

So let’s turn back to churn rates.  If you’re going to calculate an overall expansion or retention rate, which way should you approach it?

  1. Start with a list of customers today, look at their total ARR, and then go compare that to their ARR one year ago, or
  2. Start with a list of customers from one year ago and look at their ARR today.

Number 2 is the right answer.  You should include the ARR from customers who chose to stop being customers when calculating an overall churn or expansion rate.  Calculating it the first way can be misleading because you are looking at ARR expansion only among customers who chose to remain customers.

Let’s make this real via an example.

[Table: example ARR calculation illustrating survivor bias]

The ARR today is contained in the boxed area.  The survivor bias question comes down to whether you include or exclude the orange rows from year-ago ARR.  The difference can be profound.  In this simple example, the survivor-biased expansion rate is a nice 111%.  However, the non-biased rate is only 71% which will get you a quick “don’t let the door hit your ass on the way out” at most VCs.  And while the example is contrived, the difference is simply one of calculation off identical data.
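To make the arithmetic concrete, here’s a minimal sketch in Python.  The customer records are hypothetical — chosen only to reproduce the 111% vs. 71% split above, not taken from the actual table:

```python
# Each record: (customer, year_ago_arr, current_arr); current_arr = 0 means churned.
# These numbers are illustrative, not from the post's table.
customers = [
    ("A", 300, 350),
    ("B", 300, 330),
    ("C", 300, 320),
    ("D", 500, 0),    # churned -- the "orange rows"
]

survivors = [c for c in customers if c[2] > 0]

# Method 1 (survivor-biased): count only customers still here today.
biased = sum(c[2] for c in survivors) / sum(c[1] for c in survivors)

# Method 2 (unbiased): count every customer from a year ago, churned or not.
unbiased = sum(c[2] for c in customers) / sum(c[1] for c in customers)

print(f"biased: {biased:.0%}, unbiased: {unbiased:.0%}")  # biased: 111%, unbiased: 71%
```

Same data, two very different answers — the only difference is whether the churned customer’s year-ago ARR stays in the denominator.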

Do companies use survivor-biased calculations in real life?  Let’s look at my post on the Hortonworks S-1 where I quote how they calculate their net expansion rate:

We calculate dollar-based net expansion rate as of a given date as the aggregate annualized subscription contract value as of that date from those customers that were also customers as of the date 12 months prior, divided by the aggregate annualized subscription contract value from all customers as of the date 12 months prior.

When I did my original post on this, I didn’t even catch it.  But therein lies the subtlety of survivor bias.

# # #

Disclaimers:

  • I have not tracked Hortonworks in the meantime, so I don’t know whether they still report this metric, at what frequency, how they currently calculate it, etc.
  • To the extent that “everyone calculates it this way” is true, then companies might report it this way for comparability, but people should be aware of the bias.  One approach is to calculate both a backward-looking metric (today’s customers vs. their year-ago ARR) and a forward-looking one (year-ago customers vs. their ARR today) and show both.
  • See my FAQ for additional disclaimers, including that I am not a financial analyst and do not make recommendations on stocks.

One More Time: What Drives SaaS Company Valuation? Growth!!

About two years ago, I did a post with a chart from JMP that showed the correlation between the value of a SaaS business and its growth rate.  Today, I’m back with a chart from RBC [1] that shows things haven’t changed.

[Chart: correlation between SaaS company valuation and growth rate (RBC)]

The correlation here is pretty amazing.  What’s even more amazing is that valuation is also closely correlated to the customer acquisition cost (CAC) ratio [2].

[Chart: correlation between SaaS company valuation and CAC ratio (RBC)]

Because of how RBC defines CAC, a low percentage above equates to a high customer acquisition cost.  That is, 50% above means that the company is getting 50 cents of ARR growth for every $1 of S&M.  Or, in my preferred form, the company is spending $2 for every $1 in new ARR.
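Since the two definitions are reciprocals of each other, converting between them is one line.  A quick sketch (the function name is mine, not RBC’s or Kellblog’s):

```python
def rbc_to_kellblog_cac(rbc_ratio: float) -> float:
    """Convert RBC's CAC form (dollars of ARR growth per $1 of S&M)
    into the Kellblog form (dollars of S&M spent per $1 of new ARR)."""
    return 1.0 / rbc_ratio

# RBC's 50% becomes $2 of S&M per $1 of new ARR.
print(rbc_to_kellblog_cac(0.5))  # 2.0
```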

Now, strictly speaking, correlation isn’t transitive, but when thing X and thing Y are each strongly correlated to thing Z, they are usually correlated to each other as well, which implies that growth rate and CAC are themselves correlated.  This makes sense: it’s more expensive to grow fast when you spend heavily on customer acquisition, so companies that acquire customers efficiently can afford to grow faster.
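As a sanity check on that inference: correlation isn’t strictly transitive, though with correlations as strong as the ones in RBC’s charts it usually carries over in practice.  A toy counterexample with made-up data, where x and y each correlate with z but not with each other:

```python
def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

x = [1, 2, 3, 4]
y = [1, -1, -1, 1]                 # deliberately uncorrelated with x
z = [a + b for a, b in zip(x, y)]  # z is driven by both x and y

# x and y each correlate positively with z, yet not with each other.
print(pearson(x, z), pearson(y, z), pearson(x, y))
```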

Footnotes

[1] RBC Analysis:  The Economics of SaaS in Public Markets, April 2015.

[2]  RBC defines the CAC upside down relative to the Kellblog CAC — i.e., the RBC definition is ARR growth / prior-quarter S&M expense.

A Disney Parking Lot Attendant Gets More Training than Your Typical $250K Enterprise Sales Rep: Thoughts on Bootcamps

At Disney — a company that is truly focused on customer experience — every “cast member” (i.e., employee) gets six weeks of training before they see a “guest” (i.e., customer). “Face characters” (e.g., Snow White walking through the park) spend an additional 40 hours just watching and re-watching the movie to ensure they get every nuance right.

Oh, and how much training does your company give your $250K enterprise sales reps?

Anecdotally, I think the typical answer is a one-week bootcamp. Two weeks is on the long side. Once in a blue moon, you’ll hear 4 to 6 weeks, but that’s typically one to two weeks of corporate training followed by two to four weeks of deep technical training.

This is genuinely strange because a typical enterprise software or SaaS company freely spends between 40% and 100% of revenue on sales. Sales is typically the single biggest expense line in the firm. Sales runs 2-5x cost of goods sold.  It runs 2-5x R&D expense.  So, if we’re going to spend all this money on salespeople, then why don’t we want to train them?

I think there are a number of rationalizations:

  • “We hire experienced people so we don’t need to.” This is dangerous because your new people are experienced at someone else’s company and may have learned norms quite different from those you desire at yours.
  • “We train them on the job.” Either by throwing them in the pool and seeing if they sink or by building a conveyor-belt model where we hire folks as in-bound call-takers who we promote into outbound call-makers then into SMB reps then into mid-market reps. While there is nothing wrong with these models and they do very much help develop reps, it still doesn’t answer why we don’t give them deep training at the start.
  • “We never really developed it as a competency.” When you only have three reps you’re not going to create a six-week training program because — among other reasons — you don’t know what to teach. But as you scale your business that quickly becomes more excuse than reason.

I think the root answer is simple: most senior executives just don’t believe in training. (Think: “those who can, do; those who can’t, teach; and those who can’t teach do marketing.”)

Having competed against the output of some great internal training programs at Oracle and MicroStrategy, having created and run Business Objects University for several years, and then having gone through the outstanding on-boarding program at Salesforce, I’d like to share some perspective.

First, given my experience I would argue that by far the #1 key success criterion for these programs is a dictum from the CEO that they are important, that they will be funded, and that the organization will support them. Barring that, they get launched to lots of hype and then slowly erode into a self-fulfilling prophecy of mediocrity.

Here are some thoughts on how to run a great bootcamp.

  • Make it mandatory. Everybody goes. No one is too important to skip it, from the new accountant to the new COO.
  • Make it long. Shoot for two weeks, minimum. Three is better. A double-dip is probably best of all (2 weeks initially followed by 3 months on the job followed by 2 weeks of reinforcement.)
  • Do it live. Some virtual pre-work and post-work is fine, but the core of your program should be live and in person. It shows commitment. It helps people build relationships. It enables better progress tracking and assessment.
  • Engage practitioners. Don’t learn how to sell from only a bootcamp trainer; hear from one of your top 5 reps on a rotating basis. (And pulling those top reps out of the field is just one example of why the program requires top-level support.)
  • Teach culture. Hit values. Train in how you define “The Your-Company Way.”
  • Be operational. Teach how the company wants deals entered in the pipeline, what your stage definitions are, and how to value deals. (These are critical items to maintaining a comparable set of pipeline metrics over time.)
  • Mix up the format. Have lectures, panels, individual exercises, group projects, videos, homework, reading, and team building exercises. Where applicable, do a volunteering session. (If volunteering is a key part of your culture, do some right from the get-go in the bootcamp – as Salesforce does.)
  • Keep it applied. Don’t just teach facts or theory (“Competitor A uses a proprietary, non-Excel formula language.”) Show them how to apply that fact in everyday life (e.g., suggest prospects build some models to get a taste of what that feels like versus good old Excel).
  • Everyone’s in sales. Teach everyone how the company sells, what problems it solves, and why customers buy from it.
  • Fire people who don’t take it seriously. The University head should be able to fire any employee during the training period. If you’re skipping sessions, not paying attention, late, disrupting, etc., then boom, you’re gone. It sends a message that won’t soon be forgotten.
  • Send home a report card. Build a culture where managers are embarrassed when their new hire gets a B- and then put people immediately on a performance plan when they get a C. List specific student strengths and development areas. Build the University program into the management process right from the start. Train managers on how to work with fresh bootcamp graduates.
  • Try to use it for prediction. Give granular objective grades in different areas (e.g., delivery of corporate message, fluency in finance, consultative selling) along with an instructor success prediction and do regressions over time to see what really drives sales success as opposed to what you might think does. Try to answer the question: do people who do better in the University do better in real life?
  • Hire a consultant. My colleague Elay Cohen is a sales productivity expert, the author of Saleshood (Kellblog review here), and ran the outstanding program at Salesforce — I’m pretty sure he’d be happy to help you set up yours. You don’t have to invent this stuff anymore. Plenty of people know how to do it.

Finally, don’t stop with bootcamp. Build ongoing training programs that take care of your existing hires as much as your new ones. But that’s the subject of a different post.

Career Development:  What It Really Means to be a Manager, Director, or VP

It’s no secret that I’m not a fan of big-company HR practices.  I’m more of the First, Break All the Rules type.  Despite my general skepticism of many standard practices, we still do annual performance reviews at my company, though I’m thinking seriously of dropping them.  (See Get Rid of the Performance Review.)

Another practice I’m not hugely fond of is “leveling” — the creation of a set of granular levels to classify jobs across the organization.  Leveling typically results in something that looks like this:

[Table: example job-leveling grid]

While I am a huge fan of compensation benchmarking (i.e., figuring out what someone is worth in the market before they figure it out by getting another job), I think classical leveling has a number of problems:

  • It’s futile to level across functions. Yes, you might discover that a Senior FP&A Analyst II earns the same as a Product Marketing Director I, but why does that matter?  It’s a coincidence.  It’s like saying that with $3.65 I can buy either a grande non-fat latte or a head of organic lettuce.  What matters is the fair price of each of those goods in the market — not that they happen to have the same price.  So I object to the whole notion of levels across the organization.  It’s not canonical; it’s coincidence.
  • Most leveling systems are too granular, with the levels separated by arbitrary characterizations. It’s makework.  It’s fake science.  It’s bureaucratic and encourages a non-thinking “climb the ladder” approach to career development.  (“Hey, let’s develop you to go from somewhat-independent to rather-independent this year.”)
  • It conflates career development and salary negotiation. It encourages a mindset of saying, “what must I do to make L10” when you want to say, “I want a $10K raise.”  I can’t tell you the number of times people have asked me for “development” or “leveling” conversations where I get excited and start talking about learning, skills gaps, and such and it’s clear all they wanted to talk about was salary.  Disappointing.

That said, I do believe there are three meaningful levels in management and it’s important to understand the differences among them.  I can’t tell you the number of times someone has sincerely asked me, “what does it take to be a director?” or, “how can I develop myself into a VP?”

It’s a hard question.  You can turn to the leveling system for an answer, but it’s not in there.  For years, in fact, I’ve struggled to find what I consider to be a good answer to the question.

I’m not talking about Senior VP vs. Executive VP or Director vs. Senior Director.  I view such adjectives as window dressing or stripes:  important recognition along the way, but nothing that fundamentally changes one’s level.

I’m not talking about how many people you manage.  In call centers, a director might manage 500 people.  In startups, a VP might manage zero.

I am talking about one of three levels at which people operate:  manager, director, and vice president.  Here are my definitions:

  • Managers are paid to drive results with some support. They have experience in the function, can take responsibility, but are still learning the job and will have questions and need support.  They can execute the tactical plan for a project but typically can’t make it.
  • Directors are paid to drive results with little or no supervision (“set and forget”). Directors know how to do the job.  They can make a project’s tactical plan in their sleep.  They can work across the organization to get it done.  I love strong directors.  They get shit done.
  • VPs are paid to make the plan. Say you run marketing.  Your job is to understand the company’s business situation, make a plan to address it, build consensus to get approval of that plan, and then go execute it.

The biggest single development issue I’ve seen over the years is that many VPs still think like directors. [1]

Say the plan didn’t work.   “But, we executed the plan we agreed to,” they might say, hoping to play a get-out-of-jail-free card with the CEO (which is about to boomerang).

Of course, the VP got approval to execute the plan.  Otherwise, you’d be having a different conversation, one about termination for insubordination.

But the plan didn’t work.  Because directors are primarily execution engines, they can successfully play this card.  Fair enough.  Good directors challenge their plans to make them better.  But they can still play the approval card successfully because their primary duty is to execute the plan, not make it.

VP’s, however, cannot play the approval card.  The VP’s job is to get the right answer.  They are the functional expert.  No one on the team knows their function better than they do.  And even if someone did, they are still playing the VP of function role and it’s their job – and no one else’s — to get the right answer.

Now, you might be thinking, “glad I don’t work for Dave” right now — he’s putting failure of a plan to which he and the team agreed on the back of the VP.  And I am.

But it’s the same standard to which the CEO is held.  If the CEO makes a plan, gets it approved by the board, and executes it well but it doesn’t work, they cannot tell the board “but, but, it’s the plan we agreed to.”  Most CEOs wouldn’t even dream of saying that.  It’s because CEOs understand they are held accountable not for effort or activity, but results.

Part of truly operating at the VP level is to internalize this fact.  You are accountable for results.  Make a plan that you believe in.  Because if the plan doesn’t work, you can’t hide behind approval.  Your job was to make a plan that worked.  If the risk of dying on a hill is inevitable, you may as well die on your own hill, and not someone else’s.

Paraphrasing the ancient Fram oil filter commercial, I call this the “you can fire me now or fire me later” principle.  An executive should never sign up for a plan they don’t believe in.  They should risk being fired now for refusing to sign up for the plan (e.g., challenging assumptions, delivering bad news) as opposed to halfheartedly executing a plan they don’t believe in and almost certainly getting fired for its failure later.  The former is a far better way to go than the latter.

This is important not only because it prepares the VP to one day become a CEO, but also because it empowers the VP in making their plan.  If this is my plan, if I am to be judged on its success or failure, and if I cannot use approval as a get-out-of-jail-free card, then is it the right plan?

That’s the thinking I want to stimulate.  That’s how great VPs think.

# # #

Footnotes:

[1] Since big companies throw around the VP title pretty casually, this post is arguing that many of those VPs are actually directors in thinking and accountability.  This may be one reason why big-company VPs have trouble adapting to e-staff roles at startups.

Why Can't PR People Do Math?

I think in today’s world that we need to ask PR people to be not just literate, but numerate.  What does that mean?

  • They need to do basic math correctly.  Most PR people think that going from $100K to $700K is 700% growth.  It’s 600%.  I cannot tell you the number of times I have caught this error.  Growth % = ((year N+1 / year N) − 1) × 100.  2.4x is 140% growth.  1.3x is 30% growth.
  • They need to understand the law of small numbers as well as the scale of large ones.  It’s not hard to grow 1000% off a tiny base.  And the typical reader response to mega-growth claims is not “wow, look how big you are this year,” it’s “oh, I didn’t know how small you were last year.”  In addition, PR needs to understand the scale of large numbers — i.e., that 10% growth off $1B is $100M.  Technically speaking, whenever company A is growing faster than company B, company B is losing relative market share.  However, remember that if you compare a $10M startup that doubled to a $1B company that grew 10%, the latter still added 10x the new sales of the former.  So you need to be careful making claims in that light.
  • They need to understand how people will react to the numbers.  There is a tendency in PR to throw out any numbers you can because, sadly, much of the Silicon Valley trade press will consume them wholesale.  But PR needs to be careful.  Some analysts (e.g., the 451 Group) are famous for detailed note-taking and cross-checking and will challenge you if your own figures are inconsistent over time.  In addition, there are fairly normal ratios for, e.g., sales per salesperson or revenue per employee, so saying one thing definitely implies another.  Savvy readers will try to triangulate things like revenue, bookings, or cashflow based on the tidbits you hand out.  And if the triangulation produces inconsistent results, it’s going to be a headache for your company and drive credibility questions about the figures and your claims.
  • They need to understand what metrics mean.  One favorite PR trick is to talk about undefined metrics like sales (e.g., “company reported that sales grew 57% last year”).  It sounds good.  But wait a minute — what’s “sales”?  Do you mean revenue (and if so, why not say it) or bookings (and if so, how do you define it)?  Another is to discuss poorly defined product-line growth rates, where companies try to classify anything they can as related to the BNI (big new initiative — e.g., cloud at most mega-vendors).  What do those numbers actually mean?  If a purchase order has products 1, 2, and 3 on it and $100K at the bottom, how does the company allocate the sales across product lines, and does it do so consistently over time?  Product-line sales figures might sound meaningful, but they often are not.  Another favorite is the three-division company growing 10% where each division says it’s growing 30%.  Hey, wait a minute — that’s not possible.
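The growth-rate arithmetic from the first bullet is simple enough to encode in a few lines (a quick sketch; the function name is mine):

```python
def growth_pct(prior, current):
    """Growth as a percentage of the prior period:
    going from $100K to $700K is 600% growth, not 700%."""
    return round((current / prior - 1) * 100, 1)

assert growth_pct(100, 700) == 600.0   # 7x is 600% growth
assert growth_pct(100, 240) == 140.0   # 2.4x is 140% growth
assert growth_pct(100, 130) == 30.0    # 1.3x is 30% growth
```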

If you net all this out, the best advice is that PR needs to become more like IR (investor relations).  IR people know their numbers.  They’re consistent about what they release over time.  They understand how people will triangulate and the implications of so doing.  And they ensure consistency of the message as told by both the English and the math.
[Rewritten and decomposed from a prior interim version, focusing the content to better align with the title.  I removed the “beware of SaaS Companies talking bookings” meme as, while it remains a great topic that raises interesting yellow/red flags, it’s not one you can reasonably expect a PR person to understand or control.]