Category Archives: SaaS

My SaaStr 2018 Presentation: Ten Non-Obvious Things About Scaling SaaS

Below please find the slides from the presentation I gave today at SaaStr 2018, about which I wrote a teaser blog post last week.  I hope you enjoy it as much as I enjoyed making it.

I hope to see everyone next year at SaaStr — I think it’s the preeminent software, SaaS, and startups conference.

My SaaStr Talk Abstract: 10 Non-Obvious Things About Scaling SaaS

In an effort to promote my upcoming presentation at SaaStr 2018, which is currently on the agenda for Wednesday, February 7th at 9:00 AM in Studio C, I thought I’d do a quick post sharing what I’ll be covering in the presentation, officially titled, “The Best of Kellblog:  10 Non-Obvious Things About Scaling SaaS.”

Before jumping in, let me say that I had a wonderful time at SaaStr 2017, including participating on a great panel with Greg Schott of MuleSoft and Kathryn Minshew of The Muse hosted by Stacey Epstein of Zinc that discussed the CEO’s role in marketing.  There is a video and transcript of that great panel here.


For SaaStr 2018, I’m getting my own session and I love the title that the folks at SaaStr came up with because I love the non-obvious.  So here they are …

The 10 Non-Obvious Things About Scaling a SaaS Business

1. You must run your company around ARR.  While this may sound obvious, you’d be surprised by how many people either still don’t or, worse yet, think they do but don’t.  Learn my one-question test to tell the difference.

2.  SaaS metrics are way more subtle than meets the eye.  Too many people sling around words without knowing what they mean or thinking about the underlying definitions.  I’ll provide a few examples of how fast things can unravel when you do this and how to approach SaaS metrics in general.

3.  Former public company SaaS CFOs may not get private company SaaS metrics.  One day I met with the CFO of a public company whose firm had just been taken private and he had dozens of questions about SaaS metrics.  It had never occurred to me before, but when your job is to talk with public investors who only see a limited set of outside-in metrics, you may not develop fluency in the internal SaaS metrics that so obsess VC and PE investors.

4.  Multi-year deals make sense in certain situations.  While many purists would fight me to the death on this, there are pros and cons to multi-year deals and circumstances where they make good sense.  I’ll explain how I think about this and the one equation I use to make the call.

5.  Bookings is not a four-letter word.  While you need to be careful where and when you use the B-word in polite SaaS company, there is a time and place to measure and discuss bookings.  I’ll explain when that is and how to define bookings the right way.

6.  Renewals and satisfaction are more loosely correlated than you might think.  If you think your customers are all delighted because they’re renewing, think again.  Unhappy customers sometimes renew and happy ones sometimes don’t.  We’ll discuss why that happens, why renewal rates are often only a rough proxy for customer satisfaction, why you should also measure satisfaction directly using NPS, and a smart way to do so.

7.  You can’t analyze churn by analyzing churn.  To understand why customers churn, too many companies grab a list of all the folks who churned in the past year and start doing research and interviews.  There’s a big fallacy in this approach.  We’ll discuss the right way to think about and analyze this problem.

8.  Finding your own hunter/farmer metaphor is hard.  Boards hate double compensation and love splitting renewals from new business.  But what about upsell?  Which model is right for you?  Should you have hunters and farmers?   Hunters in a zoo?  Farmers with shotguns?  An autonomous collective?  We’ll discuss which models and metaphors work, when.

9.  You don’t have to lose money on services.  Subsidizing ARR via free or low-cost services seems a good idea and many SaaS companies do it.  But it’s hell on blended gross margins, burns cash, and can destroy your budding partner ecosystem.  We’ll discuss where and when it makes sense to lose money on services — and when it doesn’t.

10.  No matter what your board says, you don’t have to sacrifice early team members on the altar of experienced talent.  While rapidly growing a business will push people out of their comfort zones and require you to build a team that’s a mix of veterans and up-and-comers, with a bit of creativity and caring you don’t have to lose the latter to gain the former.

I hope this provides you with a nice and enticing sample of what we’ll be covering — and I look forward to seeing you there.

Win Rates, Close Rates and Milestone vs. Flow Analysis

Hey, what’s your win rate?

It’s another seemingly simple question.  But, like most SaaS metrics, when you dig deeper you find it’s not.  In this post we’ll take a look at how to calculate win rates and use win rates to introduce the broader concept of milestone vs. flow analysis that applies to conversion rates across the entire sales funnel.

Let’s start with some assumptions.  Once an opportunity is accepted by sales (becoming a sales-accepted lead, or SAL), it will eventually end up in one of three terminal states:

  • Won
  • Lost
  • Other (derailed, no decision)

Some people don’t like “other” and insist that opportunities should be exclusively either won or lost and that other is an unnecessary form of lost which should be tracked with a lost reason code as opposed to its own state.  I prefer to keep other, and call it derailed, because a competitive loss is conceptually different from a project cancellation, major delay, loss of sponsor, or a company acquisition that halts the project.  Whether you want to call it other, no decision, or derailed, I think having a third terminal state is warranted from first principles.  However, it can make things complicated.

For example, you’ll need to calculate win rates two ways:

  • Win rate, narrow = wins / (wins + losses)
  • Win rate, broad = wins / (wins + losses + derails)

Your narrow win rate tells you how good you are at beating the competition.  Your broad win rate tells you how good you are at closing the deals that come to a terminal state.

Narrow win rate alone can be misleading.  If I told you a company had a 66% win rate, you might be tempted to say "time to add more salespeople and scale this thing up."  If I told you they got that 66% win rate by derailing 94 out of every 100 opportunities they generated, winning 4, and losing the other 2, then you’d say "not so fast."  This, of course, would show up in the broad win rate of 4%.

This brings up the important question of timing.  Both of these win rate calculations ignore deals that push out of the quarter.  So another degenerate case is a situation where you win 4, lose 2, derail 4, and push 90 opportunities.  In this case, the narrow win rate is 66% and the broad win rate is 40%.  Neither shines a light on the problem (which, if it happens continuously, I call a rolling hairball problem).
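To make the two definitions concrete, here’s a minimal Python sketch (my own hypothetical helper, not something from the original post) run against the pushed-deal example above:

```python
# Minimal sketch of the two win-rate definitions; the numbers below just
# restate the example in the text (4 wins, 2 losses, 4 derails, 90 pushes).
def win_rates(wins, losses, derails):
    narrow = wins / (wins + losses)            # how often you beat the competition
    broad = wins / (wins + losses + derails)   # how often deals reaching a terminal state are won
    return narrow, broad

narrow, broad = win_rates(wins=4, losses=2, derails=4)
print(f"narrow: {narrow:.1%}, broad: {broad:.1%}")  # narrow: 66.7%, broad: 40.0%
# Note: the 90 pushed opportunities appear in neither rate -- that's the
# timing blind spot (the "rolling hairball problem") described above.
```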

The issue is that thus far we’ve been performing what I call a milestone analysis.  In effect, we put observers by the side of the road at various milestones (created, won, lost, derailed) and ask them to count the number of opportunities that pass by each quarter.  The problem, especially at companies with long sales cycles, is that you have no idea of progression.  You don’t know whether the opportunities that passed "win" this quarter came from the opportunities that passed "created" this quarter, or whether they came from last quarter, the quarter before that, or even earlier.

Milestone analysis has two key advantages:

  • It’s easy — you just need to count opportunities passing milestones
  • It’s instant — you don’t have to wait to see how things play out to generate answers

The big disadvantage is that it can be misleading, because the opportunities hitting a terminal state this quarter were generated in many different time periods.  For a company with an average 9-month sales cycle, the opportunities hitting a terminal state in quarter N were generated primarily in quarter N-3, with some coming in quarters N-2 and N-1 and some in quarters N-4 and N-5.  Across that period very little was constant; for example, marketing programs and messages changed.  So a marketing effectiveness analysis would be very difficult when approached this way.

For those sorts of questions, I think it’s far better to do a cohort-based analysis, which I call a flow analysis.  Instead of looking at all the opportunities that hit a terminal state in a given time period, you go back in time, grab a cohort of opportunities (e.g., all those generated in 4Q16) and then see how they play out over time.  You go with the flow.

For marketing program effectiveness, this is the only way to do it.  Instead of a time-based cohort, you’d take a program-based cohort (e.g., all the opportunities generated by marketing program X), see how they play out, and then compare various programs in terms of effectiveness.
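Here’s a rough sketch of what a flow analysis looks like in code; the field names and sample records are illustrative assumptions, not the author’s data model:

```python
from collections import Counter

# Hypothetical flow (cohort) analysis: take the opportunities created in one
# quarter (or generated by one marketing program) and see how they play out.
opps = [
    {"created": "4Q16", "program": "webinar-a", "outcome": "won"},
    {"created": "4Q16", "program": "webinar-a", "outcome": "lost"},
    {"created": "4Q16", "program": "tradeshow-b", "outcome": "derailed"},
    {"created": "4Q16", "program": "tradeshow-b", "outcome": "open"},
]

def cohort_rates(opps, key, value):
    """Narrow/broad win rates and close rate for one cohort (e.g., created='4Q16')."""
    c = Counter(o["outcome"] for o in opps if o[key] == value)
    wins, losses, derails = c["won"], c["lost"], c["derailed"]
    total = sum(c.values())  # includes still-open opportunities
    return {
        "narrow_win": wins / (wins + losses) if wins + losses else None,
        "broad_win": wins / (wins + losses + derails) if wins + losses + derails else None,
        "close_rate": wins / total if total else None,
        "still_open": c["open"],
    }

print(cohort_rates(opps, key="created", value="4Q16"))      # time-based cohort
print(cohort_rates(opps, key="program", value="webinar-a"))  # program-based cohort
```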

The big downside of flow analysis is that you end up analyzing ancient history.  For example, if you have a 9-month average sales cycle with a wide distribution around the mean, you may need to wait 15 to 18 months before the vast majority of the opportunities hit a terminal state.  If you analyze too early, too many opportunities are still open; if you put the analysis off, you may get important information, but too late.

You can compress the time window by analyzing program effectiveness not against sales outcomes but against important steps along the funnel.  That way you could compare two programs on their ability to generate MQLs or SALs, but you still wouldn’t know whether, and at what relative rate, they generate actual customers.  So you could end up doubling down on a program that generates a lot of interest, but not a lot of deals.

Back to our original topic: the same concept comes up in analyzing win rates.  Regardless of which win rate you’re calculating, at most companies you’re calculating it on a milestone basis.  I find milestone-based win rates more volatile and less accurate than a flow-based SAL-to-close rate.  For example, if I were building a marketing funnel to determine how many deals I need to hit next year’s number, I’d want to use a SAL-to-close rate, not a win rate, to do so (see the sketch after this list).  Why?  SAL-to-close rates:

  • Are less volatile because they’re damped by using long periods of time.
  • Are more accurate because they actually track what you care about: if I get 100 opportunities, how many close within a given time period.
  • Automatically factor in derails and slips (the former are ignored in the narrow win rate and the latter ignored in both the narrow and broad win rates).
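Here’s the kind of back-of-the-envelope funnel math I mean; the average sales price and 20% SAL-to-close rate below are purely illustrative assumptions, not benchmarks from this post:

```python
# Hypothetical funnel sizing using a flow-based SAL-to-close rate.
new_arr_target = 10_000_000   # next year's new ARR target (assumed)
average_sales_price = 50_000  # ARR per won deal (assumed)
sal_to_close_rate = 0.20      # share of SALs that eventually close (assumed)

deals_needed = new_arr_target / average_sales_price
sals_needed = deals_needed / sal_to_close_rate
print(f"{deals_needed:.0f} wins -> {sals_needed:.0f} SALs needed")  # 200 wins -> 1000 SALs
```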

Let’s look at an example.  Here’s a chart that tracks 20 opportunities, 10 generated in 1Q17 and 10 generated in 2Q17, through their entire lifetime to a terminal stage.

[Figure: opportunity tracking chart]

In reality things are a lot more complicated than this picture, because opportunities continue to be generated in 3Q17 through 4Q18 and opportunities generated in numerous quarters before 1Q17 are still in play.  But to keep things simple, let’s just analyze this little slice of the world.  Let’s do a milestone-based win/loss analysis.

[Figure: milestone-based win/loss analysis]

First, you can see that the milestone-based win/loss rates bounce around a lot.  Here that’s due in part to the law of small numbers, but I see similar volatility in real life — in my experience win rates bounce within a fairly broad zone — so I think it’s a real issue.  Regardless, what’s indisputable is that in this example, this is how things will look to the milestone-based win/loss analyzer.  Not a very clear picture — and a lot to panic about in 4Q17.

Let’s look at what a flow-based cohort analysis produces.

[Figure: flow-based cohort analysis, 1Q18 and 2Q18 views]

In this case, we analyze the cohort of opportunities generated in the year-ago quarter.  Since we only generate opportunities in two quarters, 1Q17 and 2Q17, we only have two cohorts to analyze and get only two sets of numbers.  The thin blue box in the opportunity tracking chart shows the data summarized in the 1Q18 column, and the thin orange box shows the data for the 2Q18 column.  Both boxes show that 3 opportunities in each cohort are still open at the end of the analysis period (imagine you did the 1Q18 analysis in 1Q18) and haven’t come to final resolution.  The cohorts both produce a 50% narrow win rate, but a 43% vs. 29% broad win rate and a 30% vs. 20% close rate.  How good are these numbers?

Well, in our example, we have the luxury of finding the true rates by letting the six open opportunities close out over time.  By doing a flow-based analysis in 4Q18 of the 1H17 cohort, we can see that our true narrow win rate is 57%, our true broad win rate is 40%, and our close rate is also 40% (which, once everything has arrived at a terminal state, is definitionally identical to the broad win rate).
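As a sanity check, those true rates are consistent with the 20 opportunities resolving as 8 wins, 6 losses, and 6 derails — a breakdown inferred from the stated rates rather than read off the chart:

```python
# Inferred terminal-state breakdown for the 1H17 cohort (20 opportunities, all resolved).
wins, losses, derails = 8, 6, 6
total = wins + losses + derails
print(f"narrow: {wins / (wins + losses):.0%}")   # narrow: 57%
print(f"broad/close: {wins / total:.0%}")        # broad/close: 40%
```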

[Figure: cohort analysis after all opportunities reach a terminal state]

Hopefully this post has helped you think about your funnel differently by introducing the concept of milestone- vs. flow-based analysis and by demonstrating how the same business situation results in very different rates depending on both the choice of win rate and the type of analysis.

Please note that the math in this example backed me into a 40% close rate, which is about double what I believe is the benchmark in enterprise software — I think 20% to 25% is a more normal range.

 

Kellblog (Dave Kellogg) Featured on the Official SaaStr Podcast

Just a quick post to highlight that last week I was the featured guest on Episode 142 of the Official SaaStr Podcast, produced by the SaaStr organization run by Jason Lemkin.  I was interviewed by a delightful young Englishman named Harry Stebbings (who also runs his own podcast, The Twenty Minute VC).

In the 31-minute episode — which Harry very nicely says was “probably one of his favorite interviews to record” — we cover a wide range of my favorite topics, including:

    • How I got introduced to SaaS, including my experience as an early customer of Salesforce in about 2003.
    • Challenges in scaling a software business, learned at BusinessObjects as we scaled from $30M to $1B in revenues, as well as at MarkLogic and Host Analytics.
    • My favorite SaaS metric.  If you had to pick one, I’d pick LTV/CAC.
    • Why simple churn is the best way to value the annuity of a SaaS business.
    • The loose coupling of customer satisfaction and renewal rates.
    • Why SaaS companies need to “chew gum and walk at the same time” when it comes to driving the mix of new and renewal business.
    • User-based vs. usage-based pricing in SaaS and how the latter can backfire in disincenting usage of the application.
    • My thoughts on bookings vs. ARR as a SaaS metric.  (Bookings is generally seen as a four-letter word!)
    • Why SaaS companies should make “the leaky bucket” the first four lines of their financial presentation.
    • Why I think it’s a win/win when a SaaS company gives a multi-year prepaid discount that’s less than its churn rate.
    • Why I view non-prepaid, multi-year deals as basically equivalent to renewals (just collected by finance/legal instead of customer success.)
    • Why it’s OK to “double compensate” sales and customer success on renewals and incidental upsells, and why it’s OK to pay sales on non-incidental upsells to existing customers (don’t put your farmer against someone else’s hunter).
    • Why you can’t analyze churn by analyzing churn and why you should have a rigorous taxonomy of churn.
    • My responses to Harry’s “quick fire” round questions.

You can listen to the podcast via iTunes, here.  Enjoy!

 

Detecting and Eliminating the Rolling Hairballs in your Sales Pipeline

Quick:  what’s the biggest deal in this quarter’s sales pipeline?  Was that the biggest deal in last quarter’s pipeline?  How about the quarter before?  Do you have deals in your pipeline older than your children?

If you’re answering yes to these questions, then you’re probably dealing with “rolling hairballs” in your pipeline.  Rolling hairballs are bad:

  • They exaggerate the size of the pipeline.
  • They distort coverage and conversion ratios.
  • They mess up expected-value forecasts, like a forecast-category or stage-weighted sales forecast.

Maybe they’re real deals; maybe they’re figments of a rep’s imagination.  But, if you’re not careful, they pollute your pipeline and your metrics.

Let’s define a rolling hairball

A rolling hairball is a typically large opportunity that sits in your current-quarter pipeline every quarter, with a close date that slips every quarter.  At 2 quarters it’s a suspected rolling hairball; at 3 or more quarters it’s a confirmed one.

Rolling Hairball Detection

The first thing you need to do is find rolling hairballs.  They’re tricky because salesreps always swear they’re real deals that are supposed to finally close this quarter.  What makes rolling hairballs obvious is their ever-sliding close dates.  What makes them dangerous is their size (including an accumulation of them that aggregates to a material fraction of the pipeline).

If you want to find rolling hairballs, look for opportunities in the current-quarter pipeline that were also in last quarter’s pipeline.  That will turn up numerous bona fide slipped deals, but it will also light up potential rolling hairballs.  To determine for sure whether an opportunity is a rolling hairball, you can do one of two things:

  • See if it also appeared in the current-quarter pipeline in any quarters prior to the previous one.
  • Look at its stage or forecast category.  If either of those suggests it won’t be closing this quarter, that’s another big hairball indicator.

The more sophisticated way to find them is to examine "stuck opportunity" reports that light up deals moving through pipeline stages too slowly compared to your norms.

But typically, the hairball is a big opportunity hiding in plain sight.  You know it was in last quarter’s pipeline and the quarter before that.  You’ve just been deluded into believing it’s not a hairball.
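Here’s a rough sketch of the snapshot-comparison approach, assuming you keep (or can reconstruct) a quarterly snapshot of which opportunities sat in the then-current-quarter pipeline; the IDs and structure are hypothetical:

```python
from collections import defaultdict

# Hypothetical quarterly snapshots: for each snapshot quarter, the opportunity IDs
# whose close date fell in that same (then-current) quarter.
current_quarter_pipeline = {
    "3Q17": ["opp-1", "opp-2", "opp-7"],
    "4Q17": ["opp-1", "opp-3", "opp-7"],
    "1Q18": ["opp-1", "opp-4"],
}

def hairball_candidates(snapshots, min_quarters=2):
    """Opportunities that sat in the current-quarter pipeline for min_quarters or more quarters."""
    appearances = defaultdict(int)
    for opp_ids in snapshots.values():
        for opp_id in opp_ids:
            appearances[opp_id] += 1
    return {opp_id: n for opp_id, n in appearances.items() if n >= min_quarters}

print(hairball_candidates(current_quarter_pipeline, min_quarters=2))
# {'opp-1': 3, 'opp-7': 2} -> opp-1 is a confirmed rolling hairball, opp-7 a suspect
```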

Fixing Rolling Hairballs

There are two ways to fix rolling hairballs:

  • Fix the close date.  Reps are subtly incented to put deals in the current quarter (e.g., to show they’re working on something, to show they might bring in some big sales this quarter). The manager needs to get on the phone with the customer and, after having verified it’s a real opportunity, get the real timeframe in which it might close.  Assigning a realistic close date to the opportunity makes your pipeline more real and reminds the rep that they need to be working on other shorter-term opportunities as well.  (There is no mid-term if you fail enough in the short term.)  The deal will still remain in the all-quarters pipeline, but it won’t always be in the current-quarter pipeline, ever-sliding, and distorting metrics and ratios.

 

  • Fix the size.  While a realistic close date is the best solution, what makes rolling hairballs dangerous is their size.  So, if the salesrep really believes it’s a current-quarter opportunity, you can either reduce its size or split it into two opportunities (particularly if that’s a possible outcome): a small one in the current quarter along with an upsell in the future.  Note that this approach can be dangerous, with lots of little hairball-lets flying below radar, so you should only try it if you’re sure your salesops team can produce the reports to find them and you believe it reflects real customer buying patterns.

Don’t let rolling hairballs pollute your pipeline metrics and ratios.  Admit they exist, find them, and fix them.  Your sales and sales forecasting will be more consistent as a result.

A Look at the Tintri S-1

Every now and then I take a dive into an S-1 to see what clears the current, ever-changing bar for going public.  After a somewhat rocky IPO process, Tintri went public on June 30 after cutting the IPO offering price, and the stock has traded flat since.

Let’s read an excerpt from this Business Insider story before taking a look at the numbers.

Before going public, Tintri had raised $260 million from venture investors and was valued at $800 million.

With the performance of this IPO, the company is now valued at about $231 million, based on $7.50 a share and its roughly 31 million outstanding shares (if the IPO’s bankers don’t buy their optional, additional roughly 1.3 million shares).

In other words, this IPO killed a good $570 million of the company’s value.

In other words, Tintri looks like a “down-round IPO” (or an “IPO of last resort“) — something that frankly almost never happened before the recent mid/late stage private valuation bubble of the past 4 years.

Let’s look at some numbers.

[Figure: Tintri income statement excerpt]

Of note:

  • $125M in FY2017 revenue.  (They have scale, but this is not a SaaS company, so the revenue is mostly non-recurring, which makes it easier to grow quickly and makes the revenue worth less because only the support/maintenance component renews each year.)
  • 45% YoY total revenue growth.  (On the low side, especially given that they have a traditional license/maintenance model and recognize revenue on shipment.)
  • 65% gross margins  (Low, but they do seem to sell flash memory hardware as part of their storage solutions.)
  • 87% of revenue spent on S&M (High, again particularly for a non-SaaS company.)
  • 43% of revenue spent on R&D  (High, but usually seen as a good thing if you view the R&D money as well spent.)
  • -81% operating margins (Low, particularly for a non-SaaS company.)
  • -$70.4M in cashflow from operating activities in 2017 ($17M average quarterly cash burn from operations)
  • Incremental S&M / incremental product revenue = 73%, so they’re buying $1 worth of incremental (YoY) revenue for an incremental 73 cents in S&M.  Expensive but better than some.
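To make that last calculation concrete, here’s the arithmetic pattern with illustrative placeholder figures (not Tintri’s actual S-1 numbers):

```python
# Sales-efficiency sketch: incremental S&M spent per incremental dollar of product revenue.
# Figures are illustrative placeholders that happen to produce a 73% ratio.
product_revenue = {"FY2016": 70.0, "FY2017": 100.0}   # $M, hypothetical
sandm_expense   = {"FY2016": 80.0, "FY2017": 101.9}   # $M, hypothetical

incremental_revenue = product_revenue["FY2017"] - product_revenue["FY2016"]   # 30.0
incremental_sandm   = sandm_expense["FY2017"] - sandm_expense["FY2016"]       # 21.9
print(f"{incremental_sandm / incremental_revenue:.0%} of S&M per $1 of new revenue")  # 73%
```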

Overall, my impression is of an on-premises (and, to a lesser extent, hardware) company in SaaS clothing — i.e., Tintri’s metrics look like a SaaS company’s, but Tintri isn’t a SaaS company, so its metrics should look better.  SaaS company metrics typically look worse than those of traditional software companies for two reasons:  (1) revenue growth is depressed by the need to amortize revenue over the course of the subscription, and (2) subscription companies are willing to spend more on S&M to acquire a customer because of the recurring nature of a subscription.

Concretely, if you compare two 100-unit customers, the SaaS customer is worth roughly twice as much as the license/maintenance customer over 5 years.

[Figure: five-year SaaS vs. license/maintenance customer value comparison]
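As one hedged illustration of why (the pricing assumptions below are mine, chosen to land near the 2x figure, and are not taken from the chart): price the SaaS subscription at 80 units a year and the perpetual license at 100 units up front plus 20% annual maintenance.

```python
# Illustrative five-year value comparison; pricing assumptions are mine, not the chart's.
years = 5
saas_annual_subscription = 80                      # units/year (assumed)
license_upfront, maintenance_rate = 100, 0.20      # perpetual license + 20%/yr maintenance (assumed)

saas_value = saas_annual_subscription * years                                   # 400
license_value = license_upfront + license_upfront * maintenance_rate * years    # 200
print(saas_value / license_value)  # 2.0 -> the SaaS customer is worth ~2x over five years
```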

Moreover, even if Tintri were a SaaS company, it would be quite out of compliance with the Rule of 40, which says growth rate + operating margin >= 40%.  In Tintri’s case, 45% growth plus a -81% operating margin nets out to roughly -35%, so they’re some 75 points off the rule.
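Here’s the Rule of 40 check as a one-line sketch, using the rounded figures cited above:

```python
# Rule of 40 check using the rounded figures cited above.
growth_rate, operating_margin = 0.45, -0.81
score = growth_rate + operating_margin        # about -0.36 with these rounded inputs
shortfall = 0.40 - score                      # roughly 75+ points short of the 40% bar
print(f"Rule of 40 score: {score:.0%} (bar is 40%)")  # Rule of 40 score: -36% (bar is 40%)
```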

Other Notes

  • 1250+ customers
  • 21 of the Fortune 100
  • 527 employees as of 1/31/17
  • CEO 2017 cash compensation $525K
  • CFO 2017 cash compensation $330K
  • Issued special retention stock grants in May 2017 that vest in the two years following an IPO
  • Did option repricing in May 2017 to $2.28/share down from weighted average exercise price of $4.05.
  • $260M in capital raised prior to IPO
  • Loans to CFO and CEO to exercise stock options at 1.6% to 1.9% interest in 2013
  • NEA 22.7% ownership prior to the offering
  • Lightspeed 14.5% ownership
  • Insight Venture Partners 20.2% ownership
  • Silver Lake 20.4% ownership
  • CEO 3.8% ownership
  • CFO 0.7% ownership
  • $48.9M in long-term debt
  • $13.8M in 2017 stock-based compensation expense

Overall (and see my disclaimers), this is one that I’ll be passing on.

 

The New 2017 Gartner Magic Quadrants for Cloud Strategic CPM (SCPM) and Cloud Financial CPM (FCPM) – How to Download; A Few Thoughts

For some odd reason, I always think of this scene — The New Phone Book’s Here — from an old Steve Martin comedy whenever Gartner rolls out its new Magic Quadrants (MQs) for corporate performance management (CPM).  It’s probably because of all the excitement they generate.

Last year, Gartner researchers John Van Decker and Chris Iervolino kept that excitement up by making the provocative move of splitting the CPM quadrant in two — strategic CPM (SCPM) and financial CPM (FCPM). Never complacent, this year they stirred things up again by inserting the word “cloud” before the category name for each; we’ll discuss the ramifications of that in a minute.

Free Download of 2017 CPM Magic Quadrants

But first, let me provide some links where you can download the new FCPM and SCPM magic quadrants:

Significance of the New 2017 FPCM and SCPM Magic Quadrants

The biggest change this year is the insertion of the word "cloud" in the title of the magic quadrants.  This seemingly small change, like a butterfly effect, results in an entirely new world order in which two of the three megavendors in the category (i.e., IBM and SAP) get displaced from market leadership due to the lack of credibility and/or sophistication of their cloud offerings.

For example:

  • In the strategic CPM quadrant, IBM is relegated to the Visionary quadrant (bottom right) and SAP does not even make the cut.
  • In the financial CPM quadrant, IBM is relegated to the Challenger quadrant (top left) and SAP again does not even make the cut.

I suppose one might then ask: if IBM and SAP do poorly in the cloud financial and strategic CPM magic quadrants, how do they do in the "regular" ones?

To which the answer is, there aren’t any “regular” ones; they only made cloud ones.  That’s the point.

So I view this as the mainstreaming of cloud in EPM [1].  Gartner is effectively saying a few things:

  • Who cares how much in maintenance fees a vendor derives from legacy products?
  • The size of a vendor’s legacy base is independent of its position for the future.
  • The cloud is now the norm in CPM product selection, so it’s uninteresting to even produce a non-cloud MQ for CPM. The only CPM MQs are the cloud ones.

While I have plenty of beefs with Oracle as a prospective business partner — and nearly as many with their cloud EPM offerings — to their credit, they have been making an effort at cloud EPM while IBM and SAP seem to have somehow been caught off-guard, at least from an EPM perspective.

(Some of Oracle’s overall cloud revenue success is likely cloudwashing, though they settled a related lawsuit with the whistleblower, so we’ll never know the details.)

Unlikely Bedfellows:  Only Two Vendors are Leaders in Both FCPM and SCPM Magic Quadrants

This creates the rather odd situation where there are only two vendors in the Leaders section of both the financial and strategic CPM magic quadrants:  Host Analytics and Oracle.  That means only two vendors can provide the depth and breadth of products in the cloud to qualify for the Leaders quadrant in both the FCPM and SCPM MQs.

I know who I’d rather buy from.

In my view, Host Analytics has a more complete, mature, and proven product line — we’ve been at this a lot longer than they have — and, well, oligopolists aren’t really famous for their customer success and solutions orientation.  Infamous, in fact.  See the section of the FCPM report where it says Oracle ranks in the "bottom 25% of vendors in this MQ on ‘overall satisfaction with vendor.’"

Or consider how an Oracle alumnus once defined "solution selling" for me:

Your problem is you are out of compliance with the license agreement and we’re going to shut down the system.  The solution is to give us money.

Nice.

For more editorial, you can read John O’Rourke’s post on the Host Analytics corporate blog.

Download the 2017 FCPM and SCPM Magic Quadrants

Or you can download the new 2017 Gartner CPM MQs here.

# # #

Notes:

[1] Gartner refers to the category as corporate performance management (CPM).  I generally refer to it as enterprise performance management (EPM), reflecting the fact that EPM software is useful not only for corporations, but other forms of organization such as not-for-profit, partnerships, government, etc.  That difference aside, I generally view EPM and CPM as synonyms.