
Is Another SaaSacre In The Offing?

I’m not a financial analyst and I don’t make stock recommendations [1], but as a participant and observer in the software investing ecosystem, I do keep an eye on macro market parameters and I read a fair bit of financial analyst research.  Once in a while, I comment on what I’m seeing.

In February 2016, I wrote two posts (SaaS Stocks:  How Much Punishment is in Store and The SaaSacre Part II:  Time for the Rebound?).  To remind you how depressed SaaS stocks were back then:

  • Workday was $49/share, now at $192
  • Zendesk was $15/share, now at $85
  • ServiceNow was $47/share, now at $247
  • Salesforce was $56/share, now at $160

Those four stocks are up 342% on average over the past three years and two months.  More broadly, the Bessemer Emerging Cloud Index is up 385% over the same period.  Given the increase, a seemingly frothy market for stocks (P/E of the S&P 500 at ~21), and plenty of global geopolitical and economic uncertainty, the question is whether there is another SaaSacre (rhymes with massacre) in the not-too-distant future.
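The “up 342%” figure is just the simple average of the four price gains above, which is easy to verify:

```python
# Sanity check on the price appreciation cited above.
# Each tuple is (Feb 2016 price, price at time of writing), per the bullet list.
prices = {
    "Workday":    (49, 192),
    "Zendesk":    (15, 85),
    "ServiceNow": (47, 247),
    "Salesforce": (56, 160),
}

gains = {name: now / then - 1 for name, (then, now) in prices.items()}
avg_gain = sum(gains.values()) / len(gains)

for name, g in gains.items():
    print(f"{name}: +{g:.0%}")
print(f"Average: +{avg_gain:.0%}")  # ~342%
```

Note this is an unweighted average of price returns, not an index return, which is why it differs from the Bessemer index’s 385%.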

Based just on gut feel, I would say yes.  (Hence my Kellblog prediction that markets would be choppy in 2019.)  But this morning, I saw a chart in a Cowen report that helped bring some data to the question:
[Chart: Cowen, EV / NTM revenue multiple over time]

I wish we had a longer time period to look at, but the data is still interesting.  The chart plots enterprise value (EV) divided by next-twelve-month (NTM) sales.  As a forward multiple, it’s already more aggressive than a trailing-twelve-month (TTM) multiple: because revenue is growing (let’s guess 25% to 30% across the coverage universe), the multiple is deflated when looking forward as opposed to backward.

That said, let’s look at the shape of the curve.  When I draw a line through 7x, it appears to me that about half the chart is above the line and half below, so let’s guesstimate that the median multiple during the period is 7x.  If you believe in regression to the mean, you should theoretically be bearish when stocks are trading above the median and bullish when they’re below.

Because the average multiple line is pretty thick, it’s hard to see where exactly it ends, but it looks like 8.25x to me.  That means today’s multiples are “only” 18% above the median [2].  That’s good news, in one sense, as my gut was that it would be higher.  The bad news is:  (1) when things correct they often don’t simply drop to the line but well through it and (2) if anything happens to hurt the anticipated sales growth, the EV/NTM-sales multiple goes up at constant EV because  NTM-sales goes down.  Thus there’s kind of a double whammy effect because lower future anticipated growth increases multiples at a time when the multiples themselves want to be decreasing.
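To make the double whammy concrete, here’s a quick sketch with hypothetical numbers (I’m assuming today’s ~8.25x multiple on a round $100M of expected sales): if EV holds constant while the sales forecast is cut 10%, the forward multiple mechanically rises — at exactly the moment the market presumably wants to compress it.

```python
# Hypothetical company: EV held constant while the NTM sales forecast is cut.
ev = 825.0          # enterprise value, $M (assumed)
ntm_sales = 100.0   # expected next-twelve-month sales, $M (assumed)

multiple_before = ev / ntm_sales   # 8.25x
ntm_sales_cut = ntm_sales * 0.90   # forecast cut by 10%
multiple_after = ev / ntm_sales_cut

print(f"Before cut: {multiple_before:.2f}x")  # 8.25x
print(f"After cut:  {multiple_after:.2f}x")   # ~9.17x

# To get back to even the 7x median after the cut, EV must fall ~24%:
ev_at_median = 7.0 * ntm_sales_cut
print(f"EV drop to reach 7x: {1 - ev_at_median / ev:.0%}")
```

So a 10% forecast cut doesn’t just dent the numerator’s support — it pushes the multiple further above the median, widening the gap a correction would close.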

This is a long way of saying, in my opinion, as a chartist [3] using this chart, I would conclude that multiples are somewhat frothy, about 20% above the median, with a lot predicated on future growth.

This exercise shows that looking only at price appreciation paints a more dangerous-looking picture than looking at prices relative to revenues:  across the whole chart, prices are up a lot since April 2014, but so are forward-looking revenues, and the multiple is roughly the same at the start as at the end:  8x. [4]  Looking at it differently, of the ~350% gain since early 2016, half is due to multiple expansion (from a way-below-median ~4x to an above-median ~8x) and half is due to revenue growth.

For me, when I look at overall markets (e.g., P/E of the S&P), geopolitical uncertainty, price appreciation, and SaaS multiples, I still feel like taking a conservative position.  But somewhat less so than before I saw this chart.  While it’s totally subjective:  SaaS is less frothy than I thought when looking only at price appreciation.

Switching gears, the same Cowen report had a nice rule of 40 chart that I thought I’d share as well:

[Chart: Cowen, EV / NTM revenue multiple vs. Rule of 40 score]

Since the R^2 is only 0.32, I continue to wonder if you’d get a higher R^2 using only revenue growth as opposed to rule of 40 score on the X axis.  For more on this topic, see my other Rule of 40 posts here.
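For readers unfamiliar with the metric, the Rule of 40 score is simply revenue growth rate plus profitability margin (commonly FCF or operating margin), both in percentage points.  A minimal sketch, with hypothetical companies:

```python
def rule_of_40(growth_pct: float, fcf_margin_pct: float) -> float:
    """Rule of 40 score: revenue growth % plus FCF margin %."""
    return growth_pct + fcf_margin_pct

# Two hypothetical companies with the same score but very different profiles:
print(rule_of_40(60, -15))  # 45: fast grower, burning cash
print(rule_of_40(20, 25))   # 45: slow grower, strongly profitable
```

That collapse of two very different profiles into one score is exactly why one might wonder whether growth alone — rather than the combined score — better explains multiples; you’d answer it by regressing EV/NTM multiples against each variable separately and comparing the R^2 of the fits.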

# # #

[1] See disclaimers in my FAQ and terms of use in the blog license agreement.

[2] Nevertheless, 18% is a lot to lose if multiples instantly reset to the median.  (And they often don’t just drop to the median, but break well through it — e.g., in Jan 2016, they were as low as 4x.)

[3] And chartism doesn’t work.

[4] If you ignore most of the first month where it appeared to be falling from 10x to 8x.

The Loose Coupling of Decisions and Outcomes

There was a great column in the 12/10 Harvard Business Review entitled Good Decisions, Bad Outcomes by Dan Ariely, professor of behavioral economics at Duke and author of the excellent book Predictably Irrational.

In the column, he hits on one of my favorite topics, the loose coupling of decisions and outcomes.   Excerpt from the opening:

If you practice kicking a soccer ball with your eyes closed, it takes only a few tries to become quite good at predicting where the ball will end up. But when “random noise” is added to the situation—a dog chases the ball, a stiff breeze blows through, a neighbor passes by and kicks the ball—the results become quite unpredictable.

If you had to evaluate the kicker’s performance, would you punish him for not predicting that Fluffy would run off with the ball? Would you switch kickers in an attempt to find someone better able to predict Fluffy’s involvement?

In business, he argues that we do just that every day with outcome-based incentive compensation and outcome-based promotions and hiring.

As a (quite) results-oriented person, I very much believe in the “we are paid to get results” mantra that pervades business.  But as a marketing person, I also fully recognize that not all market opportunities are created equal.  Market opportunities range across a spectrum from Sisyphean to land grab.

Note that I’m not arguing that any particular point on the spectrum is “easy” because they each have their challenges.  In Sisyphean markets the task itself is difficult, but you benefit from few competitors.  In land grabs, the selling task is easy because the need is obvious, but that obvious opportunity attracts swarms of competition.

Let’s take my favorite example.  Rate them:  Hero or Zero?

  • Guy 1.  Grows his business from $30M to $240M in 7 years.

By most measures, Guy 1 is looking pretty darn good.  I’d say Hero.  And then we meet Guy 2.

  • Guy 2.  Grows his business in the same market as Guy 1 from $30M to $1B in 7 years.

Ah, the problems of partial information.  It’s clear that Guy 1 is a Zero and Guy 2 is the Hero.  (The numbers are real, by the way.  Circa 1985, Guy 1 = Ingres, Guy 2 = Oracle.)

The point of this example is that everything is relative.  Today, the Ingres organic growth rate of 42% in enterprise software doesn’t look bad.  But in the mid 1980s, if you wanted to win in the market, you needed Oracle’s 80% growth.  It was a land grab, and poor Ingres never realized it.
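Compounding the two trajectories is a useful back-of-the-envelope check (a minimal sketch; the 42% and 80% quoted above read as annual rates of that era, while the seven-year endpoint-to-endpoint compound rates work out somewhat lower):

```python
def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

print(f"Guy 1 (Ingres): {cagr(30, 240, 7):.0%}/yr")   # ~35%/yr for 8x in 7 years
print(f"Guy 2 (Oracle): {cagr(30, 1000, 7):.0%}/yr")  # ~65%/yr for 33x in 7 years
```

Either way the gap is the point: roughly doubling every two years was not enough, because the competition was nearly doubling every year.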

My point is about relativity:  the quality of any performance should be judged relative to others performing a similar task in a similar timeframe and market phase.

Ariely’s point is more about noise in general.  I think my argument helps damp out a lot of Ariely’s noise, but relative comparison isn’t always possible (e.g., when there are no easy comparison points) and it certainly does not eliminate all of the noise.