Kellblog covers topics related to starting, managing, leading, and scaling enterprise software startups. My favorite topics include strategy, marketing, sales, SaaS metrics, and management. I also provide commentary on Silicon Valley, venture capital, and the business of software.
What’s the difference? While both of the above presentations are structured, the newspaper doesn’t let the template get in the way of the story. The newspaper works within the template to tell the story.
I think that because marketing departments are so often split between “design people” and “content people,” (1) templates get over-weighted relative to content and (2) content people get so busy adhering to the template that they forget to tell the story.
Here’s a real, anonymized example:
What’s wrong here?
There is a lot of wasted vertical space at the top: all large font, bolded template items with generous line spacing.
The topic section gets lost among the other template items. Visually, the author is as important as the topic.
There is no storytelling. There is effectively no headline — “Latest Release of Badguy Product” takes no point-of-view and doesn’t create an angle for a story.
The metadata is not reader-first, preferring to remind Charles of his title over providing information on how to contact him.
But there is one, much more serious problem with this: the claim / rebuttal structure of the document lets the competitor, not the company, control the narrative.
For example, political affiliations aside, consider the events between Trump and Comey. Like him or not, Trump knows how to control a narrative. Adapted to the Trump vs. Comey situation, our claim / rebuttal competitive bulletin would read something like this:
Competitive Update: Team Comey

Trump says:
Comey is a coward
Comey is a leaker
Comey is a liar
But, don’t worry, our competitive team says:
Comey isn’t really a coward, but it is interesting that he released the information through a colleague at Columbia Law School
Comey isn’t really a leaker because not all White House conversations can be presumed confidential and, logically speaking, you can either leak or lie, but you can’t do both at the same time.
Great. What are we talking about? Whether Comey is a leaker, liar, or coward. Who’s controlling the narrative? Not us.
Here’s a better way to approach this document: rework the header and metadata, add a story to the title, recharacterize each piece of the announcement on first reference (rather than saying it once “their way” and then challenging it), and then provide some broader perspective about what’s happening at the company and how it relates to the Fall17 release.
This is a very common problem in marketing. It comes from a lack of storytelling and a fill-in-the-template approach to the creation of marketing deliverables. Avoid it by always remembering to put the story ahead of the template.
The great reckoning begins. Correct/nailed. As predicted, since most of the bubble was tied up in private companies owned by private funds, the unwind would happen in slow motion. But it’s happening.
Silicon Valley cools off a bit. Partial. While IPOs were down, you couldn’t see the cooling in anecdotal data, like my favorite metric, traffic on Highway 101.
Porter’s five forces analysis makes a comeback. Partial. So-called “momentum investing” did cool off, implying more rational situation analysis, but you didn’t hear people talking about Porter per se.
Cyber-cash makes a rise. Correct. Bitcoin more than doubled on the year (and Ethereum was up 8x), which perversely reinforced my view that these cryptocurrencies are too volatile: people want the anonymity of cash without a highly variable exchange rate. The underlying technology for Bitcoin, blockchain, took off big time.
SAP realizes they are a complex enterprise application company. Incorrect. They’re still “running simple” and talking too much about enabling technology. The stock was up 9% on the year in line with revenues up around 8% thus far.
Oracle’s cloud strategy gets revealed – “we’ll sell you any deployment model you want as long as your annual bill goes up.” Partial. I should have said “we’ll sell you any deployment model you want as long as we can call it cloud to Wall St.”
Accounting irregularities discovered at one or more unicorns. Correct/nailed. During these bubbles the pattern always repeats itself – some people always start breaking the rules in order to stand out, get famous, or get rich. Fortune just ran an amazing story that talks about the “fake it till you make it” culture of some diseased startups.
Startup workers get disappointed on exits. Partial. I’m not aware of any lawsuits here but workers at many high flyers have been disappointed and there is a new awareness that the “unicorn party” may be a good thing for founders and VCs, but maybe not such a good thing for rank-and-file employees (and executive management).
The first cloud EPM S-1 gets filed. Incorrect. Not yet, at least. While it’s always possible someone did the private filing process with the SEC, I’m guessing that didn’t happen either.
2016 will be a great year for Host Analytics. Correct. We had a strong finish to the year and emerged stronger than we started with over 600 great customers, great partners, and a great team.
Now, let’s move on to my predictions for 2017 which – as a sign of the times – will include more macro and political content than usual.
Social media companies finally step up and do something about fake news. While, per a former Facebook designer, “it turns out that bullshit is highly engaging,” these sites will need to do something to filter, rate, or classify fake news (let alone stop recommending it). Otherwise they will lose both credibility and readership, and fail to act in a responsible way commensurate with their information dissemination power.
Gut feel makes a comeback. After a decade of Google-inspired heavily data-driven and A/B-tested management, the new US administration will increasingly be less data-driven and more gut-feel-driven in making decisions. Riding against both common sense and the big data / analytics / data science trends, people will be increasingly skeptical of purely data-driven decisions and anti-data people will publicize data-driven failures to popularize their arguments. This “war on data” will build during the year, fueled by Trump, and some of it will spill over into business. Morale in the Intelligence Community will plummet.
Under a volatile leader, who seems to exhibit all nine of the symptoms of narcissistic personality disorder, we can expect sharp reactions and knee-jerk decisions that rattle markets, drive a high rate of staff turnover in the Executive branch, and fuel an ongoing war with the media. Whether you like his policies or not, Trump will bring a high level of volatility to the country, to business, and to the markets.
With the new administration’s promises of $1T in infrastructure spending, you can expect interest rates to rise and inflation to accelerate. Providing such a stimulus to an already strong economy might well overheat it. One smart move could be buying a house to lock in historically low interest rates for the next 30 years. (See my FAQ for disclaimers, including that I am not a financial advisor.)
Huge emphasis on security and privacy. Election-related hacking, including the spearphishing attack on John Podesta’s email, will serve as a major wake-up call to both government and the private sector to get their security act together. Leaks will fuel major concerns about privacy. Two-factor authentication using verification codes (e.g., Google Authenticator) will continue to take off, as will encrypted communications. Fear of leaks will also change how people use email and other written electronic communications; more people will follow the sage advice in this quip:
Dance like no one’s watching; E-mail like it will be read in a deposition
In 2015, if you were flirting on Ashley Madison you were more likely talking to a fembot than a person. In 2016, the same could be said of troll bots. Bots are now capable of passing the Turing Test. In 2017, we will see more bots for both good uses (e.g., customer service) and bad (e.g., trolling social media). Left unchecked by the social media powerhouses, bots could damage social media usage.
Artificial intelligence hits the peak of inflated expectations. If you view Salesforce as the bellwether for hyped enterprise technology (e.g., cloud, social), then the next few years are going to be dominated by artificial intelligence. I’ve always believed that advanced analytics is not a standalone category, but instead fodder that vendors will build into smart applications. The key is typically not the technology, but the problem to which you apply it. As Infer founder Vik Singh said of Jim Gray, “he was really good at finding great problems”: figuring out the best problems to solve with a given technology or modeling engine is the hard part. Application by application, we will see people searching for the best problems to solve using AI technology.
Megavendors mix up EPM and ERP or BI. Workday, which has had a confused history when it comes to planning, acquired struggling big data analytics vendor Platfora in July 2016, and seems to have combined analytics and EPM/planning into a single unit. This is a mistake for several reasons: (1) EPM and BI are sold to different buyers with different value propositions, (2) EPM is an applications sale while BI is a platform sale, and (3) Platfora’s technology stack, while appropriate for big data applications, is not ideal for EPM/planning (ask Tidemark). Combining the two puts planning at risk. Oracle combined their EPM and ERP go-to-market organizations and lost focus on EPM as a result. While they will argue that they now have more EPM feet on the street, those feet know much less about EPM, leaving them exposed to specialist vendors who maintain a focus on EPM. ERP is sold to the backward-looking part of finance; EPM is sold to the forward-looking part. EPM is about 1/10th the market size of ERP. ERP and EPM have different buyers and use different technologies. In combining them, expect EPM to lose out.
And, as usual, I must add the bonus prediction that 2017 proves to be a strong year for Host Analytics. We are entering the year with positive momentum, the category is strong, cloud adoption in finance continues to increase, and the megavendors generally lack sufficient focus on the category. We continue to be the most customer-focused vendor in EPM, our new Modeling product gained strong momentum in 2016, and our strategy has worked very well for both our company and the customers who have chosen to put their faith in us.
I thank our customers, our partners, and our team and wish everyone a great 2017.
While there is a strong argument that buyers should be nurtured before, during, and after the initial sale, I’m going to speak in this post about pre-sales lead nurturing, the purpose of which is to turn prospective buyers into marketing qualified leads, or MQLs.
The first point (the newness criterion) was a trap that I slipped in to see if you were paying attention. While some marketers will argue that MQLs need to be “new” (and there are some good reasons for this) others will increasingly question — in a lead nurturing world — what “new” actually means and why “new” matters.
After all, what should matter is that we have found a person more likely to buy than the other people. Whether they’ve been in our database 2 hours, 2 weeks, or 2 years shouldn’t matter. Or should it?
I think it does matter because:
Marketing needs to watch its image in front of sales. Declaring someone who’s come to our last 3 annual roadshows an MQL strikes me as a “Kick Me” sign, regardless of whether she’s just accumulated 50 points. There is a difference between someone who is new and someone we’ve been recycling for several years.
Marketing needs to track how many are new vs. recycled (1) to avoid a seemingly in-built tendency to be new-obsessed, (2) because few companies actually want 100% of either, and (3) because new and recycled MQLs will likely show very different downstream conversion rates, which should not be averaged away.
That’s why, in my view, a “new MQL” is a contact who has become an MQL for the first time (i.e., they are not necessarily new to our database, but they are new in hitting the MQL criteria). After that, if they don’t buy on the first round and if they later come back to life again (by accumulating enough points in the nurture system), they are a “recycled MQL.”
MQLs = new MQLs + recycled MQLs
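As a sanity check on that identity, here is a minimal sketch (hypothetical contact IDs and dates, not real data) of splitting a stream of MQL events into new vs. recycled, based on whether the contact has ever hit the MQL bar before:

```python
from datetime import date

# Hypothetical MQL events: (contact_id, date the contact hit the MQL threshold)
mql_events = [
    ("alice", date(2016, 3, 1)),
    ("bob",   date(2016, 4, 2)),
    ("alice", date(2016, 9, 15)),  # came back to life via nurture
    ("carol", date(2016, 10, 1)),
]

def split_mqls(events):
    """Classify each MQL event as 'new' (first time ever hitting the MQL
    criteria) or 'recycled' (hit the bar before, then re-qualified)."""
    seen = set()
    new, recycled = 0, 0
    for contact, _day in sorted(events, key=lambda e: e[1]):
        if contact in seen:
            recycled += 1
        else:
            new += 1
            seen.add(contact)
    return new, recycled

new, recycled = split_mqls(mql_events)
print(new, recycled)  # 3 1
```

Tracking the two counts separately, rather than one blended MQL number, is what lets you report the different downstream conversion rates the post argues for.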
When I first heard the term “nurture” about a decade ago, to me it was all about recycling. Nurture was what you did to people who were interested in your stuff, but who weren’t ready to buy now. The purpose, then, of nurture would be some combination of (1) maintaining awareness and positive opinion so that the customer would call when they were ready to buy, and (2) attempting to accelerate the customer’s buying timeframe by marketing the benefits of acting sooner rather than later.
Nurture, then, was a process that should take quarters or years — not days or weeks. Nurture could include emails, but it wouldn’t be limited to them. We might invite nurtured leads to local events, mail them schwag (aka, “dimensional pieces”), and even call them from time to time.
I now call this path “slow nurture” because marketers seem to increasingly define “nurture” as the process by which you take a new inquiry (or name) and turn them into an MQL. It becomes largely about email and is a speedy process that executes in hours, days, or maybe weeks. I now call this “fast nurture.”
Both types of nurture should involve point accumulation, use tracks, and be A/B tested. But there is a fundamental difference between fast nurture and slow nurture, related primarily to frequency.
This is what fast nurturing all too often feels like:
That’s why I also call fast nurture speed-bagging.
If you speed-bag someone who plans to buy in 12 months, what happens? You irritate the heck out of them. “Hey, I just wanted to read that white paper and you’ve emailed and called 4 times in a week. Go away.” Then they hit unsubscribe or junk-sender.
And that’s it. You’re done. You spent real money finding someone, they were the right person, they even have plans to buy — just not now — and you speed-bagged them into blocking your communications. Epic fail.
That’s why marketers need to think about Nurture, Fast and Slow. They need to never fast-nurture slow-nurture prospects. And they need to worry about just how much they are speed-bagging even the fast-nurture prospects. Particularly in markets where the challenge is more finding the right buyer at the right time than simply finding the right buyer, matching the pace of the nurture to the pace of the buyer is everything.
Perhaps it was the $168M FY14 operating loss. Maybe it was the $380M in financing raised during the last three years. Or the average quarterly burn rate of $23M. But somehow, I got sucked in.
I just had to know their CAC ratio. Of course, it’s not going to be easy to calculate. While they give us quarterly S&M expense, that’s only half the equation; we’re going to have to figure out, as best we can, quarterly new annual recurring revenue (ARR).
Billings as a Sales Metric
While many SaaS companies don’t disclose “billings,” Box does — but on an annual basis only — in their S-1.
Billings is an attempt to triangulate on new sales (or bookings) in a SaaS company. The standard way to calculate billings is to add revenue plus change in deferred revenue.
The idea is that if you want to know how “sales” went during a given period, revenue is not a great indicator because, in a SaaS company, revenue reflects how much you sold in prior periods, not the current one. So you look at deferred revenue to try to pick up the volume of new orders. The problem is that things quickly get complicated: (1) deferred revenue moves both down (as past deals convert into revenue) and up (as new deals are signed), and (2) deferred revenue is limited to deals that are prepaid. If a company does a constant business volume but suddenly starts doing a lot of two-year prepaids, deferred revenue will skyrocket; if, say, hard economic times drive loyal customers to ask for semi-annual billing, deferred revenue will plummet, all without any “real” change in the underlying subscription business. In addition, multi-year non-prepaid deals are invisible from a deferred revenue perspective (because there’s nothing, i.e., no cash prepayment, to defer).
In short, any metric built upon deferred revenue is only as good as deferred revenue at reflecting the business.
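Expressed as code, the standard calculation is a one-liner; the figures below are purely illustrative, not Box’s actuals:

```python
def billings(revenue, deferred_start, deferred_end):
    """Billings = revenue recognized in the period plus the change in
    the deferred revenue balance over that period."""
    return revenue + (deferred_end - deferred_start)

# Illustrative quarter: $10M recognized; deferred revenue grows from $14M to $18M
q_billings = billings(10.0, 14.0, 18.0)
print(q_billings)  # 14.0
```

Note that the same function shows the failure mode discussed above: if customers shift to semi-annual billing, the deferred balance shrinks and billings understate the real business.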
To demonstrate the relationship between billings and new ARR, I built a model which assumes a SaaS company that starts from scratch, increases new ARR added each quarter by $500K (i.e., $500K in its first quarter, $1M in its second, $1.5M in its third), does only one-year prepaid deals, and has a 90% renewal rate. Here’s what happens.
(You can download the spreadsheet with Box financial summary and the full version of the model here. Be sure to download as an Excel file, not a PDF.)
While in year one billings is equivalent to new ARR, as you build up the renewals base, it contributes more to revenue and muddies things up. For a company of the above size, growth, and renewal rate, the ratio of new ARR to billings ends up at 0.4.
When you take this same model and (manually) force-fit the new ARR numbers to try to approximate Box’s revenue and billings from 2012-2014, you get:
A CAC of ~1.6
In this case (and given my assumption set) you end up with a new-ARR/billings ratio of 0.6. To make life easier, I also calculated a new-ARR/revenue ratio (see the full sheet), which ends up around 0.8. I’ll use this number to calculate my CAC, which comes out to between 1.5 and 1.8. While not quite an idyllic 1.0 to 1.2, it’s well below 2.0 and helps explain why Box has been able to raise so much money: their growth has been deemed scalable.
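For readers who want to reproduce the arithmetic, here is a sketch of the estimation chain. The 0.8 new-ARR/revenue ratio is the one modeled above, but the dollar figures below are illustrative assumptions, not Box’s reported numbers:

```python
def estimated_cac(sm_expense_prior_q, revenue_current_q,
                  new_arr_to_revenue_ratio=0.8):
    """Estimate the CAC ratio as prior-quarter S&M expense divided by
    estimated new ARR, where new ARR is approximated as a fixed
    fraction of current-quarter revenue (the ~0.8 modeled above)."""
    est_new_arr = new_arr_to_revenue_ratio * revenue_current_q
    return sm_expense_prior_q / est_new_arr

# Illustrative: $40M prior-quarter S&M expense, $32M current-quarter revenue
cac = estimated_cac(40.0, 32.0)
print(round(cac, 2))  # 1.56
```

The result is only as good as the assumed new-ARR/revenue ratio, which is why the post quotes a 1.5 to 1.8 range rather than a point estimate.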
Billings = Ending ARR
In reviewing my models, it’s hard not to notice that billings for a period equals ending ARR for that period. This turns out to be true under my assumption set of subscription-only (no services), one-year deals only, and everything prepaid. Why? Because for any deal taken at any point during the year, we will recognize some percentage of it (X) and the rest (Y) will go to deferred revenue. The split between X and Y changes across the year, but X + Y equals the deal size at all times.
This is not true when you have consulting or do multi-year prepaid deals (which can make billings > ending ARR). It’s also not true when you do semi-annual billing (which can make billings < ending ARR).
If you assume these factors are roughly constant for any given company, then, even though uniformly inaccurate, this provides a simple way to approximate new ARR: take the difference in ending ARR between two periods, add a churn assumption, and bang, you have the new ARR for the period.
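That approximation can be sketched as follows; the numbers are illustrative, and the churn assumption is yours to supply:

```python
def approx_new_arr(arr_start, arr_end, churn_rate):
    """Approximate new ARR added during a period as the change in ending
    ARR plus an assumed amount of churned ARR, where churn_rate is the
    fraction of starting ARR lost during the period."""
    churned = churn_rate * arr_start
    return (arr_end - arr_start) + churned

# Illustrative: ARR grows from $50M to $60M with an assumed 2.5% quarterly churn
new_arr = approx_new_arr(50.0, 60.0, 0.025)
print(new_arr)  # 11.25
```

The logic is simply that the ARR pool must have been refilled by new sales before it could grow: growth of $10M on top of $1.25M of assumed churn implies roughly $11.25M of new ARR.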
Key Metrics, Cashflow, and the P&L
Here are some summarized key metrics (using yellow to highlight points of interest).
Year-over-year growth, while high at 97%, is slowly decelerating.
Gross margins are nice at nearly 80%.
Operating expenses are massive: 278% of sales in 1Q12 down to “only” 182% in 4Q14.
S&M expense is a seemingly very high 121% of revenues. This looks bad, but to really know what’s going on we need to examine the CAC, which looks pretty good.
Return on sales is -112%.
That burn rate sure grabs you: $22M per quarter.
In many ways you see a typical “go big or go home” cloud computing firm, burning boatloads of cash but acquiring customers in a reasonably efficient manner and doing a nice job with retention/cross-sell/up-sell as judged by their retention numbers. When you look at the big picture, I believe they see themselves in a winner-take-all battle vs. Dropbox, and in this case the strategy, while amazingly cash-consumptive, does make sense.
Here is a look at cashflow and billings:
And last, but certainly not least, here is the P&L:
I’m always amazed by the R&D spend of seemingly simple consumer services. They spent $46M in R&D last year … on what?
The $171M in S&M expense sure grabs your attention.
But now that you have an XML repository, what are you going to do with it? How might you deploy it? Simply having large masses of XML-converted data doesn’t necessarily mean that the data in this form is even useful.
Norm then explains the problem with using other languages against XML document databases:
The problem with other programming languages isn’t that they aren’t able to process XML, it’s that they aren’t able to process XML efficiently. Data has to be converted from XML to the language’s native data structures. Once converted, it must be manipulated with functions that don’t understand the underlying model and are, consequently, not always a good fit. This “impedance mismatch” causes confusion and can introduce errors. Finally, the programming language structures have to be converted back into XML. Each of these steps is tedious, time consuming, and introduces the possibility of errors. In a sophisticated application, this process may have to occur several times for each XML resource.
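To make that round trip concrete, here is what the steps look like in a general-purpose language. Python and its standard-library ElementTree are used purely as an illustration of the convert / manipulate / re-serialize cycle Norm describes; the document and the price-discount task are invented for the example:

```python
import xml.etree.ElementTree as ET

doc = "<books><book price='30'>XQuery</book><book price='45'>XSLT</book></books>"

# Step 1: convert the XML into the language's native data structures
tree = ET.fromstring(doc)
books = [(b.text, float(b.get("price"))) for b in tree.iter("book")]

# Step 2: manipulate with functions that don't understand the XML model
discounted = [(title, price * 0.9) for title, price in books]

# Step 3: convert the native structures back into XML
out = ET.Element("books")
for title, price in discounted:
    b = ET.SubElement(out, "book", price=f"{price:g}")
    b.text = title
print(ET.tostring(out, encoding="unicode"))
# <books><book price="27">XQuery</book><book price="40.5">XSLT</book></books>
```

Each of the three steps is a place where the “impedance mismatch” can introduce errors (note, for instance, that the price silently changed type from attribute string to float and back), which is exactly the friction an XML-native query language avoids.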
If all this looks interesting, the full article is here.
Oxford University Press has organized its reference works on African-Americans into a central repository it calls the African American Studies Center (AASC), which allows researchers to search through images and articles, arranging them in chronological order.
AASC is not only a very cool MarkLogic-based application, but also — perhaps more importantly — it’s just one slice of Oxford’s content.
Once a publisher builds their content application platform, it is relatively easy to take different slices of their content to build new and different information products. For example, Oxford Islamic Studies Online (OISO) is built on the same platform as the AASC, and I’m sure OISO’s marginal development cost was reduced because it could leverage the fixed costs invested in the development of OUP’s (MarkLogic-based) publishing platform.
[Revised, rewritten, and replacing a post from yesterday]
One question we encounter with our Information and Media customers is whether they should buy MarkLogic Server and build an application on top of it, or use a SaaS offering (which may or may not be based on MarkLogic) and effectively rent the use of an application to meet their online publishing needs.
The primary arguments in favor of the rent (SaaS) approach are:
You get up and running faster because you’re renting the use of an existing application
You have lower up-front fees because you need neither to build your application nor buy the hardware/software platform on which to run it
You can focus on what matters because you are liberated from the nitty-gritty of building and deploying production systems
The primary arguments in favor of the build approach are:
You create a unique offering which you can use to differentiate from your competition
Your costs are potentially lower over the mid-term (SaaS’s relatively high annual payments reverse the initial savings over a few years; if you don’t believe me, remember that Wall Street values a dollar of SaaS revenue at about 2-3x a dollar of perpetual revenue)
You create a strategic platform on which you build future applications, reducing the marginal cost of experimentation and new product development
To me, SaaS is not a religious issue; it’s a practical one.
While we typically sell our software on a perpetual license basis, we are nevertheless a big user of SaaS solutions at Mark Logic. We happily use Salesforce and somewhat less happily use NetSuite. I was also a champion of bringing Salesforce into Business Objects, where we became one of their earliest large enterprise customers. (As I told IT at the time: if you won’t treat me as a customer, then I’ll go find someone who will.)
Turning back to the question of publishers and SaaS, like most questions in business, the answer should derive from strategy.
If you are trying to compete solely on the basis of your proprietary content, then you should consider a “rent” strategy.
If you are trying to compete on the basis of mixing content and its delivery mechanism, then you should consider a “build” strategy.
If you are in between, then you’ll need to figure out where you are on the continuum and what you’re willing to trade for what.
As I always say, there are two things that money can’t buy: love and competitive advantage. Applied here, if you can rent a solution then your competitor down the street can rent it, too, and no amount of application configuration is going to result in competitive advantage (or disadvantage) for either of you.
What does this mean? It means that SaaS is great for what Geoffrey Moore calls “context” and rotten for what he calls “core.” An excerpt from the referenced page:
Core (see Core/context analysis): Any activity which creates sustainable differentiation in the target market, resulting in premium prices or increased volume. Core management seeks to dramatically outperform all competitors within the domain of core.
Context (see Core/context analysis): Any activity which does not differentiate the company from the customers’ viewpoint in the target market. Context management seeks to meet (but not exceed) appropriate accepted standards in as productive a manner as possible.
That’s why we happily use Salesforce and NetSuite at Mark Logic: we aren’t trying to differentiate on the basis of our accounts receivable or pipeline management systems. (We are trying to differentiate on technology, market focus, and services excellence.)
So, for publishers:
The more your basis of competition is ownership of a proprietary content set, the more delivery becomes context, and the more you should consider SaaS
The more your basis of competition is (1) uniting your content with other content, (2) delivering content in unique in-context ways, and (3) rapid innovation in online product development, the more delivery is core, and the more you should build custom applications (i.e., new information products) on a standardized platform.
I’m Dave Kellogg, technology executive, investor, independent director, adviser, and blogger. I’m also a hiker, oenophile, and fly fisher.
From 2012 to 2018, I was CEO of cloud enterprise performance management vendor Host Analytics, where we quintupled ARR while halving customer acquisition costs in a highly competitive market, ultimately selling the company in a private equity transaction.
Previously, I was SVP/GM of Service Cloud at Salesforce and CEO at NoSQL database provider MarkLogic. Before that, I was CMO at Business Objects for nearly a decade as we grew from $30M to over $1B. I started my career in technical and product marketing positions at Ingres and Versant.
I love disruption, startups, and Silicon Valley and have had the pleasure of working in varied capacities with companies including ClearedIn, FloQast, GainSight, Lecida, MongoDB, Recorded Future, Tableau and TopOPPs. I currently sit on the boards of Alation (data catalogs) and Nuxeo (content management) and previously sat on the boards of agtech leader Granular (acquired by DuPont for $300M) and big data leader Aster Data (acquired by Teradata for $325M).
I periodically speak to strategy and entrepreneurship classes at the Haas School of Business (UC Berkeley) and Hautes Études Commerciales de Paris (HEC).