Kellblog’s 10 Predictions for 2014

Since it is the season of predictions, I thought I’d offer up a few of my own for 2014, based on my nearly three decades of experience working in enterprise software with databases, BI tools, and enterprise applications.

See the bottom for my disclaimer, and off we go.  Here are my ten predictions for 2014.

  • Despite various ominous comparisons to 1914 made by The Economist, I think 2014 is going to be a good year for Silicon Valley, and I think the tech IPO market will continue to be strong.  While some Bubble 2.0 anxiety is understandable, remember that while some valuations today may seem high, the IPO bar is much higher today (at around $50M in TTM revenues) than it was 13 years ago, when you could go public on $0 to $5M in revenues.  In addition, remember that most enterprise software companies (and many Internet companies) today rely on subscription revenue models (i.e., SaaS), which are much more reliable than the perpetual license streams of the past.  Not all exuberance is irrational.
  • Cloud computing will continue to explode.  IDC predicts that aggregate cloud spending will exceed $100B in 2014, with 25% growth — amazing given the scale.  Those are big numbers, but think about this:  some 15 years after Salesforce.com was founded, its head-pin category, sales force automation (SFA), is still only around 40% penetrated by the cloud.  ERP is less than 10% in the cloud.  EPM is less than 5% in the cloud.  As Bill Gates once said about prognostication, “we always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”  IT is going to the cloud, inexorably, but change in IT never happens overnight.
  • Big Data hype will peak.   I remember the first time I heard the term “big data” (in about 2008 when I was on the board of Aster Data) and thinking:  “wow, that’s good.”  Turns out my marketing instincts were spot on.  Every company today that actually is — or isn’t — a Big Data play is dressing up as one, which creates a big problem because the term quickly starts to lose meaning.  As a result, Big Data today is nearing the peak of Gartner’s hype cycle.  As a term it will start to fall off, but real Big Data technologies such as NoSQL databases and predictive analytics will continue to have a bright future.
  • The market will be unable to supply sufficient Data Science talent.  If someone remade The Graduate today, they’d change Mr. McGuire’s line about “plastics” to “data science.”  Our ability to amass data and create analytics technology is quickly surpassing our ability to use it.  Job postings for data scientists were up 15,000% in 2012 over 2011.  Colleges are starting to offer data science degrees (for example, Berkeley and Northwestern).  There’s even a startup, Udacity, specifically targeting the need for data science education.  Because of the scarcity of data science talent, the specialization required to correctly use it, and the lack of required scale to build data science teams, data science consultancies like Palantir and Mu Sigma will continue to flourish.
  • Privacy will remain center stage.  Trust in “Don’t Be Evil” Google and Facebook has never been particularly high.  Nevertheless, it seems like the average person has historically felt “you can do whatever you want with my personal data if you want to pitch me an advertisement” — but, thanks to Edward Snowden – we now know we can add, “and if the government wants to use that data to stop a terrorist attack, then back off.”  It’s an odd asymmetry.  These are complex questions, but in a world where the cost of data collection will converge to free, will the privacy violation be in collecting the data or in analyzing it?  In a world where one trusted the government to adequately control the querying and access (i.e., where it took a warrant from a non-secret court), I’d argue the query standard might be good enough.  Regardless, the debate sparked thus far will continue to burn in 2014 and tech companies will very much remain in the center of it.
  • Mobile will continue to drive consumer companies like Dropbox and Evernote, but also enterprise companies like Box, Clari, Expensify, and MobileIron.  Turns out the enterprise killer app for mobile was less about getting enterprise applications to run on mobile devices and more about device proliferation, uniform access to content, and eventually security and management.  (And since I’m primarily an enterprise blogger, I won’t even mention social à la SnapChat or mobile gaming).  As one VC recently told me over dinner, “God bless mobile.”  Amen in 2014.
  • Social becomes a feature, not an app.  When I first saw Foursquare in 2010, I thought it should be the example in the venture capital dictionary for “feature, not company.”  Location-awareness has definitely become a feature and these days I do more check-ins on Facebook than Foursquare.  I felt the same way when I worked at Salesforce.com and we were neck deep in the “social enterprise” vision.  When I saw Chatter, I thought “cool, but who needs yet another communications platform.”  Then I realized you could follow a lead, a case, or an opportunity and I was hooked.  But those are all feature use-cases, not application or company use-cases.  Given the pace of Salesforce, they fell in love with, married, and divorced social faster than most vendors could figure out their product strategy.  In the end, social should be an important feature of an enterprise application, almost a fabric built across modules.  I think that vision ends up getting implemented in 2014.  (Particularly if Microsoft ends up putting in David Sacks as its next CEO as some speculate.)
  • SAP’s HANA strategy actually works.  I was one of relatively few people who was absolutely convinced that SAP’s $5.8B purchase of Sybase in 2010 was more about databases than mobile.  SAP is clearly crafting a strategy to move both analytics and transactional database processing onto HANA and they have been doggedly consistent about HANA and its importance to the firm going forward.  They have been trying for decades to eliminate their dependency on Oracle — e.g., the 1997 Adabas D acquisition from Software AG — and I believe this time they will finally succeed.  In addition, they will succeed — quite ironically — with their ingredient-branding strategy around HANA:  using a database to differentiate an application suite, something that they themselves would have seen as heresy 20 years ago.
  • Good Data goes public.  Cloud-based BI tools have had a tough slog over the years.  Some good companies were too early to market and failed (e.g., LucidEra).  Birst, another early entrant, certainly hasn’t had an easy time over its ten-year history.  Personally, while I was always a fan of cloud-based applications (having become a big Salesforce customer in 2003), I always worried that with cloud-based BI tools, you’d have too much of the nothing-to-analyze problem.  Good Data got around that problem early on by adopting a Crystal-like OEM strategy, licensing their tools through SaaS applications vendors.  They later evolved to a general cloud-based BI platform and applications strategy.  The company was founded in 2007, has raised $75M in VC, is reportedly doing very well, and an IPO seems a likely event in its future.  I’m calling 2014.
  • Adaptive Planning gets acquired by NetSuite.  Adaptive Planning was founded in 2003 as a cloud-based planning company and — despite both aspirations and claims to the contrary — in my estimation continues to play the role of the low-priced, cheap-and-cheerful planning solution for small and medium businesses.  That market position, combined with an existing, long-term strategic relationship whereby NetSuite resells Adaptive as NetSuite Financial Planning, makes me believe that 2014 will be the year that NetSuite finally pulls the trigger and acquires Adaptive Planning.  I think this deal could go down one of two ways.  If Adaptive continues to perform as they claim, then a potential S-1 filing could serve as a trigger for NetSuite (much as Crystal Decisions’ S-1 served as a trigger for Business Objects).  Or, if Adaptive hits a rough road in 2014 for any reason (including the curse of the new headquarters), then that could trigger NetSuite with a value-shopper impulse leading to the same conclusion.

I should end with a bonus prediction (#11):  that Host Analytics, our customers, and my colleagues will enjoy a successful 2014, continuing to execute on our cloud strategy to put the E back in EPM — focus and leadership in the enterprise segment of the market — and that we will continue both to win high-growth companies who want an EPM solution with which they can scale and to liberate enterprises from costly and painful Hyperion implementations and upgrades.

Finally, let me conclude by wishing everyone a Happy New Year and great business success in 2014.

Disclaimers

  • See my FAQ to understand my various allegiances and disclaimers.
  • Remember I am the CEO of Host Analytics so I have a de facto pro-Host Analytics viewpoint.  
  • Predictions are opinion:  I have mine; yours may differ.
  • Finally, remember the famous Yogi Berra quote:  predictions are hard, especially about the future.

The Pillorying of MarkLogic: Why Selling Disruptive Technology To the Government is Hard and Risky

There’s a well-established school of thought that high-tech startups should focus on a few vertical markets early in their development.  The question is whether government should be one of them.

The government seems to think so.  They run a handful of programs to encourage startups to focus on government.  Heck, the CIA even has a venture arm right on Sand Hill Road, In-Q-Tel, whose mission is to find startups who are not focused on the Intelligence Community (IC) and to help them find initial customers (and provide them with a dash of venture capital) to encourage them to do so.

When I ran MarkLogic between mid-2004 and 2010, we made the strategic decision to focus on government as one of our two key verticals.  While it was then, and still is, rather contrarian to do so, we nevertheless decided to focus on government for several reasons.

  • The technology fit was very strong.  There are many places in government, including the IC, where they have a bona fide need for a hybrid database / search engine, such as MarkLogic.
  • Many people in government were tired of the Oracle-led oligopoly in the RDBMS market and were seeking alternatives.  (Think:  I’m tired of writing Oracle $40M checks.)  While this was true in other markets, it was particularly true in government because their problems were compounded by lack of good technical fit — i.e., they were paying an oligopolist a premium price for technology that was not, in the end, terribly well suited to what they were doing.
  • Unlike other markets (e.g., Finance, Web 2.0) where companies could afford the high-caliber talent able to use the then-new open source NoSQL alternatives, government — with the exception of the IC — was not swimming in such talent.  Ergo, government really needed a well-supported enterprise NoSQL system usable by a more typical engineer.

The choice had always made me nervous for a number of reasons:

  • Government deals were big, so it could lead to feast-or-famine revenue performance unless you were able to figure out how to smooth out the inherent volatility.
  • Government deals ran through systems integrators (SIs), which could greatly complexify the sales cycle.
  • Government was its own tribe, with its own language, and its own idiosyncrasies (e.g., security clearances).  While bad from the perspective of commercial expansion, these things also served as entry barriers that, once conquered, should provide a competitive advantage.

The only thing I hadn’t really anticipated was the politics.

It had never occurred to me, for example, that in a $630M project — where MarkLogic might get maybe $5 to $10M — someone would try to blame the failure of what appears to be one of the worst-managed projects in recent history on a component earning perhaps 1% of the fees.

It makes no sense.  But now, for the second time, the New York Times has written an article about the HealthCare.gov fiasco where MarkLogic is not only one of very few vendors even mentioned but somehow implicated in the failures because it is different.

HealthCare.gov

Let me start with a few of my own observations on HealthCare.gov from the sidelines.  (Note that I, to my knowledge, was never involved with the project during my time at MarkLogic.)

From the cheap seats the problems seem simple:

  • Unattainable timelines.  You don’t build a site “just like Amazon.com” using government contractors in a matter of quarters.  Amazon has been built over the course of more than a decade.
  • No Beta program.  It’s incomprehensible to me that such a site would go directly from testing into production without quarters of Beta.  (Remember, not so long ago, that Google ran Betas for years?)
  • No general oversight.  It seems that there was no one playing the general contractor role.  Imagine if you built a house with plumbers, carpenters, and electricians not coordinated by a strong central resource.
  • Insufficient testing.  The absent Beta program aside, it seems the testing phase lasted only weeks, that certain basic functionality was not tested, and that it’s not even clear if there was a code-freeze before testing.
  • Late changes.  Supporting the idea that there was no code freeze are claims that the functional spec was changing weeks before the launch.

Sadly, these are not rare problems on a project of this scale.  This kind of stuff happens all the time, and each of these problems is a hallmark of a “train wreck” software development project.

To me, guessing from a distance, it seems pretty obvious what happened.

  • Someone who didn’t understand how hard it would be to build ordered up a website of very high complexity on a totally unrealistic timeframe.
  • A bunch of integrators (and vendors) who wanted their share of the $630M put in bids, each probably convincing themselves that if things went very well they could make the deadlines for their part of the system or, if not, cut some scope.  (Remember, you don’t win a $50M bid by saying “the project is crazy and the timeframe unrealistic.”)
  • Everybody probably did their best but knew deep down that the project was failing.
  • Everyone was afraid to admit that the project was failing because nobody likes to deliver bad news, and it seems that there was no one central coordinator whose job it was to do so.

Poof.  It happens all the time.  It’s why the world has generally moved away from big-bang projects and towards agile methodologies.

While sad, this kind of story happens.  The question is how the New York Times ends up writing two articles where the failure is somehow blamed on MarkLogic.  Why is MarkLogic even mentioned?  This is the story of a project run amok, not the story of a technology component failure.

Politics and Technology

The trick with selling disruptive technology to the government is that you encounter two types of people.

  • Those who look objectively at requirements and try to figure out which technology can best do the job.  Happily, our government contains many of these types of people.
  • Those who look at their own skill sets and view any disruptive technology as a threat.

I met many Oracle-DBA-lifers during my time working with the government.  And I’m OK with their personal decision to stop learning, not refresh their skills, not stay current on technology, and to want to ride a deep expertise in the Oracle DBMS into a comfortable retirement.  I get it.  It’s not a choice I’d make, but I can understand.

What I cannot understand, however, is when someone takes a personal decision and tries to use it as a reason to not use a new technology.  Think:  I don’t know MarkLogic, it is new, ergo it is a threat to my personal career plan, and ergo I am opposed to using MarkLogic, prima facie, because it’s not aligned with my personal interests.  That’s not OK.

To give you an idea of how warped this perspective can get (and while this may be urban myth), I recall hearing a story that a Federal contractor once called a whistle-blower line to report the use of MarkLogic on a system instead of Oracle.  All I could think of was Charlton Heston at the end of Soylent Green saying, “I’ve seen it happening … it’s XML … they’re making it out of XML.”

The trouble is that these folks exist and they won’t let go.  The result:  when a $630M poorly managed project gets in trouble, they instantly raise and re-raise decisions made about technology with the argument that “it’s non-standard.”

Oracle was non-standard in 1983.  Thirty years later it’s too standard (i.e., part of an oligopoly) and not adapted to the new technical challenges at hand.  All because some bright group of people wanted to try something new, to meet a new challenge, that cost probably a fraction of what Oracle would have charged, the naysayers and Oracle lifers will challenge it endlessly saying it’s “different.”

Yes, it is different.  And that, as far as I can tell, was the point.  And if you think that looking at 1% of the costs is the right way to diagnose a struggling $630M project, I’d beg to differ.  Follow the money.

###

FYI, in researching this post, I found this just-released HealthCare.gov progress report.

The Customer Acquisition Cost (CAC) Ratio: Another Subtle SaaS Metric

The software-as-a-service (SaaS) space is full of seemingly simple metrics that can quickly slip through your fingers when you try to grasp them.  For example, see Measuring SaaS Renewals Rates:  Way More Than Meets the Eye for a two-thousand-word post examining the many possible answers to the seemingly simple question, “what’s your renewal rate?”

In this post, I’ll do a similar examination to the slightly simpler question, “what’s your customer acquisition cost (CAC) ratio?”

I write these posts, by the way, not because I revel in the detail of calculating SaaS / cloud metrics, but rather because I cannot stand when groups of otherwise very intelligent people have long discussions based on ill-defined metrics.  The first rule of metrics is to understand what they are and what they mean before entertaining long discussions and/or making important decisions about them.  Otherwise you’re just counting angels on pinheads.

The intent of the CAC ratio is to determine the cost associated with acquiring a customer in a subscription business.  When trying to calculate it, however, there are six key issues to consider:

  • Months vs. years
  • Customers vs. dollars
  • Revenue on top vs. bottom
  • Revenue vs. gross margin
  • The cost of customer success
  • Time periods of S&M

Months vs. Years

The first question — which relates not only to CAC but also to many other SaaS metrics — is whether your business is inherently monthly or annual.

Since the SaaS movement started out with monthly pricing and monthly payments, many SaaS businesses conceptualized themselves as monthly and thus many of the early SaaS metrics were defined in monthly terms (e.g., monthly recurring revenue, or MRR).

While for some businesses this undoubtedly remains true, for many others – particularly in the enterprise space – the real rhythm of the business is annual.  Salesforce.com, the enterprise SaaS pioneer, figured this out early on as customers actually encouraged the company to move to an annual rhythm, for among other reasons, to avoid the hassle associated with monthly billing.

Hence, many SaaS companies today view themselves as in the business of selling annual subscriptions and talk not about MRR, but ARR (annual recurring revenue).

Customers vs. Dollars

If you ask some cloud companies their CAC ratio, they will respond with a dollar figure – e.g., “it costs us $12,500 to acquire a customer.”  Technically speaking, I’d call this customer acquisition cost, and not a cost ratio.

There is nothing wrong with using customer acquisition cost as a metric and, in fact, the more your business is generally consistent and the more your customers resemble each other, the more logical it is to say things like, “our average customer costs $2,400 to acquire and pays us $400/month, so we recoup our customer acquisition cost in six months.”
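
That payback arithmetic is simple enough to sketch in code; the $2,400 and $400 figures are just the illustrative numbers from above:

```python
def payback_months(acquisition_cost, monthly_revenue):
    """Months of revenue needed to recoup the cost of acquiring one customer."""
    return acquisition_cost / monthly_revenue

# The illustrative customer from the text: $2,400 to acquire, $400/month.
print(payback_months(2400, 400))  # -> 6.0
```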

However, I believe that in most SaaS businesses:

  • The company is trying to run a “velocity” and an “enterprise” model in parallel.
  • The company may also be trying to run a freemium model (e.g., with a free and/or a low-price individual subscription) as well.

Ergo, your typical SaaS company might be running three business models in parallel, so wherever possible, I’d argue that you want to segment your CAC (and other metric) analysis.

In so doing, I offer a few generic cautions:

  • Remember to avoid the easy mistake of taking “averages of averages,” which is incorrect because it does not reflect weighting the size of the various businesses.
  • Remember that in a bi-modal business, the average of the two real businesses represents a fictional mathematical middle.

[Table:  averages of averages]

For example, the “weighted avg” column above is mathematically correct, but it contains relatively little information.  In the same sense that you’ll never find a family with 1.8 children, you won’t find a customer with $12.7K in revenue/month.  The reality is not that the company’s average months to recoup CAC is a seemingly healthy 10.8 — the reality is the company has one very nice business (SMB) where it takes only 6 months to recoup CAC and one very expensive one where it takes 30.  How you address the 30-month CAC recovery is quite different from how you might try to squeeze a month or two out of the 10.8.
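
The trap is easy to demonstrate in code.  The segment figures below are entirely hypothetical, chosen only to echo the 6-month vs. 30-month split described above:

```python
# Hypothetical segments: monthly revenue per customer, CAC per customer, customer count.
segments = {
    "SMB":        {"monthly_rev": 2_000,   "cac": 12_000,    "customers": 900},
    "Enterprise": {"monthly_rev": 50_000,  "cac": 1_500_000, "customers": 100},
}

# Each real business has a clear payback story:
for name, s in segments.items():
    print(name, "months to recoup CAC:", s["cac"] / s["monthly_rev"])

# The blended "average customer" is a mathematical fiction that
# describes neither business:
total = sum(s["customers"] for s in segments.values())
avg_rev = sum(s["monthly_rev"] * s["customers"] for s in segments.values()) / total
avg_cac = sum(s["cac"] * s["customers"] for s in segments.values()) / total
print("blended months to recoup:", avg_cac / avg_rev)
```

Note that the blended payback is not even a weighted average of the segment paybacks — a ratio of averages is not the average of ratios — which is one more reason to segment the analysis rather than blend it.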

Because customers come in so many different sizes, I dislike presenting CAC as an average cost to acquire a customer and prefer to define CAC as an average cost to acquire a dollar of annual recurring revenue.

Revenue on Top vs. Bottom

When I first encountered the CAC ratio, it was in a Bessemer white paper, and it looked like this:

[Figure:  Bessemer’s CAC ratio definition]

In English, Bessemer defined the 3Q08 CAC as the annualized amount of incremental gross margin in 3Q08 divided by total S&M expense in 2Q08 (the prior quarter).

Let’s put aside (for a while) the choice to use gross margin as opposed to revenue (e.g., ARR) in the numerator.  Instead let’s focus on whether revenue makes more sense in the numerator or the denominator.  Should we think of the CAC ratio as:

  • The amount of S&M we spend to generate $1 of revenue
  • The amount of revenue we get per $1 of S&M cost

To me, Bessemer defined the ratio upside down.  The customer acquisition cost ratio should be the amount of S&M spent to acquire a dollar of (annual recurring) revenue.

Scale Venture Partners evidently agreed  and published a metric they called the Magic Number:

Take the change in subscription revenue between two quarters, annualize it (multiply by four), and divide the result by the sales and marketing spend for the earlier of the two quarters.

This changes the Bessemer CAC to use subscription revenue, not gross margin, and inverts it.  I think this is very close to how CAC should be calculated.  See below for more.
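
As I read the two definitions, both can be sketched from quarterly figures like this (all numbers hypothetical):

```python
def magic_number(sub_rev_now, sub_rev_prior, sm_prior):
    """Scale's Magic Number: annualized change in subscription revenue
    divided by the prior quarter's S&M expense."""
    return 4 * (sub_rev_now - sub_rev_prior) / sm_prior

def bessemer_cac(gm_now, gm_prior, sm_prior):
    """Bessemer's original CAC: annualized incremental gross margin
    divided by the prior quarter's S&M expense (the un-inverted form)."""
    return 4 * (gm_now - gm_prior) / sm_prior

# Hypothetical quarter: subscription revenue grew from $10.0M to $11.0M,
# at a 70% gross margin, on $4.0M of prior-quarter S&M.
print(magic_number(11.0, 10.0, 4.0))   # -> 1.0
print(bessemer_cac(7.7, 7.0, 4.0))     # same quarter at 70% gross margin
```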

Bessemer later (kind of) conceded the inversion — while they side-stepped redefining the CAC, per se, they now emphasize a new metric called “CAC payback period” which puts S&M in the numerator.

Revenue vs. Gross Margin

While Bessemer has written some great papers on Cloud Computing (including their Top Ten Laws of Cloud Computing and Thirty Q&A that Every SaaS Revenue Leader Needs to Know) I think they have a tendency to over-think things and try to extract too much from a single metric in defining their CAC.  For example, I think their choice to use gross margin, as opposed to ARR, is a mistake.

One metric should be focused on measuring one specific item. To measure the overall business, you should create a great set of metrics that work together to show the overall state of affairs.

[Figure:  the leaky bucket]

I think of a SaaS company as a leaky bucket.  The existing water level is a company’s starting ARR.  During a time period the company adds water to the bucket in form of sales (new ARR), and water leaks out of the bucket in the form of churn.

  • If you want to know how efficient a company is at adding water to the bucket, look at the CAC ratio.
  • If you want to know what happens to water once in the bucket, look at the renewal rates.
  • If you want to know how efficiently a company runs its SaaS service, look at the subscription gross margins.

There is no need to blend the efficiency of operating the SaaS service with the efficiency of customer acquisition into a single metric.  First, they are driven by different levers.  Second, to do so invariably means that being good at one of them can mask being bad at the other.  You are far better off, in my opinion, looking at these three important efficiencies independently.
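
The leaky-bucket view, with its three independent efficiencies, can be sketched as follows (all inputs hypothetical):

```python
# One quarter of a hypothetical SaaS business, in $M.
starting_arr = 40.0
new_arr      = 5.0   # water poured in by sales
churned_arr  = 1.0   # water leaking out
sm_prior_qtr = 6.0   # prior quarter's S&M expense
sub_revenue  = 10.0  # quarterly subscription revenue
sub_cogs     = 3.0   # quarterly cost of running the service

ending_arr = starting_arr + new_arr - churned_arr

# Three separate efficiencies, each with its own lever:
cac_ratio        = sm_prior_qtr / new_arr                       # adding water
gross_renewal    = (starting_arr - churned_arr) / starting_arr  # keeping water
sub_gross_margin = (sub_revenue - sub_cogs) / sub_revenue       # running the service

print(ending_arr)        # -> 44.0
print(cac_ratio)         # -> 1.2
print(gross_renewal)     # -> 0.975
print(sub_gross_margin)  # -> 0.7
```

Blending any two of these into one number would let strength in one mask weakness in another, which is exactly the point of keeping them separate.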

The Cost of Customer Success

Most SaaS companies have “customer success” departments that are distinct from their customer support departments (which are accounted for in COGS).  The mission of the customer success team is to maximize the renewals rate – i.e., to prevent water from leaking out of the bucket – and towards this end they typically offer a form of proactive support and adoption monitoring to ferret out problems early, fix them, and keep customers happy so they will renew their subscriptions.

In addition, the customer success team often handles basic upsell and cross-sell, selling customers additional seats or complementary products.  Typically, when a sale to an existing customer crosses some size or difficulty threshold, it will be kicked back to sales.  For this reason, I think of customer success as handling incidental upsell and cross-sell.

The question with respect to the CAC is what to do with the customer success team.  They are “sales” to the extent that they are renewing, upselling, and cross-selling customers.  However, they are primarily about ARR preservation as opposed to new ARR.

My preferred solution is to exclude both the results from and the cost of the customer success team in calculating the CAC.  That is, my definition of the CAC is:

[Figure:  my CAC ratio definition]

I explicitly exclude the cost of customer success in the numerator and exclude the effects of churn in the denominator by looking only at the new ARR added during the quarter.  This formula works on the assumption that the customer success team is selling a relatively immaterial amount of new ARR (and that its primary mission instead is ARR preservation).  If that is not true, then you will need to exclude both the new ARR from customer success as well as its cost.
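
In code, my definition looks roughly like this; the split of S&M spend into selling/marketing vs. customer success below is a hypothetical illustration:

```python
def cac_ratio(sm_expense_prior_qtr, cs_cost_prior_qtr, new_arr_this_qtr):
    """S&M expense (excluding customer success) from the prior quarter,
    divided by new ARR added this quarter.  Churn is deliberately ignored:
    we count only water added to the bucket, not water that leaked out."""
    return (sm_expense_prior_qtr - cs_cost_prior_qtr) / new_arr_this_qtr

# Hypothetical quarter: $5.5M of S&M, of which $0.5M is customer success,
# preceding a quarter with $4.0M of new ARR.
print(cac_ratio(5.5, 0.5, 4.0))  # -> 1.25
```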

I like this formula because it keeps you focused on what the ratio is called:  customer acquisition cost.  We use revenue instead of gross margin and we exclude the cost of customer success because we are trying to build a ratio to examine one thing:  how efficiently do I add new ARR to the bucket?  My CAC deliberately says nothing about:

  • What happens to the water once S&M pours it in the bucket.  A company might be tremendous at acquiring customers, but terrible at keeping them (e.g., offer a poor quality service).  If you look at net change in ARR across two periods then you are including both the effects of new sales and churn.  That is why I look only at new ARR.
  • The profitability of operating the service.  A company might be great at acquiring customers but unable to operate its service at a profit.  You can see that easily in subscription gross margins and don’t need to embed it in the CAC.

There is a problem, of course.  For public companies you will not be able to calculate my CAC because in all likelihood customer success has been included in S&M expense but not broken out and because you can typically only determine the net change in subscription revenues and not the amounts of new ARR and churn.  Hence, for public companies, the Magic Number is probably your best metric, but I’d just call it 1/CAC.

My definition is pretty close to that used by Pacific Crest in their annual survey, which uses yet another slightly different definition of the CAC:  how much do you spend in S&M for a dollar of annual contract value (ACV) from a new customer?

(Note that many vendors include first-year professional services in their definition of ACV which is why I prefer ARR.  Pacific Crest, however, defines ACV so it is equivalent to ARR.)

I think Pacific Crest’s definition has very much the same spirit as my own.  I am, by comparison, deliberately simpler (and sloppier) in assuming that customer success is not providing a lot of new ARR (which is not to say that a company is not making significant sales to its customer base — just that those opportunities are handed back to the sales function).

Let’s see the distribution of CAC ratios reported in Pacific Crest’s recent, wonderful survey:

[Figure:  distribution of CAC ratios from the Pacific Crest survey]

Wow.  It seems like a whole lot of math and analysis to come back and say:  “the answer is 1.”

But that’s what it is.  A healthy CAC ratio is around 1, which means that a company’s S&M investment in acquiring a new customer is repaid in about a year.  Given COGS associated with running the service and a company’s operating expenses, this implies that the company is not making money until at least year 3.  This is why higher CACs are undesirable and why SaaS businesses care so much about renewals.

Technically speaking, there is no absolute “right” answer to the CAC question in my mind.  Ultimately the amount you spend on anything should be related to what it’s worth, which means we need to relate customer acquisition cost to customer lifetime value (LTV).

For example, a company whose typical customer lifetime is 3 years needs to have a CAC well less than 1, whereas a company with a 10 year typical customer lifetime can probably afford a CAC of more than 2.  (The NPV of a 10-year subscription increasing price at 3% with a 90% renewal rate and discount at 8% is nearly $7.)

Time Periods of S&M Expense

Let me end by taking a practical position on what could be a huge rat-hole if examined from first principles.  The one part of the CAC we’ve not yet challenged is the use of the prior quarter’s sales and marketing expense.  That basically assumes a 90-day sales cycle – i.e., that total S&M expense from the prior quarter is what creates ARR in the current quarter.  In most enterprise SaaS companies this isn’t true.  Customers may engage with a vendor over a period of a year before signing up.  Rather than creating some overlapped ramp to try and better model how S&M expense turns into ARR, I generally recommend simply using the prior quarter for two reasons:

  • Some blind faith in offsetting-errors theory.  (E.g., if 10% of this quarter’s S&M won’t benefit us for a year, then 10% of the spend from a year ago did the same thing, so unless we are growing very quickly this will roughly cancel out.)
  • Comparability.  Regardless of its fundamental correctness, you will have nothing to compare to if you create your own “more accurate” ramp.
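
Mechanically, the prior-quarter convention looks like this over a hypothetical series of quarters:

```python
# Hypothetical quarterly series, in $M.
sm_spend = [3.0, 3.5, 4.0, 4.5]   # S&M expense by quarter
new_arr  = [2.5, 3.0, 3.6, 4.2]   # new ARR added by quarter

# CAC for quarter t uses S&M from quarter t-1, so the first
# quarter in the series has no computable CAC.
cac_by_quarter = [round(sm_spend[t - 1] / new_arr[t], 3)
                  for t in range(1, len(new_arr))]
print(cac_by_quarter)  # -> [1.0, 0.972, 0.952]
```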

I hope you’ve enjoyed this journey of CAC discovery.  Please let me know if you have questions or comments.

Accenture’s Key Considerations in SaaS Financial Applications

The first stop in the Host Analytics cloud EPM roadshow was held today before a packed house at the lovely Four Seasons Hotel in Palo Alto.  The event featured a keynote address from Accenture Managing Director Tamara Emerson.

One of her slides covered considerations for whether you should look at SaaS/cloud financial systems.  Here is the list of criteria (i.e., if you are experiencing any of these issues, you should consider cloud financial applications):

  • Financial systems are not providing the information necessary to make solid business decisions.
  • Financial systems cannot easily adapt to changing business processes.
  • Delivered SaaS business processes fit business needs.
  • No current solution exists, or financial systems are on an unsupported version.
  • Financial systems support is difficult to maintain and the needed skillsets are not readily available in the company / marketplace.
  • Processes and capabilities fall behind industry benchmarks.

A full copy of Tamara’s slides is embedded below.


Thoughts on MongoDB’s Humongous $150M Round

Two weeks ago MongoDB, formerly known as 10gen, announced a massive $150M funding round said to be the largest in the history of databases, led by Fidelity, Altimeter, and Salesforce.com with participation from existing investors Intel, NEA, Red Hat, and Sequoia.  This brings the total capital raised by MongoDB to $231M, making it the best-funded database / big data technology of all time.

What does this mean?

The two winners of the next-generation NoSQL database wars have been decided:  MongoDB and Hadoop.  The faster the runners-up figure that out, the faster they can carve off sensible niches on the periphery of the market instead of running like decapitated chickens in the middle. [1]

The first reason I say this is because of the increasing returns (or, network effects) in platform markets.  These effects are weak to non-existent in applications markets, but in core platform markets like databases, the rich invariably get richer.  Why?

  • The more people that use a database, the easier it is to find people to staff teams so the more likely you are to use it.
  • The more people that use a database, the richer the community of people you can leverage to get help.
  • The more people that build applications atop a database, the less perceived risk there is in building a new application atop it.
  • The more people that use a database, the more jobs there are around it, which attracts more people to learn how to use it.
  • The more people that use a database, the cooler it is seen to be which in turn attracts more people to want to learn it.
  • The more people that use a database, the more likely major universities are to teach how to use it in their computer science departments.

To see just how strong MongoDB has become in this regard, see here.  My favorite analysis is the 451 Group’s LinkedIn NoSQL skills analysis, below.

[Chart: 451 Group analysis of LinkedIn NoSQL skills]

This is why betting on horizontal underdogs in core platform markets is rarely a good idea.  At some point, best technology or not, a strong leader becomes the universal safe choice.  Consider 1990 to about 2005, when the relational model was the chosen technology and the market was a comfortable oligopoly ruled by Oracle, IBM, and Microsoft.

It’s taken 30+ years (and numerous prior failed attempts) to create a credible threat to the relational stasis, but the combination of three forces is proving to be a perfect storm:

  • Open source business models which cut costs by a factor of 10.
  • Increasing amounts of data in unstructured data types which do not map well to the relational model.
  • A change in hardware topology from fewer, bigger computers to vast numbers of smaller ones.

While all technologies die slowly, the best days of relational databases are now clearly behind them.  Kids graduating college today see SQL the way I saw COBOL when I graduated from Berkeley in 1985.  Yes, COBOL was everywhere.  Yes, you could easily get a job programming it.  But it was not cool in any way whatsoever and it certainly was not the future.  It was more of a “trade school” language than interesting computer science.

The second reason I say this is because of my experience at Ingres, one of the original relational database providers which — despite growing from ~$30M to ~$250M during my tenure from 1985 to 1992 — never realized that it had lost the market and needed a plan B strategy.  In Ingres’s case (and with full 20/20 hindsight) there was a very viable plan B available:  as the leader in query optimization, Ingres could have easily focused exclusively on data warehousing at its dawn and become the leader in that segment as opposed to a loser in the overall market.  Yet, executives too often deny market reality, preferring to die in the name of “going big” as opposed to living (and prospering) in what could be seen as “going home.”  Runner-up vendors should think hard about the lessons of Ingres.

The last reason I say this is because of what I see as a change in venture capital. In the 1980s and 1990s VCs used to fund categories and cage-fights.  A new category would be identified, 5-10 companies would get created around it, each might raise $20-$30M in venture capital and then there would be one heck of a cage-fight for market leadership.

Today that seems less true.  VCs seem to prefer funding companies to categories.  (Does anyone know what category Box is in?  Does anyone care about any other vendor in it?)  Today, it seems that VCs fund fewer players, create fewer cage-fights, and prefer to invest much more, much later in a company that appears to be a clear winner.

This so-called “momentum investing” itself helps to anoint winners because if Box can raise $309M, then it doesn’t really matter how smart the folks at WatchDox are or how clever their technology is.

MongoDB is in this enviable position in the next-generation (open source) NoSQL database market.  It has built a huge following; that huge following is attracting a huge-r (sorry) following.  That cycle is attracting momentum investors who see MongoDB as the clear leader.  Those investors give MongoDB $150M.

By my math, if entirely invested in sales [2], that money could fund hiring some 500 sales teams who could generate maybe $400M a year in incremental revenue.  Which would in turn attract more users.  Which would make the community bigger.  Which would de-risk using the system.  Which would attract more users.
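A back-of-envelope sketch of that capacity math; the per-team cost and yield figures are purely illustrative assumptions, chosen to reproduce the 500-team / $400M figures:

```python
# Hypothetical capacity math: cost and yield per sales team are assumptions.
funding = 150e6            # the $150M round
cost_per_team = 300e3      # assumed annual cost to field one sales team
yield_per_team = 800e3     # assumed incremental annual revenue per team

teams = funding / cost_per_team               # 500 teams
incremental_revenue = teams * yield_per_team  # $400M per year
print(int(teams), int(incremental_revenue))   # 500 400000000
```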

And, quoting Vonnegut, so it goes.

# # #

Disclaimer:  I own shares in several of the companies mentioned herein, as well as in competitors that are not.  See my FAQ for more.

[1] Because I try to avoid writing about MarkLogic, I should be clear that while one can (and I have) argued that MarkLogic is a NoSQL system, my thinking has evolved over time and I now put much more weight on the open-source test as described in the “perfect storm” paragraph above.  Ergo, for the purposes of this post, I exclude MarkLogic entirely from the analysis because they are not in the open-source NoSQL market (despite the 451’s including them in their skills index).  Regarding MarkLogic, I have no public opinion and I do not view MongoDB’s or Hadoop’s success as definitively meaning anything either good or bad for them.

[2] Which, by the way, they have explicitly said they will not do.  They have said, “the company will use these funds to further invest in the core MongoDB project as well as in MongoDB Management Service, a suite of tools and services to operate MongoDB at scale. In addition, MongoDB will extend its efforts in supporting its growing user base throughout the world.”

Measuring SaaS Renewal Rates: Way More Than Meets the Eye

I love cloud computing. I love metrics. And I love renewals. So when I went looking on the Web for a great discussion of SaaS renewals and metrics I was surprised not to find much. Certainly, I found the two classics on SaaS metrics:

  • The Bessemer Venture Partners 10 Laws of Cloud Computing white paper, which I highly recommend despite its increasing pollution with portfolio-company marketing.

The Four Factors
While the above articles are all great, I was surprised that no one really dug into the nitty-gritty of renewals at an enterprise SaaS company, where I believe there are four independent factors at work:

  • Timing. When a contract is renewed. For example, how to handle when a contract is renewed early or late.
  • Duration. The length of the renewed contract. For example, how to handle when a one-year customer renews for three years, and receives a multi-year discount in the process (for either pre-payment or the contractual commitment itself). [1]
  • Expansion/shrinkage. The expansion or shrinkage of the contract’s value compared to the original contract. For example, how to handle customers adding or dropping seats or products, and/or price increases or decreases.
  • The count metric. What do we wish to count (e.g., bookings, ARR, seats, or customers) and what does it mean when we count one thing versus another.

Particularly in a world where companies are increasingly marketing “negative churn” rates and renewal rates well in excess of 100%, I think it’s worth digging into this and offering some rigor.

A Simple Example
Let’s take a concrete example. Imagine a customer who buys 100 seats of product A at $1,200/seat/year on 7/30/12, with a contractual provision that says the price cannot increase by more than 3% per year [1a].

Imagine that customer renews on 6/30/13, buying 80 seats of product A at $1,225/seat/year and adding 40 seats of product B at $1,200/seat/year, and receiving a 15% discount for making a prepaid three-year commitment.

Hang on. While I know you want to run away right now, don’t. This is all real-life stuff in a SaaS company. Bear with me, and download the spreadsheet here (as an Excel file, not a PDF) that shows the supporting math.

A few questions are easy:

  • What were the bookings on the initial order? Answer: $120,000.
  • What was the annual recurring revenue (ARR) of the initial order? Answer: $120,000.
  • What were the bookings on the renewal order? Answer: $372,300.
  • What was the ARR of the renewal order? Answer: $124,100. [2]
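Those four answers can be checked with a quick sketch (all inputs straight from the example above):

```python
# Initial order (7/30/12): 100 seats of product A at $1,200/seat/year.
initial_bookings = 100 * 1200        # $120,000; one-year deal, so ARR == bookings

# Renewal (6/30/13): 80 seats of A at $1,225 + 40 seats of B at $1,200,
# with a 15% discount for a prepaid three-year commitment.
annual_list = 80 * 1225 + 40 * 1200        # $146,000 before discount
renewal_arr = round(annual_list * 0.85)    # $124,100
renewal_bookings = renewal_arr * 3         # $372,300 prepaid (TCV, per footnote [1])

print(initial_bookings, renewal_bookings, renewal_arr)  # 120000 372300 124100
```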

Calculating Churn: Leaky Bucket Analysis
So far, so good. Now let’s talk about churn. Because, as you will see, renewal rates alone are complicated enough, I have adopted a convention where:

  • When it comes to renewals, I look only at rates
  • When it comes to churn, I look only at dollars/values

I know this is a completely arbitrary decision, but doing this lets me remember one set of formulas instead of two, reduces rat-hole conversations about definitions, and — most importantly — lets me look at one area in percentages and the other in dollars, helping me to avoid the “percent trap” where you can lose all perspective of absolute scale. [3]

I define churn with an equation that I call “leaky bucket analysis.” [4]

Starting ARR + new ARR – churn ARR = ending ARR

So, some questions:

  • Was there any churn associated with this renewal? Answer: Yes.
  • Why? Answer: Despite a small price increase on product A, there was a 15% multi-year discount and a loss of 20 seats which more than offset it.
  • How much ARR churned? Answer: $36,700. [5]
  • How much new ARR was added? Answer: $40,800. The after-discount value of the product B subscriptions.
  • What is ending ARR? 124,100 = 120,000 + 40,800 – 36,700.
  • How many customers churned? Answer: 0.
  • How many seats churned? Answer: 20.

Note that ARR, seats, and customers are all snapshot (or, point-in-time) metrics that lend themselves to leaky bucket analysis. Period-metrics, like bookings, do not. Bookings happen within a period. There is no concept of starting bookings + new bookings – churn bookings = ending bookings. That’s not how it works. So, when you define churn through leaky bucket analysis, measuring bookings churn doesn’t work.

We can, however, calculate bookings churn as the difference between what was up for renewal and what we renewed. In this case, $120,000 – $372,300 = ($252,300), showing one way to generate a negative churn number. The example makes somewhat more sense in the other direction: if we had a three-year $372,300 contract up for renewal and only renewed $120,000, then we might argue that $252,300 in bookings were churned. From a cash collections perspective, this makes sense [6].

But from a customer value perspective it does not. Unless the customer has plans to discontinue using the service, by dropping from a three-year to a one-year contract we will actually collect more money from them over the next 3 years if they continue to renew ($438,000 vs. $372,300) [7]. So the bookings churn that looks bad for year-one cash actually results in superior ARR and three-year cash collections.
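The $438,000 vs. $372,300 comparison works out as follows (a sketch that, per footnote [7], ignores price uplift and renewal risk in years 2 and 3):

```python
# Three-year cash collected under two renewal structures.
annual_config = 80 * 1225 + 40 * 1200     # $146,000/year at list for the renewed mix

# (a) prepaid three-year renewal with the 15% multi-year discount
prepaid_3yr = round(annual_config * 0.85) * 3   # $372,300
# (b) the same configuration renewed one year at a time, undiscounted
annual_x3 = annual_config * 3                   # $438,000

print(prepaid_3yr, annual_x3)  # 372300 438000
```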

The lesson here is that different metrics are suited for measuring different things. In this case, we can see that bookings churn is useful primarily for analyzing short-term cash collections and not, say, for customer lifetime value or customer satisfaction.

Renewal Rates and Timing
Now that we’re warmed up let’s have some fun. Let’s answer some questions on renewals:

  • From a bookings perspective, when should we count the renewal order? Answer: the order was received on 6/30/13 so it’s a 2Q13 booking.
  • From a renewal rate perspective, when should we count this order? Answer: while debatable, to me it’s a renewal of a 3Q contract, so I would count it in 3Q from a renewal rate perspective. [8]
  • When would we count the booking if it were late and arrived on 10/30/13? Answer: From a bookings perspective, it would be a 4Q13 booking. From a renewal rate perspective, it’s the renewal of a 3Q contract, so I would count it in 3Q. [9]
  • On a customer-count basis, how do we count this renewal? Answer: 100%. We had one logo before and we have one logo after, so 100%. [10]

Here it’s going to get a little dicey.

On an ARR basis, how do we measure this renewal? Answer: this begs the question of whether we should include expansion ARR due to new seats, new products, and price increases. Since I am worried that expansion may hide shrinkage, I want to see this both ways. Hence, I will define “gross” to mean including expansion and “net” to mean excluding expansion.

  • What is the gross ARR-based renewal rate? Answer: 103%. [11]
  • What is the net ARR-based renewal rate? Answer: 69%. Now you understand why I want to see it both ways. The net rate is showing that we lost real ARR on product A due to reduced seats and the multi-year discount. The upsell of product B hides shrinkage, producing an innocuous 103% number that might evoke a very different scenario in the mind’s eye (e.g., renewing the original deal for one year with a 3% price hike).
  • What is the gross bookings-based renewal rate? Answer: 310%. We took a $120,000 order and renewed it at $372,300. (But we transformed it greatly in the process.)
  • What is the net bookings-based renewal rate? 208%. We took a $120,000 order for product A and turned it into a $249,900 order for product A. But we dropped ARR about 31% in the process (from $120,000 to $83,300) through lost seats and the multi-year discount.
  • What is the gross seat-count renewal rate? 120%
  • What is the net seat-count renewal rate? 80%
  • What is the customer-count renewal rate? 100%
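All seven rates can be reproduced with a short sketch (all inputs from the example above):

```python
# Renewal rates for the example (initial: 100 seats of A, $120,000 ARR;
# renewal: 80 seats of A + 40 of B, 15% multi-year discount, 3-year prepaid).
initial_arr, initial_bookings, initial_seats = 120_000, 120_000, 100

renewed_a_arr = round(80 * 1225 * 0.85)              # $83,300 (product A only)
total_arr = renewed_a_arr + round(40 * 1200 * 0.85)  # $124,100 with product B

rates = {
    "gross ARR": total_arr / initial_arr,                  # 103%
    "net ARR": renewed_a_arr / initial_arr,                # 69%
    "gross bookings": total_arr * 3 / initial_bookings,    # 310%
    "net bookings": renewed_a_arr * 3 / initial_bookings,  # 208%
    "gross seats": (80 + 40) / initial_seats,              # 120%
    "net seats": 80 / initial_seats,                       # 80%
    "customer count": 1 / 1,                               # 100%
}
for name, rate in rates.items():
    print(f"{name}: {rate:.0%}")
```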

Identifying the Best Renewal-Related Metrics
So, what is the renewal rate then anyway?  69%, 80%, 100%, 103%, 120%, 208%, or 310%?

I’d say the answer depends on what you want to measure. Having nearly drowned you in the renewal-rate swamp, let me now drain it. Here are the metrics that I think matter most:

[Table: key renewals metrics]

Here’s why:

  • Leaky bucket analysis is important because ARR growth is the single most important driver of value for a SaaS company.
  • Churn ARR shows you, viscerally, how much extra you had to sell just to make up for leaks [12].  Rates seem sterile by comparison.
  • The customer count-based renewal rate is the best indicator of overall customer satisfaction: what percent of your customers want to keep doing business with you, regardless of whether they change their configuration, product mix, seat mix, contract duration, etc.
  • The gross seat-based renewal rate shows you how effective you are at driving adoption of your services. Think: land and expand (in terms of seats).
  • The gross ARR-based renewal rate shows you, overall, how effective you are at increasing your customers’ annual commitment. However, it says nothing about how you do that (i.e., which type of expansion ARR) or the extent to which expansion ARR in one area is offsetting shrinkage in another.
  • The net ARR-based renewal rate shows you how much of ARR you renew without relying on expansion. This is a very conservative metric designed to unmask problems that can be hidden by expansion ARR.
  • The gross bookings-based renewal rate is the best predictor of future cashflows. If we know that, on average, we take an order of 100 units and turn it into an order of 175 units – through whatever means – then we should use this metric to predict cashflows. Note that, as we’ve seen, there are trade-offs between ARR and bookings, but the consequences of those can be revealed by other metrics.

Footnotes
[1] Note that in a multi-year prepaid contract, bookings (order value) equal total contract value (TCV). When multi-year contracts are not prepaid, bookings are only the first-year portion of TCV.

[1a] Some purists would argue that having the right to raise the price 3% should set the denominator of subsequent renewal rate calculations to 1.03 * original-value.  While I get the idea, I nevertheless disagree.

[2] The renewal order is for three years, so to calculate the ARR we need to divide the bookings value by three.

[3] Saying our “churn rate was 10%” makes things sound OK, but saying we churned $2M in ARR is, to me, somehow more visceral. That is, we had to sell an extra $2M in ARR just to make up for existing business that we lost.

[4] A leaky bucket starts at one water level; during a period new water is added, some water leaks out, and the net change establishes the ending water level. (Note that in leaky bucket analysis, definitionally, leaks are never negative.)

[5] Now might be a good time to download the spreadsheet accompanying this post so you can see my calculations. In this case, the churn is the difference between the total value from product A on the original order versus the renewals order.

[6] Subscription bookings typically turn into cash within 90 days.

[7] In reality, we should both uplift the price in years 2 and 3 and discount by the renewal rate to get a better expected cash collections figure. (There is nearly endless detail in analyzing this subject but I will make simplifying assumptions at times.)

[8] Otherwise, it would juice 2Q renewal rates and depress 3Q renewal rates, making both less meaningful.

[9] Bonus question:  how would you handle the late-renewal scenario at the 7/20/13 board meeting? Answer: I would publish provisional renewal rates that exclude the transaction, letting the board know we have an outstanding renewal in process. Then once it closed, I would revise the 3Q renewal rates accordingly.

[10] Which then begs the question of how you count customers. For example, while GE has one logo, they have numerous very independent divisions in a large number of countries.

[11] Note that a purist might argue that since we had the right to raise prices up to 3%, we should put 103% of the ARR in the denominator in this and all similar calculations, thus dropping the resulting renewal rate here to 100%.  While I believe annual increases are important, I still believe renewing someone who was at $100K in ARR to $103K in ARR is a 103% renewal.  Tab 3 of the supporting spreadsheet plays with some numbers in this regard.

[12] It is a good idea to divide churn into 3 buckets to describe the reason: owner change (including bankruptcy), leadership change, and customer dissatisfaction.

I’m Now An Enterprise Irregular

Just a quick post to announce that I’ve joined the diverse group of practitioners, consultants, investors, journalists, analysts, tech executives, and full-time bloggers — known as the Enterprise Irregulars — who share a common passion for enterprise technology and its application to business in the 21st century.

I’ve been a big fan / reader of the Enterprise Irregulars blog and tweetstream for years and think there’s nothing like it in enterprise software.

I was quite honored to be asked to join the group and am very happy to be on board.  In so doing, I’m reconnecting with many old friends and colleagues:

  • Anshu Sharma, with whom I worked at Salesforce.
  • Evangelos Simoudis, with whom I currently work at Host Analytics where he sits on the board of directors.
  • Dennis Moore, with whom I worked at Ingres … we’re talking all the way back to like 1987 or so.
  • Esteban Kolsky, a customer strategist who followed us when I was at Salesforce on the Service Cloud.
  • John Taschek, with whom I worked at Salesforce and who I knew in his journalistic life prior.
  • Jeff Nolan, who I’ve met in and about the Valley and with whom I’ve had some lovely Brunello di Montalcino.
  • Jevon McDonald, who I met during the GoInstant acquisition at Salesforce.
  • Merv Adrian, a fellow data aficionado and someone who I’ve known since his days at Giga.
  • Paul Greenberg, a CRM author and expert … and ditto on the Brunello.
  • Ramana Rao, with whom we partnered when he was at Inxight and I was running MarkLogic.
  • Sameer Patel, who I’ve met in and about the Valley.
  • Zoli Erdos, publisher of Enterprise Irregulars and Cloud Ave, and who I’ve met in and about the Valley.

And those are just the folks I already know in some way!  The full list of Enterprise Irregulars is here.

Now all I have to do is to finish the labor-of-love that I’ve been writing on SaaS renewal rates.  I should be done in about a week.

Some Thoughts on Rocket Fuel, Their Voice, and Their Recent S-1

Silicon Valley is a place built by nerds, arguably for nerds, but once big money gets involved there is always tension between the business people and the technical people about control.  Think, for example, of the famous Jobs/Sculley falling-out back in 1985 where the business guy beat the technical guy.

However, in part because of events like that, the business people don’t always win.  In my estimation, there is a sort of “founder pendulum,” which swings with about a ten-year period between one end (where technical founders are “out”) and the other (where they are “in”).

Through most of the 2000s, founders were “out.”  There were two ways to tell this:  (1) you heard incessant griping about “founder issues” at Buck’s and at the Rosewood and (2) you saw young PhDs paired fairly early in the company’s evolution with business-person CEOs, often as a condition of funding.

Somewhere towards the end of the last decade, founders were “in” again.  This makes me happy because I think engineers and scientists are the soul of Silicon Valley.  That’s why I had so much fun on the board of Aster Data.  And it’s why I like companies like Rocket Fuel.

Rocket Fuel was co-founded in 2008 by Stanford computer science PhD George John and two fellow Yahoo alumni.  John remains its CEO today.  I met him during my year off in 2011 and was impressed, so I’ve kept an eye on the company ever since.

During the interim, the thing I most noticed about Rocket Fuel was its corporate personality.  Like Splunk, they do a great job of having a strong corporate voice.  Let’s look at some of the culture and communications that are part of this voice.

  • “The rocket scientists behind Rocket Fuel.”  (Turns out John actually worked for a while at NASA.)
  • “In 2008, a group of data savants came together.”
  • “Rocket Fuel is bringing hardcore science to the art of marketing.”
  • “Rocket Fuel has great machine-learning scientists”
  • Job titles like “Rocket Scientist” and “Chief Love Officer.”
  • A professorial founder with a great TEDx speech.
  • Strong recruiting videos on culture and science.  “Geek cult.”
  • The launching of (client-labelled) weather balloons from the Nevada desert at a company event.
  • A “nerdy, but loveable” culture (straight from the S-1 and beats “don’t be evil” any day in my book).
  • And, of course, a great puzzle recruiting billboard.

[Image: Rocket Fuel palindrome recruiting billboard]

I know that many Silicon Valley companies have odd job titles, geeky events, nerdy billboards, and a focus on recruiting great engineers.  Somehow, however, to me, Rocket Fuel comes off as both more mature and more authentic in this race.  These aren’t geeks trying to look cool, playing sand volleyball, and partying till dawn; these are geeks being geeks, and quite happily so.

I noticed when the company filed for an IPO back in August, but didn’t have time to dig into the (amended) S-1 until now.

Here are some takeaways:

  • Revenue of $44.6M and $106.6M in 2011 and 2012, 139% growth
  • Revenue of $39.6M and $92.6M in 1H12 and 1H13, 133% growth
  • Gross profit of $42.9M in 1H13, up from $17.6M in 1H12, with gross margin of 46%
  • R&D expense of $6.1M in 1H13, up from $1.5M in 1H12 and representing 7% of sales
  • S&M expense of $34.6M in 1H13, up from $15.5M in 1H12 and representing 37% of sales
  • G&A expense of $10.9M in 1H13, up from $2.6M in 1H12, and representing 11% of sales.
  • Operating loss of $8.8M in 1H13, up from $2.1M in 1H12, and representing 9.5% of sales
  • EPS of ($1.43) in 1H13, up from ($0.31) in 1H12

So the financial picture looks pretty clear:  really impressive growth, no profits.  Let’s take a quick look at how things are scaling.

[Table: Rocket Fuel scaling analysis]

  • Revenue growth is decelerating slightly, as the more recent half-over-half (HoH) growth rate is slightly lower than the YoY.
  • R&D expense is way up, growing 307% HoH.
  • S&M expense is up, but is scaling slightly slower than revenue (as one generally likes) at 123%.
  • G&A expense is way up, growing 319% HoH.  Let’s assume a lot of that is IPO-related.
  • Total operating expenses are growing at 163% versus revenue at 134%.  Usually, you like it the other way around.
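The scaling math can be checked against the 1H12/1H13 figures quoted above (dollars in millions):

```python
# Half-over-half (HoH) growth from 1H12 to 1H13, dollars in millions.
h1_2012 = {"revenue": 39.6, "r&d": 1.5, "s&m": 15.5, "g&a": 2.6}
h1_2013 = {"revenue": 92.6, "r&d": 6.1, "s&m": 34.6, "g&a": 10.9}

def growth(new, old):
    return (new - old) / old

for line in h1_2012:
    print(f"{line}: {growth(h1_2013[line], h1_2012[line]):.0%}")
# revenue: 134%, r&d: 307%, s&m: 123%, g&a: 319%

opex_12 = h1_2012["r&d"] + h1_2012["s&m"] + h1_2012["g&a"]   # $19.6M
opex_13 = h1_2013["r&d"] + h1_2013["s&m"] + h1_2013["g&a"]   # $51.6M
print(f"total opex: {growth(opex_13, opex_12):.0%}")          # 163%
```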

The risk factors, which run nearly 20 pages, look reasonably standard and include risks from filing as an “emerging growth company” (which permits reduced disclosure) and the potential burden of complying with the full requirements later.

The most interesting risks relate to user rejection of 3rd-party cookies, European Union laws, and potential “do not track” standards.  They cite customer concentration as a risk, but their top 20 customers in 2011 and 2012 accounted for (only) 39% and 38% of revenues.  They also cite access to inventory, which makes sense as a threat to anyone in this business, particularly in the case of social media and Facebook FBX.

  • As of 6/30/13, the company had about 405 employees.
  • Prior to the IPO, the company has raised about $75M in capital.
  • The company will have 32.5M shares outstanding after the IPO.
  • The increase in the fair market value (FMV) of the stock, as shown in the option grant history table, is impressive.  That’s an 8.9x increase over the 18 months shown.

[Table: Rocket Fuel option-grant fair market value history]

  • After the IPO, the three cofounders will own 10.7%, 9.0%, and 3.9% of the company, Mohr Davidow will own 35.1%, and Nokia will own 8.3% (assuming no exercise of over-allotment).

As per my S-1 tradition, I never get all the way through.  I stopped on page 125 of what appears to be about 185 pages.  If you want to dig through the rest of it, you can find the S-1 here.

In conclusion, I will say that I’m an enterprise software guy and don’t know a whole lot about the digital advertising business.  I believe that Rocket Fuel is both a middleman and an arbitrage play, that middlemen can sometimes get squeezed, and that the name of the game in arbitrage is consistently outsmarting the other guys.  So, in reality, I believe there’s more to the geek culture than simple fun:  it’s critical to winning with this strategy.

How will this end?  I don’t know.  Do I think George John can build one heck of a team?  You betcha.  Do the big guys against whom they compete have people as smart as Rocket Fuel’s?  Probably.  Are the big guys’ best-and-brightest working on this particular problem?  I don’t know.

(Often, in my experience, that is the difference.  It’s not whether company X has people as smart as startup Y; it’s where they’ve chosen to deploy them.  Even Facebook and Google have a bottom 20%.)

I do know that programmatic video advertising company Adap.tv recently sold for $405M to AOL and that YuMe had to reduce its IPO pricing, but then got off to a strong first day in the public markets (only to gradually drop and then rebound).  Are these clouds or silver linings?  I’m inclined to think the latter.

I hope things go well for the company going forward and congratulations to them for all the success they’ve had thus far.  #revengeofthenerds

See my FAQ for disclaimers.  I am not a financial analyst.  I do not recommend buying, selling, or holding any given stock. I may directly or indirectly own shares in the companies about which I blog.

5 Things Executives Should Say More Often About the Budget

I’m always struck by how often good business ideas, conceived with the best of intentions, get flipped upside-down when applied by some managers.  A favorite example is the 3x pipeline rule about which I’ve already blogged (see The Self-Fulfilling 3x Pipeline Coverage Fallacy).  Another might be the 50 calls/day rule for an SDR or a 100 lead goal for a marketing event.

Instead of using tools and metrics to intelligently guide us, we all too often become slaves to them.  We get 3x pipeline coverage because sales management will scream if we don’t.  We make 50 calls/day — even if they’re all “left voicemail” — because everyone else does.  We generate 100 leads, regardless of their quality, because that’s what the boss wanted.

As we approach annual planning season, I thought I’d take a moment to post on the corporate budget — a useful tool if there ever was one, but one all too often used as an instrument of oppression, rather than one of empowerment.

I won’t go into an analysis of the major problems in producing corporate budgets both because I’ve already done so (see The Great Dysfunctional Corporate Budgeting Process) and because the Wall Street Journal also recently featured an excellent op-ed piece describing the key problems (see Companies Get Budgets All Wrong).

Instead of talking about problems with the budget creation process, today I’m going to focus on how executives communicate to their teams about budgets and budget-related issues.

All too often managers de-power themselves by saying things like:

  • “I know we’re dying for resource here in technical support and trust me, I’m fighting as hard as I can for you, but ‘Dr. No the CFO’ just won’t give us any more resources.  I know it stinks, and that maybe it means we really don’t care about our customers, but perhaps next year it will get better and your job will suck less.”
  • “I’m sorry — that’s a great idea, but we just don’t have the budget for it.”
  • “I’d love to hire that amazing person, but they cost 108% of what we budgeted.  Go hire someone within budget.”
  • “Gosh, $15,000 for an experimental marketing program is a lot of money that we don’t have budgeted.  Let’s not try it.”

I’ve heard all of these statements myself, multiple times, in real life as I worked my way up the corporate ladder.  Each of them is a cop-out where the manager fails to show leadership, positions himself as a victim, de-powers himself in front of his team, and demotivates his team in the process.

I expect my executive team members to stay within budget (unless I’ve given them explicit approval otherwise) but it’s also very important to me that they not cop out and act like a prisoner of the budget, instead of its master, in so doing.

To make this philosophy actionable, I have come up with five things executives should say more often when talking to their teams about the budget:

  • “We need to spend what we have before we gripe about needing more.”
  • “Show me a rockstar and I’ll hire them.”
  • “I always have $50K for a great idea.”
  • “$15K is a rounding error in my budget.”
  • “Great things often start with small investments.”

“We need to spend what we have before we gripe about needing more.”
The fast-track way out of most executive jobs is to give an impassioned speech to the operating committee about how much your team is struggling under an undue workload, how close everyone is to the breaking point, and how unsustainable the current situation is, only to have the following dialog ensue:

CEO: “I understand that calls per agent are up 30%. I understand that we’re struggling to hit our SLA targets. I understand the team is working hard. What I don’t understand is why you are making this speech. You are tracking to spend only 85% of your budget this quarter and have four open headcount.”

I saw this happen once in a management committee meeting. The speech was touching. The passion was real. But the logic was threadbare. The new head of customer service did not make a similar speech at the next meeting.

Executives need to own their budgets, not only in the sense of not exceeding them, but also in the sense of spending them. The company has allocated resources to solve a problem. It is the executive’s job to deploy those resources. Particularly in high-growth companies, spending too little can be worse than spending too much.
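The CEO’s retort above boils down to simple arithmetic: you can’t plead poverty while leaving budget and headcount on the table. As a minimal illustrative sketch (the function name, numbers, and 90% utilization cutoff are my own hypothetical choices, not from the post):

```python
# Illustrative sketch: flag a team that claims to be resource-constrained
# while under-spending its budget or carrying unfilled headcount.
def underdeployed(spend, budget, open_headcount, min_utilization=0.90):
    """True if the team is spending well under budget or has open roles."""
    return spend / budget < min_utilization or open_headcount > 0

# The customer-service example from the text: 85% of budget spent, four open roles.
print(underdeployed(spend=850_000, budget=1_000_000, open_headcount=4))   # True
print(underdeployed(spend=980_000, budget=1_000_000, open_headcount=0))   # False
```

Either check failing is enough to sink the impassioned speech before it starts.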

Executives should also be transparent with their teams. If the team is behind on hiring, executives shouldn’t pretend that Darth CFO is the problem. “We’re the problem. So let’s go fix it.”

In the event the budget is fully deployed, executives still shouldn’t cop out. Instead of saying, “we’ve spent all that corporate gave us, and we’re still dying,” they need to reframe the situation as a challenge. “Either we need to find a way to meet the caseload with our current resources,” or “we need to do a better job at building a business case that convinces the company to give us more resources.”

We’re not victims. We either have an efficiency challenge or a better business case to make.

“Show me a rockstar and I’ll hire them.”
Hiring generates its own challenges. Headcount may open and close with the ups and downs of the sales forecast. At some companies, HR will foolishly not support a recruiting process 2-3 months before a headcount opens, thus building in automatic delays. Sometimes we find people who cost more than what we’ve allocated in the budget.

Here is what I tell my team:

“I need to admit that I have a huge soft spot for talent. Show me a rockstar and I’ll hire them: budget-schmudget, headcount-schmedcount. We need to build a top-quality organization and I know that top-quality people don’t always come along at exactly the time and at exactly the cost that we have in our budget. So abuse me. Exploit my weakness. When it comes to talent, paraphrasing Rodgers and Hammerstein, ‘I’m just a CEO who cain’t say no.’”

Why do I do this? First, because it eliminates all possible excuses for not hiring great talent, and second, because I honestly believe it. Suppress your inner bureaucrat and don’t say “gosh, that guy’s a little too expensive” or “I think we’re going to have a headcount freeze, so let’s slow down on this one.”

Instead think: if, after hiring this rockstar, things got tight financially and I had to eliminate someone else to keep them, would I? If the answer is yes, make the hire. Sports teams get stronger by recruiting players stronger than the current line-up. Unlike sports, however, business isn’t zero-sum. We can take all the great players we can find. Once in a while if that means having to zero-sum things when the budget gets tight, so be it.

One convenient side-effect of this policy is that it lets you see who your executives think are rockstars. If someone uses the rockstar argument on me and the person in question is a dud, I’ve learned important information about my executive’s talent identification skills.

“I always have $50K for a great idea.”
I started using this when I got my first marketing management job because I was so tired of hearing my bosses say, “that’s a great idea, too bad we can’t afford it.”

Let’s think for a second:

  • Either something is a great idea and management should figure out how to do it.
  • Or it’s not a great idea and management should tell its originator why.

But, please, don’t cop out and say, “it’s a great idea, but we can’t afford it.”

To flip this problem around I long ago adopted, “I always have $10K for a good idea” which I’ve title- and inflation-adjusted to $50K. Obviously, the number should scale according to your budget, but the point is first that you change your own reaction to new ideas and second that you don’t kill them at birth with, “before you tell me this, you should know I don’t have any money — so what’s your idea again?”

Instead say, “you’ve got an idea — let’s hear it — I always have $50K for a great idea.”

By the way, budgeting for this is highly recommended. I usually carry a cushion of 1-3x my “nut” each quarter to be sure that I can back up my words.

“$15K is a rounding error in my budget.”
Managers can get so focused on not exceeding budgets that I’ve literally been in meetings where people with $3M quarterly budgets take valuable executive team meeting time talking about $15K items. $15K is one-half of one percent of a $3M budget. So, yes, while $15K is a lot of money and while money should never be wasted, I think executives need to remember what their 0.5% materiality threshold is and remind their teams about it.

I don’t want to talk about items that are either rounding errors or, more amazingly, completely invisible when rolled into the final quarterly numbers. Let’s worry, shall we, about the other 99.5% of our expenses?

The other way to say this is that executives should look holistically at their budgets. An excess focus on incremental expenses (often combined with a lack of planned cushion) is what leads people to lengthy discussions of rounding errors.
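The arithmetic behind the rounding-error test is worth making explicit. A minimal sketch, using the $15K-on-$3M example from the text (the function name and the choice to treat items at or below the 0.5% line as rounding errors are my own):

```python
# Illustrative sketch: an item at or below 0.5% of the quarterly budget
# is a rounding error and doesn't deserve executive meeting time.
MATERIALITY = 0.005  # one-half of one percent

def is_rounding_error(item_cost, quarterly_budget, threshold=MATERIALITY):
    """True if the item falls at or below the materiality threshold."""
    return item_cost / quarterly_budget <= threshold

print(is_rounding_error(15_000, 3_000_000))  # True: 15K / 3M = exactly 0.5%
print(is_rounding_error(50_000, 3_000_000))  # False: ~1.7% is worth discussing
```

The exact threshold matters less than having one at all: agree on a number, then stop debating anything beneath it.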

“Great things often start with small investments.”
A side effect of working at successful companies is that they grow. Teams get big. We have 100 engineers here and 200 engineers there. We’re spending $1M on this marketing event or that. People starting anchoring their idea of size relative to the core teams or programs that drive the company.

In so doing, they forget the critical principle that great ideas often start with small investments. Business Objects, which eventually sold for nearly $7B, was created on only $4M in venture capital. The entire Salesforce Social Enterprise vision, which helped catapult the company from $2B to $3B in revenues, was created on the back of a $70K outsourced Twitter connector, conceived by the amazing Service Cloud team.

Instead of starting with, “we need millions of dollars to build Chatter, integrate a feed-based paradigm into our entire CRM suite, and then become the social enterprise company,” the Service Cloud team started small. They said, “I bet companies would love to be able to find unhappy customers on Twitter, automatically create cases in response, and leverage their entire contact center infrastructure to provide support on social channels.”

They hired an outsourcer to build a Twitter connector, cases began flowing in, and the seeds of the Social Enterprise vision were born.

The moral of all this, of course, is that great ideas can start small. Instead of saying, “sorry, we can’t find $2M to fund your new idea,” executives need to say, “how can you re-cast your idea to start small, so we can try it quickly, see if it works, and then build from there?”

Sometimes it’s not possible. You can’t build a nuclear submarine or a 787 on incremental budget. But in information technology and consumer services, you can go a long way by starting small with a little money.

Video on Why Customers Are Moving EPM Systems to the Cloud

Here’s a three-minute video of me talking about why customers are choosing to move their EPM systems from traditional on-premises systems like Hyperion and onto cloud-based systems like Host Analytics.