Dear Marketing: Stop Putting the Template Ahead of the Story

I’ve always thought that if marketers wrote newspapers, the famous New York Times headline of August 8, 1974 would have looked like this:

[Image: mock-up of the Nixon resignation front page, done marketing-template style]

Instead of how it actually looked, which was:

[Image: the actual New York Times front page: "NIXON RESIGNS"]

What's the difference?  While both of the above presentations are structured, the newspaper doesn't let the template get in the way of the story.  The newspaper works within the template to tell the story.

I think that because marketing departments are so often split between "design people" and "content people," (1) templates get over-weighted relative to content and (2) content people get so busy adhering to the template that they forget to tell the story.

Here’s a real, anonymized example:

[Image: anonymized competitive bulletin, original version]

What's wrong here?

  • There is a lot of wasted vertical space at the top:  all large font, bolded template items with generous line spacing.
  • The topic section gets lost among the other template items.  Visually, the author is as important as the topic.
  • There is no storytelling.  There is effectively no headline — “Latest Release of Badguy Product” takes no point-of-view and doesn’t create an angle for a story.
  • The metadata is not reader-first, preferring to remind Charles of his title over providing information on how to contact him.

But there is one much more serious problem with this:  the claim / rebuttal structure of the document lets the competitor, not the company, control the narrative.

For example, political affiliations aside, consider the current events involving Trump and Comey.  Like him or not, Trump knows how to control a narrative.  With the claim / rebuttal format, our competitive bulletin would read something like this if adapted to the Trump vs. Comey situation.

Competitive Update:  Team Comey
Trump says:

  • Comey is a coward
  • Comey is a leaker
  • Comey is a liar

But, don’t worry, our competitive team says: 

  • Comey isn’t really a coward, but it is interesting that he released the information through a colleague at Columbia Law School
  • Comey isn't really a leaker because not all White House conversations can be presumed confidential and, logically speaking, you can either leak or lie, but you can't do both at the same time.

Great.  What are we talking about?  Whether Comey is a leaker, liar, or coward.  Who’s controlling the narrative?  Not us.

Here's a better way to approach this document:  rework the header and metadata, add a story to the title, recharacterize each piece of the announcement on first reference (rather than saying it once "their way" and then challenging it), and then provide some broader perspective about what's happening at the company and how it relates to the Fall17 release.

[Image: reworked competitive bulletin]

This is a very common problem in marketing.  It comes from a lack of storytelling and a fill-in-the-template approach to the creation of marketing deliverables.  Avoid it by always remembering to put the story ahead of the template.

Just like blogs and newspapers do.

Blocking the End Run: Eleven Words to Reduce Politics in Your Organization

People are people.  Sometimes they're conflict averse and just not comfortable saying certain things to their peers.  Sometimes they don't like their peers and are actively trying to undermine them.  Sometimes they're in a completely functional relationship, but have been too darn busy to talk.

So when this happens, what should you — as a manager — do?

“Hey Dave, I wanted to say that Sarah’s folks really messed up on the Acme call this morning.  They weren’t ready with the proposal and were completely not in line with my sales team.”

Do you pile on?

“Again?  Sarah’s folks are out of control, I’m going to go blast her.”  (The “Young Dave” response.)

Do you investigate?

“You know my friend Marcy always said there are three sides to every story:  yours, mine, and what actually happened.  So let me give Sarah a call and look into this.”

Do you defend?

“Well, that doesn’t sound like Sarah.  Her team’s usually buttoned up.”

In the first case, you’re going off half-cocked without sufficient information which, while emotionally satisfying in the short-term, often leads to a mess followed by several apologies in the mid-term.  In the second case, you’re being manipulated into investigating something when perhaps you were planning a better use of your time that day.  In the third case, you’re going off half-cocked again, but in the other direction.

In all three cases, you're getting sucked into politics.  Politics?  Is it really politics?  Well, how do you think Sarah is going to feel when you show up asking a dozen questions about the Acme call?  She'll certainly consider it politics and, among other things, there's about a 98% chance that she will say:

“Gosh, I wish Bill came and talked to me first.”

At which point, if you’re like me, you’re going to say:

“No, no, no.  I know what you’re thinking.  Don’t worry, this isn’t political.  It’s not like Bill was avoiding you on this one.  He just happened to be talking to me about another issue and he brought this up at the end.  It’s not political, no.”

But can you be sure?  Maybe it just did pop into Bill’s mind during the last minute of the other call.  Or maybe it didn’t.  Maybe the reason Bill called you was a masterfully political pretext.  Can you know the difference?

So what do you say to Bill when he drops the comment about Sarah’s team into your call?  The eleven words that reduce politics in any organization:

“What did Sarah say when you talked to her about this?”

[Mike Drop.]

# # #

(Props to Martin Cooke for teaching me the eleven words.)

How to Train Your VP of Sales to Think About the Forecast

Imagine a board meeting.

Director:  What’s the forecast for new ARR this quarter?

Sales VP:  $4.3M, with a best case of $5.0M.

Director:  So what’s the most likely outcome?

Sales VP:  $4.3M.

Director:  What are you really going to do?  (The classic noob trap question.)

Sales VP:  I think we can come in north of that.

Director:  What’s the worst case?

Sales VP:  $3.5M.

Director:  What are the odds of coming in at or above the forecast? 

Sales VP:  I always make my forecast.

Director:   What do you mean by worst case?

Sales VP:  You know, well, if the stars align in a bad way – a lot of stuff would have to go wrong – but if that happened, then we could end up at $3.5M.

Director:  So, let’s say a 10% chance of being at/below the worst case?

Sales VP:  I’d say more like 5%.

Director:  What do you mean by best case?

Sales VP:  Well, if we really struck it rich and everything lined up just the way I wanted, that would be best case.

Director:  You mean if all the deals came in — so best case basically equals pipeline?

Sales VP:  No, that never happens, I’ve made about 10 scenarios of different deal closing combinations and in 2 of them I can get to the best case.

You see the problem?  Does it sound familiar?  Do you realize how much time we spend talking in board meetings about "forecast," "best case," and "worst case" without ever discussing what we mean by those terms?

Do you see how this is compounded by the sales VP’s natural, intuitive view of the outcomes?  Do you see the obvious mathematical contradictions?  “I always make my forecast” says it’s a 100% number, but then the VP says it’s the “most likely” number which implies 50%.  Then the VP says there’s a 5% chance of coming in at/less than worst case (which is much lower) and then kind of implies that there’s a 20% chance of beating best case – but the 2 out of 10 is meaningless because it’s not a probability, it’s just a count of scenarios.  Nothing adds up.

The result is, if you’re not careful, the board ends up counting angels on pinheads.  What can we do to fix this?  It’s simple:  teach (and if need be, force) your sales VP to think probabilistically.  Ask him/her how often:

  • It is reasonable to miss the forecast.  A typical answer might be 10%.
  • It is likely to come in at/below the worst case.  A typical answer might be 5%.
  • It is likely to meet/beat the best case.  A typical answer might be 20%.

So, with those three questions, we’ve now established that we want the sales VP to give us:

  • A 90% number on being at/above the forecast
  • A 20% number on being at/above the best case
  • A 5% number on being at/below the worst case

Put differently, when the sales VP decides what number to forecast, they should be thinking:

  • I should come in under my forecast once every 2.5 years (10 quarters).
  • I should hit/beat the best case about once every 5 quarters (a bit less than once a year).
  • I should come in/under the worst case once every 20 quarters (once every 5 years, or for most minds, basically never).

The beauty here is that when you work at a company for a long time, you get enough quarters under your belt to start really seeing how you're doing relative to these frequencies.  What's more, by converting the probabilities into frequencies (e.g., once every 10 quarters) you make it more intuitive for the sales VP and the organization to think this way.
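To make that tracking concrete, here's a minimal sketch of a quarter-over-quarter calibration check.  The quarterly numbers are invented for illustration; the 90/20/5 targets are the ones agreed above.

```python
# Hypothetical sketch: check how a sales VP's forecast, best case, and worst case
# calibrate against the agreed 90% / 20% / 5% targets over past quarters.
# All numbers below are made up for illustration.

quarters = [
    # (forecast, best_case, worst_case, actual) in $M
    (4.3, 5.0, 3.5, 4.6),
    (4.0, 4.8, 3.2, 3.9),
    (4.5, 5.2, 3.8, 4.9),
    (4.8, 5.5, 4.0, 5.6),
]

n = len(quarters)
hit_forecast = sum(actual >= fc for fc, _, _, actual in quarters) / n
hit_best     = sum(actual >= bc for _, bc, _, actual in quarters) / n
hit_worst    = sum(actual <= wc for _, _, wc, actual in quarters) / n

# Targets: at/above forecast 90% of the time, at/above best case 20% of the time,
# at/below worst case 5% of the time.
print(f"At/above forecast:   {hit_forecast:.0%} (target 90%)")
print(f"At/above best case:  {hit_best:.0%} (target 20%)")
print(f"At/below worst case: {hit_worst:.0%} (target 5%)")
```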

In addition, you have a basis for conversations like this one which, among other things, is about overconfidence:

CEO:  You need to work on your forecasting.

Sales VP:  You know it’s hard out there, very competitive, and we don’t have much deal flow.  Back when I was at { Salesforce | Oracle | SAP }, I was much better at forecasting because we had more volume.

CEO:  But we agreed your forecast should be a 90% number and you’ve missed it 2 out of the past 4 quarters.

Sales VP:  Yes, but as I’ve said it’s tough to forecast in this market.

CEO:  Then forecast a lower number so you can beat it 90% of the time.  I’m asking you for a 90% number and empirically you’re giving me a 50% number. 

Sales VP:  OK.

CEO:  Plus, when those two big deals slipped last quarter you didn’t drop your forecast, why?

Sales VP:  Because where I grew up, you don’t cut the forecast.  You try like crazy to hold it.  Do you know the morale problems it causes when I cut the forecast – especially if it’s below plan? So, yes, when those two deals slipped it added more risk to the forecast – and I told you and the board that — but I didn’t cut forecast, no. 

CEO:  But “adding risk” here is meaningless.  In reality, “adding risk” means it’s not a 90% number anymore.  You’ve taken what was a 90% number and it’s now more like a 60% or 70% number.  So I want you to forget what they taught you growing up in sales and always – every week – give me a number that based on all available information you are 90% sure you can beat.  If that means dropping the forecast so be it.

[Image: sales forecast]

This also helps with the board and the inevitable sandbagger issue.  In my experience (and with a bit of exaggeration) you always seem to be in one of two situations:  (1) intermittently missing plan and in trouble or (2) consistently making plan and a “sandbagger” – it feels like there’s nothing in between.

Well, if you establish with the board that your company forecast is a 90% number, it means you are supposed to beat it 9 times out of 10, so you can only really be labelled a sandbagger when you're 15 for 15 or 20 for 20.  It also reminds them that you're supposed to arrive at the forecast such that you miss once every 10 quarters, so they shouldn't freak out if, once every 2.5 years, that happens — it's supposed to happen in this system.  (Just don't let a once-in-ten-quarters event happen twice in a row.)

I like this quantitative basis for sales forecasting and I carry it down to the salesrep and pipeline level.  I believe that each “forecast category” should have a probability associated with it.  For example, at the opportunity level, you should link probabilities to categories, such as:

  • Commit = 90%
  • Forecast = 70%
  • Upside = 30%

This, in turn, means that over time, a given salesrep should close 90% of their committed deals, 70% of their forecast deals, and 30% of their upside.  Deviations from this over time indicate that the rep is mis-categorizing the deals because the probability should be the basis for the forecast category assignment [1].
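As an illustration of that calibration check, here's a small sketch.  The deal outcomes and the 15-point tolerance are invented assumptions, not a prescribed methodology.

```python
# Hypothetical sketch: compare a rep's actual close rates by forecast category
# against the probabilities those categories are supposed to represent.
# Deal outcomes below are invented for illustration.

from collections import defaultdict

expected = {"Commit": 0.90, "Forecast": 0.70, "Upside": 0.30}

# (category, won?) for closed deals over several quarters -- made-up data
deals = [
    ("Commit", True), ("Commit", False), ("Commit", True), ("Commit", False),
    ("Forecast", True), ("Forecast", False), ("Forecast", True),
    ("Upside", False), ("Upside", True), ("Upside", False), ("Upside", False),
]

counts = defaultdict(lambda: [0, 0])  # category -> [won, total]
for category, won in deals:
    counts[category][0] += int(won)
    counts[category][1] += 1

for category, target in expected.items():
    won, total = counts[category]
    actual = won / total if total else 0.0
    flag = "" if abs(actual - target) <= 0.15 else "  <-- likely mis-categorizing"
    print(f"{category:8s} actual {actual:.0%} vs target {target:.0%}{flag}")
```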

Finally, I do believe that salesreps should give quarterly forecasts [2] that reflect their sense for how things will come in given all the odd things that can happen to deals (e.g., size changes, acceleration, slippage).  I believe those forecasts should be a 70% number because the sales manager will be managing across a portfolio of them:  while there is little room for the company to miss at the VP of Sales level, there is more room for, and more variance in, performance across salesreps.
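To see why a 70% rep-level number can still roll up into a much safer company-level number, here's a small Monte Carlo sketch.  It assumes each rep's quarterly result is an independent normal draw, and every parameter in it is invented for illustration.

```python
# Hypothetical sketch: why individual rep forecasts can be 70% numbers while the
# rolled-up team number behaves like a much higher-confidence number.
# Assumes each rep's quarterly result is an independent normal draw and that each
# rep forecasts their own ~30th percentile (a number they beat ~70% of the time).
# All parameters are invented for illustration.

import random

random.seed(42)
NUM_REPS = 9
NUM_TRIALS = 10_000
MEAN, STDEV = 500_000, 150_000           # per-rep quarterly new ARR, $ (illustrative)
REP_FORECAST = MEAN - 0.524 * STDEV      # ~30th percentile of a normal distribution

team_forecast = NUM_REPS * REP_FORECAST
beats = 0
for _ in range(NUM_TRIALS):
    team_actual = sum(random.gauss(MEAN, STDEV) for _ in range(NUM_REPS))
    if team_actual >= team_forecast:
        beats += 1

print(f"Summed rep forecasts beaten in {beats / NUM_TRIALS:.0%} of simulated quarters")
```

In this toy model, nine independent reps each forecasting a number they beat 70% of the time produce a summed forecast that gets beaten well over 90% of the time, which is exactly the portfolio effect described above.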

While I know this will not necessarily come naturally to all sales VPs — and some may push back hard — this is a simple, practical, and rigorous way to think about the forecast.

# # #

[1] Some people do this through an independent (orthogonal) field in the CRM system called probability.  I think that's unnecessary because, in my mind, forecast category should effectively equal probability and your options for picking a probability should be bucketed.  No one can say a deal is 43% vs. 52%, and if forecast category doesn't indicate some probability of closing, then what use is it, and on what basis should you classify something as forecast vs. upside?

[2] Some people believe that only managers should make forecasts, but I believe both reps and managers should forecast for two reasons:  (1) provided it’s left independent and not “managed” by the managers, the aggregated salesrep-level forecast provides another, Wisdom of Crowds-y, view into the sales forecast and (2) it’s never too early to teach salesreps how to forecast which is best learned through the experience of trial and error over many quarters.

How To Get Your Startup a Halo

How would you like your startup to win deals not only when you win a customer evaluation, but when you tie — and even sometimes when you lose?

That sounds great.  But is it even possible?  Amazingly, yes — but you need to have a halo effect working to your advantage.  What is a halo effect?  Per Wikipedia,

The halo effect is a cognitive bias in which an observer’s overall impression of a person, company, brand, or product influences the observer’s feelings and thoughts about that entity’s character or properties

There's a great, must-read book (The Halo Effect) on how this and eight other related effects apply in business.  The book is primarily about how the business community makes incorrect attributions about "best practices" in culture, leadership, values, and process that are subsequent to — but were not necessarily drivers of — past performance.

I know two great soundbites that summarize the phenomenon of pseudo-science in business:

  • "All great companies have buildings."  Which comes from the (partly discredited) Good To Great, which begins with the observation that in their study cohort of top-performing companies, all of them had buildings — and thus that simply looking for commonalities among top-performing companies was not enough; you'd have to look for distinguishing factors between top and average performers.
  • "If Marc Benioff carried a rabbit's foot, would you?"  Which comes from this Kellblog post where I make the point that blindly copying the habits of successful people will not replicate their outcomes and, with a little help from Theodore Levitt, that while successful practitioners are intimately familiar with their own beliefs and behaviors, they are almost definitionally ignorant of which ones helped, hindered, or were irrelevant to their own success.

Now that’s all good stuff and if you stop reading right here, you’ll hopefully avoid falling for pseudo-science in business.  That’s important.  But it misses an even bigger point.

Has your company ever won (or lost) a deal because of:

  • Perceived momentum?
  • Analyst placement on a quadrant or other market map?
  • Perceived market leadership?
  • Word of mouth as the “everyone’s using it” or “next thing” choice?
  • Perceived hotness?
  • Vibe at your events or online?
  • A certain feeling or je ne sais quoi that you were more (or less) preferred?
  • Perceived vision?

If yes, you’re seeing halo effects at work.

Halo effects are real.  Halo effects are human nature.  Halo effects are cognitive biases that tip the scales in your favor.  So the smart entrepreneur should be thinking:  how do I get one for my company?  (And the smart customer, how can I avoid being over-influenced by them?  See bottom of post.)

In Silicon Valley, a number of factors drive the creation of halo effects around a company.  Some of these are more controllable than others.  But overall, you should be thinking about how you can best combine these factors into an advantage.

  • Lineage, typically in the form of previous success at a hot company (e.g., Reid Hoffman of PayPal into LinkedIn, Dave Duffield of PeopleSoft into Workday).  The implication here (and a key part of halo effects) is that past success will lead to future success, as it sometimes does.  This one’s hard to control, but ceteris paribus, co-founding (even somewhat ex post facto) a company with an established entrepreneur will definitely help in many ways, including halo effects.
  • Investors, in one of many forms:  (1) VCs with a strong brand name (e.g., Andreessen Horowitz), (2) specific well-known venture capitalists (e.g., Doug Leone), (3) well-known individual investors (e.g., Peter Thiel), and to a somewhat lesser extent (4) visible and/or famous angels (e.g., Ashton Kutcher).  The implication here is obvious:  the investor's past success is an indication of your future success.  There's no doubt that strong investors help build halo effects indirectly through reputation; in some cases they can do so directly as well via staff marketing partners designed to promote portfolio companies.
  • Investment.  In recent years, simply raising a huge amount of money has been enough to build a significant halo effect around a company, the implication being that “if they can raise that much money, then there’s got to be a pony in there somewhere.” Think Domo’s $690M or Palantir’s $2.1B.   The media loves these “go big or go home” stories and both media and customers seem to overlook the increased risk associated with staggering burn rates, the waste that having too much capital can lead to, the possibility that the investors represent “dumb money,” and the simple fact that “at scale” these businesses are supposed to be profitable.  Nevertheless, if you have the stomach, the story, and the connections to raise a dumbfounding amount of capital, it can definitely build a halo around your company.  For now, at least.
  • Valuation.  Even as the age of the unicorn starts to wane, it's undeniable that in recent years valuation has been a key tool to generate halos around a company.  In days of yore, valuation was a private matter, but as companies discovered they could generate hype around valuation, they started to disclose it, and thus the unicorn phenomenon was born.  As unicorn status became increasingly de rigueur, things got upside-down and companies started trading bad terms (e.g., multiple liquidation preferences, redemption rights) in order to get $1B+ (unicorn) post-money valuations.  That multiplying the price of a preferred share with superior rights by a share count that includes the lesser preferred and common shares is a fallacious way to arrive at a company valuation didn't matter.  While I think valuation as a hype driver may lose some luster as many unicorns are revealed as horses in party hats (e.g., down-round IPOs), it can still be a useful tool.  Just be careful about what you trade to get it.  Don't sell $100M worth of preferred with a ratcheted 2x moving to 3x liquidation preference — but what if someone would buy just $5M worth on those terms?  Yes, that's a total hack, but so is the whole idea of multiplying a preferred share price times the number of common shares.  And it's far less harmful to the company and the common stock.  Find your own middle ground / peace on this issue.
  • Growth and vision.  You'd think that industry watchers would look at a strategy and independently evaluate its merits in terms of driving future growth.  But that's not how it works.  A key part of halo effects is the misattribution of practices and performance.  So if you've performed poorly and have an awesome strategy, it will be overlooked — and conversely.  Sadly, go-forward strategy is almost always viewed through the lens of past performance, even if that performance was driven by a different strategy or affected positively or negatively by execution issues unrelated to strategy.  A great story isn't enough if you want to generate a vision halo effect.  You're going to need to talk about growth numbers to prove it.  (That this leads to a pattern of private companies reporting inflated or misleading numbers is sadly no surprise.)  But don't show up expecting to wow folks with vision.  Ultimately, you'll need to wow them with growth — which then provokes interest in vision.
  • Network.  Some companies do a nice and often quiet job of cultivating friends of the company who are thought leaders in their areas.  Many do this through inviting specific people to invest as angels.  Some do this simply through communications.  For example, one day I received an email update from Vik Singh clearly written for friends of Infer.  I wasn't sure how I got on the list, but found the company interesting and over time I got to know Vik (who is quite impressive) and ended up, well, a friend of Infer.  Some do this through advisory boards, both formal and informal.  For example, I did a little bit of advising for Tableau early on and later discovered a number of folks in my network who'd done the same thing.  The company benefitted by getting broad input on various topics and each of us felt like we were friends of Tableau.  While this sort of thing doesn't generate the same mainstream media buzz as a $1B valuation, it is a smart influencer strategy that can generate fans and buzz among the cognoscenti who, in theory at least, are opinion leaders in their chosen areas.

Before finishing the first part of this post, I need to provide a warning that halo effects are both powerful and addictive.  I seem to have a knack for competing against companies pursuing halo-driven strategies and the pattern I see typically runs like this:

  • Company starts getting some hype off good results.
  • Company starts saying increasingly aggressive things to build off the hype.
  • Analysts and press reward the hype with strong quadrant placements and great stories and blogs.
  • Company puts itself under increasing pressure to produce numbers that support the hype.

And then one of three things happens:

  1. The company continues delivering strong results and all is good, though the rhetoric and vision get more disconnected from the business with each cycle.
  2. The company stops delivering results and is downgraded from hot-list to shit-list in the minds of the industry.
  3. The company cuts the cord with reality and starts inflating results in order to sustain the hype cycle and avoid outcome #2 above.  The vision inflates as aggressively as the numbers.

I have repeatedly had to compete against companies where claims/results were inflated to “prove” the value of bad/ordinary strategies to impress industry analysts to get strong quadrant positions to support broader claims of vision and leadership to drive more sales to inflate to even greater claimed results.  Surprisingly, I think this is usually done more in the name of ego than financial gain, but either way the story ends the same way — in terminations, lawsuits and, in one case, a jail sentence for the CEO.

Look, there are valid halo-driven strategies out there and I encourage you to try and use them to your company’s advantage — just be very careful you don’t end up addicted to halo heroin.  If you find yourself wanting to do almost anything to sustain the hype bubble, then you’ll know you’re addicted and headed for trouble.

The Customer View

Thus far, I’ve written this post entirely from the vendor viewpoint, but wanted to conclude by switching sides and offering customers some advice on how to think about halo effects in choosing vendors.   Customers should:

  • Be aware of halo effects.  The first step in dealing with any problem is understanding it exists. While supposedly technical, rational, and left-brained, technology can be as arbitrary as apparel when it comes to fashion.  If you’re evaluating vendors with halos, realize that they exist for a reason and then go understand why.  Are those drivers relevant — e.g., buying HR from Dave Duffield seems a reasonable idea.  Or are they spurious —  e.g., does it really matter that one board member invested in Facebook?  Or are they actually negative — e.g., if the company has raised $300M how crazy is their burn rate, what risk does that put on the business, and how focused will they stay on you as a customer and your problem as a market?
  • Stay focused on your problem.  I encourage anyone buying technology to write down their business problems and high-level technology requirements before reaching out to vendors.  Hyped vendors are skilled at "changing the playing field" and trained to turn their vision into your (new) requirements.  While there certainly are cases where vendors can point out valid new requirements, you should periodically step back and do a sanity check:  are you still focused on your problem or have you been incrementally moved to a different, or greatly expanded, one?  Vision is nice, but you won't be around to solve tomorrow's problems if you can't solve today's.
  • Understand that industry analysts are often followers, not leaders.  If a vendor is showing you analyst support for their strategy, you need to figure out if the analyst is endorsing the strategy because of the strategy’s merits or because of the vendor’s claimed prior performance.  The latter is the definition of a halo effect and in a world full of private startups where high-quality analysts are in short supply, it’s easy to find “research” that effectively says nothing more than “this vendor is a leader because they say they’re performing really well and/or they’ve raised a lot of money.” That doesn’t tell you anything you didn’t know already and isn’t actually an independent source of information.  They are often simply amplifiers of the hype you’re already hearing.
  • Enjoy the sizzle; buy the steak.  Hype king Domo paid Alec Baldwin to make some (pretty pathetic) would-be viral videos and had Billy Beane, Flo Rida, Ludacris, and Marshawn Lynch at their user conference.  As I often say, behind any "marketing genius" is an enormous marketing budget, and that's all you're seeing — venture capital being directly converted into hype.  Heck, let them buy you a ticket to the show and have a great time.  Just don't buy the software because of it — or because of the ability to invest more money in hand-grooming a handful of big-name references.  Look to meet customers like you, who have spent what you want to spend, and see if they're happy and successful.  Don't get handled into meeting other customers only at pre-arranged meetings.  Walk the floor and talk to regular people.  Find out how many are there just for the show versus how many are actual, successful users of the software.
  • Dive into detail on the proposed solution.  Hyped vendors will often try to gloss over solutions and sell you the hype (e.g., “of course we can solve your problem, we’ve got the most logos, Gartner says we’re the leader, there’s an app for that.”)  What you need is a vendor who will listen to your problem, discuss it with you intelligently, and provide realistic estimates on what it takes to solve it.  The more willing they are to do that, the better off you are.  The more they keep talking about the founder’s escape from communism, the pedigree of their investors, their recent press coverage, or the amount of capital they’ve raised, the more likely you are to end up high and dry.  People interested in solving your problem will want to talk about your problem.
  • Beware the second-worst outcome:  the backwater.  Because hyped vendors are actually serving Sand Hill Road and/or Wall Street more than their customers, they pitch broad visions and huge markets in order to sustain the halo.  For a customer, that can be disastrous because the vendor may view the customer's problems as simply another lily pad to jump off on the path to success.  The second-worst outcome is when you buy a solution and then the vendor takes your money and invests it in solving other problems.  As a customer, you don't want to marry your vendor's fling.  You want to marry their core.  For startups, the pattern is typically over-expansion into too many things, getting in trouble, and then retracting hard back into the core, abandoning customers of the new, broader initiatives.  That's what happens when you get this alignment wrong and end up in a backwater or formerly-strategic area of your supplier's strategy.
  • Avoid the worst outcome:  no there there.  Once in a while, there is no "there there" behind some very hyped companies despite great individual investors, great VCs, strategic alliances, and a previously experienced team.  Perhaps the technology vision doesn't pan out, or the company switches strategies ("pivots") too often.  Perhaps the company just got too focused on its hype and not on its customers.  But the worst outcome, while somewhat rare, is when a company doesn't solve its advertised problem.  They may have a great story, a sexy demo, and some smart people — but what they lack is a core of satisfied customers solving the problem the company talks about.  In EPM, with due respect and in my humble opinion, Tidemark fell into this category, prior to what it called a "growth investment" and what sure seemed to me like a (fire) sale, to Marlin Equity Partners.  Customers need to watch out for these no-there-there situations, and the best way to do that is taking a strong dose of caveat emptor with a nose for "if it sounds too good to be true, then it might well be."

Why has Standalone Cloud BI been such a Tough Slog?

I remember that when I left Business Objects back in 2004, it was early days in the cloud.  We were using Salesforce internally (and were one of their larger customers at the time), so I was familiar with and a proponent of cloud-based applications, but I never felt great about BI in the cloud.  Despite that, Business Objects and others were aggressively ramping on-demand offerings, all of which amounted to pretty much nothing a few years later.

Startups were launched, too.  Specifically, I remember:

  • Birst, née Success Metrics, founded in 2004 by Siebel BI veterans Brad Peters and Paul Staelin, which was originally supposed to build vertical industry analytic applications.
  • LucidEra, founded in 2005 by Salesforce and Siebel veteran Ken Rudin (et alia) whose original mission was to be to BI what Salesforce was to CRM.
  • PivotLink, which did their series A in 2007 (but was founded in 1998), positioned as on-demand BI and later moved into more vertically focused apps in retail.
  • GoodData, founded in 2007 by serial entrepreneur Roman Stanek, which early on focused on SaaS embedded BI and later moved to more of a high-end enterprise positioning.

These were great people — Brad, Ken, Roman, and others were brilliant, well educated veterans who knew the software business and their market space.

These were great investors — names like Andreessen Horowitz, Benchmark, Emergence, Matrix, Sequoia, StarVest, and Tenaya invested over $300M in those four companies alone.

This was theoretically a great, straightforward cloud-transformation play on a $10B+ market, a la Siebel to Salesforce.

But of the four companies named above, only GoodData is doing well and still in the fight (with a high-end enterprise platform strategy that bears little resemblance to a straight cloud-transformation play); the three others all came to uneventful exits.

So, what the hell happened?

Meantime, recall that Tableau, founded in 2003 and armed in its early years with a measly $15M in venture capital and an exclusively on-premises business model, blew right by all the cloud BI vendors, going public in May 2013.  Despite the stock being cut by more than half since its July 2015 peak, it is still worth $4.2B today.

I can’t claim to have the definitive answer to the question I’ve posed in the title.  In the early days I thought it was related to technical issues like trust/security, trust/scale, and the complexities of cloud-based data integration.  But those aren’t issues today.  For a while back in the day I thought maybe the cloud was great for applications, but perhaps not for platforms or infrastructure.  While SaaS was the first cloud category to take off, we’ve obviously seen enormous success with both platforms (PaaS) and infrastructure (IaaS) in the cloud, so that can’t be it.

While some analysts lump EPM under BI, cloud-based EPM has not had similar troubles.  At Host, as at our top competitors, we have never struggled with focus or positioning, and we are all basically running slightly different variations on the standard cloud-transformation play.  I've always believed that lumping EPM under BI is a mistake because, while they use similar technologies, they are sold to different buyers (IT vs. finance) and the value proposition is totally different (tool vs. application).  While there's plenty of technology in EPM, it is an applications play — you can't sell it or implement it without domain knowledge in finance, sales, marketing, or whatever domain you're building the planning system for.  So I don't find it hard to explain why cloud EPM hasn't been a slog while cloud BI absolutely has been.

My latest belief is that the business model wasn’t the problem in BI.  The technology was.  Cloud transformation plays are all about business model transformation.  On-premises applications business models were badly broken:  the software cost $10s of millions to buy and $10s of millions more to implement (for large customers).  SMBs were often locked out of the market because they couldn’t afford the ante.  ERP and CRM were exposed because of this and the market wanted and needed a business model transformation.

With BI, I believe, the business model just wasn't the problem.  By comparison to ERP and CRM, BI was a fraction of the cost to buy and implement.  A modest BusinessObjects license might have cost $150K, and less than that to implement.  The problem was not that the BI business model was broken; it was that the technology never delivered on the democratization promise it made.  Despite shouting "BI for the masses" in 1995, BI never really made it beyond the analyst's desk.

Just as RDBMS themselves failed to deliver information democracy with SQL (which, believe it or not, was part of the original pitch — end users could write SQL to answer their own queries!), BI tools — while they helped enable analysts — largely failed to help Joe User.  They weren’t easy enough to use.  They lacked information discovery.  They lacked, importantly, easy-yet-powerful visualization.

That’s why Tableau, and to a lesser extent Qlik, prospered while the cloud BI vendors struggled.  (It’s also why I find it profoundly ironic that Tableau is now in a massive rush to “go cloud” today.)  It’s also one reason why the world now needs companies like Alation — the information democracy brought by Tableau has turned into information anarchy and companies like Alation help rein that back in (see disclaimers).

So, I think that cloud BI proved to be such a slog because the cloud BI vendors solved the wrong problem. They fixed a business model that wasn’t fundamentally broken, all while missing the ease of use, data discovery, and visualization power that both required the horsepower of on-premises software and solved the real problems the users faced.

I suspect it's simply another great, if simple, lesson in solving your customer's problem.

Feel free to weigh in on this one as I know we have a lot of BI experts in the readership.

Do You Want to be Judged on Intentions or Results?

It was early in my career, maybe 8 years in, and I was director of product marketing at a startup.  One day my peer, the director of marketing programs, hit me with this in an ops review meeting:

You want to be judged on intentions, not results.

I recall being dumbfounded at the time.  Holy cow, I thought.  Is he right?  Am I standing up arguing about mitigating factors and how things might have been when all the other people in the room were thinking only about black-and-white results?

It was one of those rare phrases that really stuck with me because, among other reasons, he was so right.  I wasn’t debating whether things happened or not.  I wasn’t making excuses or being defensive.  But I was very much judging our performance in the theoretical, hermetically sealed context of what might have been.

Kind of like sales saying a deal slipped instead of did not close.   Or marketing saying we got all the MQLs but didn’t get the requisite pipeline.  Or alliances saying that we signed up the 4 new partners, but didn’t get the new opportunities that were supposed to come with them.

Which part of the following sentence matters more — the first or the second?

We did what we were supposed to, but it didn’t have the desired effect.

We would have gotten the 30 MQLs from the event if it hadn't snowed in Boston.  But who decided to tempt fate by doing a live event in Boston in February?  People who want to be judged on intentions think about the snowstorm; people who want to be judged on results think about the MQLs.

People who want to be judged on intentions build in what they see as "reasons" (which others typically see as "excuses") for results not being achieved.

I’m six months late hiring the PR manager, but that’s because it’s hard to find great PR people right now.  (And you don’t want me to hire a bad one, do you?)

No, I don’t want you to hire a bad one.  I want you to hire a great one and I wanted you to hire them 6 months ago.  Do you think every other PR manager search in the valley took 6 months more than plan?  I don’t.

Fine lines exist here, no doubt.  Sometimes reasons are reasons and sometimes they are actually excuses.  The question isn’t about any one case.  It’s about, deep down, are you judging yourself by intentions or results?

You’d be surprised how many otherwise very solid people get this one thing wrong — and end up career-limited as a result.

The Role of Professional Services in a SaaS Business

I love to create reductionist mission statements for various departments in a company.  These are designed to be ultra-compact and potentially provocative.  My two favorite examples thus far are the ones for HR and for professional services, both below.

I like to make them based on real-life situations, e.g., when someone running a department seems confused about the real purpose of their team.

For example, some police-oriented HR departments seem to think their mission is to protect employees from management.  Think: "Freeze, you can't send an email like that; put your hands in the air and step away from the keyboard!"

I think otherwise. If the HR team conceptualizes itself as “helping managers manage,” it will be more positively focused, help deliver better results, and be a better business partner — all while protecting employees from bad managers (after all, mistreating employees is bad management).

Over the past year, I’ve developed one of these pithy mission statements for professional services, also known as consulting, the (typically billable) experts employed by a software company who work with customers on implementations after the sale:

Professional services exists to maximize ARR while not losing money.

Maximizing ARR surprises some people.  Why say that in the context of professional services?  Sales brings in new ARR.  Customer Success (or Customers for Life) is responsible for the maintenance and expansion of existing ARR.  Where does professional services fit in?  Shouldn't they exist to drive successful implementations or to achieve services revenue targets?  Yes, but that's actually secondary to the primary mission.

The point of a SaaS business is to maximize enterprise value and that value is a function of ARR.  If you could maximize ARR without a professional services team then you wouldn't have one at all (and some SaaS firms don't).  But if you're going to have a professional services team, then they — like everybody else — should be there to maximize ARR.  How does professional services help maximize ARR?  They:

  • Help drive new ARR by supporting sales — for example, working with sales to draft a statement of work and by building confidence that the company can solve the customer’s problem.  If you remember that customers buy “holes, not bits” you’ll know that a SaaS subscription, by itself, doesn’t solve any business problem.  The importance of the consultants who do the solution mapping is paramount.
  • Help preserve/expand existing ARR by supporting the Customer Success (aka, the Customers for Life) team, either by repairing blown implementations or by doing new or expanded implementations at existing customers.  This could entail anything from a “save” to a simple expansion, but either way, professional services is there maximizing ARR.
  • Help do both by enabling the partner ecosystem.  Professional services is key to enabling partners who can both provide quality implementation services for customers and who can extend the vendor’s reach through go-to-market partnering.

Or, as our SVP of Services at Host Analytics says, “our role is to make happy customers.”

I prefer to say “maximize ARR without losing money” but we’re very much on the same page.  Let’s finish with the “not losing money” part.  In my opinion,

  • A typical on-premises software vendor drove 25% to 30% gross margins on professional services.  Those were the days of big one-shot license fees and huge multi-million dollar implementations.  In those days, customers weren't necessarily too happy, but the services team had a strong "make money" aspect to its mission.
  • A typical SaaS vendor has negative 20% to negative 10% gross margins on services (and sometimes a lot more negative than that).  That's because some vendors subsidize their ARR with free or heavily discounted services, because ARR recurs whereas services revenue does not.

I believe that professional services has real value (e.g., our team at Host Analytics is amazing) and that if you're driving 0% to 5% gross margins with such a team, you are already supporting the ARR pool with discounted services (you could be running 25% to 30% margins).  Whether you make 0% or 10% doesn't much matter — because it won't matter to someone valuing your company — but I think it's a mistake to shoot for the 30% margins of yore, as well as a mistake to tolerate -50% margins and completely de-value your services.