
EPM, Project Orion, and the Beginner’s Mind

I’ll always be thankful for my time at Salesforce both because I met so many amazing people and because I learned so much.  I learned about the importance of Trust in a SaaS company (and was drilled in the mantra, “nothing is more important than the Trust of our customers.”)  And I learned about shoshin, the Zen concept of the Beginner’s Mind.

The Beginner’s Mind
It’s not unusual when working at Salesforce to hear about Zen concepts or get an email reply from Marc containing only a Zen proverb.  But of all the concepts I learned about, the most powerful and elusive was shoshin, a concept that Benioff says he adopted from Steve Jobs.  Per Wikipedia:

Shoshin (初心) is a word from Zen Buddhism meaning “Beginner’s Mind.” It refers to having an attitude of openness, eagerness, and lack of preconceptions when studying a subject, even when studying at an advanced level, just as a beginner would.

Shoshin is powerful because it enables you to take a fresh look at an old problem.  Shoshin is elusive, however, because it requires you to step outside your paradigm — the filters through which you see the world — which perhaps sounds easy, but can be incredibly difficult.  In fact, in what I call the paradox of knowledge, the more you know about something, the more difficult it is to break out of your paradigm, to get outside the metaphorical box.

As an example of this, our head of products, Sanjay Vyas, recently went to a silent, ten-day vipassana meditation retreat.  Vipassana means “to see things as they really are” and is a technique that has been passed down from the Buddha by an unbroken chain of teachers to the present day.  At the retreat, the first phase is three days spent simply trying to calm the noise in your mind.  Only then, after three days of silent meditation, are you ready to start trying to see things as they really are.  Such is the difficulty in breaking free from a paradigm.

The Problem We Approached With a Beginner’s Mind
What problem did we try to see with a Beginner’s Mind at Host Analytics?  End-user planning, budgeting, and forecasting (three key pieces of enterprise performance management, also known as EPM).  Why did we do it?  Because despite decades of great success within finance organizations, we believe that EPM has under-penetrated the overall market.

Far too many people rely solely on Excel for planning/budgeting and far too many EPM end-users build budgets in Excel and mail them to finance as opposed to using the EPM system.  The same is true for reporting, where far too often users drop out of the EPM system and into Excel to make reports and charts.  (This is less true of Host users due to our strong reporting, but the trend remains true at an industry level.)

While as EPMers, we take great pride in our category and, at Host, in our ability to move enterprise-class EPM to the cloud, we must recognize that at some level EPM has failed to deliver against its broad vision of accountability and empowerment.  To get to the bottom of this, as Clayton Christensen has often observed, you can’t just talk to your customers to understand your market; you need to understand non-consumers as well.  All those Excel-only or primarily-Excel users are Christensen’s non-consumers, so we decided to talk to them.  Here’s an example of what we heard.

“I hate budgeting.  They made me attend the meeting to look at these tools.  I don’t want to use any of them.”  — Chief Legal Officer

We heard this over and over.  The average business user would seemingly prefer a root canal to working on the budget.  Yet we knew these same business users were passionate about metrics, empowerment, accountability, and performance.  So where had the whole category gone wrong?  Thus was born Project Orion.

By Finance For Finance
We realized that for forty years EPM has been designed by finance for finance (or even more specifically, by FP&A for FP&A).  EPM vendors did a great job of listening to EPM customers.  And EPM customers, particularly EPM buyers, often had job titles like Vice President of Financial Planning & Analysis (FP&A).  These were the people who selected the tools.  These were the people who bought the tools.  But, these weren’t always the people who used the tools.  An important part of EPM is to roll it out broadly across an organization, meaning to put the tool into the hands of business end-users, budget owners, in all the various departments.

The Perils of “Configuration” to Dumb Down the Interface
The universal answer to the end-user question was to dumb it down.  Configure it.  Take the product that was built for a heavily analytical, highly skilled, finance professional — and FP&A people are whip smart — and dumb down the interface for a business end-user.  Hide some menu items.  Remove some toolbar buttons.  Take away some tabs.

That was the conventional wisdom.  Take a product built for one person and configure it for use by another.  Now some EPM vendors were better than others at this bluff; some had slicker interfaces that were relatively more appealing than others.  But amazingly, nobody ever said, “wait a minute, what if we designed the product for the people who actually used it?”

Thanks to shoshin, that’s exactly what we did with Project Orion at Host Analytics.

Task-Oriented Design
Instead of starting with what we had, a template-oriented product built for finance people, and a desire to twist/configure it into something else, we started with a blank sheet.  We asked business end-users what they wanted to do with an EPM product.  Those end-users gave us a three-part answer:

  • We want to be able to quickly figure out where we stand relative to the plan.
  • We want help in determining where we are going to land on the current quarter — and to optimize that result.  (Not an easy problem, mind you.)
  • We want to get the next period planned in line with objectives and targets.

And we want to do all of the above quickly and easily because, much as we love this stuff (and we don’t), we’ve got a business to run.  This idea, what we came to call the stand / land / planned message, became the center of Orion design.

How We Knew We Were Onto Something
We noticed quickly that people had strong reactions to Orion, which typically fell into one of two types:

  • Reaction 1:  “Holy Cow, why didn’t I think of that?  It’s kind of obvious in 20/20 hindsight.”
  • Reaction 2:  “That’s not needed.  You just need to configure your way out of the problem.”

In the early days, we got a lot of reaction 2 — particularly from our internal EPM experts, who were somewhat blinded by the paradox of knowledge.  The internal resistance was, at times, intense.  But that resistance told me that we were onto something.  We were challenging the conventional wisdom in a way that could lead to a major breakthrough.  And the more we asked people outside Host, and the more we showed Orion to business end-users, the more convinced we were that we had made such a breakthrough.

The same chief legal officer who said “I hate budgeting” above said this:

“When I look at Project Orion, it’s clear that you are the only folks thinking about me.  I could and would use this tool.” — Chief Legal Officer after seeing Orion.

Tips on Adopting a Beginner’s Mind
We’re launching Project Orion today and are proud both of the software we built and of how we came to build it.  We believe Orion is a breakthrough product that is going to change the EPM market.  All because we looked at an age-old problem in EPM with a Beginner’s Mind.

I’ll finish the post with some tips on how to take a shoshin approach that we learned along our journey — and which happily don’t involve 10 days of silent meditation.

  • Put a mix of veterans and neophytes on the project.  This will reduce the paradox of knowledge and naturally bring in some fresh eyes.
  • Confront tough facts.  The data says lots of people still use only or primarily Excel despite 40 years of EPM.  That’s a fact.  The question is why?
  • Challenge the team to document hidden assumptions.  Configuration as the solution to the end-user problem was one such huge assumption.  You can only go outside the box when you know its edges.
  •  Talk to non-consumers.  Talking to customers is great, but it can create an echo chamber.  Talk to non-consumers, too, particularly when fishing for breakthroughs, and ask them why they have not purchased in the product category.
  • Embrace resistance.  View resistance as a good sign, as a sign that you’re changing something big, and not just as a yellow flag.
  • Test early and often.  Go back to the non-consumers you interviewed and ask if your prototype would change their mind.  Iterate in response.



Kellblog Predictions for 2018

In continuing my tradition of offering predictions every year, let’s start with a review of my hits and misses on my 2017 predictions.

  1. The United States will see a level of divisiveness and social discord not seen since the 1960s.  HIT.
  2. Social media companies finally step up and do something about fake news. MISS, but ethical issues are starting to catch up with them.
  3. Gut feel makes a comeback. HIT, while I didn’t articulate it as such, I see this as the war on facts and expertise (e.g., it’s cold today ergo global warming isn’t real despite what “experts” say).
  4. Under a volatile leader, we can expect sharp reactions and knee-jerk decisions that rattle markets, drive a high rate of staff turnover in the Executive branch, and fuel an ongoing war with the media.  HIT.
5. With the new administration’s promises of $1T in infrastructure spending, you can expect interest rates to rise and inflation to accelerate. MISS, turns out this program was never classical government investment in infrastructure, but a massive privatization plan that never happened.
6. Huge emphasis on security and privacy. PARTIAL HIT, security remained a hot topic, but despite numerous major breaches it still hasn’t really hit center stage.
  7. In 2017, we will see more bots for both good uses (e.g., customer service) and bad (e.g., trolling social media).  HIT.
  8. Artificial intelligence hits the peak of inflated expectations. HIT.
  9. The IPO market comes back. MISS, though according to some it “sucked less.”
  10. Megavendors mix up EPM and ERP or BI. PARTIAL HIT.  This prediction was really about Workday and was correct to the extent that they’ve seemingly not made much progress in EPM.

Kellblog’s Predictions for 2018

1.  We will again continue to see a level of divisiveness and social discord not seen since the 1960s. We have evolved from a state of having different opinions about policies based on common facts to a dangerous state based on different facts, even on easily disprovable claims, e.g., the White House nativity scene.  The media is advancing, not reducing, this divide.

2.  The war on facts and expertise will continue to escalate. Read The Death of Expertise for more.   This will extend to a war on college. While an attempted opening salvo on graduate student tuition waivers didn’t fire, in an environment where the President’s son says, “we’ll take $200,000 of your money; in exchange we’ll train your children to hate our country,” you can expect ongoing attacks on post-secondary education.  This spells trouble for Silicon Valley, where a large number of founders and entrepreneurs are former grad students as well as immigrants (which is a whole different area of potential trouble).

3.  Leading technology and social media companies finally step up to face ethical challenges. This means paying more attention to their own culture (e.g., sexual harassment, brogrammers).  This means taking responsibility for policing trolls, the spread of fake news, addictive content, and foreign intelligence operations.  Thus far, they have tended to argue they are simply keepers of the town square, and not responsible for the content shared there.  This abdication of responsibility should start to stop in 2018, if only because people start to tune out the services.  This leads to one of my favorite tweets of the year:


4.  AI will move from hype to action, meaning bigger budgets, more projects, and some high visibility failures. It will also mean more emphasis on voice and more conversational chatbots.  For finance departments, this means more of what Ventana’s Rob Kugel calls the age of robotic finance, which unites AI and machine learning, robotic process automation (RPA), natural language bots, and blockchain-based distributed ledgers.

5. AI will continue to generate lots of controversy about job displacement. While some remain optimistic, the consensus viewpoint seems to be that AI will suppress employment, most likely widening the wealth inequality gap.  A collapsing educational system combined with AI-driven pressure on low-skilled work seems a recipe for trouble.

6.  The bitcoin bubble bursts. As a reminder, at one point during the peak of tulip mania, the Dutch East India Company was worth more, on an inflation-adjusted basis, than twenty of today’s technology giants combined.


7.  The Internet of Things (IoT) will continue to build momentum.  IoT won’t hit in a massive horizontal way; instead, B2B adoption will be led by certain verticals such as healthcare, retail, and supply chain.

8.  The freelance / gig economy continues to gain momentum with freelance workers poised to pass traditional employees by 2027. While the gig economy brings advantages to high-skilled knowledge workers (e.g., freedom of location, freedom of work projects), this same trend threatens low-skilled workers via the continual decomposition of full-time jobs into a series of temp shifts.  This means someone working 60 hours a week across three 20-hour shifts wouldn’t be considered a full-time employee and thus wouldn’t be eligible for full-time benefits, further increasing wealth inequality.


9.  M&A heats up due to repatriation of overseas cash. Apple alone, for example, has $252B in overseas cash.  With the new tax rate dropping from 35% to 15.5%, it will now be ~$50B less expensive for Apple to repatriate that cash.  Overall, US companies hold trillions of dollars overseas and making it cheaper for them to repatriate that cash suggests that they will be flush with dollars to invest in many areas, including M&A.
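(Back-of-the-envelope on that figure: repatriating $252B at the old 35% rate would cost roughly 0.35 × $252B ≈ $88B in tax, versus roughly 0.155 × $252B ≈ $39B at the new rate, a savings of about $49B, hence the ~$50B.)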

10.  2018 will be a good year for cloud EPM vendors. The dynamic macro environment, the opportunities posed by cash repatriation, and the strong fundamentals in the economy will increase demand for EPM software that helps companies explore how to best exploit the right set of opportunities facing them.  Oracle will fail in pushing PBCS into the NetSuite base, creating a nice third-party opportunity.  SAP, Microsoft, and IBM will continue to put resources into other strategic investment areas (e.g., IBM and Watson, SAP and HANA), leaving fallow the EPM market adjacent to ERP.  And the greenfield opportunity to replace Excel for financial planning, budgeting, and even consolidations will continue to drive strong growth.

Let me wish everyone, particularly the customers, partners, and employees of Host Analytics, a Happy New Year in 2018.

# # #

Disclaimer:  these predictions are offered in the spirit of fun.  See my FAQ for more on this and other usage terms.

Putting the A Back in FP&A with Automated, Integrated Planning

I was reading this blog post on Continuous Planning by Rob Kugel of Ventana Research the other day and it reminded me of one of my (and Rob’s) favorite sayings:

We need to put the A back in FP&A

This means that the financial planning and analysis (FP&A) team at many companies is so busy doing other things that it doesn’t have time to focus on what it does best and where it can add the most value:  analysis.

This begs the question:  where did the A go?  What are the other things that are taking up so much time?  The answer:  data prep and spreadsheet jockeying.  These tasks suck both the time and the soul out of the FP&A function.


Data-related tasks — such as finding, integrating, and preparing data — take up more than two-thirds of FP&A’s time.  Put differently, FP&A spends twice as much time getting ready to analyze data as it does analyzing it.  It might even be worse, depending on whether periodic and ad hoc reporting is included in the data-related tasks or further carved out of the 28% of time remaining for analytics, as I suspect it is.


It’s not just finance who loves spreadsheets.  The business does too:  salesops, marketingops, supply chain planners, professional services ops, and customer support all love spreadsheets.  When I worked at Salesforce, we had one of the most sophisticated sales strategy and planning teams I’ve ever seen.  Their tool of choice?  Excel.

This comes back to haunt finance in three ways:

  • Warring models, for example, when the salesops new bookings model doesn’t foot to the finance one because they make different ramping and turnover assumptions.  These waste time with potentially endless fights.
  • Non-integrated models.  Say sales and finance finally agree on a bookings target and to hire 5 more salespeople to support it.  Now we need to call marketing to update their leadgen model to ensure there’s enough budget to support them, customer service to ensure we’re staffed to handle the incremental customers they sign, professional services to ensure we have adequate consulting resources, and on and on.  Forget any of these steps and you’ll start the year out of balance, with unattainable targets somewhere.
  • Excel inundation.  FP&A develops battle fatigue dealing with and integrating so many different versions of so many spreadsheets, often late at night and under deadline pressure.  Mistakes get made.

So how can we prevent FP&A from being run over by these forces?  The answer is to automate, automate, and integrate.

  • Automate data integration and preparation.  Let’s free up time by using software that lets you “set and forget” data refreshes.  You should be able to set up a connector to a data source once, and then have it automatically run at periodic intervals going forward (see the sketch after this list).  No more mailing spreadsheets around.
  • Automate periodic FP&A tasks.  Use software where you can invest in building the perfect monthly board pack, monthly management reports, quarterly ops review decks, and quarterly board reports once, and then automatically refresh them every period through those templates.  This not only frees up time and reduces drudgery; it eliminates plenty of mistakes as well.
  • Integrate planning across the organization.  Move to a cloud-based enterprise performance platform (like Host Analytics) that not only accomplishes the prior two goals, but also offers a modeling platform that puts finance, salesops, marketingops, professional services, supply chain, HR, and everyone else on a common footing.
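To make the “set and forget” idea concrete, here’s a minimal, purely illustrative sketch in Python — not any particular vendor’s product, and fetch_gl_actuals is a hypothetical stand-in for whatever connector pulls your data: register a refresh job once, and it keeps re-running on a schedule.

```python
import threading

def fetch_gl_actuals():
    # Hypothetical connector: pull, transform, and load GL actuals
    # from the source system into the planning platform.
    print("Refreshing GL actuals...")

def schedule_refresh(task, interval_seconds):
    """Run task now, then re-run it every interval_seconds."""
    def runner():
        task()
        threading.Timer(interval_seconds, runner).start()
    runner()

# Set up the connector once; it refreshes nightly thereafter.
schedule_refresh(fetch_gl_actuals, 24 * 60 * 60)
```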

Since the obligatory groundwork in FP&A is always heavy, you’re not going to succeed in putting the A back in FP&A simply by working harder and later.  The only way to put the A back in FP&A is to create time.  And you can do that with two doses of automation and one of integration.

Win Rates, Close Rates and Milestone vs. Flow Analysis

Hey, what’s your win rate?

It’s another seemingly simple question.  But, like most SaaS metrics, when you dig deeper you find it’s not.  In this post we’ll take a look at how to calculate win rates and use win rates to introduce the broader concept of milestone vs. flow analysis that applies to conversion rates across the entire sales funnel.

Let’s start with some assumptions.  Once an opportunity is accepted by sales (known as a sales-accepted opportunity, or SAL), it eventually will end up in one of three terminal states:

  • Won
  • Lost
  • Other (derailed, no decision)

Some people don’t like “other” and insist that opportunities should be exclusively either won or lost and that other is an unnecessary form of lost which should be tracked with a lost reason code as opposed to its own state.  I prefer to keep other, and call it derailed, because a competitive loss is conceptually different from a project cancellation, major delay, loss of sponsor, or a company acquisition that halts the project.  Whether you want to call it other, no decision, or derailed, I think having a third terminal state is warranted from first principles.  However, it can make things complicated.

For example, you’ll need to calculate win rates two ways:

  • Win rate, narrow = wins / (wins + losses)
  • Win rate, broad = wins / (wins + losses + derails)

Your narrow win rate tells you how good you are at beating the competition.  Your broad rate tells you how good you are at closing deals (that come to a terminal state).

Narrow win rate alone can be misleading.  If I told you a company had a 66% win rate, you might be tempted to say “time to add more salespeople and scale this thing up.”  If I told you they got the 66% win rate by derailing 94 out of every 100 opportunities it generated, winning 4, and losing the other 2, then you’d say “not so fast.”  This, of course, would show up in the broad win rate of 4%.

This brings up the important question of timing.  Both of these win rate calculations ignore deals that push out of a quarter.  So another degenerate case is a situation where you win 4, lose 2, derail 4, and push 90 opportunities.  In this case, narrow win rate = 66% and broad win rate = 40%.  Neither is shining a light on the problem (which, if it happens continuously, I call a rolling hairball problem).
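To make the two definitions and the degenerate cases concrete, here’s a quick Python sketch using the example numbers above:

```python
def win_rates(wins, losses, derails):
    """Narrow and broad win rates, as defined above."""
    narrow = wins / (wins + losses)
    broad = wins / (wins + losses + derails)
    return narrow, broad

# Win 4, lose 2, derail 94: great narrow rate, terrible broad rate.
print(win_rates(4, 2, 94))   # (0.667, 0.04)

# Win 4, lose 2, derail 4, push 90: the 90 pushed deals appear in
# neither rate -- this is where the rolling hairball hides.
print(win_rates(4, 2, 4))    # (0.667, 0.40)
```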

The issue here is that thus far we’ve been performing what I call a milestone analysis.  In effect, we put observers by the side of the road at various milestones (created, won, lost, derailed) and ask them to count the number of opportunities that pass by each quarter.  The issue, especially with companies that have long sales cycles, is that you have no idea of progression.  You don’t know if the opportunities that passed “win” this quarter came from the opportunities that passed “created” this quarter, or if they came from last quarter, the quarter before that, or even earlier.

Milestone analysis has two key advantages:

  • It’s easy — you just need to count opportunities passing milestones
  • It’s instant — you don’t have to wait to see how things play out to generate answers

The big disadvantage is it can be misleading, because the opportunities hitting a terminal state this quarter were generated in many different time periods.  For a company with an average 9-month sales cycle, the opportunities hitting a terminal state in quarter N were generated primarily in quarter N-3, but with some coming in quarters N-2 and N-1 and some coming in quarters N-4 and N-5.  Across that period very little was constant; for example, marketing programs and messages changed.  So a marketing effectiveness analysis would be very difficult when approached this way.

For those sorts of questions, I think it’s far better to do a cohort-based analysis, which I call a flow analysis.  Instead of looking at all the opportunities that hit a terminal state in a given time period, you go back in time, grab a cohort of opportunities (e.g., all those generated in 4Q16) and then see how they play out over time.  You go with the flow.

For marketing programs effectiveness, this is the only way to do it.  Instead of a time-based cohort, you’d take a programs-based cohort (e.g., all the opportunities generated by marketing program X), see how they play out, and then compare various programs in terms of effectiveness.

The big downside of flow analysis is you end up analyzing ancient history.  For example, if you have a 9 month average sales cycle with a wide distribution around the mean, you may need to wait 15-18 months before the vast majority of the opportunities hit a terminal state.  If you analyze too early, too many opportunities are still open.  But if you put off analysis then you may get important information, but too late.

You can compress the time window by measuring program effectiveness not against sales outcomes but against important steps along the funnel.  That way you could compare two programs on the basis of their ability to generate MQLs or SALs, but you still wouldn’t know whether, and at what relative rate, they generate actual customers.  So you could end up doubling down on a program that generates a lot of interest, but not a lot of deals.

Back to our original topic, the same concept comes up in analyzing win rates.  Regardless of which win rate you’re calculating, at most companies you’re calculating it on a milestone basis.  I find milestone-based win rates more volatile and less accurate than a flow-based SAL-to-close rate.  For example, if I were building a marketing funnel to determine how many deals I need to hit next year’s number, I’d want to use a SAL-to-close rate, not a win rate, to do so.  Why?  SAL-to-close rates (see the sketch after this list):

  • Are less volatile because they’re damped by using long periods of time.
  • Are more accurate because they actually track what you care about — if I get 100 opportunities, how many close within a given time period.
  • Automatically factor in derails and slips (the former are ignored in the narrow win rate and the latter ignored in both the narrow and broad win rates).
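Here’s a minimal Python sketch of the flow-based calculation; the cohort composition is hypothetical (it happens to match the 1Q17 cohort in the example below) and shows how open, slipped deals drag the close rate while remaining invisible to both win rates:

```python
def flow_rates(cohort):
    """Flow-based rates for one cohort of opportunities.

    cohort: list of outcomes, each "won", "lost", "derailed", or "open".
    """
    wins = cohort.count("won")
    losses = cohort.count("lost")
    derails = cohort.count("derailed")
    narrow_win = wins / (wins + losses)
    broad_win = wins / (wins + losses + derails)
    close = wins / len(cohort)   # denominator is the whole cohort
    return narrow_win, broad_win, close

# A cohort of 10 SALs: 3 won, 3 lost, 1 derailed, 3 still open.
cohort = ["won"] * 3 + ["lost"] * 3 + ["derailed"] + ["open"] * 3
print(flow_rates(cohort))   # (0.50, 0.43, 0.30)
```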

Let’s look at an example.  Here’s a chart that tracks 20 opportunities, 10 generated in 1Q17 and 10 generated in 2Q17, through their entire lifetime to a terminal stage.

[Chart: opportunity tracking]

In reality things are a lot more complicated than this picture because you have opportunities still being generated in 3Q17 through 4Q18 and you’ll have opportunities that are still in play generated in numerous quarters before 1Q17.  But to keep things simple, let’s just analyze this little slice of the world.  Let’s do a milestone-based win/loss analysis.


First, you can see the milestone-based win/loss rates bounce around a lot.  Here it’s due in part to the law of small numbers, but I do see similar volatility in real life — in my experience win rates bounce within a fairly broad zone — so I think it’s a real issue.  Regardless of that, what’s indisputable is that in this example, this is how things will look to the milestone-based win/loss analyzer.  Not a very clear picture — and a lot to panic about in 4Q17.

Let’s look at what a flow-based cohort analysis produces.


In this case, we analyze the cohort of opportunities generated in the year-ago quarter.  Since we only generate opportunities in two quarters, 1Q17 and 2Q17, we only have two cohorts to analyze, and we get only two sets of numbers.  The thin blue box in the opportunity tracking chart shows the data summarized in the 1Q18 column and the thin orange box shows the data for the 2Q18 column.  Both boxes depict how 3 opportunities in each cohort are still open at the end of the analysis period (imagine you did the 1Q18 analysis in 1Q18) and haven’t come to final resolution.  The cohorts both produce a 50% narrow win rate, a 43% vs. 29% broad win rate, and a 30% vs. 20% close rate.  How good are these numbers?

Well, in our example, we have the luxury of finding the true rates by letting the six open opportunities close out over time.  By doing a flow-based analysis in 4Q18 of the 1H17 cohort, we can see that our true narrow win rate is 57%, our true broad win rate is 40%, and our close rate is also 40% (which, once everything has arrived at a terminal state, is definitionally identical to the broad win rate).
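Working backward from those rates, the 20 opportunities must have finished as 8 wins, 6 losses, and 6 derails, which you can verify in a few lines:

```python
wins, losses, derails = 8, 6, 6          # all 20 opportunities now terminal
print(wins / (wins + losses))            # 0.571 -> true narrow win rate (57%)
print(wins / (wins + losses + derails))  # 0.400 -> true broad win rate (40%)
print(wins / 20)                         # 0.400 -> close rate; with no open
                                         # deals left, it equals the broad rate
```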


Hopefully this post has helped you think about your funnel differently by introducing the concept of milestone- vs. flow-based analysis and by demonstrating how the same business situation results in very different rates depending on both the choice of win rate and the analysis type.

Please note that the math in this example backed me into a 40% close rate, which is about double what I believe is the benchmark in enterprise software — I think 20 to 25% is a more normal range.


Is IBM Getting Out of Enterprise Performance Management?

I noticed that IBM last week sold off several EPM products — IBM Cognos Disclosure Management (CDM), IBM Cognos Financial Statement Reporting (FSR), and IBM Clarity 7 products — to a company called Certent.

This, combined with a pretty weak performance in Gartner’s recent financial and strategic CPM magic quadrants — where IBM landed as a Visionary in one and a Challenger in the other, and a Leader in neither — got me wondering about IBM’s commitment to EPM as a category going forward.  Could Planning or TM1 be next?

Moreover, it wasn’t just the new Gartner magic quadrants where IBM didn’t fare well.  In the Dresner Wisdom of Crowds market study, IBM was bottom-right in the Customer Experience model and was the only vendor entirely left out of (i.e., “outside the magnifying glass”) the vendor credibility model.  And IBM’s ring in the spider chart seems to have gotten worse, not better, in 2017 versus 2016.

Yes, we all know IBM is quite busy re-branding everything that’s not nailed down as Watson, but could they be backing off EPM?

Which got me wondering, as I surfed around IBM’s website, why some products appeared to be first-class “products” while others were found under “marketplace.”  Why is DB2 under analytics products while TM1 is under marketplace?

[Screenshot: DB2 under analytics products vs. TM1 under marketplace]

Maybe it’s nothing, but I decided to check around a bit.  My friends in the know seem to believe that IBM remains committed to EPM, but that they view Clarity as a legacy product and were tired of getting beaten by Workiva in disclosure management.  That is, they saw it as a desire to focus more on planning and consolidation, as well as things like compensation management.

Me, I’m not so sure.  When companies start pruning in an area, sometimes they keep pruning.  And, in general, we don’t see them that much in the marketplace — particularly when you think of the powerhouse that Cognos was back in the day.  And they don’t seem to be doing that well.  And Watson is the big future focus.  So, file this under rumor and speculation, but watch this space.

The New 2017 Gartner Magic Quadrants for Cloud Strategic CPM (SCPM) and Cloud Financial CPM (FCPM) – How to Download; A Few Thoughts

For some odd reason, I always think of this scene — The New Phone Book’s Here — from an old Steve Martin comedy whenever Gartner rolls out their new Magic Quadrants (MQ) for corporate performance management (CPM). It’s probably because of all the excitement they generate.

Last year, Gartner researchers John Van Decker and Chris Iervolino kept that excitement up by making the provocative move of splitting the CPM quadrant in two — strategic CPM (SCPM) and financial CPM (FCPM). Never complacent, this year they stirred things up again by inserting the word “cloud” before the category name for each; we’ll discuss the ramifications of that in a minute.

Free Download of 2017 CPM Magic Quadrants

But first, let me provide some links where you can download the new FCPM and SCPM magic quadrants:

Significance of the New 2017 FCPM and SCPM Magic Quadrants

The biggest change this year is the insertion of the word “cloud” in the title of the magic quadrants.  This seemingly small change, like a butterfly effect, results in an entirely new world order where two of the three megavendors in the category (i.e., IBM and SAP) get displaced from market leadership due to the lack of credibility and/or sophistication of their cloud offerings.

For example:

  • In the strategic CPM quadrant, IBM is relegated to the Visionary quadrant (bottom right) and SAP does not even make the cut.
  • In the financial CPM quadrant, IBM is relegated to the Challenger quadrant (top left) and SAP again does not even make the cut.

Well, I suppose one might then ask: if IBM and SAP do poorly in the cloud financial and strategic CPM magic quadrants, then how do they do in the “regular” ones?

To which the answer is, there aren’t any “regular” ones; they only made cloud ones.  That’s the point.

So I view this as the mainstreaming of cloud in EPM [1].  Gartner is effectively saying a few things:

  • Who cares how much maintenance revenue a vendor derives from legacy products?
  • The size of a vendor’s legacy base is independent of its position for the future.
  • The cloud is now the norm in CPM product selection, so it’s uninteresting to even produce a non-cloud MQ for CPM. The only CPM MQs are the cloud ones.

While I have plenty of beefs with Oracle as a prospective business partner — and nearly as many with their cloud EPM offerings — to their credit, they have been making an effort at cloud EPM while IBM and SAP seem to have somehow been caught off-guard, at least from an EPM perspective.

(Some of Oracle’s overall cloud revenue success is likely cloudwashing, though they settled a related lawsuit with the whistleblower, so we’ll never know the details.)

Unlikely Bedfellows:  Only Two Vendors are Leaders in Both FCPM and SCPM Magic Quadrants

This creates the rather odd situation where there are only two vendors in the Leaders section of both the financial and strategic CPM magic quadrants:  Host Analytics and Oracle.  That means only two vendors can provide the depth and breadth of products in the cloud to qualify for the Leaders quadrant in both the FCPM and SCPM MQ.

I know who I’d rather buy from.

In my view, Host Analytics has a more complete, mature, and proven product line — we’ve been at this a lot longer than they have — and, well, oligopolists aren’t really famous for their customer success and solutions orientation.  More infamous, in fact.  See the section of the FCPM report where it says Oracle ranks in the “bottom 25% of vendors in this MQ on ‘overall satisfaction with vendor.’”

Or how an Oracle alumnus once defined “solution selling” for me:

Your problem is you are out of compliance with the license agreement and we’re going to shut down the system.  The solution is to give us money.


For more editorial, you can read John O’Rourke’s post on the Host Analytics corporate blog.

Download the 2017 FCPM and SCPM Magic Quadrants

Or you can download the new 2017 Gartner CPM MQs here.

# # #


[1] Gartner refers to the category as corporate performance management (CPM).  I generally refer to it as enterprise performance management (EPM), reflecting the fact that EPM software is useful not only for corporations, but other forms of organization such as not-for-profit, partnerships, government, etc.  That difference aside, I generally view EPM and CPM as synonyms.

Why has Standalone Cloud BI been such a Tough Slog?

I remember when I left Business Objects back in 2004 that it was early days in the cloud.  We were using Salesforce internally (and were one of their larger customers at the time), so I was familiar with and a proponent of cloud-based applications, but never felt great about BI in the cloud.  Despite that, Business Objects and others were aggressively ramping on-demand offerings, all of which amounted to pretty much nothing a few years later.

Startups were launched, too.  Specifically, I remember:

  • Birst, née Success Metrics, and founded in 2004 by Siebel BI veterans Brad Peters and Paul Staelin, which was originally supposed to be vertical industry analytic applications.
  • LucidEra, founded in 2005 by Salesforce and Siebel veteran Ken Rudin (et alia) whose original mission was to be to BI what Salesforce was to CRM.
  • PivotLink, which did their series A in 2007 (but was founded in 1998), positioned as on-demand BI and later moved into more vertically focused apps in retail.
  • GoodData, founded in 2007 by serial entrepreneur Roman Stanek, which early on focused on SaaS embedded BI and later moved to more of a high-end enterprise positioning.

These were great people — Brad, Ken, Roman, and others were brilliant, well educated veterans who knew the software business and their market space.

These were great investors — names like Andreessen Horowitz, Benchmark, Emergence, Matrix, Sequoia, StarVest, and Tenaya invested over $300M in those four companies alone.

This was theoretically a great, straightforward cloud-transformation play of a $10B+ market, a la Siebel to Salesforce.

But of the four companies named above, only GoodData is doing well and still in the fight (with a high-end enterprise platform strategy that bears little resemblance to a straight cloud transformation play); the three others all came to uneventful exits.

So, what the hell happened?

Meanwhile, recall that Tableau, founded in 2003 and armed in its early years with a measly $15M in venture capital and an exclusively on-premises business model, blew by all the cloud BI vendors, going public in May 2013.  Despite the stock being cut by more than half since its July 2015 peak, it is still worth $4.2B today.

I can’t claim to have the definitive answer to the question I’ve posed in the title.  In the early days I thought it was related to technical issues like trust/security, trust/scale, and the complexities of cloud-based data integration.  But those aren’t issues today.  For a while back in the day I thought maybe the cloud was great for applications, but perhaps not for platforms or infrastructure.  While SaaS was the first cloud category to take off, we’ve obviously seen enormous success with both platforms (PaaS) and infrastructure (IaaS) in the cloud, so that can’t be it.

While some analysts lump EPM under BI, cloud-based EPM has not had similar troubles.  At Host, and our top competitors, we have never struggled with focus or positioning and we are all basically running slightly different variations on the standard cloud transformation play.  I’ve always believed that lumping EPM under BI is a mistake because while they use similar technologies, they are sold to different buyers (IT vs. finance) and the value proposition is totally different (tool vs. application).  While there’s plenty of technology in EPM, it is an applications play — you can’t sell it or implement it without domain knowledge in finance, sales, marketing or whatever domain for which you’re building the planning system.  So I’m not troubled to explain why cloud EPM hasn’t been a slog while cloud BI absolutely has been.

My latest belief is that the business model wasn’t the problem in BI.  The technology was.  Cloud transformation plays are all about business model transformation.  On-premises applications business models were badly broken:  the software cost $10s of millions to buy and $10s of millions more to implement (for large customers).  SMBs were often locked out of the market because they couldn’t afford the ante.  ERP and CRM were exposed because of this and the market wanted and needed a business model transformation.

With BI, I believe, the business model just wasn’t the problem.  By comparison to ERP and CRM, it was a fraction of the cost to buy and implement.  A modest BusinessObjects license might have cost $150K, and less than that to implement.  The problem was not that the BI business model was broken; it was that the technology never delivered on the democratization promise that it made.  Despite shouting “BI for the masses” in 1995, BI never really made it beyond the analyst’s desk.

Just as RDBMS themselves failed to deliver information democracy with SQL (which, believe it or not, was part of the original pitch — end users could write SQL to answer their own queries!), BI tools — while they helped enable analysts — largely failed to help Joe User.  They weren’t easy enough to use.  They lacked information discovery.  They lacked, importantly, easy-yet-powerful visualization.

That’s why Tableau, and to a lesser extent Qlik, prospered while the cloud BI vendors struggled.  (It’s also why I find it profoundly ironic that Tableau is now in a massive rush to “go cloud” today.)  It’s also one reason why the world now needs companies like Alation — the information democracy brought by Tableau has turned into information anarchy and companies like Alation help rein that back in (see disclaimers).

So, I think that cloud BI proved to be such a slog because the cloud BI vendors solved the wrong problem. They fixed a business model that wasn’t fundamentally broken, all while missing the ease of use, data discovery, and visualization power that both required the horsepower of on-premises software and solved the real problems the users faced.

I suspect it’s simply another great, if simple, lesson in solving your customer’s problem.

Feel free to weigh in on this one as I know we have a lot of BI experts in the readership.