Category Archives: Modeling

The New Gartner 2018 Magic Quadrants for Cloud Financial Planning & Analysis and Cloud Financial Close Solutions

If all you’re looking for is the free download link, let’s cut to the chase:  here’s where you can download the new 2018 Gartner Magic Quadrant for Cloud Financial Planning and Analysis Solutions and the new 2018 Gartner Magic Quadrant for Cloud Financial Close Solutions.  These MQs are written jointly by John Van Decker and Chris Iervolino (with Chris as primary author on the first and John as primary author on the second).  Both are deep experts in the category with decades of experience.

Overall, I can say that at Host Analytics, we are honored to be a leader in both MQs again this year.  We are also honored to be the only cloud pure-play vendor to be a leader in both MQs and we believe that speaks volumes about the depth and breadth of EPM functionality that we bring to the cloud.

So, if all you wanted was the links, thanks for visiting.  If, however, you’re looking for some Kellblog editorial on these MQs, then please continue on.

Whither CPM?
The first thing the astute reader will notice is that the category name, which Gartner formerly referred to as corporate performance management (CPM), and which others often referred to as enterprise performance management (EPM), is entirely missing from these MQs.  That’s no accident.  Gartner decided last fall to move away from CPM as an uber-category descriptor in favor of referring more directly to the two related, but pretty different, categories beneath it.  Thus, in the future you won’t be hearing “CPM” from Gartner anymore, though I know that some vendors — including Host Analytics — will continue to use EPM/CPM until we can find a more suitable capstone name for the category.

Personally, I’m in favor of this move for two simple reasons.

  • CPM was a forced, analyst-driven category in the first place, dating back to Howard Dresner’s predictions that financial planning/budgeting would converge with business intelligence.  While Howard published the research that launched a thousand ships in terms of BI and financial planning industry consolidation (e.g., Cognos/Adaytum, BusinessObjects/SRC/Cartesis, Hyperion/Brio), the actual software itself never converged.  CPM never became like CRM — a true convergence of sales force automation (SFA) and contact center.  In each case, the two companies could be put under one roof, but they sold fundamentally different value propositions to very different buyers and thus never came together as one.
  • In accordance with the prior point, few customers actually refer to the category by CPM/EPM.  They say things much more akin to “financial planning” and “consolidation and close management.”  Since I like referring to things in the words that customers use, I am again in favor of this change.

It does, however, create one problem — Gartner has basically punted on trying to name a capstone category to include vendors who sell both financial planning and financial consolidation software.  Since we at Host Analytics think that’s important, and since we believe there are key advantages to buying both from the same vendor, we’d prefer if there were a single, standard capstone term.  If it were easy, I suppose a name would have already emerged [1].

How Not To Use Magic Quadrants
While they are Gartner’s flagship deliverable, magic quadrants (MQs) can generate a lot of confusion.  MQs don’t tell you which vendor is “best” because there is no universal best in any category.  MQs don’t tell you which vendor to pick to solve your problem because different solutions are designed around meeting different requirements.  MQs don’t predict the future of vendors — last year’s movement vectors rarely predict this year’s positions.  And the folks I know at Gartner generally strongly dislike vector analysis of MQs because they view vendor placement as relative to each other at any moment in time [2].

Many things that customers seem to want from Gartner MQs are actually delivered by Gartner’s Critical Capabilities reports, which get less attention because they don’t produce a simple, dramatic 2×2 output, but which are far better suited to determining the suitability of different products to different use-cases.

How To Use A Gartner Magic Quadrant?
In my experience after 25+ years in enterprise software, I would use MQs for their overall purpose:  to group vendors into four buckets — leaders, challengers, visionaries, and niche players.  That’s it.  If you want to know who the leaders are in a category, look top right.  If you want to know who the visionaries are, look bottom right.  If you want to know which big companies are putting resources into the category but thus far lack strategy/vision, look top left at the challengers quadrant.

But should you, in my humble opinion, get particularly excited about millimeter differences on either axis?  No.  Why?  Because what drives those deltas may have little, no, or even negative correlation to your situation.  In my experience, the analysts pay a lot of attention to the quadrant in which a vendor ends up [2], so quadrant placement is quite closely watched.  Dot placement, while closely watched by vendors, doesn’t change much in the real world save for dramatic differences [3].  After all, they are called the magic quadrants, not the magic dots.

All that said, let me wind up with some observations on the MQs themselves.

Quick Thoughts on the 2018 Cloud FP&A Solutions MQ
While the MQs were published at the end of July 2018, they were based on information about the vendors gathered in, and largely about, 2017.  While there is always some phase lag between the end of data collection and the publication date, this year it was unusually long — meaning that a lot may have changed in the market in the first half of 2018 that customers should be aware of.  For that reason, if you’re a Gartner customer using either the MQs or the critical capabilities reports that accompany them, you should probably set up an appointment to call the analysts to ensure you’re working off the latest data.

Here are some of my quick thoughts on the Cloud FP&A Solutions magic quadrant:

  • Gartner says the FP&A market is accelerating its shift from on-premises to cloud.  I agree.
  • Gartner allows three types of “cloud” vendors into this (and the other) MQ:  cloud-only vendors, on-premise vendors with new built-for-the-cloud solutions, and on-premises vendors who allow their software to be run hosted on a third-party cloud platform.  While I understand their need to be inclusive, I think this is pretty broad — the total cost of ownership, cash flows, and incentives are quite different between pure cloud vendors and hosted on-premises solutions.  Caveat emptor.
  • To qualify for the MQ, vendors must support at least two of the following four components of FP&A:  planning/budgeting, integrated financial planning, forecasting/modeling, and management/performance reporting.  Thus the MQ is not terribly homogeneous in terms of vendor profile and use-cases.
  • For the second year in a row, (1) Host is a leader in this MQ and (2) is the only cloud pure-play vendor who is a leader in both.  We think this says a lot about the breadth and depth of our product line.
  • Customer references for Host cited ease of use, price, and solution flexibility as top three purchasing criteria.  We think this very much represents our philosophy of complex EPM made easy.

Quick Thoughts on the 2018 Cloud Financial Close Solutions MQ
Here are some of my quick thoughts on the Cloud Financial Close Solutions magic quadrant:

  • Gartner says that in the past two years the financial close market has shifted from mature on-premises to cloud solutions.  I agree.
  • While Gartner again allowed all three types of cloud vendors in this MQ, I believe some of the vendors in this MQ do just-enough, just-cloud-enough business to clear the bar, but are fundamentally still offering on-premises wolves in cloud sheep’s clothing.  Customers should look to things like total cost of ownership, upgrade frequency, and upgrade phase lags in order to sort out real vs. fake cloud offerings.
  • This MQ is more of a mixed bag than the FP&A MQ or, for that matter, most Gartner MQs.  In general, MQs plot substitutes against each other — each dot on an MQ usually represents a vendor who does basically the same thing.  This is not true for the Cloud Financial Close (CFC) MQ — e.g., Workiva is a disclosure management vendor (and a partner of Host Analytics), but they do not offer financial consolidation software, as do, say, Host Analytics or Oracle.
  • Because the scope of this MQ is broad and both general and specialist vendors are included, customers should either call Gartner for help (if they are Gartner customers) or just be mindful of the mixing and segmentation — e.g., Floqast (in SMB and MM) and Blackline (in enterprise) both do account reconciliation, but they are naturally segmented by customer size (and both are partners of Host, which does financial consolidation but not account reconciliation).
  • Net:  while I love that the analysts are willing to put different types of close-related, office-of-the-CFO-oriented vendors on the same MQ, it does require more than the usual amount of mindfulness in interpreting it.

Conclusion
Finally, if you want to analyze the source documents yourself, you can use the following links to download both the 2018 Gartner Magic Quadrant for Cloud Financial Planning and Analysis Solutions and the 2018 Gartner Magic Quadrant for Cloud Financial Close Solutions.

# # #

Notes

[1] For Gartner, this is likely more than a semantic issue.  They are pretty strong believers in a “post-modern” ERP vision which eschews the idea of a monolithic application that includes all services, in favor of using and integrating a series of cloud-based services.  Since we are also huge believers in integrating best-of-breed cloud services, it’s hard for us to take too much issue with that.  So we’ll simply have to clearly articulate the advantages of using Host Planning and Host Consolidations together — from our viewpoint, two best-of-breed cloud services that happen to come from a single vendor.

[2] And not something done against absolute scales where you can track movement over time.  See, for example, the two explicit disclaimers in the FP&A MQ:

[Screenshot:  the two disclaimers from the FP&A MQ]

[3] I’m also a believer in a slightly more esoteric theory which says:  given that the Gartner dot-placement algorithm seems to try very hard to lay out dots in a 45-degree-tilted, football-shaped pattern, it is always interesting to examine who, how, and why someone ends up outside that football.

The Use of Ramped Rep Equivalents (RREs) in Sales Analytics and Modeling

[Editor’s note:  revised 7/18, 6:00 PM to fix spreadsheet error and change numbers to make example easier to follow, if less realistic in terms of hiring patterns.]

How many times have you heard this conversation?

VC:  how many sales reps do you have? 

CEO:  Uh, 25.  But not really.

VC:  What do you mean, not really?

CEO:  Well, some of them are new and not fully productive yet.

VC:  How long does it take for them to fully ramp?

CEO:  Well, to full productivity, four quarters.

VC:  So how many fully-ramped reps do you have?

CEO:  9 fully ramped, but we have 15 in various stages of ramping, and 1 who’s brand new …

There’s a better way to have this conversation, to perform your sales analytics, and to build your bookings capacity waterfall model.  That better way involves creating a new metric called ramped rep equivalents (RREs). Let’s build up to talking about RREs by first looking at a classical sales bookings waterfall model.

[Figure:  ramped rep equivalents, picture 1, revised]

I love building these models and they’re a lot of fun to play with, doing what-if analysis, varying the drivers (which are in the orange cells) and looking at the results.  This is a simplified version of what most sales VPs look at when trying to decide next year’s hiring, next year’s quotas [1], and next year’s targets.  This model assumes:

  • One type of salesrep [2]
  • A distribution of existing reps by tenure of 1 first-quarter, 3 second-quarter, 5 third-quarter, 7 fourth-quarter, and 9 steady-state reps
  • A hiring pattern of 1, 2, 4, and 6 reps across the four quarters of 2019
  • A salesrep productivity ramp whereby reps are expected to sell 0% of steady-state productivity in their first quarter with the company, then 25%, 50%, and 75% in quarters 2 through 4, becoming fully productive at quarter 5 and selling at the steady-state productivity level of $1,000K in new ARR per year [3]

Using this model, a typical sales VP — provided they believed the productivity assumptions [4] and that they could realistically set quotas about 20% above the target productivity — would typically sign up for around a $22M new ARR bookings target for the coming year.
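For readers who prefer code to spreadsheets, here is a minimal sketch of the waterfall mechanics in Python, hard-wired to the simplified assumptions above (one rep type, the 0/25/50/75/100% ramp, $1,000K of steady-state productivity, the stated tenure mix, and the 1/2/4/6 hiring plan).  It is an illustration of the math, not the actual spreadsheet behind the screenshot, and the variable names are mine:

```python
# Minimal sketch of the classical bookings capacity waterfall described above.
RAMP = [0.00, 0.25, 0.50, 0.75, 1.00]      # productivity by tenure quarter (1st..steady state)
PER_QUARTER = 1000.0 / 4                   # $250K of new ARR per fully ramped rep per quarter

# Reps by tenure at the start of 1Q19; the single 1Q19 hire is the lone first-quarter rep.
reps = [1, 3, 5, 7, 9]                     # [1st, 2nd, 3rd, 4th quarter, steady state]
hires = {"2Q19": 2, "3Q19": 4, "4Q19": 6}  # remainder of the 2019 hiring plan

total = 0.0
for quarter in ["1Q19", "2Q19", "3Q19", "4Q19"]:
    if quarter in hires:                   # age every cohort one quarter, then add new hires
        reps = [hires[quarter]] + reps[:3] + [reps[3] + reps[4]]
    capacity = sum(n * pct for n, pct in zip(reps, RAMP)) * PER_QUARTER
    total += capacity
    print(f"{quarter}: {sum(reps)} reps, bookings capacity ${capacity:,.0f}K")

print(f"Full-year bookings capacity: ${total:,.0f}K")  # roughly $22.5M with these inputs
```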

While these models work just fine, I have always felt that the second block (bookings capacity by tenure), though needed for intermediate calculations, is not terribly meaningful by itself.  The lost opportunity here is that we’re not creating any concept to more easily think about, discuss, and analyze the productivity we get from reps as they ramp.

Enter the Ramped Rep Equivalent (RRE)
Rather than thinking about the partial productivity of whole reps, we can think about partial reps against whole productivity — and build the model that way, instead.  This has the by-product of creating a very useful number, the RRE.  Then, to get bookings capacity just multiply the number of RREs times the steady-state productivity.  Let’s see an example below:

[Figure:  ramped rep equivalents, picture 2, revised]

This provides a far more intuitive way of thinking about salesrep ramping.  In 1Q19, the company has 25 reps, only 9 of whom are fully ramped, and the rest combine to give the productivity of 8.5 additional reps, resulting in an RRE total of 17.5.

“We have 25 reps on board, but thanks to ramping, we only have the capacity equivalent to 17.5 fully-ramped reps at this time.”

This also spits out three interesting metrics:

  • RRE/QCR (quota-carrying rep) ratio:  an effective vs. nominal capacity ratio — in 1Q19, nominally we have 25 reps, but we have only the effective capacity of 17.5 reps.  17.5/25 = 70%.
  • Capacity lost to ramping (dollars):  to make the prior figure more visceral, think of the sales capacity lost due to ramping (i.e., the delta between your nominal and effective capacity) expressed in dollars.  In this case, in 1Q19 we’re losing $1,875K of our bookings capacity due to ramping.
  • Capacity lost to ramping (percent):  the same concept as the prior metric, simply expressed in percentage terms.  In this case, in 1Q19 we’re losing 30% of our bookings capacity due to ramping.
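Here is a minimal sketch that reproduces those three 1Q19 metrics from the same simplified assumptions (again, the variable names are mine, not the spreadsheet’s):

```python
RAMP = [0.00, 0.25, 0.50, 0.75, 1.00]   # productivity by tenure quarter
PER_QUARTER = 1000.0 / 4                # $250K of new ARR per fully ramped rep per quarter

reps_1q19 = [1, 3, 5, 7, 9]             # tenure mix at the start of 1Q19 (25 reps total)

nominal = sum(reps_1q19)                               # quota-carrying reps (QCRs)
rre = sum(n * pct for n, pct in zip(reps_1q19, RAMP))  # ramped rep equivalents
print(f"RRE/QCR ratio: {rre / nominal:.0%}")                       # 70% (17.5 / 25)
print(f"Lost to ramping: ${(nominal - rre) * PER_QUARTER:,.0f}K")  # $1,875K
print(f"Lost to ramping: {1 - rre / nominal:.0%}")                 # 30%
```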

Impacts and Cautions
If you want to move to an RRE mindset, here are a few tips:

  • RREs are useful for analytics, like sales productivity.  When looking at actuals you can measure sales productivity not just by starting-period or average-period reps, but by RRE.  It will provide a much more meaningful metric.
  • You can use RREs to measure sales effectiveness.  At the start of each quarter, recalculate your theoretical capacity based on your actual staffing.  Then divide your actuals by that start-of-quarter theoretical capacity and you will get a measure of how well you are performing, i.e., the utilization of the quarterly starting capacity in your sales force (see the short sketch after this list).  When you’re missing sales targets it is typically for one of two reasons:  you don’t have enough capacity or you’re not making use of the capacity you have.  This helps you determine which.
  • Beware that if you have multiple types of reps (e.g., corporate and field), you may be tempted to blend them in the same way you do whole reps today, i.e., when asked “how many reps do you have?” most people say “15” and not “9 enterprise plus 6 corporate.”  You have the same problem with RREs.  While it’s OK to present a blended RRE figure, just remember that it’s blended and, if you want to calculate capacity from it, you should calculate RREs by rep type and then get capacity by multiplying the RRE for each rep type by its respective steady-state productivity.
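Here is the short sketch of the effectiveness calculation mentioned above; the actual-bookings figure is invented purely for illustration:

```python
RAMP = [0.00, 0.25, 0.50, 0.75, 1.00]    # productivity by tenure quarter
PER_QUARTER = 1000.0 / 4                 # $250K per fully ramped rep per quarter

reps_at_quarter_start = [1, 3, 5, 7, 9]  # actual staffing by tenure at the start of the quarter
actual_new_arr = 3500.0                  # hypothetical actual bookings for the quarter ($K)

rre = sum(n * pct for n, pct in zip(reps_at_quarter_start, RAMP))
theoretical_capacity = rre * PER_QUARTER                 # $4,375K with these inputs
print(f"Capacity utilization: {actual_new_arr / theoretical_capacity:.0%}")  # 80%
```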

I recommend moving to an RRE mindset for modeling and analyzing sales capacity.  If you want to play with the spreadsheet I made for this post, you can find it here.

Thanks to my friend Paul Albright for being the first person to introduce me to this idea.

# # #

Notes
[1] This is actually a productivity model, based on actual sales productivity — how much people have historically sold (and ergo should require little/no cushion before sales signs up for it).  Most people I know work with a productivity model and then uplift the desired productivity by 15 to 25% to set quotas.

[2] Most companies have two or three types (e.g., corporate vs. field), so you typically need to build a waterfall for each type of rep.

[3] To build this model, you also need to know the aging of your existing salesreps — i.e., how many second-, third-, fourth-, and steady-state-quarter reps you have at the start of the year.

[4] The glaring omission from this model is sales turnover.  In order to keep it simple, it’s not factored in here. While some people try to factor in sales turnover by using reduced sales productivity figures, I greatly prefer to model realistic sales productivity and explicitly model sales turnover in creating a sales bookings capacity model.

[5] This is one reason it’s so expensive to build an enterprise software sales force.  For several quarters you often get 100% of the cost and 50% of the sales capacity.

[6] Which should be a weighted average productivity by type of rep, weighted by the number of reps of each type.

Win Rates, Close Rates and Milestone vs. Flow Analysis

Hey, what’s your win rate?

It’s another seemingly simple question.  But, like most SaaS metrics, when you dig deeper you find it’s not.  In this post we’ll take a look at how to calculate win rates and use win rates to introduce the broader concept of milestone vs. flow analysis that applies to conversion rates across the entire sales funnel.

Let’s start with some assumptions.  Once an opportunity is accepted by sales (known as a sales-accepted opportunity, or SAL), it eventually will end up in one of three terminal states:

  • Won
  • Lost
  • Other (derailed, no decision)

Some people don’t like “other” and insist that opportunities should be exclusively either won or lost and that other is an unnecessary form of lost which should be tracked with a lost reason code as opposed to its own state.  I prefer to keep other, and call it derailed, because a competitive loss is conceptually different from a project cancellation, major delay, loss of sponsor, or a company acquisition that halts the project.  Whether you want to call it other, no decision, or derailed, I think having a third terminal state is warranted from first principles.  However, it can make things complicated.

For example, you’ll need to calculate win rates two ways:

  • Win rate, narrow = wins / (wins + losses)
  • Win rate, broad = wins / (wins + losses + derails)

Your narrow win rate tells you how good you are at beating the competition.  Your broad rate tells you how good you are at closing deals (that come to a terminal state).

Narrow win rate alone can be misleading.  If I told you a company had a 66% win rate, you might be tempted to say “time to add more salespeople and scale this thing up.”  If I told you they got that 66% win rate by derailing 94 out of every 100 opportunities they generated, winning 4, and losing the other 2, then you’d say “not so fast.”  This, of course, would show up in the broad win rate of 4%.

This brings up the important question of timing.  Both these win rate calculations ignore deals that push out of a quarter.  So another degenerate case is a situation where you win 4, lose 2, derail 4, and push 90 opportunities.  In this case, narrow win rate = 66% and broad win rate = 40%.  Neither is shining a light on the problem (which, if it happens continuously, I call the rolling hairball problem).
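As a quick sanity check, here is a minimal sketch that reproduces the two degenerate cases above (the counts come straight from the examples; note that 4/6 prints as 67% rather than the 66% I rounded to in the prose):

```python
def win_rates(wins, losses, derails):
    """Narrow and broad win rates as defined above; pushed deals are invisible to both."""
    narrow = wins / (wins + losses)
    broad = wins / (wins + losses + derails)
    return narrow, broad

# First degenerate case: win 4, lose 2, derail 94 out of every 100 opportunities.
print([f"{r:.0%}" for r in win_rates(4, 2, 94)])   # ['67%', '4%']   (4/6 and 4/100)

# Second degenerate case: win 4, lose 2, derail 4, and push 90; the pushes vanish.
print([f"{r:.0%}" for r in win_rates(4, 2, 4)])    # ['67%', '40%']  (4/6 and 4/10)
```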

The issue here is that thus far we’ve been performing what I call a milestone analysis.  In effect, we put observers by the side of the road at various milestones (created, won, lost, derailed) and ask them to count the number of opportunities that pass by each quarter.  The issue, especially with companies that have long sales cycles, is that you have no idea of progression.  You don’t know if the opportunities that passed “win” this quarter came from the opportunities that passed “created” this quarter, or if they came from last quarter, the quarter before that, or even earlier.

Milestone analysis has two key advantages:

  • It’s easy — you just need to count opportunities passing milestones
  • It’s instant — you don’t have to wait to see how things play out to generate answers

The big disadvantage is it can be misleading, because the opportunities hitting a terminal state this quarter were generated in many different time periods.  For a company with an average 9 month sales cycle, the opportunities hitting a terminal state in quarter N were generated primarily in quarter N-3, but with some coming in quarters N-2 and N-1 and some coming in quarters N-4 and N-5.  Across that period very little was constant; for example, marketing programs and messages changed.  So a marketing effectiveness analysis would be very difficult when approached this way.

For those sorts of questions, I think it’s far better to do a cohort-based analysis, which I call a flow analysis.  Instead of looking at all the opportunities that hit a terminal state in a given time period, you go back in time, grab a cohort of opportunities (e.g., all those generated in 4Q16) and then see how they play out over time.  You go with the flow.

For marketing programs effectiveness, this is the only way to do it.  Instead of a time-based cohort, you’d take a programs-based cohort (e.g., all the opportunities generated by marketing program X), see how they play out, and then compare various programs in terms of effectiveness.

The big downside of flow analysis is you end up analyzing ancient history.  For example, if you have a 9 month average sales cycle with a wide distribution around the mean, you may need to wait 15-18 months before the vast majority of the opportunities hit a terminal state.  If you analyze too early, too many opportunities are still open.  But if you put off the analysis, you may get important information too late.

You can compress the time window by analyzing program effectiveness not against sales outcomes but against important steps along the funnel.  That way you could compare two programs on the basis of their ability to generate MQLs or SALs, but you still wouldn’t know whether, and at what relative rate, they generate actual customers.  So you could end up doubling down on a program that generates a lot of interest, but not a lot of deals.

Back to our original topic, the same concept comes up in analyzing win rates.  Regardless of which win rate you’re calculating, at most companies you’re calculating it on a milestone basis.  I find milestone-based win rates more volatile and less accurate than a flow-based SAL-to-close rate.  For example, if I were building a marketing funnel to determine how many deals I need to hit next year’s number, I’d want to use a SAL-to-close rate, not a win rate, to do so.  Why?  SAL-to-close rates:

  • Are less volatile because they’re damped by using long periods of time.
  • Are more accurate because they actually track what you care about — if I get 100 opportunities, how many close within a given time period.
  • Automatically factor in derails and slips (the former are ignored in the narrow win rate and the latter ignored in both the narrow and broad win rates).
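To make the milestone-vs.-flow distinction concrete, here is a minimal sketch of a flow-based (cohort) calculation.  The tiny opportunity list is invented purely for illustration (it is not the data from the chart below); in practice you would pull created quarters and terminal states from your CRM:

```python
from collections import Counter

# Hypothetical opportunities: (quarter created, current state).
# "open" means the opportunity has not yet reached a terminal state.
opportunities = [
    ("1Q17", "won"), ("1Q17", "won"), ("1Q17", "lost"), ("1Q17", "derailed"), ("1Q17", "open"),
    ("2Q17", "won"), ("2Q17", "lost"), ("2Q17", "lost"), ("2Q17", "derailed"), ("2Q17", "open"),
]

def cohort_rates(opps, cohort):
    """Flow-based rates for one cohort of sales-accepted opportunities."""
    states = Counter(state for quarter, state in opps if quarter == cohort)
    won, lost, derailed = states["won"], states["lost"], states["derailed"]
    total = sum(states.values())           # includes opportunities that are still open
    return {
        "narrow win rate": won / (won + lost),
        "broad win rate": won / (won + lost + derailed),
        "close rate": won / total,         # SAL-to-close across the whole cohort
    }

print(cohort_rates(opportunities, "1Q17"))
# narrow win rate ~67%, broad win rate 50%, close rate 40% for this made-up 1Q17 cohort
```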

Let’s look at an example.  Here’s a chart that tracks 20 opportunities, 10 generated in 1Q17 and 10 generated in 2Q17, through their entire lifetime to a terminal stage.

[Figure:  opportunity tracking chart]

In reality things are a lot more complicated than this picture because you have opportunities still being generated in 3Q17 through 4Q18 and you’ll have opportunities that are still in play generated in numerous quarters before 1Q17.  But to keep things simple, let’s just analyze this little slice of the world.  Let’s do a milestone-based win/loss analysis.

[Figure:  milestone-based win/loss analysis]

First, you can see the milestone-based win/loss rates bounce around a lot.  Here it’s due in part to the law of small numbers, but I do see similar volatility in real life — in my experience win rates bounce within a fairly broad zone — so I think it’s a real issue.  Regardless of that, what’s indisputable is that in this example, this is how things will look to the milestone-based win/loss analyzer.  Not a very clear picture — and a lot to panic about in 4Q17.

Let’s look at what a flow-based cohort analysis produces.

[Figure:  flow-based cohort analysis, 1Q18 and 2Q18 columns]

In this case, we analyze the cohort of opportunities generated in the year-ago quarter.  Since we only generate opportunities in two quarters, 1Q17 and 2Q17, we only have two cohorts to analyze, and we get only two sets of numbers.  The thin blue box in the opportunity tracking chart shows the data summarized in the 1Q18 column, and the thin orange box shows the data for the 2Q18 column.  Both boxes depict how 3 opportunities in each cohort are still open at the end of the analysis period (imagine you did the 1Q18 analysis in 1Q18) and haven’t come to final resolution.  The cohorts both produce a 50% narrow win rate, a 43% vs. 29% broad win rate, and a 30% vs. 20% close rate.  How good are these numbers?

Well, in our example, we have the luxury of finding the true rates by letting the six open opportunities close out over time.  By doing a flow-based analysis in 4Q18 of the 1H17 cohort, we can see that our true narrow win rate is 57%, our true broad win rate is 40%, and our close rate is also 40% (which, once everything has arrived at a terminal state, is definitionally identical to the broad win rate).

[Figure:  flow-based cohort analysis through 4Q18]

Hopefully this post has helped you think about your funnel differently by introducing the concept of milestone- vs. flow-based analysis and by demonstrating how the same business situation results in very different rates depending on both the choice of win rate and the type of analysis.

Please note that the math in this example backed me into a 40% close rate, which is about double what I believe is the benchmark in enterprise software — I think 20 to 25% is a more normal range.

 

The New 2017 Gartner Magic Quadrants for Cloud Strategic CPM (SCPM) and Cloud Financial CPM (FCPM) – How to Download; A Few Thoughts

For some odd reason, I always think of this scene — The New Phone Book’s Here — from an old Steve Martin comedy whenever Gartner rolls out their new Magic Quadrants (MQs) for corporate performance management (CPM).  It’s probably because of all the excitement they generate.

Last year, Gartner researchers John Van Decker and Chris Iervolino kept that excitement up by making the provocative move of splitting the CPM quadrant in two — strategic CPM (SCPM) and financial CPM (FCPM). Never complacent, this year they stirred things up again by inserting the word “cloud” before the category name for each; we’ll discuss the ramifications of that in a minute.

Free Download of 2017 CPM Magic Quadrants

But first, let me provide some links where you can download the new FCPM and SCPM magic quadrants:

Significance of the New 2017 FPCM and SCPM Magic Quadrants

The biggest change this year is the insertion of the word “cloud” in the title of the magic quadrants.  This seemingly small change, like a butterfly effect, results in an entirely new world order where two of the three megavendors in the category (i.e., IBM and SAP) get displaced from market leadership due to the lack of credibility and/or sophistication of their cloud offerings.

For example:

  • In the strategic CPM quadrant, IBM is relegated to the Visionary quadrant (bottom right) and SAP does not even make the cut.
  • In the financial CPM quadrant, IBM is relegated to the Challenger quadrant (top left) and SAP again does not even make the cut.

Well, I suppose one might then ask:  if IBM and SAP do poorly in the cloud financial and strategic CPM magic quadrants, then how do they do in the “regular” ones?

To which the answer is, there aren’t any “regular” ones; they only made cloud ones.  That’s the point.

So I view this as the mainstreaming of cloud in EPM [1].  Gartner is effectively saying a few things:

  • Who cares how much maintenance revenue a vendor derives from legacy products?
  • The size of a vendor’s legacy base is independent of its position for the future.
  • The cloud is now the norm in CPM product selection, so it’s uninteresting to even produce a non-cloud MQ for CPM. The only CPM MQs are the cloud ones.

While I have plenty of beefs with Oracle as a prospective business partner — and nearly as many with their cloud EPM offerings — to their credit, they have been making an effort at cloud EPM while IBM and SAP seem to have somehow been caught off-guard, at least from an EPM perspective.

(Some of Oracle’s overall cloud revenue success is likely cloudwashing, though they settled a related lawsuit with the whistleblower, so we’ll never know the details.)

Unlikely Bedfellows:  Only Two Vendors are Leaders in Both FCPM and SCPM Magic Quadrants

This creates the rather odd situation where there are only two vendors in the Leaders section of both the financial and strategic CPM magic quadrants:  Host Analytics and Oracle.  That means only two vendors can provide the depth and breadth of products in the cloud to qualify for the Leaders quadrant in both the FCPM and SCPM MQ.

I know who I’d rather buy from.

In my view, Host Analytics has a more complete, mature, and proven product line – we’ve been at this a lot longer than they have — and, well, oligopolists aren’t really famous for their customer success and solutions orientation.  More infamous, in fact.  See the section of the FCPM report where it says Oracle ranks in the “bottom 25% of vendors in this MQ on ‘overall satisfaction with vendor.’”

Or consider how an Oracle alumnus once defined “solution selling” for me:

Your problem is you are out of compliance with the license agreement and we’re going to shut down the system.  The solution is to give us money.

Nice.

For more editorial, you can read John O’Rourke’s post on the Host Analytics corporate blog.

Download the 2017 FCPM and SCPM Magic Quadrants

Or you can download the new 2017 Gartner CPM MQs here.

# # #

Notes:

[1] Gartner refers to the category as corporate performance management (CPM).  I generally refer to it as enterprise performance management (EPM), reflecting the fact that EPM software is useful not only for corporations, but other forms of organization such as not-for-profit, partnerships, government, etc.  That difference aside, I generally view EPM and CPM as synonyms.

EPM: Now More Than Ever

The theme of my presentation at this past spring’s Host Analytics World was that EPM is needed in fair, foul, or uncertain weather.  While EPM is used differently in fair- and foul-weather scenarios, it is a critical navigational instrument to help pilot the business.

For example, in tougher times:

  • You’re constantly re-forecasting
  • You’re doing expense reduction modeling
  • You might do a zero-based budget (particularly popular among recently PE-acquired firms)
  • You’re likely to try and reduce capex (unless you see a quick rebound)
  • You’re probably making P&L, budget, and spend authority more centralized in order to keep tighter reins on the company.

In better times:

  • You model and compare new growth opportunities
  • You often build trended budgets more than bottom-up budgets
  • You adopt rolling forecasts
  • You increase capital investment and build for the future
  • You do more strategic initiatives planning
  • You decentralize P&L responsibility

These (and others) are all capabilities of a complete EPM suite.  The point is that you use that suite differently depending on the state of the business and the economy.

Well, now with the surprise election of our 45th President, Donald Trump, we can be certain of one thing:  uncertain times.

  • Will massive investments in infrastructure (including but not limited to, The Wall) happen and what effect will that have on economic growth and interest rates?
  • Will Trump deliver the 4% GDP growth he’s promised or will the economy grow more slowly?
  • Will promised deregulation happen and if so will it accelerate economic growth?  What effects will deregulation have on key industries like financial services, energy, and raw materials?
  • What, as a result of this and foreign policies, will be the price of a barrel of oil in one year?  What effect will that have on key industries such as transportation?
  • Will Trump spark a trade war, increasing the price of imports and reducing the purchasing power of low- and middle-income consumers?  What effect might a trade war have on GDP growth?
  • What impact will all this have on financial markets and the cost and availability of capital?

I don’t pretend to know the answers to these questions.  I do know, however, that there is uncertainty about all of these questions — and dozens of others — that will directly impact businesses in their performance and planning.

If you cannot predict the future, you should at least be able to respond to it in an agile way.

If your company takes 6 months to make a budget that gets changed once a year, you will be very exposed to surprise changes.  If you run on rolling forecasts, you will be far more agile.  If you have good EPM tools you will be able to automate tasks like reporting, consolidation, and forecasting in order to free up time for the now much more important tasks of scenario planning and modeling.

Again, if you can’t know whether oil will be $40, $50, or $70 — you can at least have modeled out all three scenarios in advance so you can react quickly when it moves.

I’ve always been a big believer in planning and EPM.  And, in this uncertain environment, companies need EPM now more than ever.

The New Split CPM Magic Quadrants from Gartner

This week Gartner research vice president John Van Decker and research director Chris Iervolino made the bold move of splitting the corporate performance management (CPM), also known as enterprise performance management (EPM), magic quadrant in two.

Instead of publishing a single magic quadrant (MQ) for all of CPM, they published two MQs, one for strategic CPM and one for financial CPM, which they define as follows:

  • Strategic Corporate Performance Management (SCPM) Solutions – this includes Corporate Planning and Modeling, Integrated Financial Planning, Strategy Management, Profitability Management, and Performance Reporting.
  • Financial Corporate Performance Management (FCPM) Solutions – this includes Financial Consolidation, Financial Reporting, Management Reporting/Costing/Forecasting, Reconciliations/Close Management, Intercompany Transactions, and Disclosure Management (including XBRL tagging)

You can download these new CPM magic quadrants here.

What do I think about this?

  • It’s bold.  It’s the first time to my recollection that an MQ has included products from different categories.  Put differently, MQs are normally full of substitute products — e.g., 15 different types of butter.  Here, we have butter next to olive oil on the same MQ.
  • It’s smart.  Their uber point is that while CPM solutions are now pretty varied, you can pretty easily classify them into more tactical/financial uses and more strategic uses.  Highlighting this by splitting the MQs does customers a service because it reminds them to think both tactically and strategically.  That’s important — and often needed in many finance departments that are struggling simply to keep up with the ongoing tactical workload.
  • It’s potentially confusing.  You can find not just substitutes but complements on the same MQ.  For example, Host Analytics and our partner Blackline are both on the FCPM MQ.  That’s cool because we both serve core finance needs.  It’s potentially confusing because we do one thing and they do another.
  • We are stoked.  Among cloud pure-play EPM vendors, Host Analytics is the only supplier listed on both MQs.   We believe this supports our contention that we have the broadest pure-play cloud EPM product line in the business.  Only Host has both!
  • In a hype-filled world, I think Gartner does a great job of seeing through the hype-haze and focusing on customers and solutions.  They do a better job than most at not being over-influenced by Halo Effects, and I suspect that’s because they spend a lot of time talking to real customers about solving real problems.

For more, see the Future of Finance blog post on the new MQs or just go ahead and download them here.

Host Analytics World 2016 EPM Keynote Address

We’re just finishing up a fantastic Host Analytics World 2016, with over 800 people gathered together in San Francisco to talk about enterprise performance management (EPM).   Here are a few pictures to give you a feel for the event.

Here’s 49ers football legend Steve Young delivering his keynote address:


Here’s me delivering my keynote on EPM in fair weather and foul.


Here’s an artsy shot of someone taking a picture during my keynote.


And, of course, here are our mascots, Tick and Tie, stuffing bags for Project Night Night, the philanthropic activity we had at the conference cosponsored by Host Analytics and our amazing customer, Thrivent Financial.


The conference has been superb and I want to thank everyone — customers, prospective customers, analysts, journalists, pundits, and partners — for being a part of this great event.

I find it amazing that, at such a great time to be in the cloud EPM market, we have competitors more focused on business intelligence (BI), predictive analytics, and functional performance management than on core EPM itself.  At Host Analytics, we know who we want to be:  the best vendor in cloud EPM, serving the fat middle 80% of the market.  More importantly, perhaps, we know who we don’t want to be:  we don’t want to be a visual analytics vendor, a social collaboration vendor, or a sales performance management vendor — hence our partnerships with Qlik, Socialcast, and Xactly.

We serve finance, we speak finance, and we’re proud of that.  Oh, and yes, our customers, finance leaders, care about the whole enterprise so we offer not only solutions to automate core finance processes but also tools to model the entire enterprise and align finance and operations.

You can hear about this and other topics by watching the 75-minute keynote speech and demo, embedded below.

 

Finally, please remember to save the date for Host Analytics World 2017 — May 16 through 19, 2017.

[Image:  Nashville]