Category Archives: EPM

Putting the A Back in FP&A with Automated, Integrated Planning

I was reading this blog post on Continuous Planning by Rob Kugel of Ventana Research the other day and it reminded me of one of my (and Rob’s) favorite sayings:

We need to put the A back in FP&A

This means that the financial planning and analysis (FP&A) team at many companies is so busy doing other things that it doesn’t have time to focus on what it does best and where it can add the most value:  analysis.

This raises the question:  where did the A go?  What are the other things taking up so much time?  The answer:  data prep and spreadsheet jockeying.  These functions suck both the time and the soul out of the FP&A function.

[Figure: dataprep]

Data-related tasks, such as finding, integrating, and preparing data, take up more than two-thirds of FP&A’s time.  Put differently, FP&A spends twice as much time getting ready to analyze data as it does analyzing it.  It might even be worse, depending on whether periodic and ad hoc reporting is included in data-related tasks or further carved out of the 28% of time remaining for analytics, as I suspect it is.

[Figure: spreadsheetsrule]

It’s not just finance who loves spreadsheets.  The business does too:  salesops, marketingops, supply chain planners, professional services ops, and customer support all love spreadsheets.  When I worked at Salesforce, we had one of the most sophisticated sales strategy and planning teams I’ve ever seen.  Their tool of choice?  Excel.

This comes back to haunt finance in three ways:

  • Warring models, for example, when the salesops new bookings model doesn’t foot to the finance one because they make different ramping and turnover assumptions.  These waste time and can spark endless fights.
  • Non-integrated models.  Say sales and finance finally agree on a bookings target and to hire 5 more salespeople to support it.  Now we need to call marketing to update their leadgen model to ensure there’s enough budget to support them, customer service to ensure we’re staffed to handle the incremental customers they sign, professional services to ensure we have adequate consulting resources, and on and on.  Forget any of these steps and you’ll start the year out of balance, with unattainable targets somewhere.
  • Excel inundation.  FP&A develops battle fatigue dealing with and integrating so many different versions of so many spreadsheets, often late at night and under deadline pressure.  Mistakes get made.

So how can we prevent FP&A from being run over by these forces?  The answer is to automate, automate, and integrate.

  • Automate data integration and preparation.  Let’s free up time by using software that lets you “set and forget” data refreshes.  You should be able to set up a connector to a data source once, and then have it automatically run at periodic intervals going forward (a toy sketch follows this list).  No more mailing spreadsheets around.
  • Automate periodic FP&A tasks.  Use software where you can invest in building the perfect monthly board pack, monthly management reports, quarterly ops review decks, and quarterly board reports once, and then automatically refresh them every period through these templates.  This not only frees up time and reduces drudgery; it eliminates plenty of mistakes as well.
  • Integrate planning across the organization.  Move to a cloud-based enterprise performance platform (like Host Analytics) that not only accomplishes the prior two goals, but also offers a modeling platform that can be used across the organization to put finance, salesops, marketingops, professional services, supply chain, HR, and everyone else across the organization on a common footing.
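
As a toy illustration of “set and forget,” here’s a minimal sketch of a scheduled refresh loop.  This is generic Python, not Host Analytics’ actual connector API; refresh_from_source is a hypothetical stand-in for whatever your platform provides.

```python
# Minimal sketch of a "set and forget" data refresh, assuming a
# hypothetical refresh_from_source() standing in for a real connector.
import datetime
import time

def refresh_from_source(name: str) -> None:
    print(f"{datetime.datetime.now():%Y-%m-%d %H:%M} refreshing {name}...")
    # ...pull from the source system, validate, load into the planning model...

REFRESH_INTERVAL_SECONDS = 24 * 60 * 60  # once a day

while True:
    refresh_from_source("salesforce_bookings")  # configured once, runs forever
    time.sleep(REFRESH_INTERVAL_SECONDS)
```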

Since the obligatory groundwork in FP&A is always heavy, you’re not going to succeed in putting the A back in FP&A simply by working harder and later.  The only way to put the A back in FP&A is to create time.  And you can do that with two doses of automation and one of integration.

Win Rates, Close Rates and Milestone vs. Flow Analysis

Hey, what’s your win rate?

It’s another seemingly simple question.  But, like most SaaS metrics, when you dig deeper you find it’s not.  In this post we’ll take a look at how to calculate win rates and use win rates to introduce the broader concept of milestone vs. flow analysis that applies to conversion rates across the entire sales funnel.

Let’s start with some assumptions.  Once an opportunity is accepted by sales (at which point it becomes a sales-accepted lead, or SAL), it will eventually end up in one of three terminal states:

  • Won
  • Lost
  • Other (derailed, no decision)

Some people don’t like “other” and insist that opportunities should be exclusively either won or lost and that other is an unnecessary form of lost which should be tracked with a lost reason code as opposed to its own state.  I prefer to keep other, and call it derailed, because a competitive loss is conceptually different from a project cancellation, major delay, loss of sponsor, or a company acquisition that halts the project.  Whether you want to call it other, no decision, or derailed, I think having a third terminal state is warranted from first principles.  However, it can make things complicated.

For example, you’ll need to calculate win rates two ways (a quick sketch in code follows the list):

  • Win rate, narrow = wins / (wins + losses)
  • Win rate, broad = wins / (wins + losses + derails)
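
To make the two definitions concrete, here’s a minimal Python sketch of both formulas, applied to the made-up counts from the degenerate case discussed below:

```python
def narrow_win_rate(wins: int, losses: int) -> float:
    # How good you are at beating the competition.
    return wins / (wins + losses)

def broad_win_rate(wins: int, losses: int, derails: int) -> float:
    # How good you are at closing deals that reach a terminal state.
    return wins / (wins + losses + derails)

# Degenerate case discussed below: 100 opportunities, 4 won, 2 lost, 94 derailed.
print(f"narrow: {narrow_win_rate(4, 2):.1%}")      # narrow: 66.7%
print(f"broad:  {broad_win_rate(4, 2, 94):.1%}")   # broad:  4.0%
```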

Your narrow win rate tells you how good you are at beating the competition.  Your broad rate tells you how good you are at closing deals (that come to a terminal state).

Narrow win rate alone can be misleading.  If I told you a company had a 66% win rate, you might be tempted to say “time to add more salespeople and scale this thing up.”  If I told you it got that 66% win rate by derailing 94 out of every 100 opportunities it generated, winning 4, and losing the other 2, then you’d say “not so fast.”  This, of course, would show up in the broad win rate of 4%.

This brings up the important question of timing.  Both these win rate calculations ignore deals that push out of a quarter.  So another degenerate case is a situation where you win 4, lose 2, derail 4, and push 90 opportunities.  In this case, narrow win rate = 66% and broad win rate = 40%.  Neither shines a light on the problem (which, if it happens continuously, I call a rolling hairball problem).

The issue here is that thus far we’ve been performing what I call a milestone analysis.  In effect, we put observers by the side of the road at various milestones (created, won, lost, derailed) and ask them to count the number of opportunities that pass by each quarter.  The problem, especially at companies with long sales cycles, is that you have no idea of progression.  You don’t know if the opportunities that passed “win” this quarter came from the opportunities that passed “created” this quarter, or if they came from last quarter, the quarter before that, or even earlier.

Milestone analysis has two key advantages:

  • It’s easy — you just need to count opportunities passing milestones
  • It’s instant — you don’t have to wait to see how things play out to generate answers

The big disadvantage is that it can be misleading, because the opportunities hitting a terminal state this quarter were generated in many different time periods.  For a company with an average 9-month sales cycle, the opportunities hitting a terminal state in quarter N were generated primarily in quarter N-3, with some coming in quarters N-2 and N-1 and some in quarters N-4 and N-5.  Across that period very little was constant; for example, marketing programs and messages changed.  So a marketing effectiveness analysis would be very difficult when approached this way.

For those sorts of questions, I think it’s far better to do a cohort-based analysis, which I call a flow analysis.  Instead of looking at all the opportunities that hit a terminal state in a given time period, you go back in time, grab a cohort of opportunities (e.g., all those generated in 4Q16) and then see how they play out over time.  You go with the flow.

For marketing programs effectiveness, this is the only way to do it.  Instead of a time-based cohort, you’d take a programs-based cohort (e.g., all the opportunities generated by marketing program X), see how they play out, and then compare various programs in terms of effectiveness.
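
For the mechanically inclined, here’s a hedged sketch of the difference; the records and field layout are invented for illustration.  A milestone analysis counts terminal outcomes by the quarter in which they occur, while a flow analysis groups opportunities by creation cohort and follows each cohort to resolution.

```python
from collections import Counter, defaultdict

# Invented records: (created_quarter, terminal_quarter or None, outcome)
opps = [
    ("1Q17", "3Q17", "won"), ("1Q17", "4Q17", "lost"), ("1Q17", None, "open"),
    ("2Q17", "4Q17", "won"), ("2Q17", "1Q18", "derailed"), ("2Q17", None, "open"),
]

# Milestone view: count terminal outcomes per quarter, ignoring origin.
milestone = Counter((terminal, outcome) for _, terminal, outcome in opps if terminal)
print(dict(milestone))

# Flow (cohort) view: group by creation quarter, then tally outcomes.
cohorts = defaultdict(Counter)
for created, _, outcome in opps:
    cohorts[created][outcome] += 1

for created, c in sorted(cohorts.items()):
    closed = c["won"] + c["lost"] + c["derailed"]
    print(created,
          f"close rate: {c['won'] / sum(c.values()):.0%}",
          f"broad win rate: {c['won'] / closed:.0%}" if closed else "no terminals yet")
```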

The big downside of flow analysis is that you end up analyzing ancient history.  For example, if you have a 9-month average sales cycle with a wide distribution around the mean, you may need to wait 15 to 18 months before the vast majority of the opportunities hit a terminal state.  If you analyze too early, too many opportunities are still open.  But if you put off the analysis, you get important information too late.

You can compress the time window by measuring program effectiveness not against sales outcomes but against important steps along the funnel.  That way you could compare two programs on their ability to generate MQLs or SALs, but you still wouldn’t know whether, and at what relative rate, they generate actual customers.  So you could end up doubling down on a program that generates a lot of interest, but not a lot of deals.

Back to our original topic, the same concept comes up in analyzing win rates.  Regardless of which win rate you’re calculating, at most companies you’re calculating it on a milestone basis.  I find milestone-based win rates more volatile and less accurate than a flow-based SAL-to-close rate.  For example, if I were building a marketing funnel to determine how many deals I need to hit next year’s number, I’d want to use a SAL-to-close rate, not a win rate, to do so (see the sketch after this list).  Why?  SAL-to-close rates:

  • Are less volatile because they’re damped by using long periods of time.
  • Are more accurate because they actually track what you care about:  if I get 100 opportunities, how many close within a given time period.
  • Automatically factor in derails and slips (the former are ignored in the narrow win rate and the latter ignored in both the narrow and broad win rates).
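
To illustrate that funnel math (every number below is a hypothetical input, not a benchmark):

```python
# Hypothetical funnel sizing with a flow-based SAL-to-close rate.
bookings_target = 10_000_000  # next year's new bookings target ($)
avg_deal_size   = 100_000     # average sales price ($)
sal_to_close    = 0.20        # flow-based: of 100 SALs, 20 eventually close

deals_needed = bookings_target / avg_deal_size   # 100 deals
sals_needed  = deals_needed / sal_to_close       # 500 SALs
print(f"deals needed: {deals_needed:.0f}, SALs needed: {sals_needed:.0f}")
```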

Let’s look at an example.  Here’s a chart that tracks 20 opportunities, 10 generated in 1Q17 and 10 generated in 2Q17, through their entire lifetime to a terminal stage.

[Figure: oppty tracking]

In reality things are a lot more complicated than this picture because you have opportunities still being generated in 3Q17 through 4Q18 and you’ll have opportunities that are still in play generated in numerous quarters before 1Q17.  But to keep things simple, let’s just analyze this little slice of the world.  Let’s do a milestone-based win/loss analysis.

[Figure: win-loss]

First, you can see the milestone-based win/loss rates bounce around a lot.  Here it’s due in part to the law of small numbers, but I do see similar volatility in real life (in my experience win rates bounce within a fairly broad zone), so I think it’s a real issue.  Regardless, what’s indisputable is that in this example, this is how things will look to the milestone-based win/loss analyzer.  Not a very clear picture, and a lot to panic about in 4Q17.

Let’s look at what a flow-based cohort analysis produces.

[Figure: cohort1]

In this case, we analyze the cohort of opportunities generated in the year-ago quarter.  Since we only generate opportunities in two quarters, 1Q17 and 2Q17, we have only two cohorts to analyze, and we get only two sets of numbers.  The thin blue box in the opportunity tracking chart shows the data summarized in the 1Q18 column, and the thin orange box shows the data for the 2Q18 column.  Both boxes show that 3 opportunities in each cohort are still open at the end of the analysis period (imagine you did the 1Q18 analysis in 1Q18) and haven’t come to final resolution.  The cohorts both produce a 50% narrow win rate, a 43% vs. 29% broad win rate, and a 30% vs. 20% close rate.  How good are these numbers?

Well, in our example, we have the luxury of finding the true rates by letting the six open opportunities close out over time.  By doing a flow-based analysis in 4Q18 of the 1H17 cohort, we can see that our true narrow win rate is 57%, our true broad win rate is 40%, and our close rate is also 40% (which, once everything has arrived at a terminal state, is definitionally identical to the broad win rate).
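
As a sanity check, those true rates can be reproduced from the terminal counts they imply: 8 wins, 6 losses, and 6 derails across the 20 opportunities (back-derived from the quoted rates, since the chart itself isn’t reproduced here).

```python
# Terminal counts implied by the quoted rates (chart not reproduced here).
wins, losses, derails = 8, 6, 6
total = wins + losses + derails  # all 20 opportunities resolved by 4Q18

print(f"narrow win rate: {wins / (wins + losses):.0%}")            # 57%
print(f"broad win rate:  {wins / (wins + losses + derails):.0%}")  # 40%
print(f"close rate:      {wins / total:.0%}")                      # 40%
```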

[Figure: cohort7]

Hopefully this post has helped you think about your funnel differently by introducing the concept of milestone- vs. flow-based analysis and by demonstrating how the same business situation produces very different rates depending on both the choice of win rate and analysis type.

Please note that the math in this example backed me into a 40% close rate, which is about double what I believe is the benchmark in enterprise software; I think 20 to 25% is a more normal range.

Is IBM Getting Out of Enterprise Performance Management?

I noticed that IBM last week sold off several EPM products — IBM Cognos Disclosure Management (CDM), IBM Cognos Financial Statement Reporting (FSR), and IBM Clarity 7 products — to a company called Certent.

This, combined with a pretty weak performance in Gartner’s recent financial and strategic CPM magic quadrants (where IBM landed as a Visionary in one, a Challenger in the other, and a Leader in neither), got me wondering about IBM’s commitment to EPM as a category going forward.  Could Planning or TM1 be next?

Moreover, it wasn’t just the new Gartner magic quadrants where IBM didn’t fare well.  In the Dresner Wisdom of Crowds market study, IBM was bottom-right in the Customer Experience model and was the only vendor left entirely out of (i.e., “outside the magnifying glass”) the vendor credibility model.  And IBM’s ring in the spider chart seems to have gotten worse, not better, in 2017 versus 2016.

Yes, we all know IBM is quite busy re-branding everything that’s not nailed down as Watson, but could they be backing off EPM?

Which got me wondering, as I surfed around IBM’s website, why some products appeared to be first-class “products” while others were found under “marketplace.”  Why is DB2 under analytics products while TM1 is under marketplace?

[Figure: db2 v tm1]

Maybe it’s nothing, but I decided to check around a bit.  My friends in the know seem to believe that IBM remains committed to EPM, but that they view Clarity as a legacy product and were tired of getting beaten by Workiva in disclosure management.  That is, they see it as a desire to focus more on planning and consolidation, as well as on things like compensation management.

Me, I’m not so sure.  When companies start pruning in an area sometimes they keep pruning.  And, in general, we don’t see them that much in the marketplace — particularly when you think of the powerhouse that Cognos was back in the day.  And, they don’t seem to be doing that well.  And, Watson is the big future focus.  So, file this under rumor and speculation, but watch this space.

The New 2017 Gartner Magic Quadrants for Cloud Strategic CPM (SCPM) and Cloud Financial CPM (FCPM) – How to Download; A Few Thoughts

For some odd reason, I always think of this scene, “The New Phone Book’s Here,” from an old Steve Martin comedy whenever Gartner rolls out their new Magic Quadrants (MQ) for corporate performance management (CPM).  It’s probably because of all the excitement they generate.

Last year, Gartner researchers John Van Decker and Chris Iervolino kept that excitement up by making the provocative move of splitting the CPM quadrant in two — strategic CPM (SCPM) and financial CPM (FCPM). Never complacent, this year they stirred things up again by inserting the word “cloud” before the category name for each; we’ll discuss the ramifications of that in a minute.

Free Download of 2017 CPM Magic Quadrants

But first, let me provide some links where you can download the new FCPM and SCPM magic quadrants:

Significance of the New 2017 FCPM and SCPM Magic Quadrants

The biggest change this year is the insertion of the word “cloud” in the title of the magic quadrants.  This seemingly small change, like a butterfly effect, results in an entirely new world order where two of the three megavendors in the category (i.e., IBM and SAP) get displaced from market leadership due to the lack of credibility and/or sophistication of their cloud offerings.

For example:

  • In the strategic CPM quadrant, IBM is relegated to the Visionary quadrant (bottom right) and SAP does not even make the cut.
  • In the financial CPM quadrant, IBM is relegated to the Challenger quadrant (top left) and SAP again does not even make the cut.

Well, one might then ask:  if IBM and SAP do poorly in the cloud financial and strategic CPM magic quadrants, then how do they do in the “regular” ones?

To which the answer is, there aren’t any “regular” ones; they only made cloud ones.  That’s the point.

So I view this as the mainstreaming of cloud in EPM [1].  Gartner is effectively saying a few things:

  • Who cares how much maintenance revenue a vendor derives from legacy products?
  • The size of a vendor’s legacy base is independent of its position for the future.
  • The cloud is now the norm in CPM product selection, so it’s uninteresting to even produce a non-cloud MQ for CPM. The only CPM MQs are the cloud ones.

While I have plenty of beefs with Oracle as a prospective business partner — and nearly as many with their cloud EPM offerings — to their credit, they have been making an effort at cloud EPM while IBM and SAP seem to have somehow been caught off-guard, at least from an EPM perspective.

(Some of Oracle’s overall cloud revenue success is likely cloudwashing, though they settled a related lawsuit with the whistleblower, so we’ll never know the details.)

Unlikely Bedfellows:  Only Two Vendors are Leaders in Both FCPM and SCPM Magic Quadrants

This creates the rather odd situation where there are only two vendors in the Leaders section of both the financial and strategic CPM magic quadrants:  Host Analytics and Oracle.  That means only two vendors can provide the depth and breadth of products in the cloud needed to qualify for the Leaders quadrant in both the FCPM and SCPM MQs.

I know who I’d rather buy from.

In my view, Host Analytics has a more complete, mature, and proven product line (we’ve been at this a lot longer than they have) and, well, oligopolists aren’t really famous for their customer success and solutions orientation.  More infamous, in fact.  See the section of the FCPM report where it says Oracle ranks in the “bottom 25% of vendors in this MQ on ‘overall satisfaction with vendor.’”

Or how an Oracle alumnus once defined “solution selling” for me:

Your problem is you are out of compliance with the license agreement and we’re going to shut down the system.  The solution is to give us money.

Nice.

For more editorial, you can read John O’Rourke’s post on the Host Analytics corporate blog.

Download the 2017 FCPM and SCPM Magic Quadrants

Or you can download the new 2017 Gartner CPM MQs here.

# # #

Notes:

[1] Gartner refers to the category as corporate performance management (CPM).  I generally refer to it as enterprise performance management (EPM), reflecting the fact that EPM software is useful not only for corporations, but other forms of organization such as not-for-profit, partnerships, government, etc.  That difference aside, I generally view EPM and CPM as synonyms.

Why has Standalone Cloud BI been such a Tough Slog?

I remember that when I left Business Objects back in 2004, it was early days in the cloud.  We were using Salesforce internally (and were one of their larger customers at the time), so I was familiar with, and a proponent of, cloud-based applications, but I never felt great about BI in the cloud.  Despite that, Business Objects and others were aggressively ramping on-demand offerings, all of which amounted to pretty much nothing a few years later.

Startups were launched, too.  Specifically, I remember:

  • Birst, née Success Metrics, founded in 2004 by Siebel BI veterans Brad Peters and Paul Staelin, which was originally supposed to build vertical industry analytic applications.
  • LucidEra, founded in 2005 by Salesforce and Siebel veteran Ken Rudin (et alia) whose original mission was to be to BI what Salesforce was to CRM.
  • PivotLink, which did their series A in 2007 (but was founded in 1998), positioned as on-demand BI and later moved into more vertically focused apps in retail.
  • GoodData, founded in 2007 by serial entrepreneur Roman Stanek, which early on focused on SaaS embedded BI and later moved to more of a high-end enterprise positioning.

These were great people — Brad, Ken, Roman, and others were brilliant, well educated veterans who knew the software business and their market space.

These were great investors — names like Andreessen Horowitz, Benchmark, Emergence, Matrix, Sequoia, StarVest, and Tenaya invested over $300M in those four companies alone.

This was theoretically a great, straightforward cloud-transformation play of a $10B+ market, a la Siebel to Salesforce.

But of the four companies named above, only GoodData is doing well and still in the fight (with a high-end enterprise platform strategy that bears little resemblance to a straight cloud-transformation play); the three others all came to uneventful exits.

So, what the hell happened?

Meantime, recall that Tableau, founded in 2003 and armed in its early years with a measly $15M in venture capital and an exclusively on-premises business model, blew right by all the cloud BI vendors, going public in May 2013; despite the stock being cut by more than half since its July 2015 peak, it is still worth $4.2B today.

I can’t claim to have the definitive answer to the question I’ve posed in the title.  In the early days I thought it was related to technical issues like trust/security, trust/scale, and the complexities of cloud-based data integration.  But those aren’t issues today.  For a while back in the day I thought maybe the cloud was great for applications, but perhaps not for platforms or infrastructure.  While SaaS was the first cloud category to take off, we’ve obviously seen enormous success with both platforms (PaaS) and infrastructure (IaaS) in the cloud, so that can’t be it.

While some analysts lump EPM under BI, cloud-based EPM has not had similar troubles.  At Host, and at our top competitors, we have never struggled with focus or positioning, and we are all basically running slightly different variations on the standard cloud-transformation play.  I’ve always believed that lumping EPM under BI is a mistake because, while they use similar technologies, they are sold to different buyers (IT vs. finance) and the value proposition is totally different (tool vs. application).  While there’s plenty of technology in EPM, it is an applications play:  you can’t sell it or implement it without domain knowledge in finance, sales, marketing, or whatever domain for which you’re building the planning system.  So I have no trouble explaining why cloud EPM hasn’t been a slog while cloud BI absolutely has been.

My latest belief is that the business model wasn’t the problem in BI.  The technology was.  Cloud-transformation plays are all about business model transformation.  On-premises application business models were badly broken:  the software cost $10s of millions to buy and $10s of millions more to implement (for large customers).  SMBs were often locked out of the market because they couldn’t afford the ante.  ERP and CRM were exposed because of this, and the market wanted and needed a business model transformation.

With BI, I believe, the business model just wasn’t the problem.  By comparison to ERP and CRM, it was a fraction of the cost to buy and implement.  A modest BusinessObjects license might have cost $150K, and less than that to implement.  The problem was not that the BI business model was broken; it was that the technology never delivered on the democratization promise it made.  Despite shouting “BI for the masses” in 1995, BI never really made it beyond the analyst’s desk.

Just as RDBMS themselves failed to deliver information democracy with SQL (which, believe it or not, was part of the original pitch — end users could write SQL to answer their own queries!), BI tools — while they helped enable analysts — largely failed to help Joe User.  They weren’t easy enough to use.  They lacked information discovery.  They lacked, importantly, easy-yet-powerful visualization.

That’s why Tableau, and to a lesser extent Qlik, prospered while the cloud BI vendors struggled.  (It’s also why I find it profoundly ironic that Tableau is now in a massive rush to “go cloud” today.)  It’s also one reason why the world now needs companies like Alation — the information democracy brought by Tableau has turned into information anarchy and companies like Alation help rein that back in (see disclaimers).

So I think cloud BI proved to be such a slog because the cloud BI vendors solved the wrong problem.  They fixed a business model that wasn’t fundamentally broken, all while missing the ease of use, data discovery, and powerful visualization that solved the real problems users faced (capabilities that, at the time, required the horsepower of on-premises software).

I suspect it’s simply another great, if simple, lesson in solving your customer’s problem.

Feel free to weigh in on this one as I know we have a lot of BI experts in the readership.

Quick Thoughts on the Tagetik Acquisition by Wolters Kluwer

Earlier today, the tax and accounting division of Dutch publishing giant Wolters Kluwer announced the acquisition of Italian enterprise performance management (EPM) vendor Tagetik for 300M euros, or about $318M.

Founded in 1986, Tagetik was a strong regional European player in on-premises EPM; about 2.5 years ago it raised $37M in capital in order to attack the US market and accelerate its transition from on-premises to cloud computing.

The press release said Tagetik was valued at 300M euros off 57M euros in 2016 revenues, of which 35% are “recurring in nature.”  At a hybrid on-premises/SaaS software company you have two types of revenue that are recurring in nature:  (1) SaaS subscription fees and (2) on-premises annual maintenance fees.  Doing some back-of-the-envelope math (sketched in code below), you end up with Tagetik breaking into a roughly $13M SaaS business and a $47M on-premises business.

If you buy that analysis, then we can do some valuation guestimation.

While we know the overall multiple of 5.3x revenues, we need to estimate the separate multiples paid for the estimated $13M SaaS business vs. the estimated $47M on-premises business.  While there is an infinite number of ways to weight the two pieces comprising the total valuation, my best guess is that Wolters Kluwer paid 10x revenues for the SaaS business and 3.9x revenues for the on-premises business, generally in line with the notion that $1 of SaaS revenue is worth about $3 of on-premises revenue.

White Bridge, who led the investment in 2014, got about a 3x return on investment by my math (with one assumption) over about a 3-year period, for an IRR of around 45%.
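
For the curious, here’s that back-of-the-envelope arithmetic in code; the $13M/$47M split and the 10x/3.9x multiples are my estimates from above, not disclosed figures.

```python
# Back-of-envelope check on the estimated valuation split and investor return.
saas_rev, onprem_rev   = 13, 47    # $M, estimated split of ~$60M total revenue
saas_mult, onprem_mult = 10, 3.9   # estimated revenue multiples

implied_value = saas_mult * saas_rev + onprem_mult * onprem_rev
print(f"implied value: ${implied_value:.0f}M")  # ~$313M, close to the ~$318M paid
print(f"blended multiple: {implied_value / (saas_rev + onprem_rev):.1f}x")

# A ~3x return over ~3 years implies an IRR of 3^(1/3) - 1.
irr = 3 ** (1 / 3) - 1
print(f"IRR: {irr:.0%}")  # ~44%, i.e., around the 45% cited above
```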

Market-wise, this is not the first EPM vendor to be acquired by an off-axis competitor.  Axiom was acquired by vertically oriented management consultancy Kaufman Hall in 2014.

“The acquisition of Tagetik tightly aligns with our vision to expand our position in the faster growing areas of the corporate tax and accounting market,” said Tax & Accounting Division CEO Karen Abramson.

While Wolters Kluwer has a strong tax and accounting division, only one piece of EPM (consolidation) is generally sold to accounting.  Planning, in all its forms, represents about 65% of the EPM market and is typically sold to FP&A.  Bridging that gap, both in terms of buyer and mentality, will be important if the transaction is predicated on sales synergies.

Regardless of where it goes from here, congratulations to co-CEOs Marco Pierallini and Manuel Vellutini on a successful sale of their company.  Felicitazioni!

Kellblog’s 2017 Predictions  

New Year’s means three things in my world:  (1) time to thank our customers and team at Host Analytics for another great year, (2) time to finish up all the 2017 planning items and approvals that we need to get done before the sales kickoff (including the one most important thing to do before kickoff), and (3) time to make some predictions for the coming year.

Before looking at 2017, let’s see how I did with my 2016 predictions.

2016 Predictions Review

  1. The great reckoning begins. Correct/nailed.  As predicted, since most of the bubble was tied up in private companies owned by private funds, the unwind would happen in slow motion.  But it’s happening.
  2. Silicon Valley cools off a bit. Partial.  While IPOs were down, you couldn’t see the cooling in anecdotal data, like my favorite metric, traffic on Highway 101.
  3. Porter’s five forces analysis makes a comeback. Partial.  So-called “momentum investing” did cool off, implying more rational situation analysis, but you didn’t hear people talking about Porter per se.
  4. Cyber-cash makes a rise. Correct.  Bitcoin more than doubled on the year (and Ethereum was up 8x), which perversely reinforced my view that these crypto-currencies are too volatile; people want the anonymity of cash without a highly variable exchange rate.  The underlying technology for Bitcoin, blockchain, took off big time.
  5. Internet of Things goes into trough of disillusionment. Partial.  I think I may have been a little early on this one.  Seems like it’s still hovering at the peak of inflated expectations.
  6. Data science rises as profession. Correct/easy.  This continues inexorably.
  7. SAP realizes they are a complex enterprise application company. Incorrect.  They’re still “running simple” and talking too much about enabling technology.  The stock was up 9% on the year in line with revenues up around 8% thus far.
  8. Oracle’s cloud strategy gets revealed – “we’ll sell you any deployment model you want as long as your annual bill goes up.”  Partial.  I should have said “we’ll sell you any deployment model you want as long as we can call it cloud to Wall St.”
  9. Accounting irregularities discovered at one or more unicorns. Correct/nailed.  During these bubbles the pattern always repeats itself – some people always start breaking the rules in order to stand out, get famous, or get rich.  Fortune just ran an amazing story that talks about the “fake it till you make it” culture of some diseased startups.
  10. Startup workers get disappointed on exits. Partial.  I’m not aware of any lawsuits here but workers at many high flyers have been disappointed and there is a new awareness that the “unicorn party” may be a good thing for founders and VCs, but maybe not such a good thing for rank-and-file employees (and executive management).
  11. The first cloud EPM S-1 gets filed. Incorrect.  Not yet, at least.  While it’s always possible someone did the private filing process with the SEC, I’m guessing that didn’t happen either.
  12. 2016 will be a great year for Host Analytics. Correct.  We had a strong finish to the year and emerged stronger than we started with over 600 great customers, great partners, and a great team.

Now, let’s move on to my predictions for 2017 which – as a sign of the times – will include more macro and political content than usual.

  1. The United States will see a level of divisiveness and social discord not seen since the 1960s. Social media echo chambers will reinforce divisions.  To combat this, I encourage everyone to sign up for two publications/blogs they agree with and two they don’t, lest they never again hear both sides of an issue.  (See map below, courtesy of Ninja Economics, for help in choosing.)  On an optimistic note, per UCSD professor Lane Kenworthy, people aren’t getting more polarized; political parties are.

[Figure: news]

  2. Social media companies finally step up and do something about fake news. While, per a former Facebook designer, “it turns out that bullshit is highly engaging,” these sites will need to do something to filter, rate, or classify fake news (let alone stop recommending it).  Otherwise they will lose both credibility and readership, as well as fail to act in a responsible way commensurate with their information dissemination power.
  3. Gut feel makes a comeback. After a decade of Google-inspired, heavily data-driven and A/B-tested management, the new US administration will increasingly be less data-driven and more gut-feel-driven in making decisions.  Riding against both common sense and the big data / analytics / data science trends, people will be increasingly skeptical of purely data-driven decisions, and anti-data people will publicize data-driven failures to popularize their arguments.  This “war on data” will build during the year, fueled by Trump, and some of it will spill over into business.  Morale in the Intelligence Community will plummet.
  4. Under a volatile leader, who seems to exhibit all nine of the symptoms of narcissistic personality disorder, we can expect sharp reactions and knee-jerk decisions that rattle markets, drive a high rate of staff turnover in the Executive branch, and fuel an ongoing war with the media.  Whether you like his policies or not, Trump will bring a high level of volatility to the country, to business, and to the markets.
  5. With the new administration’s promises of $1T in infrastructure spending, you can expect interest rates to rise and inflation to accelerate. Providing such a stimulus to an already strong economy might well overheat it.  One smart move could be buying a house to lock in historically low interest rates for the next 30 years.  (See my FAQ for disclaimers, including that I am not a financial advisor.)
  6. Huge emphasis on security and privacy. Election-related hacking, including the spearphishing attack on John Podesta’s email, will serve as a major wake-up call to both government and the private sector to get their security act together.  Leaks will fuel major concerns about privacy.  Two-factor authentication using verification codes (e.g., Google Authenticator) will continue to take off, as will encrypted communications.  Fear of leaks will also change how people use email and other written electronic communications; more people will follow the sage advice in this quip:

Dance like no one’s watching; E-mail like it will be read in a deposition

  7. In 2015, if you were flirting on Ashley Madison, you were more likely talking to a fembot than a person.  In 2016, the same could be said of troll bots.  Bots are now capable of passing the Turing Test.  In 2017, we will see more bots for both good uses (e.g., customer service) and bad (e.g., trolling social media).  Left unchecked by the social media powerhouses, bots could damage social media usage.
  8. Artificial intelligence hits the peak of inflated expectations. If you view Salesforce as the bellwether for hyped enterprise technology (e.g., cloud, social), then the next few years are going to be dominated by artificial intelligence.  I’ve always believed that advanced analytics is not a standalone category, but instead fodder that vendors will build into smart applications.  The key is typically not the technology, but the problem to which to apply it.  As Infer founder Vik Singh said of Jim Gray, “he was really good at finding great problems”; the key is figuring out the best problems to solve with a given technology or modeling engine.  Application by application, we will see people searching for the best problems to solve using AI technology.
  9. The IPO market comes back. After a year in which we saw only 13 VC-backed technology IPOs, I believe the window will open and 2017 will be a strong year for technology IPOs.  The usual big-name suspects include firms like Snap, Uber, AirBnB, and Spotify.  CB Insights has identified 369 companies as strong 2017 IPO prospects.
  10. Megavendors mix up EPM and ERP or BI. Workday, which has had a confused history when it comes to planning, acquired struggling big data analytics vendor Platfora in July 2016 and seems to have combined analytics and EPM/planning into a single unit.  This is a mistake for several reasons:  (1) EPM and BI are sold to different buyers with different value propositions, (2) EPM is an applications sale while BI is a platform sale, and (3) Platfora’s technology stack, while appropriate for big data applications, is not ideal for EPM/planning (ask Tidemark).  Combining the two puts planning at risk.  Oracle combined their EPM and ERP go-to-market organizations and lost focus on EPM as a result.  While they will argue that they now have more EPM feet on the street, those feet know much less about EPM, leaving them exposed to specialist vendors who maintain a focus on EPM.  ERP is sold to the backward-looking part of finance; EPM is sold to the forward-looking part.  EPM is about 1/10th the market size of ERP.  ERP and EPM have different buyers and use different technologies.  In combining them, expect EPM to lose out.

And, as usual, I must add the bonus prediction that 2017 proves to be a strong year for Host Analytics.  We are entering the year with positive momentum, the category is strong, cloud adoption in finance continues to increase, and the megavendors generally lack sufficient focus on the category.  We continue to be the most customer-focused vendor in EPM, our new Modeling product gained strong momentum in 2016, and our strategy has worked very well for both our company and the customers who have chosen to put their faith in us.

I thank our customers, our partners, and our team and wish everyone a great 2017.

# # #