Category Archives: Modeling

How to Present an Operating Plan to your Board

I’ve been CEO of two startups and on the board of about ten.  That means I’ve presented a lot of operating plans to boards.  It also means I’ve had a lot of operating plans presented to me.  Frankly, most of the time, I don’t love how they’re presented.  Common problems include:

  • Lack of strategic context: management shows up with a budget more than a plan, without explaining the strategic thinking (if any, one wonders) behind it.  For a primer, see here.
  • Lack of organizational design: management fails to show the proposed high-level organizational structure and how it supports the strategy.  They fail to show the alternative designs considered and why they settled on the one they’re proposing.
  • A laundry list of goals. OKRs are great.  But you should have a fairly small set – no more than 5 to 7 – and, again, management needs to show how they’re linked to the strategy.

Finance types on the board might view these as simple canapes served before the meal.  I view them as critical strategic context.  But, either way, the one thing on which everyone can agree is that the numbers are always the main course. Thus, in this post, I’m going to focus on how to best present the numbers in an annual operating plan.

Context is King
Strategic context isn’t the only context that’s typically missing.  A good operating plan should present financial context as well.  Your typical VC board member might sit on 8-10 boards, a typical independent on 2 (if they’re still in an operating role), and a professional independent might sit on 3-5.  While these people are generally pretty quantitative, that’s nevertheless a lot of numbers to memorize.  So, present context.  Specifically:

  • One year of history. This year that’s 2021.
  • One year of forecast. This year that’s your 2022 forecast, which is your first through third quarter actuals combined with your fourth-quarter forecast.
  • The proposed operating plan (2023).
  • The trajectory on which the proposed operating plan puts you for the next two years after that (i.e., 2024 and 2025).

The last point is critical for several reasons:

  • The oldest trick in the book is to hit 2023 financial goals (e.g., burn) by failing to invest in the second half of 2023 for growth in 2024.
  • The best way to prevent that is to show the 2024 model teed up by the proposed 2023 plan. That model doesn’t need to be made at the same granularity (e.g., months vs. quarters) or detail (e.g., mapping to GL accounts) as the proposed plan – but it can’t be pure fiction either.  Building this basically requires dovetailing a driver-based model to your proposed operating plan.
  • Showing the model for the out years helps generate board consensus on trajectory. While technically the board is only approving the proposed 2023 operating plan, that plan has a 2024 and 2025 model attached to it.  Thus, it’s pretty hard for the board to say they’re shocked when you begin the 2024 planning discussion using the 2024 model (that’s been shown for two years) as the starting point.

Presenting the Plan in Two Slides
To steal a line from Name That Tune, I think I can present an operating plan in two slides.  Well, as they say on the show:  “Dave, then present that plan!”

  • The first slide is focused on the ARR leaky bucket, metrics derived from ARR, and ARR-related productivity measures.
  • The second slide is focused on the P&L and related measures.

There are subjective distinctions in play here.  For example, the CAC ratio (the S&M cost of a dollar of new ARR) is certainly ARR-related, but it’s also P&L-driven because the S&M cost comes from the P&L.  I did my best to split things in a way that I think is logical and, more importantly, to include between the two slides all of the major things I want to see in an operating plan presentation and, even more importantly, none of the things that I don’t.

Slide 1: The Leaky Bucket of ARR and Related Metrics

Let’s review the lines, starting with the first block, the leaky bucket itself:

  • Starting ARR is the ARR level at the start of a period. The starting water level of the bucket.
  • New ARR is the sum of new logo (aka, new customer) ARR and expansion ARR (i.e., new ARR from existing customers). The amount of “water” the company poured into the bucket.
  • Churn ARR is the sum of ARR lost due to shrinking customers (aka, downsell) and lost customers. The amount of water that leaked out of the bucket.
  • Ending ARR is starting ARR + new ARR – churn ARR. (It’s + churn ARR if you assign a negative sign to churn, which I usually do.)  The ending water level of the bucket.
  • YoY growth % is the year-over-year growth of ending ARR. How fast the water level is changing in the bucket.  If I had to value a SaaS company with only two numbers, they would be ARR and YoY ARR growth rate.  Monthly SaaS companies often have a strong focus on sequential (QoQ) growth, so you can add a row for that too, if desired.
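
The bucket arithmetic above is easy to sanity-check in code.  Here’s a minimal sketch (the dollar figures are illustrative, not from any real plan), treating churn as a positive quantity:

```python
def ending_arr(starting_arr: float, new_arr: float, churn_arr: float) -> float:
    """Ending water level: starting + new - churn (churn passed as a positive number)."""
    return starting_arr + new_arr - churn_arr

def yoy_growth(ending_now: float, ending_year_ago: float) -> float:
    """Year-over-year growth rate of ending ARR."""
    return ending_now / ending_year_ago - 1.0

# Start the year at $10M ARR, add $6M new (new logo + expansion), lose $1M to churn.
end = ending_arr(10_000_000, 6_000_000, 1_000_000)  # 15_000_000
growth = yoy_growth(end, 10_000_000)                # 0.50, i.e., 50% YoY
```

If you carry churn with a negative sign, as the post prefers, the subtraction simply becomes an addition.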

The next block has two rows focused on change in the ARR bucket:

  • Net new ARR = new ARR – churn ARR. The change in water level of the bucket.  Note that some people use “net new” to mean “net new customer” (i.e., new logo) which I find confusing.
  • Burn ratio = cashflow from operations / net new ARR. How much cash you consume to increase the water level of the bucket by $1.  Not to be confused with cash conversion score which is defined as an inception-to-date metric, not a period metric.  This ratio is similar to the CAC ratio, but done on a net-new ARR basis and for all cash consumption, not just S&M expense.
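
These two derived rows can be sketched the same way; the sign convention matters, since a company burning cash has negative cashflow from operations (numbers again illustrative):

```python
def net_new_arr(new_arr: float, churn_arr: float) -> float:
    """Change in the bucket's water level: new ARR minus churn ARR."""
    return new_arr - churn_arr

def burn_ratio(cashflow_from_ops: float, net_new: float) -> float:
    """Cashflow from operations / net new ARR.  With an operating burn the ratio
    is negative: -1.5 means consuming $1.50 of cash per $1 of net new ARR."""
    return cashflow_from_ops / net_new

nn = net_new_arr(6_000_000, 1_000_000)   # 5_000_000
br = burn_ratio(-7_500_000, nn)          # -1.5
```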

The next block looks at new vs. churn ARR growth as well as the mix within new ARR:

  • YoY growth in new ARR. The rate of growth in water added to the bucket.
  • YoY growth in churn ARR. The rate of growth in water leaking from the bucket.  I like putting them next to each other to see if one is growing faster than the other.
  • Expansion ARR as % of new ARR. Percent of new ARR that comes from existing customers.  The simplest metric to determine if you’re putting correct focus on the existing customer base.  Too low (e.g., 10%) and you’re likely ignoring them.  Too high (e.g., 40%) and people start to wonder why you’re not acquiring more new customers. (In a small-initial-land and big-expand model, this may run much higher than 30-40%, but that also depends on the definition of land – i.e., is the “land” just the first order or the total value of subscriptions acquired in the first 6 or 12 months.)

The next block focuses on retention rates:

  • Net dollar retention = current ARR from year-ago cohort / year-ago ARR from year-ago cohort. As I predicted a few years back, NRR has largely replaced LTV/CAC, because of the flaws with lifetime value (LTV) discussed in my SaaStr 2020 talk, Churn is Dead, Long Live Net Dollar Retention.
  • Gross dollar retention = current ARR from year-ago cohort excluding expansion / year-ago ARR from year-ago cohort. Excluding the offsetting effects of expansion, how much do customer cohorts shrink over a year?
  • Churn rate (ATR-based) = churn ARR / available-to-renew ARR. Percent of ARR that churns, measured against only the ARR eligible for renewal rather than the entire ARR base.  An important metric for companies that do multi-year deals, as putting effectively auto-renewing customers in the denominator damps out the churn rate.
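
The three retention formulas, per the definitions in the bullets above, expressed directly (the cohort numbers are illustrative):

```python
def net_dollar_retention(cohort_arr_now: float, cohort_arr_year_ago: float) -> float:
    """NDR: current ARR of the year-ago cohort / that cohort's ARR a year ago."""
    return cohort_arr_now / cohort_arr_year_ago

def gross_dollar_retention(cohort_arr_now_excl_expansion: float,
                           cohort_arr_year_ago: float) -> float:
    """GDR: like NDR but excluding expansion, so it can never exceed 100%."""
    return cohort_arr_now_excl_expansion / cohort_arr_year_ago

def atr_churn_rate(churn_arr: float, available_to_renew_arr: float) -> float:
    """Churn measured only against ARR actually up for renewal in the period."""
    return churn_arr / available_to_renew_arr

ndr = net_dollar_retention(1_100_000, 1_000_000)   # 1.10, i.e., 110%
gdr = gross_dollar_retention(900_000, 1_000_000)   # 0.90, i.e., 90%
atr = atr_churn_rate(100_000, 800_000)             # 0.125, i.e., 12.5%
```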

The next block focuses on headcount:

  • Total employees, at end of period.
  • Quota-carrying reps (QCRs) = number of quota-carrying sellers at end of period. Includes those ramping, though I’ve argued that enterprise SaaS could also use a same-store sales metric.  In deeper presentations, you should also look at QCR density.
  • Customer success managers (CSMs) = the number of account managers in customer success. These organizations can explode so I’m always watching ARR/CSM and looking out for stealth CSM-like resources (e.g., customer success architects, technical account managers) that should arguably be included here or tracked in an additional row in deeper reports.
  • Code-committing developers (CCDs) = the number of developers in the company who, as Elon Musk might say, “actually write software.” Like sales, you should watch developer density to ensure organizations don’t get an imbalanced helper/doer ratio.

The final block looks at ARR-based productivity measures:

  • New ARR/ramped rep = new ARR from ramped reps / number of ramped reps. This is roughly “same-store sales [link].”  Almost no one tracks this, but it is one of several sales productivity metrics that I like which circle terminal productivity.  The rep ramp chart’s 4Q+ productivity is another way of getting at it.
  • ARR/CSM = starting ARR/number of CSMs, which measures how much ARR each CSM is managing.  Potentially include stealth CSMs in the form of support roles like technical account manager (TAM) or customer success architects (CSAs).
  • ARR/employee = ending ARR/ending employees, a gross overall measure of employee productivity.
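
The productivity measures in this block are straight divisions; a quick sketch with made-up figures:

```python
def new_arr_per_ramped_rep(new_arr_from_ramped: float, ramped_reps: int) -> float:
    """Roughly same-store sales: ramping reps excluded from both sides."""
    return new_arr_from_ramped / ramped_reps

def arr_per_csm(starting_arr: float, csms: int) -> float:
    """How much ARR each CSM is managing (count stealth CSMs if you have them)."""
    return starting_arr / csms

def arr_per_employee(ending_arr: float, employees: int) -> float:
    """Gross overall measure of employee productivity."""
    return ending_arr / employees

per_rep = new_arr_per_ramped_rep(8_000_000, 10)   # 800_000 per ramped rep
per_csm = arr_per_csm(20_000_000, 10)             # 2_000_000 per CSM
per_emp = arr_per_employee(30_000_000, 150)       # 200_000 per employee
```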

Slide 2: The P&L and Related Metrics

This is a pretty standard, abbreviated SaaS P&L.

The first block is revenue, optionally split by subscription vs. services.

The second block is cost of goods sold.

The third block is gross margin.  It’s important to see both subscription and overall (aka, blended) gross margin for benchmarking purposes.  Subscription gross margin, by the way, is probably the most overlooked-yet-important SaaS metric.  Bad subscription margins can kill an investment deal faster than a high churn rate.

The fourth block is operating expense (opex) by major category, which is useful for benchmarking.  It’s also useful for what I call glideslope planning, which you can use to agree with the board on a longer-term financial model and the path to get there.

The penultimate block shows a few more SaaS metrics.

  • CAC ratio = S&M cost of a $1 in new ARR
  • CAC payback period  = months of subscription gross profit to repay customer acquisition cost
  • Rule of 40 score = revenue growth rate + free cashflow margin
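
These three can be sketched as follows, using one common formulation of CAC payback (CAC ratio divided by subscription gross margin, annualized to months); the figures are illustrative assumptions:

```python
def cac_ratio(sm_expense: float, new_arr: float) -> float:
    """S&M cost of acquiring $1 of new ARR."""
    return sm_expense / new_arr

def cac_payback_months(cac: float, subscription_gross_margin: float) -> float:
    """Months of subscription gross profit needed to repay acquisition cost."""
    return cac / subscription_gross_margin * 12

def rule_of_40(revenue_growth: float, fcf_margin: float) -> float:
    """Revenue growth rate plus free cashflow margin (both as fractions)."""
    return revenue_growth + fcf_margin

ratio = cac_ratio(6_000_000, 5_000_000)      # 1.2
payback = cac_payback_months(ratio, 0.75)    # 19.2 months
score = rule_of_40(0.40, 0.05)               # 0.45, i.e., a healthy 45
```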

The last block is just one row:  ending cash.  The oxygen level for any business.  You should let this go negative (in your financial models only!) to indicate the need for future fundraising.

Scenario Comparisons
Finally, part of the planning process is discussing multiple options, often called scenarios.

While scenarios in the strategy sense are usually driven by strategic planning assumptions (e.g., “cheap oil”), in software they are often just different versions of a plan optimized for different things:

  • Baseline: the default proposal that management usually thinks best meets all of the various goals and constraints.
  • Growth: an option that optimizes for growth, typically at the expense of hitting cash, CAC, or S&M expense goals.
  • Profit: an option that optimizes for cash runway, often at the expense of growth, innovation, or customer satisfaction.

Whatever scenarios you pick, and your reasons for picking them, are up to you.  But I want to help you present them in a way that is easy to grasp and compare.

Here’s one way to do that:

I like this hybrid format because it pulls only a handful of the most important rows, but lays them out with some historical context and, for each of the three proposed scenarios, shows not only the proposed 2023 plan but also the 2024 model associated with it.  This is the kind of slide I want to look at while discussing the relative merits of each scenario.

What’s Missing Here?
You can’t put everything on two slides.  The most important things I’m worried about missing in this format are:

  • Segment analysis: sometimes your business is a blended average of multiple different businesses (e.g., a self-serve motion and an enterprise motion) and thus it’s less meaningful to analyze the average than to look at its underlying components.  You’ll probably need to add one section per segment to address this.
  • Strategic challenges. For example, suppose that you’ve always struggled with enterprise customer CAC.  You may need to add one section focused solely on that.  “Yes, that’s the overall plan, but it’s contingent on getting cost/oppty to $X and the win rate to Y% and here’s the plan to do that.”
  • Zero-based budgeting. In tough times, this is a valuable approach to help CEOs and CFOs squeeze cost out of the business.  It takes more time, but it properly puts focus on overall spend and not simply on year-over-year increments.  In a perfect world, the board wouldn’t need to see any artifacts from the process, but only know that the expense models are tight because every expense was scrutinized using a zero-based budgeting process.

Conclusion
Hopefully this post has given you some ideas on how to better present your next operating plan to your board.  If you have questions or feedback let me know.  And I wish everyone a happy and successful completion of planning season.

You can download the spreadsheet used in this post, here.

Next-Generation Planning and Finance, A Broader and Slightly Deeper Look

This post was prompted by feedback to the last prediction in my 2021 annual predictions post, The Rebirth of Planning and Enterprise Performance Management.  Excerpt:

EPM 1.0 was Hyperion, Arbor, and TM1.  EPM 2.0 was Adaptive Insights, Anaplan, and Planful (née Host Analytics).  EPM 3.0 is being born today.  If you’ve not been tracking this, here’s a list of next-generation planning startups …

Since that post, I’ve received feedback with several more startups to add to the list and a request for a little more color on each one.  That’s what I’ll cover in this post.  I can say right now that this got bigger, and took way longer, than I thought it would at the outset.  That means two things: there may be more mistakes and omissions than usual, and, wow, if I thought the space was being reborn before, I really think so now.  Look at how many of these firms were founded in the past two years!

Order is alphabetical.  Links are to sources.  All numbers are best I could find as of publication date (and I have no intent to update).  I have added and/or removed companies from the prior post based on feedback and my subjective perception as to whether I think they qualify as “next generation” planning.  Note that I have several and varied relationships with some of these companies (see prior post and disclaimers).  List is surely not inclusive of all relevant companies.

  • Allocadia.  Founded in Vancouver in 2010 by friends from Business Objects / Crystal Reports, this is a marketing performance management company that has raised $24M in capital and has 125 employees.  Marketing planning is a real problem and they’re taking, last I checked, the enterprise approach to it.  They have 93 reviews and 4.1 stars on G2.
  • Causal.  Founded in 2019 in London.  I can’t find them in Crunchbase, but their site shows they have seed capital from Coatue and Passion Capital.  They promise, among other things, to “make finance beautiful” and the whole thing strikes me as a product-led growth strategy for a new tool to build financial models outside of traditional spreadsheets.
  • Decipad.  Co-founded in late 2020 in the UK by friend, former MarkLogic consultant, and serial entrepreneur Nuno Job, Decipad is a seed-stage startup, currently with fewer than 10 employees, that, last I checked, was working on a low-code product for planning and modeling for early-stage companies.
  • Finmark.  Raleigh-based, and founded in 2020, this company has raised $5M in seed capital from a bevy of investors including Y Combinator, IDEA Fund, Draper, and Bessemer.  The company has about 50 employees, a product in early access mode, and is a product built “by founders, for founders” to provide integrated finance for startups.
  • Grid.  This company offers a web-based tool that appears to layer atop spreadsheets, using them as a data source to build reports, dashboards and apps.  The company was founded in 2018, has around 20 people, and is based in Reykjavik.  The founder/CEO previously served as head of product management at Qlik and is a “proud data nerd.”  Love it.
  • LiveFlow.  Founded in 2021 and based in Redwood City, LiveFlow has raised about $500K in pre-seed capital from Y Combinator and Seedcamp.  The company offers a spreadsheet that connects to your real-time data, supporting the creation of timely reports and dashboards.  Connectivity appears to be the special sauce here, and it’s definitely a problem that needs to be solved better.
  • OnPlan.  Founded in 2016 in San Francisco by serial entrepreneur and new friend, David Greenbaum, OnPlan is a financial modeling, scenario analysis, and forecasting tool.  The company has raised an undisclosed amount of angel financing and has over 30 employees.  Notably, they are building atop Google Sheets, which allows them to “stand on the shoulders of giants” and provide a rare option that is, I think, Google-first as opposed to Excel-first or Excel-replacement.
  • PlaceCPM.  Founded in 2018 in Austin, this company takes a focused approach, offering forecasting and planning for SaaS and professional services businesses, built on the Salesforce platform, and with pricing suggestive of an SMB/MM focus. The company has raised $4M in pre- and seed financing.  The product gets 4.9 stars on G2 across 13 reviews.
  • Plannuh.  Pronounced with a wicked Southie accent, Plannuh is Boston for Planner, and a marketing planning package that helps marketers create and manage plans and budgets.  Founded by (a fellow) former $1B company CMO, Peter Mahoney, the company has raised $4M and has over 30 employees.  As mentioned, I think marketing planning is a real problem and these guys are taking a velocity approach to it.  They have 5.0 stars on G2 across five reviews.  I’m an advisor and wrote the foreword to their The Next CMO book.
  • Pry.  Founded in San Francisco in 2019 by two startup-experienced Cal grads (Go Bears!), with investment from pre-seed fund Nomo Ventures, Pry has fewer than 10 employees, and a vision to make it simple for early-stage companies to manage their budget, hiring plan, financial models, and cash.
  • Runway.  This company is backed with a $4.5M seed round from the big guns at A16Z.  I can’t find them on Crunchbase and their website has the expected “big thinking but no detail” for a company that’s still in stealth.  Currently at about 10 people.
  • Stratify.  Founded in 2020 in Seattle, this company has raised $5.0M to pursue real-time and collaborative budgeting and forecasting to support “continuous planning” (which is reminiscent of Planful’s messaging).  Both the founder and the lead investor have enterprise roots (with SAP / Concur) and plenty of startup experience.  The company has fewer than 10 employees today.
  • TruePlan.  Founded in 2020, with three employees, and seemingly bootstrapped, I may have found these guys on the early side.  While the product appears still in development, the vision looks clear:  dynamic headcount management that ties together the departmental (budget owner) manager, finance, recruiting, and people ops.  Workforce planning is a real problem; let’s see what they do with it.
  • Vareto.  Founded in 2020 in Mountain View, with fewer than 10 employees and some pretty well-pedigreed founders, the company seeks to help with strategic finance, reporting, and planning.  The website is pretty tight-lipped beyond that and I can’t find any public financing information.

Thanks to Ron Baden, Nuno Job, and Bill Rausch for helping me track down so many companies.

(Added Valsight 2/10/21.)

Appearance on the CFO Bookshelf Podcast with Mark Gandy

Just a quick post to highlight a recent interview I did on the CFO Bookshelf podcast with Mark Gandy.  The podcast episode, entitled Dave Kellogg Address The Rule of 40, EPM, SaaS Metrics and More, reflects the fun and somewhat wandering romp we had through a bunch of interesting topics.

Among other things, we talked about:

  • Why marketing is a great perch from which to become a CEO
  • Some reasons CEOs might not want to blog (and the dangers of so doing)
  • A discussion of the EPM market today
  • A discussion of BI and visualization, particularly as it relates to EPM
  • The Rule of 40 and small businesses
  • Some of my favorite SaaS operating metrics
  • My thoughts on NPS (net promoter score)
  • Why I like driver-based modeling (and what it has in common with prime factorization)
  • Why I still believe in the “CFO as business partner” trope

You can find the episode here on the web, here on Apple Podcasts, and here on Google Podcasts.

Mark was a great host, and thanks for having me.

How to Make and Use a Proper Sales Bookings Productivity and Quota Capacity Model

I’ve seen numerous startups try numerous ways to calculate their sales capacity.  Most are too back-of-the-envelope and too top-down for my taste.  Such models are, in my humble opinion, dangerous because the combination of relatively small errors in ramping, sales productivity, and sales turnover (with associated ramp resets) can result in a relatively big mistake in setting an operating plan.  Building off quota, instead of productivity, is another mistake for many reasons [1].  

Thus, to me, everything needs to begin with a sales productivity model that is Einsteinian in the sense that it is as simple as possible but no simpler.

What does such a model need to take into account?

  • Sales productivity, measured in ARR/rep, and at steady state (i.e., after a rep is fully ramped).  This is not quota (what you ask them to sell), this is productivity (what you actually expect them to sell) and it should be based on historical reality, with perhaps incremental, well justified, annual improvement.
  • Rep hiring plans, measured by new hires per quarter, which should be realistic in terms of your ability to recruit and close new reps.
  • Rep ramping, typically a vector that has percentage of steady-state productivity in the rep’s first, second, third, and fourth quarters [2].  This should be based on historical data as well.
  • Rep turnover, the annual rate at which sales reps leave the company for either voluntary or involuntary reasons.
  • Judgment, the model should have the built-in ability to let the CEO and/or sales VP manually adjust the output and provide analytical support for so doing [3].
  • Quota over-assignment, the extent to which you assign more quota at the “street” level (i.e., the sum of the reps) than the operating plan targets.
  • For extra credit and to help maintain organizational alignment — while you’re making a bookings model, with a little bit of extra math you can set pipeline goals for the company’s core pipeline generation sources [4], so I recommend doing so.

If your company is large or complex you will probably need to create an overall bookings model that aggregates models for the various pieces of your business.  For example, inside sales reps tend to have lower quotas and faster ramps than their external counterparts, so you’d want to make one model for inside sales, another for field sales, and then sum them together for the company model.

In this post, I’ll do two things:  I’ll walk you through what I view as a simple-yet-comprehensive productivity model and then I’ll show you two important and arguably clever ways in which to use it.

Walking Through the Model

Let’s take a quick walk through the model.  Cells in Excel “input” format (orange and blue) are either data or drivers that need to be entered; uncolored cells are either working calculations or outputs of the model.

You start by entering data into the model for 1Q20 (let’s pretend we’re making the model in December 2019): the number of sales reps, by tenure, that we expect to start the year with (column D).  The “first/hired quarter” row represents our hiring plans for the year.  The rest of this block is a waterfall that ages the reps downward as we move across quarters.  Next to that block is the ramp assumption, which expresses, as a percentage of steady-state productivity, how much we expect a rep to sell as their tenure with the company increases.  I’ve modeled a pretty slow ramp that takes five quarters to get to 100% productivity.

To the right of that we have more assumptions:

  • Annual turnover, the annual rate at which sales reps leave the company for any reason.  This drives attriting reps in row 12, which silently assumes that every departing rep was at steady state, a tacit and fairly conservative assumption in the model.
  • Steady-state productivity, how much we expect a rep to actually sell per year once they are fully ramped.
  • Quota over-assignment.  I believe it’s best to start with a productivity model and uplift it to generate quotas [5]. 

The next block down calculates ramped rep equivalents (RREs), a very handy concept that far too few organizations use: it converts the ramp state of the sales force into a single number, the equivalent number of fully ramped reps.  The steady-state row shows the number of fully ramped reps, a row that board members and investors will frequently ask about, particularly if you’re not proactively showing them RREs.

After that we calculate “productivity capacity,” which is a mouthful, but I want to disambiguate it from quota capacity, so it’s worth the extra syllables.  After that, I add a critical row called judgment, which allows the Sales VP or CEO to play with the model so that they’re not potentially signing up for targets that are straight model output, but instead also informed by their knowledge of the state of the deals and the pipeline.  Judgment can be negative (reducing targets), positive (increasing targets) or zero-sum where you have the same annual target but allocate it differently across quarters.
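
The RRE and capacity math described above can be sketched in a few lines.  The ramp vector, headcounts, and productivity figures below are illustrative assumptions, not the spreadsheet’s actual values:

```python
# % of steady-state productivity by tenure quarter (Q1 through Q5+): a slow,
# five-quarter ramp like the one modeled in the post.
ramp = [0.00, 0.25, 0.50, 0.75, 1.00]
reps_by_tenure = [3, 4, 2, 1, 10]     # reps in their 1st, 2nd, ... 5th+ quarter
steady_state_productivity = 800_000   # annual new ARR per fully ramped rep
over_assignment = 0.20                # street-level quota uplift vs. the model

# Ramped rep equivalents: the whole ramp state collapsed into one number.
rres = sum(n * pct for n, pct in zip(reps_by_tenure, ramp))       # 12.75

# Productivity capacity for the quarter, then the quota to assign atop it.
quarterly_capacity = rres * steady_state_productivity / 4         # 2,550,000
quota_capacity = quarterly_capacity * (1 + over_assignment)       # 3,060,000
```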

The section in italics, linearity and growth analysis, is there to help the Sales VP analyze the results of using the judgment row.  After changing targets, he/she can quickly see how the target is spread out across quarters and halves, and how any modifications affect both sequential and quarterly growth rates. I have spent many hours tweaking an operating plan using this part of the sheet, before presenting it to the board.

The next row shows quota capacity, which uplifts productivity capacity by the over-assignment percentage assumption higher up in the model.  This represents the minimum quota the Sales VP should assign at street level to have the assumed level of over-assignment.  Ideally this figure dovetails into a quota-assignment model.

Finally, while we’re at it, we’re only a few clicks away from generating the day-one pipeline coverage / contribution goals from our major pipeline sources: marketing, alliances, and outbound SDRs.  In this model, I start by assuming that sales or customer success managers (CSMs) generate the pipeline for upsell (i.e., sales to existing customers).  Therefore, when we’re looking at coverage, we really mean to say coverage of the newbiz ARR target (i.e., new ARR from new customers).  So, we first reduce the ARR goal by a percentage, then multiply it by the desired pipeline coverage ratio, and then allocate the result across the pipeline sources by presumably agreed-to percentages [6].
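
Those few clicks amount to the following arithmetic; the goal, upsell share, coverage ratio, and source mix below are all illustrative assumptions:

```python
new_arr_goal = 3_000_000   # annual new ARR target from the operating plan
upsell_share = 0.30        # portion of new ARR expected from existing customers
coverage_ratio = 3.0       # desired day-one pipeline coverage of newbiz ARR

# Pipeline sources only need to cover the new-business portion of the goal.
newbiz_goal = new_arr_goal * (1 - upsell_share)   # 2,100,000
pipeline_needed = newbiz_goal * coverage_ratio    # 6,300,000

# Allocate across sources by agreed-to percentages (which should sum to 100%).
source_mix = {"marketing": 0.5, "alliances": 0.2, "outbound_sdr": 0.3}
pipeline_goals = {src: pipeline_needed * pct for src, pct in source_mix.items()}
```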

Building the next-level models to support pipeline generation goals is beyond the scope of this post, but I have a few relevant posts on the subject including this three-part series, here, here, and here.

Two Clever Ways to Use the Model

The sad reality is that this kind of model gets a lot of attention at the end of a fiscal year (while you’re making the plan for next year) and then typically gets thrown in the closet and ignored until it’s planning season again.

That’s too bad because this model can be used both as an evaluation tool and a predictive tool throughout the year.

Let’s show that via an all-too-common example.  Let’s say we start 2020 with a new VP of Sales we just hired in November 2019, with hiring and performance targets per our original model (above) but with judgment set to zero, so the plan is equal to the capacity model.

Our “world-class” VP immediately proceeds to drive out a large number of salespeople.  While he hires 3 “all-star” reps during 1Q20, all 5 reps hired by his predecessor in the past 6 months leave the company along with, worse, two fully ramped reps.  Thus, instead of ending the quarter with 20 reps, we end with 12.  Worse yet, the VP delivers new ARR of $2,000K vs. a target of $3,125K, 64% of plan.  Realizing she has a disaster on her hands, the CEO “fails fast” and fires the newly hired VP of sales after 5 months.  She then appoints the RVP of Central, Joe, to acting VP of Sales on 4/2.  Joe proceeds to deliver 59%, 67%, and 75% of plan in 2Q20, 3Q20, and 4Q20.

Our question:  is Joe doing a good job?

At first blush, he appears more zero than hero:  59%, 67%, and 75% of plan is no way to go through life.

But to really answer this question we cannot reasonably evaluate Joe relative to the original operating plan.  He was handed a demoralized organization that was about 60% of its target size on 4/2.  In order to evaluate Joe’s performance, we need to compare it not to the original operating plan, but to the capacity model re-run with the actual rep hiring and aging at the start of each quarter.

When you do this you see, for example, that while Joe is constantly underperforming plan, he is also constantly outperforming the capacity model, delivering 101%, 103%, and 109% of model capacity in 2Q through 4Q.

If you looked at Joe the way most companies look at key metrics, he’d be fired.  But if you read this chart to the bottom you finally get the complete picture.  Joe is running a significantly smaller sales organization at above-model efficiency.  While Joe got handed an organization that was 8 heads under plan, he did more than double the organization to 26 heads and consistently outperformed the capacity model.  Joe is a hero, not a zero.  But you’d never know if you didn’t look at his performance relative to the actual sales capacity he was managing.
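
The comparison driving this story is a single division done against two different denominators; the numbers below are illustrative, not the chart’s actual figures:

```python
def attainment(actual: float, target: float) -> float:
    """Fraction of a target actually delivered."""
    return actual / target

# A quarter can badly miss the operating plan yet beat the capacity model
# that the (smaller) organization actually had.
q_actual, q_plan, q_capacity = 2_360_000, 4_000_000, 2_336_634
vs_plan = attainment(q_actual, q_plan)          # 0.59: looks like a disaster
vs_capacity = attainment(q_actual, q_capacity)  # ~1.01: above-model performance
```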

Second, the other clever way to use a capacity model is as a forecasting tool.  I have found that a good capacity model, re-run at the start of the quarter with then-current sales hiring/aging, is a very valuable predictive tool, often predicting the quarterly sales result better than my VP of Sales.  Along with rep-level, manager-level, and VP-level forecasts, and stage-weighted and forecast-category-weighted expected pipeline values, you can use the re-run sales capacity model as a great tool to triangulate on the sales forecast.

You can download the four-tab spreadsheet model I built for this post, here.

# # #

Notes

[1] Starting with quota starts you in the wrong mental place — what you want people to do, as opposed to productivity (what they have historically done). Additionally, there are clear instances where quotas get assigned against which we have little to no actual productivity assumption (e.g., a second-quarter rep typically has zero productivity but will nevertheless be assigned some partial quota). Sales most certainly has a quota-allocation problem, but that should be a separate, second exercise after building a corporate sales productivity model on which to base the operating plan.

[2] A typical such vector might be (0%, 25%, 50%, 100%) or (0%, 33%, 66%, 100%), reflecting the percentage of steady-state productivity they are expected to achieve in their first, second, third, and fourth quarters of employment.

[3] Without such a row, the plan is either de-linked from the model or the plan is the pure output of the model without any human judgment attached. This row is typically used to re-balance the annual number across quarters and/or to either add or subtract cushion relative to the model.

[4] Back in the day at Salesforce, we called pipeline generation sources “horsemen” I think (in a rather bad joke) because there were four of them (marketing, alliances, sales, and SDRs/outbound). That term was later dropped probably both because of the apocalypse reference and its non gender-neutrality. However, I’ve never known what to call them since, other than the rather sterile, “pipeline sources.”

[5] Many salesops people do it the reverse way, I think because they see the problem as allocating quota, whereas I see the problem as building an achievable operating plan. Starting with quota poses several problems, from the semantic (lopping 20% off quota is not 20% over-assignment; it’s actually 25%, because over-assignment is measured relative to the smaller number) to the mathematical (first-quarter reps get assigned quota but we can realistically expect a 0% yield) to the procedural (quotas should be custom-tailored based on the known state of each territory, and this cannot really be built into a productivity model).
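The semantic trap in that note is just percentage asymmetry: the same gap looks smaller when measured against the bigger base. A two-line check makes it obvious:

```python
# Cut quota by 20% to get the plan, and quota is 25% *over* the plan,
# because over-assignment is measured relative to the smaller (plan) number.
quota = 100.0
plan = quota * (1 - 0.20)           # lop 20% off quota -> 80
over_assignment = (quota - plan) / plan
print(f"{over_assignment:.0%}")     # prints 25%
```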

[6] One advantage of having those percentages here is that they are placed front-and-center in the company’s bookings model, which forces discussion and agreement. Otherwise, if not documented centrally, they will end up in different models across the organization with no real idea of whether they foot to the bookings model or even sum to 100% across sources.

The New Gartner 2018 Magic Quadrants for Cloud Financial Planning & Analysis and Cloud Financial Close Solutions

If all you’re looking for is the free download link, let’s cut to the chase:  here’s where you can download the new 2018 Gartner Magic Quadrant for Financial Planning and Analysis Solutions and the new 2018 Gartner Magic Quadrant for Cloud Financial Close Solutions.  These MQs are written jointly by John Van Decker and Chris Iervolino (with Chris as primary author on the first and John as primary author on the second).  Both are deep experts in the category with decades of experience.

Overall, I can say that at Host Analytics, we are honored to be a leader in both MQs again this year.  We are also honored to be the only cloud pure-play vendor to be a leader in both MQs, and we believe that speaks volumes about the depth and breadth of EPM functionality that we bring to the cloud.

So, if all you wanted was the links, thanks for visiting.  If, however, you’re looking for some Kellblog editorial on these MQs, then please continue on.

Whither CPM?
The first thing the astute reader will notice is that the category name, which Gartner formerly referred to as corporate performance management (CPM), and which others often referred to as enterprise performance management (EPM), is entirely missing from these MQs.  That’s no accident.  Gartner decided last fall to move away from CPM as an uber-category descriptor in favor of referring more directly to the two related, but pretty different, categories beneath it.  Thus, in the future you won’t be hearing “CPM” from Gartner anymore, though I know that some vendors — including Host Analytics — will continue to use EPM/CPM until we can find a more suitable capstone name for the category.

Personally, I’m in favor of this move for two simple reasons.

  • CPM was a forced, analyst-driven category in the first place, dating back to Howard Dresner’s predictions that financial planning/budgeting would converge with business intelligence.  While Howard published the research that launched a thousand ships in terms of BI and financial planning industry consolidation (e.g., Cognos/Adaytum, BusinessObjects/SRC/Cartesis, Hyperion/Brio), the actual software itself never converged.  CPM never became like CRM — a true convergence of sales force automation (SFA) and contact center.  In each case, the two companies could be put under one roof, but they sold fundamentally different value propositions to very different buyers and thus never came together as one.
  • In accordance with the prior point, few customers actually refer to the category by CPM/EPM.  They say things much more akin to “financial planning” and “consolidation and close management.”  Since I like referring to things in the words that customers use, I am again in favor of this change.

It does, however, create one problem — Gartner has basically punted on trying to name a capstone category to include vendors who sell both financial planning and financial consolidation software.  Since we at Host Analytics think that’s important, and since we believe there are key advantages to buying both from the same vendor, we’d prefer if there were a single, standard capstone term.  If it were easy, I suppose a name would have already emerged [1].

How Not To Use Magic Quadrants
While they are Gartner’s flagship deliverable, magic quadrants (MQs) can generate a lot of confusion.  MQs don’t tell you which vendor is “best” because there is no universal best in any category.  MQs don’t tell you which vendor to pick to solve your problem because different solutions are designed around meeting different requirements.  MQs don’t predict the future of vendors — last year’s movement vectors rarely predict this year’s positions.  And the folks I know at Gartner generally strongly dislike vector analysis of MQs because they view vendor placement as relative to each other at any moment in time [2].

Many things that customers seem to want from Gartner MQs are actually delivered by Gartner’s Critical Capabilities reports, which get less attention because they don’t produce a simple, dramatic 2×2 output, but which are far better suited for determining the suitability of different products for different use-cases.

How To Use A Gartner Magic Quadrant
In my experience after 25+ years in enterprise software, I would use MQs for their overall purpose:  to group vendors into four buckets (leaders, challengers, visionaries, and niche players).  That’s it.  If you want to know who the leaders are in a category, look top right.  If you want to know who the visionaries are, look bottom right.  If you want to know which big companies are putting resources into the category but thus far lack strategy/vision, look top-left at the challengers quadrant.

But should you, in my humble opinion, get particularly excited about millimeter differences on either axis?  No.  Why?  Because what drives those deltas may have little, no, or even negative correlation to your situation.  In my experience, the analysts pay a lot of attention to the quadrant in which a vendor ends up, so quadrant placement is quite closely watched.  Dot placement, while closely watched by vendors, doesn’t change much in the real world, save for dramatic differences [3].  After all, they are called the magic quadrants, not the magic dots.

All that said, let me wind up with some observations on the MQs themselves.

Quick Thoughts on the 2018 Cloud FP&A Solutions MQ
While the MQs were published at the end of July 2018, they were based on information about the vendors gathered in, and largely about, 2017.  There is always some phase lag between the end of data collection and the publication date, but this year it was unusually long, meaning that a lot may have changed in the market in the first half of 2018 that customers should be aware of. For that reason, if you’re a Gartner customer using either the MQs or the critical capabilities reports that accompany them, you should probably set up a call with the analysts to ensure you’re working off the latest data.

Here are some of my quick thoughts on the Cloud FP&A Solutions magic quadrant:

  • Gartner says the FP&A market is accelerating its shift from on-premises to cloud.  I agree.
  • Gartner allows three types of “cloud” vendors into this (and the other) MQ:  cloud-only vendors, on-premises vendors with new built-for-the-cloud solutions, and on-premises vendors who allow their software to be hosted on a third-party cloud platform.  While I understand their need to be inclusive, I think this is pretty broad — the total cost of ownership, cash flows, and incentives are quite different between pure cloud vendors and hosted on-premises solutions.  Caveat emptor.
  • To qualify for the MQ vendors must support at least two of the four following components of FP&A:  planning/budgeting, integrated financial planning, forecasting/modeling, management/performance reporting.  Thus the MQ is not terribly homogeneous in terms of vendor profile and use-cases.
  • For the second year in a row, (1) Host is a leader in this MQ and (2) is the only cloud pure-play vendor who is a leader in both.  We think this says a lot about the breadth and depth of our product line.
  • Customer references for Host cited ease of use, price, and solution flexibility as top three purchasing criteria.  We think this very much represents our philosophy of complex EPM made easy.

Quick Thoughts on the 2018 Cloud Financial Close Solutions MQ
Here are some of my quick thoughts on the Cloud Financial Close Solutions magic quadrant:

  • Gartner says that in the past two years the financial close market has shifted from mature on-premises to cloud solutions.  I agree.
  • While Gartner again allowed all three types of cloud vendors in this MQ, I believe some of the vendors in this MQ do just enough, just-cloud-enough business to clear the bar, but are fundamentally still offering on-premise wolves in cloud sheep’s clothing.  Customers should look to things like total cost of ownership, upgrade frequency, and upgrade phase lags in order to separate real from fake cloud offerings.
  • This MQ is more of a mixed bag than the FP&A MQ or, for that matter, most Gartner MQs.  In general, MQs plot substitutes against each other — each dot on an MQ usually represents a vendor who does basically the same thing.  This is not true for the Cloud Financial Close (CFC) MQ — e.g., Workiva is a disclosure management vendor (and a partner of Host Analytics).  However, they do not offer financial consolidation software, as does say Host Analytics or Oracle.
  • Because the scope of this MQ is broad and both general and specialist vendors are included, customers should either call Gartner for help (if they are Gartner customers) or just be mindful of the mixing and segmentation — e.g., Floqast (in SMB and MM) and Blackline (in enterprise) both do account reconciliation, but they are naturally segmented by customer size (and both are partners of Host, which does financial consolidation but not account reconciliation).
  • Net:  while I love that the analysts are willing to put different types of close-related, office-of-the-CFO-oriented vendors on the same MQ, it does require more than the usual amount of mindfulness in interpreting it.

Conclusion
Finally, if you want to analyze the source documents yourself, you can use the following link to download both the 2018 Gartner Magic Quadrant for Cloud Financial Planning and Analysis Solutions and the 2018 Gartner Magic Quadrant for Cloud Financial Close Solutions.

# # #

Notes

[1] For Gartner, this is likely more than a semantic issue.  They are pretty strong believers in a “post-modern” ERP vision which eschews the idea of a monolithic application that includes all services, in favor of using and integrating a series of cloud-based services.  Since we are also huge believers in integrating best-of-breed cloud services, it’s hard for us to take too much issue with that.  So we’ll simply have to clearly articulate the advantages of using Host Planning and Host Consolidations together — from our viewpoint, two best-of-breed cloud services that happen to come from a single vendor.

[2] And not something done against absolute scales where you can track movement over time.  See, for example, the two explicit disclaimers in the FP&A MQ:

[Screenshot: the two disclaimers from the FP&A MQ]

[3] I’m also a believer in a slightly more esoteric theory which says:  given that the Gartner dot-placement algorithm seems to try very hard to lay out dots in a 45-degree-tilted, football-shaped pattern, it is always interesting to examine who, how, and why someone ends up outside that football.