Category Archives: Modeling

Next-Generation Planning and Finance, A Broader and Slightly Deeper Look

This post was prompted by feedback to the last prediction in my 2021 annual predictions post, The Rebirth of Planning and Enterprise Performance Management.  Excerpt:

EPM 1.0 was Hyperion, Arbor, and TM1. EPM 2.0 was Adaptive Insights, Anaplan, and Planful (née Host Analytics).  EPM 3.0 is being born today.  If you’ve not been tracking this, here’s a list of next-generation planning startups …

Since that post, I’ve received feedback with several more startups to add to the list and a request for a little more color on each one.  That’s what I’ll cover in this post.  I can say right now that this got bigger, and took way longer, than I thought it would at the outset.  That means two things: there may be more mistakes and omissions than usual, and, wow, if I thought the space was being reborn before, I really believe it now.  Look at how many of these firms were founded in the past two years!

Order is alphabetical.  Links are to sources.  All numbers are the best I could find as of the publication date (and I have no intent to update them).  I have added and/or removed companies from the prior post based on feedback and my subjective perception of whether they qualify as “next generation” planning.  Note that I have several and varied relationships with some of these companies (see prior post and disclaimers).  The list is surely not inclusive of all relevant companies.

  • Allocadia.  Founded in Vancouver in 2010 by friends from Business Objects / Crystal Reports, this is a marketing performance management company that has raised $24M in capital and has 125 employees.  Marketing planning is a real problem and they’re taking, last I checked, the enterprise approach to it.  They have 93 reviews and 4.1 stars on G2.
  • Causal.  Founded in 2019 in London.  I can’t find them in Crunchbase, but their site shows they have seed capital from Coatue and Passion Capital.  They promise, among other things, to “make finance beautiful” and the whole thing strikes me as a product-led growth strategy for a new tool to build financial models outside of traditional spreadsheets.
  • Decipad.  Co-founded in late 2020 in the UK by friend, former MarkLogic consultant, and serial entrepreneur Nuno Job, Decipad is a seed-stage startup with currently fewer than 10 employees that, last I checked, was working on a low-code product for planning and modeling for early-stage companies.
  • Finmark.  Raleigh-based, and founded in 2020, this company has raised $5M in seed capital from a bevy of investors including Y Combinator, IDEA Fund, Draper, and Bessemer.  The company has about 50 employees and a product, currently in early access, built “by founders, for founders” to provide integrated finance for startups.
  • Grid.  This company offers a web-based tool that appears to layer atop spreadsheets, using them as a data source to build reports, dashboards and apps.  The company was founded in 2018, has around 20 people, and is based in Reykjavik.  The founder/CEO previously served as head of product management at Qlik and is a “proud data nerd.”  Love it.
  • LiveFlow.  Founded in 2021 and based in Redwood City, the company has raised about $500K in pre-seed capital from Y Combinator and Seedcamp.  The company offers a spreadsheet that connects to your real-time data, supporting the creation of timely reports and dashboards.  Connectivity appears to be the special sauce here, and it’s definitely a problem that needs to be solved better.
  • OnPlan.  Founded in 2016 in San Francisco by serial entrepreneur and new friend, David Greenbaum, OnPlan is a financial modeling, scenario analysis, and forecasting tool.  The company has raised an undisclosed amount of angel financing and has over 30 employees.  Notably, they are building atop Google Sheets, which allows them to “stand on the shoulders of giants” and provide a rare option that is, I think, Google-first as opposed to Excel-first or Excel-replacement.
  • PlaceCPM.  Founded in 2018 in Austin, this company takes a focused approach, offering forecasting and planning for SaaS and professional services businesses, built on the Salesforce platform, and with pricing suggestive of an SMB/MM focus. The company has raised $4M in pre- and seed financing.  The product gets 4.9 stars on G2 across 13 reviews.
  • Plannuh.  Pronounced with a wicked Southie accent, Plannuh is Boston for Planner, and a marketing planning package that helps marketers create and manage plans and budgets.  Founded by (a fellow) former $1B company CMO, Peter Mahoney, the company has raised $4M and has over 30 employees.  As mentioned, I think marketing planning is a real problem and these guys are taking a velocity approach to it.  They have 5.0 stars on G2 across five reviews.  I’m an advisor and wrote the foreword to their The Next CMO book.
  • Pry.  Founded in San Francisco in 2019 by two startup-experienced Cal grads (Go Bears!), with investment from pre-seed fund Nomo Ventures, Pry has fewer than 10 employees, and a vision to make it simple for early-stage companies to manage their budget, hiring plan, financial models, and cash.
  • Runway.  This company is backed with a $4.5M seed round from the big guns at A16Z.  I can’t find them on Crunchbase and their website has the expected “big thinking but no detail” for a company that’s still in stealth.  Currently at about 10 people.
  • Stratify.  Founded in 2020 in Seattle, this company has raised $5.0M to pursue real-time and collaborative budgeting and forecasting to support “continuous planning” (which is reminiscent of Planful’s messaging).  Both the founder and the lead investor have enterprise roots (with SAP / Concur) and plenty of startup experience.  The company has fewer than 10 employees today.
  • TruePlan.  Founded in 2020, with three employees, and seemingly bootstrapped, I may have found these guys on the early side.  While the product appears still in development, the vision looks clear:  dynamic headcount management that ties together the departmental (budget owner) manager, finance, recruiting, and people ops.  Workforce planning is a real problem; let’s see what they do with it.
  • Vareto.  Founded in 2020 in Mountain View, with fewer than 10 employees and some pretty well-pedigreed founders, the company seeks to help with strategic finance, reporting, and planning.  The website is pretty tight-lipped beyond that and I can’t find any public financing information.

Thanks to Ron Baden, Nuno Job, and Bill Rausch for helping me track down so many companies.

(Added Valsight 2/10/21.)

Appearance on the CFO Bookshelf Podcast with Mark Gandy

Just a quick post to highlight a recent interview I did on the CFO Bookshelf podcast with Mark Gandy.  The podcast episode, entitled Dave Kellogg Addresses the Rule of 40, EPM, SaaS Metrics and More, reflects the fun and somewhat wandering romp we had through a bunch of interesting topics.

Among other things, we talked about:

  • Why marketing is a great perch from which to become a CEO
  • Some reasons CEOs might not want to blog (and the dangers of so doing)
  • A discussion of the EPM market today
  • A discussion of BI and visualization, particularly as it relates to EPM
  • The Rule of 40 and small businesses
  • Some of my favorite SaaS operating metrics
  • My thoughts on NPS (net promoter score)
  • Why I like driver-based modeling (and what it has in common with prime factorization)
  • Why I still believe in the “CFO as business partner” trope

You can find the episode here on the web, here on Apple Podcasts, and here on Google Podcasts.

Mark was a great host, and thanks for having me.

How to Make and Use a Proper Sales Bookings Productivity and Quota Capacity Model

I’ve seen numerous startups try numerous ways to calculate their sales capacity.  Most are too back-of-the-envelope and too top-down for my taste.  Such models are, in my humble opinion, dangerous because the combination of relatively small errors in ramping, sales productivity, and sales turnover (with associated ramp resets) can result in a relatively big mistake in setting an operating plan.  Building off quota, instead of productivity, is another mistake, for many reasons [1].

Thus, to me, everything needs to begin with a sales productivity model that is Einsteinian in the sense that it is as simple as possible but no simpler.

What does such a model need to take into account?

  • Sales productivity, measured in ARR/rep, and at steady state (i.e., after a rep is fully ramped).  This is not quota (what you ask them to sell), this is productivity (what you actually expect them to sell) and it should be based on historical reality, with perhaps incremental, well justified, annual improvement.
  • Rep hiring plans, measured by new hires per quarter, which should be realistic in terms of your ability to recruit and close new reps.
  • Rep ramping, typically a vector that has percentage of steady-state productivity in the rep’s first, second, third, and fourth quarters [2].  This should be based on historical data as well.
  • Rep turnover, the annual rate at which sales reps leave the company for either voluntary or involuntary reasons.
  • Judgment, the model should have the built-in ability to let the CEO and/or sales VP manually adjust the output and provide analytical support for so doing [3].
  • Quota over-assignment, the extent to which you assign more quota at the “street” level (i.e., the sum of the reps) than the operating plan targets.
  • For extra credit and to help maintain organizational alignment:  while you’re making a bookings model, a little bit of extra math lets you set pipeline goals for the company’s core pipeline generation sources [4], so I recommend doing so.  (A minimal sketch of these inputs in code follows this list.)
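To make these ingredients concrete, here is a minimal sketch of how the drivers might be captured in code.  This is an illustration only; the names and numbers are my assumptions, not values from the spreadsheet discussed below.

    from dataclasses import dataclass

    @dataclass
    class CapacityDrivers:
        """Illustrative drivers for a sales bookings capacity model (all numbers made up)."""
        steady_state_arr: float = 1_000_000            # annual new ARR per fully ramped rep
        ramp: tuple = (0.0, 0.25, 0.50, 0.75, 1.00)    # % of steady state by tenure quarter
        hires_per_quarter: tuple = (2, 2, 3, 3)        # planned rep hires per quarter
        annual_turnover: float = 0.20                  # fraction of reps leaving per year
        quota_overassignment: float = 0.20             # street quota uplift over productivity
        pipeline_coverage: float = 3.0                 # day-one pipeline coverage of newbiz target

    drivers = CapacityDrivers()  # tweak these to run what-if scenarios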

If your company is large or complex you will probably need to create an overall bookings model that aggregates models for the various pieces of your business.  For example, inside sales reps tend to have lower quotas and faster ramps than their external counterparts, so you’d want to make one model for inside sales, another for field sales, and then sum them together for the company model.

In this post, I’ll do two things:  I’ll walk you through what I view as a simple-yet-comprehensive productivity model and then I’ll show you two important and arguably clever ways in which to use it.

Walking Through the Model

Let’s take a quick walk through the model.  Cells in Excel “input” format (orange and blue) are either data or drivers that need to be entered; uncolored cells are either working calculations or outputs of the model.

You seed the model for 1Q20 (let’s pretend we’re making the model in December 2019) by entering what we expect to start the year with in terms of sales reps by tenure (column D).  The “first/hired quarter” row represents our hiring plans for the year.  The rest of this block is a waterfall that ages the reps downward as we move across quarters.  Next to the block is the ramp assumption, which expresses, as a percentage of steady-state productivity, how much we expect a rep to sell as their tenure with the company increases.  I’ve modeled a pretty slow ramp that takes five quarters to get to 100% productivity.
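As a rough sketch of that aging block (not the spreadsheet itself), the waterfall logic might look like the following; the starting tenure mix and hiring plan are hypothetical, and turnover is handled separately via the assumptions described below.

    def age_waterfall(starting_reps, hires):
        """Age reps through tenure buckets, one quarter at a time.

        starting_reps: rep counts by tenure at the start of Q1, youngest first,
                       with the last bucket holding fully ramped (steady-state) reps.
        hires: planned new-rep hires for each modeled quarter (Q1 hires are assumed
               to be included in starting_reps already).
        Returns one tenure distribution per quarter.  Turnover is omitted here.
        """
        quarters, reps = [], list(starting_reps)
        for q in range(len(hires)):
            if q > 0:
                # everyone slides down one bucket; the last bucket accumulates steady-state reps
                reps = [hires[q]] + reps[:-2] + [reps[-2] + reps[-1]]
            quarters.append(list(reps))
        return quarters

    # hypothetical example: 5 tenure buckets, hiring 2, 3, 3, 4 reps across the year
    print(age_waterfall([2, 2, 3, 4, 9], [2, 3, 3, 4]))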

To the right of that we have more assumptions:

  • Annual turnover, the annual rate at which sales reps leave the company for any reason.  This drives attriting reps in row 12, which silently assumes that every departing rep was at steady state, a tacit and fairly conservative assumption in the model.
  • Steady-state productivity, how much we expect a rep to actually sell per year once they are fully ramped.
  • Quota over-assignment.  I believe it’s best to start with a productivity model and uplift it to generate quotas [5]. 

The next block down calculates ramped rep equivalents (RREs), a very handy concept that far too few organizations use, which converts the ramp state of the sales force into a single number: the equivalent number of fully ramped reps.  The steady-state row shows the number of fully ramped reps, a row that board members and investors will frequently ask about, particularly if you’re not proactively showing them RREs.

After that we calculate “productivity capacity,” which is a mouthful, but I want to disambiguate it from quota capacity, so it’s worth the extra syllables.  After that, I add a critical row called judgment, which allows the Sales VP or CEO to play with the model so that they’re not potentially signing up for targets that are straight model output, but instead also informed by their knowledge of the state of the deals and the pipeline.  Judgment can be negative (reducing targets), positive (increasing targets) or zero-sum where you have the same annual target but allocate it differently across quarters.

The section in italics, linearity and growth analysis, is there to help the Sales VP analyze the results of using the judgment row.  After changing targets, he/she can quickly see how the target is spread out across quarters and halves, and how any modifications affect both sequential and quarterly growth rates. I have spent many hours tweaking an operating plan using this part of the sheet, before presenting it to the board.

The next row shows quota capacity, which uplifts productivity capacity by the over-assignment percentage assumption higher up in the model.  This represents the minimum quota the Sales VP should assign at street level to have the assumed level of over-assignment.  Ideally this figure dovetails into a quota-assignment model.
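Tying those rows together, here is a minimal sketch of the capacity math.  The tenure distributions, ramp, steady-state productivity, judgment adjustments, and over-assignment percentage below are hypothetical placeholders, not the figures in the downloadable model.

    steady_state_arr = 1_000_000                 # hypothetical annual new ARR per ramped rep
    ramp = [0.0, 0.25, 0.50, 0.75, 1.00]         # hypothetical ramp by tenure quarter
    overassign = 0.20                            # hypothetical quota over-assignment
    judgment = [0, -100_000, 0, 100_000]         # hypothetical CEO / sales VP adjustments

    # tenure distributions per quarter, e.g. the output of the aging waterfall above
    reps_by_quarter = [[2, 2, 3, 4, 9], [3, 2, 2, 3, 13], [3, 3, 2, 2, 16], [4, 3, 3, 2, 18]]

    for q, reps in enumerate(reps_by_quarter, start=1):
        rre = sum(n * pct for n, pct in zip(reps, ramp))      # ramped rep equivalents
        capacity = rre * steady_state_arr / 4                 # quarterly productivity capacity
        plan = capacity + judgment[q - 1]                     # target after judgment
        street_quota = capacity * (1 + overassign)            # minimum quota to assign at street level
        print(f"Q{q}: RRE={rre:.2f}  capacity=${capacity:,.0f}  "
              f"plan=${plan:,.0f}  street quota=${street_quota:,.0f}")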

Finally, while we’re at it, we’re only a few clicks away from generating the day-one pipeline coverage / contribution goals for our major pipeline sources: marketing, alliances, and outbound SDRs.  In this model, I start by assuming that sales or customer success managers (CSMs) generate the pipeline for upsell (i.e., sales to existing customers).  Therefore, when we’re looking at coverage, we really mean coverage of the newbiz ARR target (i.e., new ARR from new customers).  So, we first reduce the ARR goal by a percentage, then multiply it by the desired pipeline coverage ratio, and then allocate the result across the pipeline sources by presumably agreed-to percentages [6].
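Here is a small sketch of that pipeline arithmetic; the ARR target, upsell share, coverage ratio, and source mix are illustrative assumptions, not recommendations.

    new_arr_target = 5_000_000     # hypothetical quarterly new ARR target
    upsell_share = 0.30            # hypothetical share of new ARR expected from existing customers
    coverage_ratio = 3.0           # hypothetical day-one pipeline coverage goal
    source_mix = {"marketing": 0.5, "alliances": 0.2, "outbound SDRs": 0.3}   # hypothetical split

    newbiz_target = new_arr_target * (1 - upsell_share)    # new ARR expected from new customers
    pipeline_goal = newbiz_target * coverage_ratio          # total day-one newbiz pipeline needed

    for source, share in source_mix.items():
        print(f"{source}: ${pipeline_goal * share:,.0f} of day-one pipeline")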

Building the next-level models to support pipeline generation goals is beyond the scope of this post, but I have a few relevant posts on the subject including this three-part series, here, here, and here.

Two Clever Ways to Use the Model

The sad reality is that this kind of model gets a lot of attention at the end of a fiscal year (while you’re making the plan for next year) and then typically gets thrown in the closet and ignored until it’s planning season again.

That’s too bad because this model can be used both as an evaluation tool and a predictive tool throughout the year.

Let’s show that via an all-too-common example.  Let’s say we start 2020 with a new VP of Sales whom we just hired in November 2019, with the hiring and performance targets in our original model (above), but with judgment set to zero so the plan is equal to the capacity model.

Our “world-class” VP immediately proceeds to drive out a large number of salespeople.  While he hires 3 “all-star” reps during 1Q20, all 5 reps hired by his predecessor in the past 6 months leave the company along with, worse yet, two fully ramped reps.  Thus, instead of ending the quarter with 20 reps, we end with 12.  Worse, the VP delivers new ARR of $2,000K vs. a target of $3,125K, 64% of plan.  Realizing she has a disaster on her hands, the CEO “fails fast” and fires the newly hired VP of Sales after 5 months.  She then appoints the RVP of Central, Joe, to acting VP of Sales on 4/2.  Joe proceeds to deliver 59%, 67%, and 75% of plan in 2Q20, 3Q20, and 4Q20.

Our question:  is Joe doing a good job?

At first blush, he appears more zero than hero:  59%, 67%, and 75% of plan is no way to go through life.

But to really answer this question we cannot reasonably evaluate Joe relative to the original operating plan.  He was handed a demoralized organization that was about 60% of its target size on 4/2.  In order to evaluate Joe’s performance, we need to compare it not to the original operating plan, but to the capacity model re-run with the actual rep hiring and aging at the start of each quarter.

When you do this you see, for example, that while Joe is consistently underperforming plan, he is also consistently outperforming the capacity model, delivering 101%, 103%, and 109% of model capacity in 2Q through 4Q.

If you looked at Joe the way most companies look at key metrics, he’d be fired.  But if you read this chart to the bottom, you finally get the complete picture.  Joe is running a significantly smaller sales organization at above-model efficiency.  While Joe got handed an organization that was 8 heads under plan, he more than doubled it to 26 heads and consistently outperformed the capacity model.  Joe is a hero, not a zero.  But you’d never know it if you didn’t look at his performance relative to the actual sales capacity he was managing.

Second, the other clever way to use a capacity model is as a forecasting tool.  I have found that a good capacity model, re-run at the start of the quarter with then-current sales hiring/aging, is a very valuable predictive tool, often predicting the quarterly sales result better than my VP of Sales.  Along with rep-level, manager-level, and VP-level forecasts and stage-weighted and forecast-category-weighted expected pipeline values, you can use the re-run sales capacity model as a great tool to triangulate on the sales forecast.
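For illustration, both the evaluation and the triangulation reduce to comparing actuals against two baselines: the original plan and the re-run capacity model.  The dollar figures in this sketch are hypothetical, not the numbers from the story above.

    # hypothetical quarterly figures: original plan, re-run capacity model, actual bookings
    quarters = {
        "2Q": {"plan": 3_400_000, "rerun_capacity": 1_980_000, "actual": 2_000_000},
        "3Q": {"plan": 3_700_000, "rerun_capacity": 2_400_000, "actual": 2_470_000},
    }

    for q, f in quarters.items():
        print(f"{q}: {f['actual'] / f['plan']:.0%} of plan, "
              f"{f['actual'] / f['rerun_capacity']:.0%} of re-run capacity")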

You can download the four-tab spreadsheet model I built for this post, here.

# # #

Notes

[1] Starting with quota starts you in the wrong mental place — what you want people to do, as opposed to productivity (what they have historically done). Additionally, there are clear instances where quotas get assigned against which we have little to no actual productivity assumption (e.g., a second-quarter rep typically has zero productivity but will nevertheless be assigned some partial quota). Sales most certainly has a quota-allocation problem, but that should be a separate, second exercise after building a corporate sales productivity model on which to base the operating plan.

[2] A typical such vector might be (0%, 25%, 50%, 100%) or (0%, 33%, 66%, 100%), reflecting the percentage of steady-state productivity reps are expected to achieve in their first, second, third, and fourth quarters of employment.

[3] Without such a row, the plan is either de-linked from the model or the plan is the pure output of the model without any human judgment attached. This row is typically used to re-balance the annual number across quarters and/or to add or subtract cushion relative to the model.

[4] Back in the day at Salesforce, we called pipeline generation sources “horsemen,” I think (in a rather bad joke) because there were four of them (marketing, alliances, sales, and SDRs/outbound). That term was later dropped, probably both because of the apocalypse reference and its lack of gender neutrality. However, I’ve never known what to call them since, other than the rather sterile “pipeline sources.”

[5] Many salesops people do it the reverse way — I think because they see the problem as allocating quota whereas I see the problem as building an achievable operating plan. Starting with quota poses several problems, from the semantic (lopping 20% off quota is not 20% over-assignment; it’s actually 25%, because over-assignment is relative to the smaller number) to the mathematical (first-quarter reps get assigned quota but we can realistically expect a 0% yield) to the procedural (quotas should be custom-tailored based on the known state of the territory, and this cannot really be built into a productivity model).

[6] One advantage of having those percentages here is that they are placed front-and-center in the company’s bookings model, which will force discussion and agreement. Otherwise, if not documented centrally, they will end up in different models across the organization with no real idea of whether they foot to the bookings model or even sum to 100% across sources.

The New Gartner 2018 Magic Quadrants for Cloud Financial Planning & Analysis and Cloud Financial Close Solutions

If all you’re looking for is the free download link, let’s cut to the chase:  here’s where you can download the new 2018 Gartner Magic Quadrant for Cloud Financial Planning and Analysis Solutions and the new 2018 Gartner Magic Quadrant for Cloud Financial Close Solutions.  These MQs are written jointly by John Van Decker and Chris Iervolino (with Chris as primary author on the first and John as primary author on the second).  Both are deep experts in the category with decades of experience.

Overall, I can say that at Host Analytics, we are honored to be a leader in both MQs again this year.  We are also honored to be the only cloud pure-play vendor to be a leader in both MQs, and we believe that speaks volumes about the depth and breadth of EPM functionality that we bring to the cloud.

So, if all you wanted was the links, thanks for visiting.  If, however, you’re looking for some Kellblog editorial on these MQs, then please continue on.

Whither CPM?
The first thing the astute reader will notice is that the category name, which Gartner formerly referred to as corporate performance management (CPM), and which others often referred to as enterprise performance management (EPM), is entirely missing from these MQs.  That’s no accident.  Gartner decided last fall to move away from CPM as an uber-category descriptor in favor of referring more directly to the two related, but pretty different, categories beneath it.  Thus, in the future you won’t be hearing “CPM” from Gartner anymore, though I know that some vendors — including Host Analytics — will continue to use EPM/CPM until we can find a more suitable capstone name for the category.

Personally, I’m in favor of this move for two simple reasons.

  • CPM was a forced, analyst-driven category in the first place, dating back to Howard Dresner’s predictions that financial planning/budgeting would converge with business intelligence.  While Howard published the research that launched a thousand ships in terms of BI and financial planning industry consolidation (e.g., Cognos/Adaytum, BusinessObjects/SRC/Cartesis, Hyperion/Brio), the actual software itself never converged.  CPM never became like CRM — a true convergence of sales force automation (SFA) and contact center.  In each case, the two companies could be put under one roof, but they sold fundamentally different value propositions to very different buyers and thus never came together as one.
  • In accordance with the prior point, few customers actually refer to the category as CPM/EPM.  They say things much more akin to “financial planning” and “consolidation and close management.”  Since I like referring to things in the words that customers use, I am again in favor of this change.

It does, however, create one problem — Gartner has basically punted on trying to name a capstone category to include vendors who sell both financial planning and financial consolidation software.  Since we at Host Analytics think that’s important, and since we believe there are key advantages to buying both from the same vendor, we’d prefer if there were a single, standard capstone term.  If it were easy, I suppose a name would have already emerged [1].

How Not To Use Magic Quadrants
While they are Gartner’s flagship deliverable, magic quadrants (MQs) can generate a lot of confusion.  MQs don’t tell you which vendor is “best” because there is no universal best in any category.  MQs don’t tell you which vendor to pick to solve your problem because different solutions are designed around meeting different requirements.  MQs don’t predict the future of vendors — last year’s movement vectors rarely predict this year’s positions.  And the folks I know at Gartner generally strongly dislike vector analysis of MQs because they view vendor placement as relative to each other at any moment in time [2].

Many things that customers seem to want from Gartner MQs are actually delivered by Gartner’s Critical Capabilities reports, which get less attention because they don’t produce a simple, dramatic 2×2 output, but which are far better suited for determining the suitability of different products for different use-cases.

How To Use A Gartner Magic Quadrant?
In my experience after 25+ years in enterprise software, I would use MQs for their overall purpose:  to group vendors into four different buckets (leaders, challengers, visionaries, and niche players).  That’s it.  If you want to know who the leaders are in a category, look top right.  If you want to know who the visionaries are, look bottom right.  If you want to know which big companies are putting resources into the category but thus far are lacking strategy/vision, look top left at the challengers quadrant.

But should you, in my humble opinion, get particularly excited about millimeter differences on either axis?  No.  Why?  Because what drives those deltas may have little, no, or in fact negative correlation to your situation.  In my experience, the analysts pay a lot of attention to the quadrant in which vendors end up [2], so quadrant placement, I’d say, is quite closely watched.  Dot placement, while closely watched by vendors, doesn’t change much in the real world, save for dramatic differences.  After all, they are called the magic quadrants, not the magic dots.

All that said, let me wind up with some observations on the MQs themselves.

Quick Thoughts on the 2018 Cloud FP&A Solutions MQ
While the MQs were published at the end of July 2018, they were based on information about the vendors gathered in, and largely about, 2017.  While there is always some phase lag between the end of data collection and the publication date, this year it was unusually long — meaning that a lot may have changed in the market in the first half of 2018 that customers should be aware of.  For that reason, if you’re a Gartner customer and using either the MQs or the critical capabilities reports that accompany them, you should probably set up an appointment to call the analysts to ensure you’re working off the latest data.

Here are some of my quick thoughts on the Cloud FP&A Solutions magic quadrant:

  • Gartner says the FP&A market is accelerating its shift from on-premises to cloud.  I agree.
  • Gartner allows three types of “cloud” vendors into this (and the other) MQ:  cloud-only vendors, on-premises vendors with new built-for-the-cloud solutions, and on-premises vendors who allow their software to be run hosted on a third-party cloud platform.  While I understand their need to be inclusive, I think this is pretty broad — the total cost of ownership, cash flows, and incentives are quite different between pure cloud vendors and hosted on-premises solutions.  Caveat emptor.
  • To qualify for the MQ, vendors must support at least two of the following four components of FP&A:  planning/budgeting, integrated financial planning, forecasting/modeling, and management/performance reporting.  Thus the MQ is not terribly homogeneous in terms of vendor profile and use-cases.
  • For the second year in a row, (1) Host is a leader in this MQ and (2) is the only cloud pure-play vendor who is a leader in both.  We think this says a lot about the breadth and depth of our product line.
  • Customer references for Host cited ease of use, price, and solution flexibility as top three purchasing criteria.  We think this very much represents our philosophy of complex EPM made easy.

Quick Thoughts on the 2018 Cloud Financial Close Solutions MQ
Here are some of my quick thoughts on the Cloud Financial Close Solutions magic quadrant:

  • Gartner says that in the past two years the financial close market has shifted from mature on-premises to cloud solutions.  I agree.
  • While Gartner again allowed all three types of cloud vendors in this MQ, I believe some of the vendors in this MQ do just-enough, just-cloud-enough business to clear the bar, but are fundamentally still offering on-premises wolves in cloud sheep’s clothing.  Customers should look to things like total cost of ownership, upgrade frequency, and upgrade phase lags in order to sort out real vs. fake cloud offerings.
  • This MQ is more of a mixed bag than the FP&A MQ or, for that matter, most Gartner MQs.  In general, MQs plot substitutes against each other — each dot on an MQ usually represents a vendor who does basically the same thing.  This is not true for the Cloud Financial Close (CFC) MQ — e.g., Workiva is a disclosure management vendor (and a partner of Host Analytics), but they do not offer financial consolidation software, as do, say, Host Analytics or Oracle.
  • Because the scope of this MQ is broad and both general and specialist vendors are included, customers should either call Gartner for help (if they are Gartner customers) or just be mindful of the mixing and segmentation — e.g., Floqast (in SMB and MM) and Blackline (in enterprise) both do account reconciliation, but they are naturally segmented by customer size (and both are partners of Host, which does financial consolidation but not account reconciliation).
  • Net:  while I love that the analysts are willing to put different types of close-related, office-of-the-CFO-oriented vendors on the same MQ, it does require more than the usual amount of mindfulness in interpreting it.

Conclusion
Finally, if you want to analyze the source documents yourself, you can use the following link to download both the 2018 Gartner Magic Quadrant for Cloud Financial Planning and Analysis Solutions and the 2018 Gartner Magic Quadrant for Cloud Financial Close Solutions.

# # #

Notes

[1] For Gartner, this is likely more than a semantic issue.  They are pretty strong believers in a “post-modern” ERP vision which eschews the idea of a monolithic application that includes all services, in favor of using and integrating a series of cloud-based services.  Since we are also huge believers in integrating best-of-breed cloud services, it’s hard for us to take too much issue with that.  So we’ll simply have to clearly articulate the advantages of using Host Planning and Host Consolidations together — from our viewpoint, two best-of-breed cloud services that happen to come from a single vendor.

[2] And not something done against absolute scales where you can track movement over time.  See, for example, the two explicit disclaimers in the FP&A MQ:

[Screenshot: the two disclaimers from the FP&A MQ]

[3] I’m also a believer in a slightly more esoteric theory which says:  given that the Gartner dot-placement algorithm seems to try very hard to lay out dots in a 45-degree-tilted, football-shaped pattern, it is always interesting to examine who, how, and why someone ends up outside that football.

The Use of Ramped Rep Equivalents (RREs) in Sales Analytics and Modeling

How many times have you heard this conversation?

VC:  how many sales reps do you have? 

CEO:  Uh, 25.  But not really.

VC:  What do you mean, not really?

CEO:  Well, some of them are new and not fully productive yet.

VC:  How long does it take for them to fully ramp?

CEO:  Well, to full productivity, four quarters.

VC:  So how many fully-ramped reps do you have?

CEO:  9 fully ramped, but we have 15 in various stages of ramping, and 1 who’s brand new …

There’s a better way to have this conversation, to perform your sales analytics, and to build your bookings capacity waterfall model.  That better way involves creating a new metric called ramped rep equivalents (RREs). Let’s build up to talking about RREs by first looking at a classical sales bookings waterfall model.

[Figure: classical sales bookings capacity waterfall model]

I love building these models and they’re a lot of fun to play with, doing what-if analysis, varying the drivers (which are in the orange cells) and looking at the results.  This is a simplified version of what most sales VPs look at when trying to decide next year’s hiring, next year’s quotas [1], and next year’s targets.  This model assumes one type of salesrep [2]; a distribution of existing reps by tenure as 1 first-quarter, 3 second-quarter, 5 third-quarter, 7 fourth-quarter, and 9 steady-state reps; a hiring pattern of 1, 2, 4, 6 reps across the four quarters of 2019; and a salesrep productivity ramp whereby reps are expected to sell 0% of steady-state productivity in their first quarter with the company, and then 25%, 50%, 75% in quarters 2 through 4 and then become fully productive at quarter 5, selling at the steady-state productivity level of $1,000K in new ARR per year [3].
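Here is a rough sketch of that waterfall arithmetic in code, using the assumptions just described.  It treats the single 1Q19 hire as already counted in the first tenure bucket (an interpretation consistent with the 25-rep and 17.5-RRE figures discussed below), and the full-year capacity comes out around $22M.

    steady_state_arr = 1_000_000            # annual new ARR per fully ramped rep
    ramp = [0.0, 0.25, 0.50, 0.75, 1.00]    # productivity by tenure quarter (5th+ = steady state)
    reps = [1, 3, 5, 7, 9]                  # 1Q19 tenure mix (the 1Q19 hire is the first bucket)
    hires = [2, 4, 6]                       # hires in 2Q19, 3Q19, and 4Q19

    total = 0
    for q in range(4):
        if q > 0:
            # age everyone one bucket; the last bucket accumulates steady-state reps
            reps = [hires[q - 1]] + reps[:-2] + [reps[-2] + reps[-1]]
        rre = sum(n * p for n, p in zip(reps, ramp))     # ramped rep equivalents
        capacity = rre * steady_state_arr / 4            # quarterly bookings capacity
        total += capacity
        print(f"{q + 1}Q19: reps={sum(reps)}  RRE={rre:.2f}  capacity=${capacity:,.0f}")

    print(f"Full-year capacity: ${total:,.0f}")          # roughly $22M for these assumptions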

Using this model, a typical sales VP — provided they believed the productivity assumptions [4] and that they could realistically set quotas about 20% above the target productivity — would typically sign up for around a $22M new ARR bookings target for the coming year.

While these models work just fine, I have always felt like the second block (bookings capacity by tenure), while needed for intermediate calculations, is not terribly meaningful by itself.  The lost opportunity here is that we’re not creating any concept to more easily think about, discuss, and analyze the productivity we get from reps as they ramp.

Enter the Ramped Rep Equivalent (RRE)
Rather than thinking about the partial productivity of whole reps, we can think about partial reps against whole productivity — and build the model that way, instead.  This has the by-product of creating a very useful number, the RRE.  Then, to get bookings capacity just multiply the number of RREs times the steady-state productivity.  Let’s see an example below:

[Figure: the same model rebuilt around ramped rep equivalents (RREs)]

This provides a far more intuitive way of thinking about salesrep ramping.  In 1Q19, the company has 25 reps, only 9 of whom are fully ramped, and the rest combine to give the productivity of 8.5 additional reps, resulting in an RRE total of 17.5.

“We have 25 reps on board, but thanks to ramping, we only have the capacity equivalent to 17.5 fully-ramped reps at this time.”

This also spits out three interesting metrics:

  • RRE/QCR ratio:  an effective vs. nominal capacity ratio (QCR = quota-carrying rep) — in 1Q19, nominally we have 25 reps, but we have only the effective capacity of 17.5 reps.  17.5/25 = 70%.
  • Capacity lost to ramping (dollars):  to make the prior figure more visceral, think of the sales capacity lost due to ramping (i.e., the delta between your nominal and effective capacity) expressed in dollars.  In this case, in 1Q19 we’re losing $1,875K of our bookings capacity due to ramping.
  • Capacity lost to ramping (percent):  the same concept as the prior metric, simply expressed in percentage terms.  In this case, in 1Q19 we’re losing 30% of our bookings capacity due to ramping.  (A short sketch computing all three metrics follows this list.)
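As noted above, here is a short sketch computing the three metrics from the 1Q19 example; the inputs are the figures already given in this post.

    steady_state_arr = 1_000_000
    ramp = [0.0, 0.25, 0.50, 0.75, 1.00]
    reps = [1, 3, 5, 7, 9]                         # 1Q19 tenure mix from the example

    qcr = sum(reps)                                # nominal quota-carrying reps: 25
    rre = sum(n * p for n, p in zip(reps, ramp))   # ramped rep equivalents: 17.5

    rre_qcr_ratio = rre / qcr                              # 70%
    lost_dollars = (qcr - rre) * steady_state_arr / 4      # $1,875K of quarterly capacity
    lost_percent = 1 - rre_qcr_ratio                       # 30%

    print(f"RRE/QCR = {rre_qcr_ratio:.0%}, lost to ramping = ${lost_dollars:,.0f} ({lost_percent:.0%})")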

Impacts and Cautions
If you want to move to an RRE mindset, here are a few tips:

  • RREs are useful for analytics, like sales productivity.  When looking at actuals you can measure sales productivity not just by starting-period or average-period reps, but by RRE.  It will provide a much more meaningful metric.
  • You can use RREs to measure sales effectiveness.  At the start of each quarter recalculate your theoretical capacity based on your actual staffing.  Then divide your actuals by that start-of-quarter theoretical capacity and you will get a measure of how well you are performing, i.e., the utilization of the quarterly starting capacity in your sales force.  When you’re missing sales targets it is typically for one of two reasons:  you don’t have enough capacity or you’re not making use of the capacity you have.  This helps you determine which.
  • Beware that if you have multiple types of reps (e.g., corporate and field), you may be tempted to blend them in the same way you do whole reps today (i.e., when asked “how many reps do you have?” most people say “15” and not “9 enterprise plus 6 corporate”).  You have the same problem with RREs.  While it’s OK to present a blended RRE figure, just remember that it’s blended, and if you want to calculate capacity from it, you should calculate RREs by rep type and then get capacity by multiplying the RRE for each rep type by their respective steady-state productivity.  (See the sketch after this list.)
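Here is a small sketch of that per-type calculation, along with the start-of-quarter capacity-utilization check from the second bullet; the rep mixes, ramps, and productivity figures are hypothetical.

    # hypothetical tenure mix, ramp, and steady-state productivity per rep type
    rep_types = {
        "field":     {"reps": [2, 1, 2, 1, 6], "ramp": [0.0, 0.25, 0.50, 0.75, 1.0], "steady_state": 1_300_000},
        "corporate": {"reps": [1, 1, 2, 1, 4], "ramp": [0.0, 0.33, 0.66, 1.00, 1.0], "steady_state": 600_000},
    }

    total_capacity = 0
    for name, t in rep_types.items():
        rre = sum(n * p for n, p in zip(t["reps"], t["ramp"]))   # RREs for this rep type
        capacity = rre * t["steady_state"] / 4                   # quarterly capacity for this type
        total_capacity += capacity
        print(f"{name}: RRE={rre:.2f}, quarterly capacity=${capacity:,.0f}")

    actual_bookings = 2_500_000                                  # hypothetical quarterly actuals
    print(f"Utilization of start-of-quarter capacity: {actual_bookings / total_capacity:.0%}")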

I recommend moving to an RRE mindset for modeling and analyzing sales capacity.  If you want to play with the spreadsheet I made for this post, you can find it here.

Thanks to my friend Paul Albright for being the first person to introduce me to this idea.

# # #

Notes
[1] This is actually a productivity model, based on actual sales productivity — how much people have historically sold (and ergo should require little/no cushion before sales signs up for it).  Most people I know work with a productivity model and then uplift the desired productivity by 15 to 25% to set quotas.

[2] Most companies have two or three types (e.g., corporate vs. field), so you typically need to build a waterfall for each type of rep.

[3] To build this model, you also need to know the aging of your existing salesreps — i.e., how many second-, third-, fourth-, and steady-state-quarter reps you have at the start of the year.

[4] The glaring omission from this model is sales turnover.  In order to keep it simple, it’s not factored in here. While some people try to factor in sales turnover by using reduced sales productivity figures, I greatly prefer to model realistic sales productivity and explicitly model sales turnover in creating a sales bookings capacity model.

[5] This is one reason it’s so expensive to build an enterprise software sales force.  For several quarters you often get 100% of the cost and 50% of the sales capacity.

[6] Which should be a weighted-average productivity by type of rep, weighted by the number of reps of each type.