I’ve seen numerous startups try numerous ways to calculate their sales capacity. Most are too back-of-the-envelope and too top-down for my taste. Such models are, in my humble opinion, dangerous because the combination of relatively small errors in ramping, sales productivity, and sales turnover (with associated ramp resets) can result in a relatively big mistake in setting an operating plan. Building off quota, instead of productivity, is another mistake for many reasons.
Sales productivity, measured in ARR/rep at steady state (i.e., after a rep is fully ramped). This is not quota (what you ask them to sell) but productivity (what you actually expect them to sell), and it should be based on historical reality, with perhaps incremental, well-justified annual improvement.
Rep hiring plans, measured by new hires per quarter, which should be realistic in terms of your ability to recruit and close new reps.
Rep ramping, typically a vector that gives the percentage of steady-state productivity expected in the rep’s first, second, third, and fourth quarters. This should be based on historical data as well.
Rep turnover, the annual rate at which sales reps leave the company for either voluntary or involuntary reasons.
Judgment, the built-in ability to let the CEO and/or sales VP manually adjust the output, along with analytical support for doing so.
Quota over-assignment, the extent to which you assign more quota at the “street” level (i.e., the sum of the reps) than the operating plan targets.
For extra credit and to help maintain organizational alignment: while you’re making a bookings model, a little extra math lets you set pipeline goals for the company’s core pipeline generation sources, so I recommend doing so.
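Taken together, these drivers can be sketched as a single set of model inputs. Every value below is an illustrative assumption, not a recommendation:

```python
# The capacity-model drivers described above, gathered in one place.
# All numbers are hypothetical placeholders for illustration.
drivers = {
    "steady_state_productivity": 1000.0,  # $K new ARR per ramped rep per year
    "hires_per_quarter": [2, 3, 3, 4],    # realistic recruiting plan
    "ramp": [0.00, 0.25, 0.50, 0.75],     # % of steady state, tenure Q1-Q4
    "annual_turnover": 0.15,              # voluntary + involuntary departures
    "judgment": [0.0, 0.0, 0.0, 0.0],     # $K manual adjustment per quarter
    "quota_over_assignment": 0.20,        # street-level uplift over plan
}
```

Keeping the drivers in one place makes it easy to sanity-check each one against historical data before running the waterfall.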
If your company is large or complex you will probably need to create an overall bookings model that aggregates models for the various pieces of your business. For example, inside sales reps tend to have lower quotas and faster ramps than their external counterparts, so you’d want to make one model for inside sales, another for field sales, and then sum them together for the company model.
In this post, I’ll do two things: I’ll walk you through what I view as a simple-yet-comprehensive productivity model and then I’ll show you two important and arguably clever ways in which to use it.
Walking Through the Model
Let’s take a quick walk through the model. Cells in Excel “input” format (orange and blue) are either data or drivers that need to be entered; uncolored cells are either working calculations or outputs of the model.
Seed the model for 1Q20 (let’s pretend we’re making the model in December 2019) by entering the sales reps we expect to start the year with, by tenure (column D). The “first/hired quarter” row represents our hiring plans for the year. The rest of this block is a waterfall that ages reps downward as we move across quarters. Next to that block is the ramp assumption, which expresses, as a percentage of steady-state productivity, how much we expect a rep to sell as their tenure with the company increases. I’ve modeled a pretty slow ramp that takes five quarters to get to 100% productivity.
To the right of that we have more assumptions:
Annual turnover, the annual rate at which sales reps leave the company for any reason. This drives the attriting reps in row 12, which silently assumes that every departing rep was at steady state, a tacit and fairly conservative assumption in the model.
Steady-state productivity, how much we expect a rep to actually sell per year once they are fully ramped.
Quota over-assignment. I believe it’s best to start with a productivity model and uplift it to generate quotas.
The next block down calculates ramped rep equivalents (RREs), a very handy concept that far too few organizations use to convert the ramp-state to a single number equivalent to the number of fully ramped reps. The steady-state row shows the number of fully ramped reps, a row that board members and investors will frequently ask about, particularly if you’re not proactively showing them RREs.
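The RRE conversion is simple arithmetic: weight each tenure bucket by its ramp percentage and sum. A minimal sketch, using an illustrative rep distribution and a slow ramp that reaches 100% in quarter five:

```python
# Ramped rep equivalents (RREs): partial reps weighted by ramp state.
# Rep counts are illustrative; the ramp matches the slow five-quarter
# ramp described above.
reps_by_tenure = [1, 3, 5, 7]       # reps in tenure quarters 1-4
steady_state_reps = 9               # fully ramped reps
ramp = [0.00, 0.25, 0.50, 0.75]     # share of steady-state productivity

rres = steady_state_reps + sum(n * r for n, r in zip(reps_by_tenure, ramp))
print(rres)  # 17.5 RREs from 25 nominal reps
```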
After that we calculate “productivity capacity,” which is a mouthful, but I want to disambiguate it from quota capacity, so it’s worth the extra syllables. After that, I add a critical row called judgment, which allows the Sales VP or CEO to play with the model so that they’re not signing up for targets that are straight model output, but ones also informed by their knowledge of the state of the deals and the pipeline. Judgment can be negative (reducing targets), positive (increasing targets), or zero-sum, where you keep the same annual target but allocate it differently across quarters.
The section in italics, linearity and growth analysis, is there to help the Sales VP analyze the results of using the judgment row. After changing targets, he or she can quickly see how the target spreads across quarters and halves, and how any modifications affect sequential and annual growth rates. I have spent many hours tweaking an operating plan using this part of the sheet before presenting it to the board.
The next row shows quota capacity, which uplifts productivity capacity by the over-assignment percentage assumption higher up in the model. This represents the minimum quota the Sales VP should assign at street level to have the assumed level of over-assignment. Ideally this figure dovetails into a quota-assignment model.
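As a sketch of that uplift (all numbers hypothetical):

```python
# Quota capacity = productivity capacity uplifted by over-assignment.
productivity_capacity = 5000.0   # $K, straight from the capacity model
over_assignment = 0.20           # desired street-level over-assignment

quota_capacity = productivity_capacity * (1 + over_assignment)
print(quota_capacity)  # 6000.0 ($K) -- minimum street-level quota to assign

# Note the asymmetry: 20% over-assignment means quota / 1.2 = productivity,
# not quota * 0.8 (lopping 20% off quota implies 25% over-assignment).
```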
Finally, while we’re at it, we’re only a few clicks away from generating the day-one pipeline coverage/contribution goals for our major pipeline sources: marketing, alliances, and outbound SDRs. In this model, I start by assuming that sales or customer success managers (CSMs) generate the pipeline for upsell (i.e., sales to existing customers). Therefore, when we’re looking at coverage, we really mean coverage of the newbiz ARR target (i.e., new ARR from new customers). So we first reduce the ARR goal by a percentage, then multiply it by the desired pipeline coverage ratio, and then allocate the result across the pipeline sources by presumably agreed-to percentages.
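That arithmetic can be sketched as follows; the upsell share, coverage ratio, and source mix below are all hypothetical:

```python
# Day-one pipeline goals by source, per the steps described above.
# Every figure here is an illustrative assumption.
new_arr_target = 4000.0   # quarterly new ARR target ($K)
upsell_pct = 0.30         # share of new ARR expected from existing customers
coverage_ratio = 3.0      # desired day-one pipeline coverage of newbiz ARR
source_mix = {"marketing": 0.5, "alliances": 0.2, "outbound_sdr": 0.3}

newbiz_target = new_arr_target * (1 - upsell_pct)   # 2800.0
pipeline_goal = newbiz_target * coverage_ratio      # 8400.0
goals = {s: pipeline_goal * pct for s, pct in source_mix.items()}
print(goals)  # {'marketing': 4200.0, 'alliances': 1680.0, 'outbound_sdr': 2520.0}
```

Because the source mix must sum to 100%, documenting it here forces the agreement the footnote describes.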
Building the next-level models to support pipeline generation goals is beyond the scope of this post, but I have a few relevant posts on the subject including this three-part series, here, here, and here.
Two Clever Ways to Use the Model
The sad reality is that this kind of model gets a lot of attention at the end of a fiscal year (while you’re making the plan for next year) and then typically gets thrown in the closet and ignored until it’s planning season again.
That’s too bad because this model can be used both as an evaluation tool and a predictive tool throughout the year.
Let’s show that via an all-too-common example. Let’s say we start 2020 with a new VP of Sales, hired in November 2019, with the hiring and performance targets of our original model (above) but with judgment set to zero, so the plan equals the capacity model.
Our “world-class” VP immediately proceeds to drive out a large number of salespeople. While he hires 3 “all-star” reps during 1Q20, all 5 reps hired by his predecessor in the past 6 months leave the company, along with, worse yet, two fully ramped reps. Thus, instead of ending the quarter with 20 reps, we end with 12. On top of that, the VP delivers new ARR of $2,000K against a target of $3,125K, or 64% of plan. Realizing she has a disaster on her hands, the CEO “fails fast” and fires the newly hired VP of Sales after 5 months. She then appoints the RVP of Central, Joe, to acting VP of Sales on 4/2. Joe proceeds to deliver 59%, 67%, and 75% of plan in 2Q20, 3Q20, and 4Q20.
Our question: is Joe doing a good job?
At first blush, he appears more zero than hero: 59%, 67%, and 75% of plan is no way to go through life.
But to really answer this question we cannot reasonably evaluate Joe relative to the original operating plan. He was handed a demoralized organization that was about 60% of its target size on 4/2. In order to evaluate Joe’s performance, we need to compare it not to the original operating plan, but to the capacity model re-run with the actual rep hiring and aging at the start of each quarter.
When you do this you see, for example, that while Joe is consistently underperforming plan, he is also consistently outperforming the capacity model, delivering 101%, 103%, and 109% of model capacity in 2Q20 through 4Q20.
If you looked at Joe the way most companies look at key metrics, he’d be fired. But if you read this chart to the bottom you finally get the complete picture. Joe is running a significantly smaller sales organization at above-model efficiency. While Joe was handed an organization that was 8 heads under plan, he more than doubled it to 26 heads and consistently outperformed the capacity model. Joe is a hero, not a zero. But you’d never know it if you didn’t look at his performance relative to the actual sales capacity he was managing.
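To make the double comparison concrete, here is a sketch; the plan, re-run capacity, and actuals below are invented to reproduce the percentages in the example, not taken from the spreadsheet:

```python
# Evaluate a sales leader two ways: against the static operating plan and
# against the capacity model re-run with actual start-of-quarter staffing.
# All dollar figures ($K) are hypothetical, back-solved to match the
# 59/67/75% of plan and 101/103/109% of capacity in the example.
plan     = {"2Q20": 3600.0, "3Q20": 4100.0, "4Q20": 4700.0}
capacity = {"2Q20": 2100.0, "3Q20": 2670.0, "4Q20": 3230.0}
actual   = {"2Q20": 2124.0, "3Q20": 2747.0, "4Q20": 3525.0}

for q in plan:
    print(q, f"{actual[q] / plan[q]:.0%} of plan, "
             f"{actual[q] / capacity[q]:.0%} of re-run capacity")
```

The same quarter reads as a miss against plan and a beat against re-run capacity, which is exactly the hero-vs-zero distinction.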
The second clever way to use a capacity model is as a forecasting tool. I have found that a good capacity model, re-run at the start of the quarter with then-current sales hiring/aging, is a very valuable predictive tool, often predicting the quarterly sales result better than my VP of Sales. Along with rep-level, manager-level, and VP-level forecasts and stage-weighted and forecast-category-weighted expected pipeline values, the re-run sales capacity model is a great tool for triangulating on the sales forecast.
You can download the four-tab spreadsheet model I built for this post, here.
# # #
 Starting with quota starts you in the wrong mental place — what you want people to do, as opposed to productivity (what they have historically done). Additionally, there are clear instances where quotas get assigned against which we have little to no actual productivity assumption (e.g., a second-quarter rep typically has zero productivity but will nevertheless be assigned some partial quota). Sales most certainly has a quota-allocation problem, but that should be a separate, second exercise after building a corporate sales productivity model on which to base the operating plan.
 A typical such vector might be (0%, 25%, 50%, 100%) or (0%, 33%, 66%, 100%), reflecting the percentage of steady-state productivity reps are expected to achieve in their first, second, third, and fourth quarters of employment.
 Without such a row, the plan is either de-linked from the model or the plan is the pure output of the model without any human judgement attached. This row is typically used to re-balance the annual number across quarters and/or to either add or subtract cushion relative to the model.
 Back in the day at Salesforce, we called pipeline generation sources “horsemen” I think (in a rather bad joke) because there were four of them (marketing, alliances, sales, and SDRs/outbound). That term was later dropped probably both because of the apocalypse reference and its non gender-neutrality. However, I’ve never known what to call them since, other than the rather sterile, “pipeline sources.”
 Many salesops people do it the reverse way — I think because they see the problem as allocating quota, whereas I see the problem as building an achievable operating plan. Starting with quota poses several problems, from the semantic (lopping 20% off quota is not 20% over-assignment, it’s actually 25%, because over-assignment is relative to the smaller number) to the mathematical (first-quarter reps get assigned quota but we can realistically expect a 0% yield) to the procedural (quotas should be custom-tailored based on the known state of the territory, and this cannot really be built into a productivity model).
 One advantage of having those percentages here is that they are placed front-and-center in the company’s bookings model, which will force discussion and agreement. Otherwise, if not documented centrally, they will end up in different models across the organization, with no real idea of whether they foot to the bookings model or even sum to 100% across sources.
Overall, I can say that at Host Analytics, we are honored to be a leader in both MQs again this year. We are also honored to be the only cloud pure-play vendor to be a leader in both MQs, and we believe that speaks volumes about the depth and breadth of EPM functionality that we bring to the cloud.
So, if all you wanted was the links, thanks for visiting. If, however, you’re looking for some Kellblog editorial on these MQs, then please continue on.
The first thing the astute reader will notice is that the category name, which Gartner formerly referred to as corporate performance management (CPM), and which others often referred to as enterprise performance management (EPM), is entirely missing from these MQs. That’s no accident. Gartner decided last fall to move away from CPM as an uber-category descriptor in favor of referring more directly to the two related, but pretty different, categories beneath it. Thus, in the future you won’t be hearing “CPM” from Gartner anymore, though I know that some vendors — including Host Analytics — will continue to use EPM/CPM until we can find a more suitable capstone name for the category.
Personally, I’m in favor of this move for two simple reasons.
CPM was a forced, analyst-driven category in the first place, dating back to Howard Dresner’s predictions that financial planning/budgeting would converge with business intelligence. While Howard published the research that launched a thousand ships in terms of BI and financial planning industry consolidation (e.g., Cognos/Adaytum, BusinessObjects/SRC/Cartesis, Hyperion/Brio), the actual software itself never converged. CPM never became like CRM — a true convergence of sales force automation (SFA) and contact center. In each case, the two companies could be put under one roof, but they sold fundamentally different value propositions to very different buyers and thus never came together as one.
In accordance with the prior point, few customers actually refer to the category by CPM/EPM. They say things much more akin to “financial planning” and “consolidation and close management.” Since I like referring to things in the words that customers use, I am again in favor of this change.
It does, however, create one problem — Gartner has basically punted on trying to name a capstone category to include vendors who sell both financial planning and financial consolidation software. Since we at Host Analytics think that’s important, and since we believe there are key advantages to buying both from the same vendor, we’d prefer a single, standard capstone term. If it were easy, I suppose a name would have already emerged.
How Not To Use Magic Quadrants
While they are Gartner’s flagship deliverable, magic quadrants (MQs) can generate a lot of confusion. MQs don’t tell you which vendor is “best” because there is no universal best in any category. MQs don’t tell you which vendor to pick to solve your problem because different solutions are designed around meeting different requirements. MQs don’t predict the future of vendors — last year’s movement vectors rarely predict this year’s positions. And the folks I know at Gartner generally strongly dislike vector analysis of MQs because they view vendor placement as relative to each other at any moment in time.
Many things that customers seem to want from Gartner MQs are actually delivered by Gartner’s Critical Capabilities reports, which get less attention because they don’t produce a simple, dramatic 2×2 output, but which are far better suited for determining the suitability of different products for different use cases.
How To Use A Gartner Magic Quadrant?
In my experience after 25+ years in enterprise software, I would use MQs for their overall purpose: to group vendors into four buckets: leaders, challengers, visionaries, and niche players. That’s it. If you want to know who the leaders are in a category, look top right. If you want to know who the visionaries are, look bottom right. If you want to know which big companies are putting resources into the category but thus far lack strategy/vision, look top left at the challengers quadrant.
But should you, in my humble opinion, get particularly excited about millimeter differences on either axis? No. Why? Because what drives those deltas may have little, no, or even negative correlation to your situation. In my experience, the analysts pay a lot of attention to which quadrant a vendor ends up in, so quadrant placement is quite closely watched. Dot placement, while closely watched by vendors, doesn’t change much in the real world save for dramatic differences. After all, they are called the magic quadrants, not the magic dots.
All that said, let me wind up with some observations on the MQs themselves.
Quick Thoughts on the 2018 Cloud FP&A Solutions MQ
While the MQs were published at the end of July 2018, they were based on information about the vendors gathered in, and largely about, 2017. While there is always some phase lag between the end of data collection and the publication date, this year it was unusually long — meaning a lot may have changed in the market in the first half of 2018 that customers should be aware of. For that reason, if you’re a Gartner customer using either the MQs or the critical capabilities reports that accompany them, you should probably set up an appointment to call the analysts to ensure you’re working off the latest data.
Here are some of my quick thoughts on the Cloud FP&A Solutions magic quadrant:
Gartner says the FP&A market is accelerating its shift from on-premises to cloud. I agree.
Gartner allows three types of “cloud” vendors into this (and the other) MQ: cloud-only vendors, on-premises vendors with new built-for-the-cloud solutions, and on-premises vendors who allow their software to be run hosted on a third-party cloud platform. While I understand their need to be inclusive, I think this is pretty broad — the total cost of ownership, cash flows, and incentives are quite different between pure cloud vendors and hosted on-premises solutions. Caveat emptor.
To qualify for the MQ vendors must support at least two of the four following components of FP&A: planning/budgeting, integrated financial planning, forecasting/modeling, management/performance reporting. Thus the MQ is not terribly homogeneous in terms of vendor profile and use-cases.
For the second year in a row, (1) Host is a leader in this MQ and (2) is the only cloud pure-play vendor who is a leader in both. We think this says a lot about the breadth and depth of our product line.
Customer references for Host cited ease of use, price, and solution flexibility as top three purchasing criteria. We think this very much represents our philosophy of complex EPM made easy.
Quick Thoughts on the 2018 Cloud Financial Close Solutions MQ
Here are some of my quick thoughts on the Cloud Financial Close Solutions magic quadrant:
Gartner says that in the past two years the financial close market has shifted from mature on-premises to cloud solutions. I agree.
While Gartner again allowed all three types of cloud vendors in this MQ, I believe some of the vendors in this MQ do just-enough, just-cloud-enough business to clear the bar, but are fundamentally still offering on-premise wolves in cloud sheep’s clothing. Customers should look to things like total cost of ownership, upgrade frequency, and upgrade phase lags in order to flesh out real vs. fake cloud offerings.
This MQ is more of a mixed bag than the FP&A MQ or, for that matter, most Gartner MQs. In general, MQs plot substitutes against each other — each dot on an MQ usually represents a vendor who does basically the same thing. This is not true for the Cloud Financial Close (CFC) MQ — e.g., Workiva is a disclosure management vendor (and a partner of Host Analytics), but it does not offer financial consolidation software, as do, say, Host Analytics and Oracle.
Because the scope of this MQ is broad and both general and specialist vendors are included, customers should either call Gartner for help (if they are Gartner customers) or just be mindful of the mixing and segmentation — e.g., Floqast (in SMB and MM) and Blackline (in enterprise) both do account reconciliation, but they are naturally segmented by customer size (and both are partners of Host, which does financial consolidation but not account reconciliation).
Net: while I love that the analysts are willing to put different types of close-related, office-of-the-CFO-oriented vendors on the same MQ, it does require more than the usual amount of mindfulness in interpreting it.
 For Gartner, this is likely more than a semantic issue. They are pretty strong believers in a “post-modern” ERP vision which eschews the idea of a monolithic application that includes all services, in favor of using and integrating a series of cloud-based services. Since we are also huge believers in integrating best-of-breed cloud services, it’s hard for us to take too much issue with that. So we’ll simply have to clearly articulate the advantages of using Host Planning and Host Consolidations together — from our viewpoint, two best-of-breed cloud services that happen to come from a single vendor.
 And not something done against absolute scales where you can track movement over time. See, for example, the two explicit disclaimers in the FP&A MQ:
CEO: Well, some of them are new and not fully productive yet.
VC: How long does it take for them to fully ramp?
CEO: Well, to full productivity, four quarters.
VC: So how many fully-ramped reps do you have?
CEO: 9 fully ramped, but we have 15 in various stages of ramping, and 1 who’s brand new …
There’s a better way to have this conversation, to perform your sales analytics, and to build your bookings capacity waterfall model. That better way involves creating a new metric called ramped rep equivalents (RREs). Let’s build up to talking about RREs by first looking at a classical sales bookings waterfall model.
I love building these models and they’re a lot of fun to play with, doing what-if analysis, varying the drivers (which are in the orange cells) and looking at the results. This is a simplified version of what most sales VPs look at when trying to decide next year’s hiring, next year’s quotas, and next year’s targets. This model assumes one type of salesrep; a distribution of existing reps by tenure as 1 first-quarter, 3 second-quarter, 5 third-quarter, 7 fourth-quarter, and 9 steady-state reps; a hiring pattern of 1, 2, 4, 6 reps across the four quarters of 2019; and a salesrep productivity ramp whereby reps are expected to sell 0% of steady-state productivity in their first quarter with the company, then 25%, 50%, and 75% in quarters 2 through 4, and then become fully productive in quarter 5, selling at the steady-state productivity level of $1,000K in new ARR per year.
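The whole waterfall can be reproduced in a few lines using exactly the assumptions stated above (and, per the footnote, omitting turnover):

```python
# Classical bookings capacity waterfall with the post's stated assumptions:
# 1/3/5/7 reps in tenure quarters 1-4, 9 steady-state reps, hires of
# 1, 2, 4, 6 across 2019, a 0/25/50/75/100% ramp, and $1,000K/year
# steady-state productivity. Turnover is omitted, per the footnote.
steady_state_arr = 1000.0              # $K new ARR per ramped rep per year
quarterly_productivity = steady_state_arr / 4
ramp = [0.00, 0.25, 0.50, 0.75]        # tenure quarters 1-4; 100% from Q5 on

tenure = [1, 3, 5, 7]                  # reps in tenure quarters 1-4 at 1Q19
steady = 9                             # fully ramped reps at 1Q19
hires = [1, 2, 4, 6]                   # the 1Q19 hire is already in tenure[0]

total = 0.0
for q, new_hires in enumerate(hires):
    if q > 0:                          # age the waterfall after 1Q
        steady += tenure[3]
        tenure = [new_hires] + tenure[:3]
    rres = steady + sum(n * r for n, r in zip(tenure, ramp))
    total += rres * quarterly_productivity
    print(f"Q{q+1}: {rres} RREs, ${rres * quarterly_productivity:,.0f}K capacity")

print(f"Year: ${total:,.0f}K")         # $22,500K, in line with the ~$22M target
```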
Using this model, a typical sales VP — provided they believed the productivity assumptions and that they could realistically set quotas about 20% above the target productivity — would typically sign up for around a $22M new ARR bookings target for the coming year.
While these models work just fine, I have always felt like the second block (bookings capacity by tenure), while needed for intermediate calculations, is not terribly meaningful by itself. The lost opportunity here is that we’re not creating any concept to more easily think about, discuss, and analyze the productivity we get from reps as they ramp.
Enter the Ramped Rep Equivalent (RRE)
Rather than thinking about the partial productivity of whole reps, we can think about partial reps against whole productivity — and build the model that way, instead. This has the by-product of creating a very useful number, the RRE. Then, to get bookings capacity just multiply the number of RREs times the steady-state productivity. Let’s see an example below:
This provides a far more intuitive way of thinking about salesrep ramping. In 1Q19, the company has 25 reps, only 9 of whom are fully ramped, and the rest combine to provide the productivity of 8.5 additional reps, resulting in an RRE total of 17.5.
“We have 25 reps on board, but thanks to ramping, we only have the capacity equivalent to 17.5 fully-ramped reps at this time.”
This also spits out three interesting metrics:
RRE/QCR ratio: an effective vs. nominal capacity ratio — in 1Q19, nominally we have 25 reps, but we have only the effective capacity of 17.5 reps. 17.5/25 = 70%.
Capacity lost to ramping (dollars): to make the prior figure more visceral, think of the sales capacity lost due to ramping (i.e., the delta between your nominal and effective capacity) expressed in dollars. In this case, in 1Q19 we’re losing $1,875K of our bookings capacity due to ramping.
Capacity lost to ramping (percent): the same concept as the prior metric, simply expressed in percentage terms. In this case, in 1Q19 we’re losing 30% of our bookings capacity due to ramping.
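The three metrics above, computed for the 1Q19 example (25 nominal reps, 17.5 RREs, and $250K quarterly steady-state productivity):

```python
# RRE-derived metrics for the 1Q19 example from the text.
nominal_reps = 25
rres = 17.5
quarterly_productivity = 250.0   # $K per fully ramped rep per quarter

rre_qcr_ratio = rres / nominal_reps                           # effective vs. nominal
lost_dollars = (nominal_reps - rres) * quarterly_productivity # capacity lost ($K)
lost_pct = 1 - rre_qcr_ratio                                  # capacity lost (%)

print(f"{rre_qcr_ratio:.0%}, ${lost_dollars:,.0f}K, {lost_pct:.0%}")
# 70%, $1,875K, 30%
```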
Impacts and Cautions
If you want to move to an RRE mindset, here are a few tips:
RREs are useful for analytics, like sales productivity. When looking at actuals you can measure sales productivity not just by starting-period or average-period reps, but by RRE. It will provide a much more meaningful metric.
You can use RREs to measure sales effectiveness. At the start of each quarter recalculate your theoretical capacity based on your actual staffing. Then divide your actuals by that start-of-quarter theoretical capacity and you will get a measure of how well you are performing, i.e., the utilization of the quarterly starting capacity in your sales force. When you’re missing sales targets it is typically for one of two reasons: you don’t have enough capacity or you’re not making use of the capacity you have. This helps you determine which.
Beware that if you have multiple types of reps (e.g., corporate and field), you may be tempted to blend them in the same way you do whole reps today (i.e., when asked “how many reps do you have?” most people say “15,” not “9 enterprise plus 6 corporate”). You have the same problem with RREs. While it’s OK to present a blended RRE figure, remember that it’s blended: if you want to calculate capacity from it, you should calculate RREs by rep type and then get capacity by multiplying the RRE for each rep type by its respective steady-state productivity.
I recommend moving to an RRE mindset for modeling and analyzing sales capacity. If you want to play with the spreadsheet I made for this post, you can find it here.
Thanks to my friend Paul Albright for being the first person to introduce me to this idea.
# # #
 This is actually a productivity model, based on actual sales productivity — how much people have historically sold (and ergo should require little/no cushion before sales signs up for it). Most people I know work with a productivity model and then uplift the desired productivity by 15 to 25% to set quotas.
 Most companies have two or three types (e.g., corporate vs. field), so you typically need to build a waterfall for each type of rep.
 To build this model, you also need to know the aging of your existing salesreps — i.e., how many second-, third-, fourth-, and steady-state-quarter reps you have at the start of the year.
 The glaring omission from this model is sales turnover. In order to keep it simple, it’s not factored in here. While some people try to factor in sales turnover by using reduced sales productivity figures, I greatly prefer to model realistic sales productivity and explicitly model sales turnover in creating a sales bookings capacity model.
 This is one reason it’s so expensive to build an enterprise software sales force. For several quarters you often get 100% of the cost and 50% of the sales capacity.
 Which should be a weighted average productivity by type of rep, weighted by the number of reps of each type.
It’s another seemingly simple question. But, like most SaaS metrics, when you dig deeper you find it’s not. In this post we’ll take a look at how to calculate win rates and use win rates to introduce the broader concept of milestone vs. flow analysis that applies to conversion rates across the entire sales funnel.
Let’s start with some assumptions. Once an opportunity is accepted by sales (known as a sales-accepted opportunity, or SAL), it eventually will end up in one of three terminal states:
Won
Lost
Other (derailed, no decision)
Some people don’t like “other” and insist that opportunities should be exclusively either won or lost and that other is an unnecessary form of lost which should be tracked with a lost reason code as opposed to its own state. I prefer to keep other, and call it derailed, because a competitive loss is conceptually different from a project cancellation, major delay, loss of sponsor, or a company acquisition that halts the project. Whether you want to call it other, no decision, or derailed, I think having a third terminal state is warranted from first principles. However, it can make things complicated.
For example, you’ll need to calculate win rates two ways:
Your narrow win rate, wins divided by (wins plus losses), tells you how good you are at beating the competition. Your broad win rate, wins divided by (wins plus losses plus derails), tells you how good you are at closing the deals that come to a terminal state.
Narrow win rate alone can be misleading. If I told you a company had a 66% win rate, you might be tempted to say “time to add more salespeople and scale this thing up.” If I told you they got the 66% win rate by derailing 94 out of every 100 opportunities it generated, won 4, and lost the other 2, then you’d say “not so fast.” This, of course, would show up in the broad win rate of 4%.
This brings up the important question of timing. Both these win rate calculations ignore deals that push out of a quarter. So another degenerate case is a situation where you win 4, lose 2, derail 4, and push 90 opportunities. In this case, narrow win rate = 66% and broad win rate = 40%. Neither shines a light on the problem (which, if it happens continuously, I call a rolling hairball problem).
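Both rates can be written as small functions and applied to the two degenerate cases above:

```python
# Narrow vs. broad win rates, per the definitions in the text.
def narrow_win_rate(won, lost):
    """How good you are at beating the competition."""
    return won / (won + lost)

def broad_win_rate(won, lost, derailed):
    """How good you are at closing deals that reach a terminal state."""
    return won / (won + lost + derailed)

# Case 1: win 4, lose 2, derail 94 (the derail factory)
print(narrow_win_rate(4, 2))      # ~0.667
print(broad_win_rate(4, 2, 94))   # 0.04

# Case 2: win 4, lose 2, derail 4, push 90 (pushes invisible to both rates)
print(broad_win_rate(4, 2, 4))    # 0.4
```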
The issue here is that thus far we’ve been performing what I call a milestone analysis. In effect, we put observers by the side of the road at various milestones (created, won, lost, derailed) and ask them to count the number of opportunities that pass by each quarter. The issue, especially with companies that have long sales cycles, is that you have no idea of progression. You don’t know if the opportunities that passed “win” this quarter came from the opportunities that passed “created” this quarter, or if they came from last quarter, the quarter before that, or even earlier.
Milestone analysis has two key advantages:
It’s easy — you just need to count opportunities passing milestones
It’s instant — you don’t have to wait to see how things play out to generate answers
The big disadvantage is it can be misleading, because the opportunities hitting a terminal state this quarter were generated in many different time periods. For a company with an average 9 month sales cycle, the opportunities hitting a terminal state in quarter N, were generated primarily in quarter N-3, but with some coming in quarters N-2 and N-1 and some coming in quarters N-4 and N-5. Across that period very little was constant, for example, marketing programs and messages changed. So a marketing effectiveness analysis would be very difficult when approached this way.
For those sorts of questions, I think it’s far better to do a cohort-based analysis, which I call a flow analysis. Instead of looking at all the opportunities that hit a terminal state in a given time period, you go back in time, grab a cohort of opportunities (e.g., all those generated in 4Q16) and then see how they play out over time. You go with the flow.
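A minimal flow analysis might look like this; the 4Q16 cohort below is hypothetical:

```python
from collections import Counter

# Flow (cohort) analysis: take every opportunity created in 4Q16 and see
# how it played out by some later observation date. Data is hypothetical.
cohort = ["won", "won", "won", "lost", "derailed",
          "derailed", "derailed", "derailed", "open", "open"]

counts = Counter(cohort)
sal_to_close = counts["won"] / len(cohort)                      # flow-based rate
narrow = counts["won"] / (counts["won"] + counts["lost"])       # milestone-style

print(f"SAL-to-close: {sal_to_close:.0%}")   # 30%
print(f"narrow win rate: {narrow:.0%}")      # 75%
```

Note how the flow-based SAL-to-close rate (30%) tells a very different story than the flattering narrow win rate (75%) on the same cohort.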
For marketing programs effectiveness, this is the only way to do it. Instead of a time-based cohort, you’d take a programs-based cohort (e.g., all the opportunities generated by marketing program X), see how they play out, and then compare various programs in terms of effectiveness.
The big downside of flow analysis is that you end up analyzing ancient history. For example, if you have a 9-month average sales cycle with a wide distribution around the mean, you may need to wait 15-18 months before the vast majority of the opportunities hit a terminal state. If you analyze too early, too many opportunities are still open. But if you put off the analysis, you may get important information too late.
You can compress the time window by measuring program effectiveness not against final sales outcomes but against important steps along the funnel. That way you could compare two programs on the basis of their ability to generate MQLs or SALs, but you still wouldn’t know whether, and at what relative rate, they generate actual customers. So you could end up doubling down on a program that generates a lot of interest, but not a lot of deals.
Back to our original topic, the same concept comes up in analyzing win rates. Regardless of which win rate you’re calculating, at most companies you’re calculating it on a milestone basis. I find milestone-based win rates more volatile and less accurate than a flow-based SAL-to-close rate. For example, if I were building a marketing funnel to determine how many deals I need to hit next year’s number, I’d want to use a SAL-to-close rate, not a win rate, to do so. Why? SAL-to-close rates:
Are less volatile because they’re damped by using long periods of time.
Are more accurate because they track what you actually care about: if I get 100 opportunities, how many close within a given time period?
Automatically factor in derails and slips (the former are ignored in the narrow win rate and the latter are ignored in both the narrow and broad win rates).
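The last point is worth seeing in code. A flow-based close rate divides by the whole cohort, so opportunities that derail or are still slipping automatically count against the rate. This is a minimal sketch; the record layout and data are illustrative assumptions, not from the original post.

```python
# Each opportunity carries its generating cohort quarter and its current
# state: "won", "lost", "derailed", or "open" (still slipping along).
opps = [
    {"cohort": "1Q17", "state": "won"},
    {"cohort": "1Q17", "state": "lost"},
    {"cohort": "1Q17", "state": "open"},
    {"cohort": "2Q17", "state": "won"},
    {"cohort": "2Q17", "state": "derailed"},
]

def close_rate(opps, cohort):
    """Flow-based SAL-to-close rate for a cohort: wins over ALL
    opportunities in the cohort, open and derailed ones included,
    so derails and slips are factored in automatically."""
    members = [o for o in opps if o["cohort"] == cohort]
    wins = sum(1 for o in members if o["state"] == "won")
    return wins / len(members)

print(close_rate(opps, "1Q17"))  # 1 win out of 3 opportunities
```

Contrast this with a milestone-based win rate, which would drop the open and derailed records from the denominator entirely.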
Let’s look at an example. Here’s a chart that tracks 20 opportunities, 10 generated in 1Q17 and 10 generated in 2Q17, through their entire lifetime to a terminal stage.
In reality things are a lot more complicated than this picture because you have opportunities still being generated in 3Q17 through 4Q18 and you’ll have opportunities that are still in play generated in numerous quarters before 1Q17. But to keep things simple, let’s just analyze this little slice of the world. Let’s do a milestone-based win/loss analysis.
First, you can see the milestone-based win/loss rates bounce around a lot. Here it’s due in part to the law of small numbers, but I do see similar volatility in real life — in my experience win rates bounce within a fairly broad zone — so I think it’s a real issue. Regardless, what’s indisputable is that in this example, this is how things will look to the milestone-based win/loss analyzer. Not a very clear picture — and a lot to panic about in 4Q17.
Let’s look at what a flow-based cohort analysis produces.
In this case, we analyze the cohort of opportunities generated in the year-ago quarter. Since we only generate opportunities in two quarters, 1Q17 and 2Q17, we have only two cohorts to analyze, and we get only two sets of numbers. The thin blue box in the opportunity tracking chart shows the data summarized in the 1Q18 column, and the thin orange box shows the data for the 2Q18 column. Both boxes show that 3 opportunities in each cohort are still open at the end of the analysis period (imagine you ran the 1Q18 analysis in 1Q18) and haven’t come to final resolution. The cohorts both produce a 50% narrow win rate, but a 43% vs. 29% broad win rate and a 30% vs. 20% close rate. How good are these numbers?
Well, in our example, we have the luxury of finding the true rates by letting the six open opportunities close out over time. By doing a flow-based analysis in 4Q18 of the 1H17 cohort, we can see that our true narrow win rate is 57%, our true broad win rate is 40%, and our close rate is also 40% (which, once everything has arrived at a terminal state, is definitionally identical to the broad win rate).
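You can verify those true rates with a little arithmetic. Once all 20 opportunities in the 1H17 cohort reach a terminal state, the three quoted percentages imply 8 wins, 6 losses, and 6 derails; that exact split is my reconstruction, but it is the only one consistent with a 57% narrow, 40% broad, and 40% close rate on 20 opportunities.

```python
# Terminal-state counts implied by the example's "true" rates
# (reconstructed from the quoted percentages, not stated in the post).
won, lost, derailed = 8, 6, 6
total = won + lost + derailed   # all 20 opportunities resolved

narrow = won / (won + lost)     # 8/14, about 57%
broad  = won / total            # 8/20 = 40%
close  = won / total            # identical to broad once everything is terminal

print(f"narrow={narrow:.0%}, broad={broad:.0%}, close={close:.0%}")
```

The last two lines make the post's definitional point visible: with no open opportunities left, the close rate and the broad win rate share the same numerator and denominator.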
Hopefully this post has helped you think about your funnel differently by introducing the concept of milestone- vs. flow-based analysis and by demonstrating how the same business situation produces very different rates depending on both the choice of win rate and the type of analysis.
Please note that the math in this example backed me into a 40% close rate which is about double what I believe is the benchmark in enterprise software — I think 20 to 25% is a more normal range.
This week Gartner research vice president John Van Decker and research director Chris Iervolino took the bold move of splitting the corporate performance management (CPM), also known as enterprise performance management (EPM), magic quadrant in two.
Instead of publishing a single magic quadrant (MQ) for all of CPM, they published two MQs, one for strategic CPM and one for financial CPM, which they define as follows:
Strategic Corporate Performance Management (SCPM) Solutions – this includes Corporate Planning and Modeling, Integrated Financial Planning, Strategy Management, Profitability Management, and Performance Reporting.
Financial Corporate Performance Management (FCPM) Solutions – this includes Financial Consolidation, Financial Reporting, Management Reporting/Costing/Forecasting, Reconciliations/Close Management, Intercompany Transactions, and Disclosure Management (including XBRL tagging).
It’s bold. It’s the first time to my recollection that an MQ has included products from different categories. Put differently, MQs are normally full of substitute products — e.g., 15 different types of butter. Here, we have butter next to olive oil on the same MQ.
It’s smart. Their uber point is that while CPM solutions are now pretty varied, you can pretty easily classify them into more tactical/financial uses and more strategic uses. Highlighting this by splitting the MQs does customers a service because it reminds them to think both tactically and strategically. That’s important — and often needed in finance departments that are struggling simply to keep up with the ongoing tactical workload.
It’s potentially confusing. You can find not just substitutes but complements on the same MQ. For example, Host Analytics and our partner Blackline are both on the FCPM MQ. That’s cool because we both serve core finance needs. It’s potentially confusing because we do one thing and they do another.
We are stoked. Among cloud pure-play EPM vendors, Host Analytics is the only supplier listed on both MQs. We believe this supports our contention that we have the broadest pure-play cloud EPM product line in the business. Only Host has both!
In a hype-filled world, I think Gartner does a great job of seeing through the hype-haze and focusing on customers and solutions. They do a better job than most at not being over-influenced by Halo Effects, and I suspect that’s because they spend a lot of time talking to real customers about solving real problems.
We’re just finishing up a fantastic Host Analytics World 2016, with over 800 people gathered together in San Francisco to talk about enterprise performance management (EPM). Here are a few pictures to give you a feel for the event.
Here’s 49ers football legend Steve Young delivering his keynote address:
Here’s me delivering my keynote on EPM in fair weather and foul.
Here’s an artsy shot of someone taking a picture during my keynote.
The conference has been superb and I want to thank everyone — customers, prospective customers, analysts, journalists, pundits, and partners — for being a part of this great event.
I find it amazing that at such a great time to be in the cloud EPM market that we have competitors more focused on business intelligence (BI), predictive analytics, and functional performance management than on core EPM itself. At Host Analytics, we know who we want to be: the best vendor in cloud EPM, serving the fat middle 80% of the market. More importantly, perhaps, we know who we don’t want to be: we don’t want to be a visual analytics vendor, a social collaboration vendor, or a sales performance management vendor — hence our partnerships with Qlik, Socialcast, and Xactly.
We serve finance, we speak finance, and we’re proud of that. Oh, and yes, our customers, finance leaders, care about the whole enterprise so we offer not only solutions to automate core finance processes but also tools to model the entire enterprise and align finance and operations.
You can hear about this and other topics by watching the 75 minute keynote speech and demo, embedded below.
Finally, please remember to save the date for Host Analytics World 2017 — May 16 through 19, 2017.
I’m Dave Kellogg, consultant, independent director, advisor, and blogger focused on enterprise software startups.
I bring a unique perspective to startup challenges having 10 years’ experience at each of the CEO, CMO, and independent director levels across 10+ companies ranging in size from zero to over $1B in revenues.
From 2012 to 2018, I was CEO of cloud enterprise performance management vendor Host Analytics, where we quintupled ARR while halving customer acquisition costs in a competitive market, ultimately selling the company in a private equity transaction.
Previously, I was SVP/GM of Service Cloud at Salesforce and CEO at NoSQL database provider MarkLogic, which we grew from zero to $80M in run-rate revenues during my tenure. Before that, I was CMO at Business Objects for nearly a decade as we grew from $30M to over $1B. I started my career in technical and product marketing positions at Ingres and Versant.
I love disruption, startups, and Silicon Valley and have had the pleasure of working in varied capacities with companies including Bluecore, Cyral, FloQast, Fortella, GainSight, MongoDB, Plannuh, Recorded Future, and Tableau. I currently sit on the boards of Alation (data catalogs), Nuxeo (content management) and Profisee (master data management). I previously sat on the boards of agtech leader Granular (acquired by DuPont for $300M) and big data leader Aster Data (acquired by Teradata for $325M).
I periodically speak to strategy and entrepreneurship classes at the Haas School of Business (UC Berkeley) and Hautes Études Commerciales de Paris (HEC).