Enterprise SaaS and retailers have more in common than you might think.
Let’s think about retailers for a minute. Retailers drive growth in two ways:
They open new stores
They increase sales at existing stores
Opening new stores is great, but it’s an expensive way to drive new sales and requires a lot of up-front investment. It’s also risky because, despite having a small army of MBAs working to determine the right locations, sometimes new locations just don’t work out. Blending the results of these two different activities can blur what’s really happening. For example, consider this company:
Things look reasonable overall: the company is growing at 17%. But when you dig deeper, you see that virtually all of the growth is coming from new stores. Revenue from existing stores is nearly flat at 2%.
It’s for this reason that retailers routinely publish same-store sales in their financial results. So you can see not only overall, blended growth but also understand how much of that growth is coming from new store openings vs. increasing sales at existing stores.
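The arithmetic is simple enough to sketch in a few lines. The figures below are invented, but chosen to mirror the 17%-blended / 2%-same-store split described above:

```python
# Hypothetical revenue figures ($M); invented for illustration, chosen to
# mirror the blended-vs-same-store split described above.
existing_prior, existing_now = 100.0, 102.0  # same-store sales, then and now
new_store_now = 15.0                         # revenue from newly opened stores

same_store_growth = existing_now / existing_prior - 1
blended_growth = (existing_now + new_store_now) / existing_prior - 1

print(f"same-store growth: {same_store_growth:.0%}")  # 2%
print(f"blended growth: {blended_growth:.0%}")        # 17%
```

The blended figure looks healthy only because new-store revenue is stacked on top of a flat base.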
Now, let’s think about enterprise software.
Enterprise software vendors drive growth in two ways:
They hire new salesreps
They increase productivity of existing salesreps
Hiring new salesreps is great, but it’s an expensive way to drive new sales and requires a lot of up-front investment. It’s also risky because, despite having a small army of MBAs working to determine the right territories, hiring profiles and interviewing process, sometimes new salesreps just don’t work out. Blending the results of these two different activities can blur what’s really happening. For example, consider this company:
If you’re feeling a certain déjà vu, you’re right. I simply copy-and-pasted the text, substituting “enterprise software vendor” for “retailer” and “salesrep” for “store.” It’s exactly the same concept.
The problem is that we, as an industry, have basically no metric that addresses it.
Revenue, bookings, and billings growth are all blended metrics that mix results from existing and new salespeople 
Retention and expansion rates are about cohorts, but cohorts of customers, not cohorts of salespeople 
Sales productivity is typically measured as ARR/salesrep which blends new and existing salesreps 
Sales per ramped rep, measured as ARR/ramped-rep, starts to get close, but it’s not cohort-based, few companies measure it, and those that do often calculate it wrong 
So what we need is a cohort-based metric that compares the productivity of reps here today with those here a year ago. Unlike retail, where stores don’t really ramp, we need to consider ramping in defining the cohort, and thus define the year-ago cohort to include only fully-ramped reps.
So here’s how I define same-rep sales: sales from reps who were fully ramped a year ago and still here.
Here’s an example of presenting it:
The above table shows same-rep sales via an example where overall sales growth is good at 48%, driven by a 17% increase in same-rep sales and an 89% increase in new-rep sales. Note that enterprise software is a business largely built on the back of sales force expansion so — absent an acquisition or new product launch to put something new in sales’ proverbial bag — I view a 17% increase in same-rep sales as pretty good.
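For the mechanically minded, here’s a minimal sketch of the cohort logic in code. The rep roster is invented, not taken from the table above:

```python
# Same-rep sales: count only reps who were fully ramped a year ago and
# are still here today. All rep data below is invented for illustration.
reps = [
    # (name, ramped_year_ago, still_here, sales_year_ago, sales_now) in $K
    ("alice", True,  True,  800, 950),
    ("bob",   True,  True,  700, 800),
    ("carol", True,  False, 750,   0),  # departed: excluded from the cohort
    ("dave",  False, True,  100, 600),  # was still ramping: excluded
    ("erin",  False, True,    0, 500),  # new hire: excluded
]

cohort = [r for r in reps if r[1] and r[2]]
prior = sum(r[3] for r in cohort)   # 1,500
now = sum(r[4] for r in cohort)     # 1,750
print(f"same-rep sales growth: {now / prior - 1:.0%}")  # 17%
```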
Let’s conclude by sharing a table of sales productivity metrics discussed in this post that I think provides a nice view of sales productivity as related to hiring and ramping.
The spreadsheet I used for this post is available for download, here.
# # #
 Billings is a public company SaaS metric and typically a proxy for bookings.
 Public companies never release this but most public and private companies track it.
 By taking overall new ARR (i.e., from all reps) and dividing it by the number of ramped reps, thus blending contribution from both new and existing reps in the numerator. Plus, these are usually calculated on a snapshot (not a cohort) basis.
 This is not survivor-biased in my mind because I am trying to get a productivity metric. By analogy, I believe stores that closed in the interim are not included in same-store sales calculations.
 Or to the extent they do, it takes weeks or months, not quarters. Thus you can simply include all stores open in the year-ago cohort, even if they just opened.
 I am trying to avoid seeing an increase in same-rep sales due to ramping — e.g., someone who just started in the year-ago cohort will have low sales, but should increase to full productivity simply by virtue of ramping.
I’ve seen numerous startups try numerous ways to calculate their sales capacity. Most are too back-of-the-envelope and too top-down for my taste. Such models are, in my humble opinion, dangerous because the combination of relatively small errors in ramping, sales productivity, and sales turnover (with associated ramp resets) can result in a relatively big mistake in setting an operating plan. Building off quota, instead of productivity, is another mistake for many reasons.
Sales productivity, measured in ARR/rep, and at steady state (i.e., after a rep is fully ramped). This is not quota (what you ask them to sell), this is productivity (what you actually expect them to sell) and it should be based on historical reality, with perhaps incremental, well justified, annual improvement.
Rep hiring plans, measured by new hires per quarter, which should be realistic in terms of your ability to recruit and close new reps.
Rep ramping, typically a vector that has percentage of steady-state productivity in the rep’s first, second, third, and fourth quarters. This should be based on historical data as well.
Rep turnover, the annual rate at which sales reps leave the company for either voluntary or involuntary reasons.
Judgment, the model should have the built-in ability to let the CEO and/or sales VP manually adjust the output and provide analytical support for so doing.
Quota over-assignment, the extent to which you assign more quota at the “street” level (i.e., sum of the reps) beyond the operating plan targets.
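To make the drivers concrete, here’s a minimal sketch of a quarterly capacity model built from them. Every number is invented for illustration; a real model would use your own historical ramp, productivity, and turnover data:

```python
# A minimal sketch of a quarterly sales-capacity model built from the
# drivers above; every number here is invented for illustration.
ramp = [0.0, 0.25, 0.50, 1.00]  # % of steady-state in tenure quarters 1-4
steady_q = 250.0                # $K new ARR per fully ramped rep per quarter
hires = [3, 3, 4, 4]            # hiring plan: new reps per quarter
turnover_q = 0.20 / 4           # 20% annual turnover, applied quarterly
over_assignment = 0.20          # quota uplift over productivity capacity

ramped = 10.0                   # fully ramped reps at the start of the year
cohorts = []                    # (size, tenure in quarters) of recent hires
capacity = []
for q in range(4):
    ramped -= ramped * turnover_q                    # attrition of ramped reps
    cohorts = [(n, age + 1) for n, age in cohorts]   # age the waterfall
    ramped += sum(n for n, age in cohorts if age >= len(ramp))  # graduate
    cohorts = [(n, age) for n, age in cohorts if age < len(ramp)]
    cohorts.append((hires[q], 0))                    # this quarter's class
    rre = ramped + sum(n * ramp[age] for n, age in cohorts)  # ramped-rep equiv.
    capacity.append(rre * steady_q)                  # productivity capacity, $K

quota = [c * (1 + over_assignment) for c in capacity]  # street-level quota floor
```

Judgment would then be a manual plus/minus applied on top of `capacity` before it becomes the plan.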
For extra credit, and to help maintain organizational alignment: while you’re making a bookings model, a little bit of extra math lets you set pipeline goals for the company’s core pipeline generation sources, so I recommend doing so.
If your company is large or complex you will probably need to create an overall bookings model that aggregates models for the various pieces of your business. For example, inside sales reps tend to have lower quotas and faster ramps than their external counterparts, so you’d want to make one model for inside sales, another for field sales, and then sum them together for the company model.
In this post, I’ll do two things: I’ll walk you through what I view as a simple-yet-comprehensive productivity model and then I’ll show you two important and arguably clever ways in which to use it.
Walking Through the Model
Let’s take a quick walk through the model. Cells in Excel “input” format (orange and blue) are either data or drivers that need to be entered; uncolored cells are either working calculations or outputs of the model.
You need to enter data into the model for 1Q20 (let’s pretend we’re making the model in December 2019) by entering what we expect to start the year with in terms of sales reps by tenure (column D). The “first/hired quarter” row represents our hiring plans for the year. The rest of this block is a waterfall that ages the reps downward as we move across quarters. Next to that block is the ramp assumption, which expresses, as a percentage of steady-state productivity, how much we expect a rep to sell as their tenure with the company increases. I’ve modeled a pretty slow ramp that takes five quarters to get to 100% productivity.
To the right of that we have more assumptions:
Annual turnover, the annual rate at which sales reps leave the company for any reason. This drives attriting reps in row 12, which silently assumes that every departing rep was at steady state, a tacit and fairly conservative assumption in the model.
Steady-state productivity, how much we expect a rep to actually sell per year once they are fully ramped.
Quota over-assignment. I believe it’s best to start with a productivity model and uplift it to generate quotas.
The next block down calculates ramped rep equivalents (RREs), a very handy concept, used by far too few organizations, that converts the ramp state of the salesforce into a single number equivalent to the number of fully ramped reps. The steady-state row shows the number of fully ramped reps, a row that board members and investors will frequently ask about, particularly if you’re not proactively showing them RREs.
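The RRE calculation itself is just headcount by tenure weighted by the ramp; the headcounts and ramp vector below are invented:

```python
# Ramped rep equivalents (RREs): headcount by tenure weighted by the ramp.
# Headcounts and ramp vector are invented for illustration.
ramp = [0.0, 0.25, 0.50, 1.00, 1.00]  # % of steady-state: tenure Q1..Q4, Q5+
reps_by_tenure = [3, 2, 2, 1, 10]     # heads in each tenure bucket

rre = sum(n * pct for n, pct in zip(reps_by_tenure, ramp))
print(f"{sum(reps_by_tenure)} heads = {rre:.1f} RREs")  # 18 heads = 12.5 RREs
```

Which is exactly why headcount alone overstates capacity: 18 heads here sell like 12.5 fully ramped reps.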
After that we calculate “productivity capacity,” which is a mouthful, but I want to disambiguate it from quota capacity, so it’s worth the extra syllables. After that, I add a critical row called judgment, which allows the Sales VP or CEO to play with the model so that they’re not potentially signing up for targets that are straight model output, but instead also informed by their knowledge of the state of the deals and the pipeline. Judgment can be negative (reducing targets), positive (increasing targets) or zero-sum where you have the same annual target but allocate it differently across quarters.
The section in italics, linearity and growth analysis, is there to help the Sales VP analyze the results of using the judgment row. After changing targets, he/she can quickly see how the target is spread out across quarters and halves, and how any modifications affect both sequential and quarterly growth rates. I have spent many hours tweaking an operating plan using this part of the sheet, before presenting it to the board.
The next row shows quota capacity, which uplifts productivity capacity by the over-assignment percentage assumption higher up in the model. This represents the minimum quota the Sales VP should assign at street level to have the assumed level of over-assignment. Ideally this figure dovetails into a quota-assignment model.
Finally, while we’re at it, we’re only a few clicks away from generating the day-one pipeline coverage / contribution goals from our major pipeline sources: marketing, alliances, and outbound SDRs. In this model, I start by assuming that sales or customer success managers (CSMs) generate the pipeline for upsell (i.e., sales to existing customers). Therefore, when we’re looking at coverage, we really mean coverage of the newbiz ARR target (i.e., new ARR from new customers). So, we first reduce the ARR goal by a percentage, then multiply it by the desired pipeline coverage ratio, and then allocate the result across the pipeline sources by presumably agreed-to percentages.
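In code, the pipeline-goal math looks something like this; the target, upsell share, coverage ratio, and source mix are all invented for illustration:

```python
# Day-one pipeline goals by source; all drivers invented for illustration.
arr_target = 3000.0   # $K new ARR target for the quarter
upsell_pct = 0.30     # share of target expected from existing customers
coverage = 3.0        # desired day-one pipeline coverage ratio
mix = {"marketing": 0.60, "alliances": 0.15, "outbound SDRs": 0.25}

newbiz_target = arr_target * (1 - upsell_pct)  # 2,100: coverage applies here
pipeline_goal = newbiz_target * coverage       # 6,300 of day-one pipeline
goals = {src: pipeline_goal * pct for src, pct in mix.items()}
print(goals["marketing"])  # marketing's share of the day-one pipeline, $K
```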
Building the next-level models to support pipeline generation goals is beyond the scope of this post, but I have a few relevant posts on the subject including this three-part series, here, here, and here.
Two Clever Ways to Use the Model
The sad reality is that this kind of model gets a lot of attention at the end of a fiscal year (while you’re making the plan for next year) and then typically gets thrown in the closet and ignored until it’s planning season again.
That’s too bad because this model can be used both as an evaluation tool and a predictive tool throughout the year.
Let’s show that via an all-too-common example. Let’s say we start 2020 with a new VP of Sales, hired in November 2019, working against the hiring and performance targets in our original model (above), with judgment set to zero so that plan equals the capacity model.
Our “world-class” VP immediately proceeds to drive out a large number of salespeople. While he hires 3 “all-star” reps during 1Q20, all 5 reps hired by his predecessor in the past 6 months leave the company along with, worse yet, two fully ramped reps. Thus, instead of ending the quarter with 20 reps, we end with 12. Worse still, the VP delivers new ARR of $2,000K vs. a target of $3,125K, just 64% of plan. Realizing she has a disaster on her hands, the CEO “fails fast” and fires the newly hired VP of Sales after 5 months. She then appoints the RVP of Central, Joe, to acting VP of Sales on 4/2. Joe proceeds to deliver 59%, 67%, and 75% of plan in 2Q20, 3Q20, and 4Q20.
Our question: is Joe doing a good job?
At first blush, he appears more zero than hero: 59%, 67%, and 75% of plan is no way to go through life.
But to really answer this question we cannot reasonably evaluate Joe relative to the original operating plan. He was handed a demoralized organization that was about 60% of its target size on 4/2. In order to evaluate Joe’s performance, we need to compare it not to the original operating plan, but to the capacity model re-run with the actual rep hiring and aging at the start of each quarter.
When you do this you see, for example, that while Joe is constantly underperforming plan, he is also constantly outperforming the capacity model, delivering 101%, 103%, and 109% of model capacity in 2Q through 4Q.
If you looked at Joe the way most companies look at key metrics, he’d be fired. But if you read this chart to the bottom, you finally get the complete picture. Joe is running a significantly smaller sales organization at above-model efficiency. While Joe got handed an organization that was 8 heads under plan, he more than doubled the organization to 26 heads and consistently outperformed the capacity model. Joe is a hero, not a zero. But you’d never know it if you didn’t look at his performance relative to the actual sales capacity he was managing.
The other clever way to use a capacity model is as a forecasting tool. I have found that a good capacity model, re-run at the start of the quarter with then-current sales hiring/aging, is a very valuable predictive tool, often predicting the quarterly sales result better than my VP of Sales. Along with rep-level, manager-level, and VP-level forecasts and stage-weighted and forecast-category-weighted expected pipeline values, you can use the re-run sales capacity model as a great tool to triangulate on the sales forecast.
You can download the four-tab spreadsheet model I built for this post, here.
# # #
 Starting with quota starts you in the wrong mental place — what you want people to do, as opposed to productivity (what they have historically done). Additionally, there are clear instances where quotas get assigned against which we have little to no actual productivity assumption (e.g., a second-quarter rep typically has zero productivity but will nevertheless be assigned some partial quota). Sales most certainly has a quota-allocation problem, but that should be a separate, second exercise after building a corporate sales productivity model on which to base the operating plan.
 A typical such vector might be (0%, 25%, 50%, 100%) or (0%, 33%, 66%, 100%), reflecting the percentage of steady-state productivity they are expected to achieve in their first, second, third, and fourth quarters of employment.
 Without such a row, the plan is either de-linked from the model or the plan is the pure output of the model without any human judgment attached. This row is typically used to re-balance the annual number across quarters and/or to add or subtract cushion relative to the model.
 Back in the day at Salesforce, we called pipeline generation sources “horsemen” I think (in a rather bad joke) because there were four of them (marketing, alliances, sales, and SDRs/outbound). That term was later dropped probably both because of the apocalypse reference and its non gender-neutrality. However, I’ve never known what to call them since, other than the rather sterile, “pipeline sources.”
 Many salesops people do it the reverse way — I think because they see the problem as allocating quota whereas I see the problem as building an achievable operating plan. Starting with quota poses several problems, from the semantic (lopping 20% off quota is not 20% over-assignment, it’s actually 25% because over-assignment is relative to the smaller number) to the mathematical (first-quarter reps get assigned quota but we can realistically expect a 0% yield) to the procedural (quotas should be custom-tailored based on the known state of the territory and this cannot really be built into a productivity model).
 One advantage of having those percentages here is that they are placed front-and-center in the company’s bookings model, which will force discussion and agreement. Otherwise, if not documented centrally, they will end up in different models across the organization with no real idea of whether they foot to the bookings model or even sum to 100% across sources.
In the previous post, I introduced the idea of an inverted demand generation (demandgen) funnel which we can use to calculate a marketing demandgen budget given a sales target, an average sales price (ASP), and a set of conversion rates along the funnel. This is a handy tool, isn’t hard to make, and will force you into the very good habit of measuring (and presumably improving) a set of conversion rates along your demand funnel.
In the previous post, as a simplifying assumption, we assumed a steady-state situation where a company had a $2M new ARR target every quarter. The steady-state assumption allowed us to ignore two very real factors that we are going to address today:
Time. There are two phase-lags along the funnel. MQLs might take a quarter to turn into SALs and SALs might take two quarters to turn into closed deals. So any MQL we generate now won’t likely become a closed deal until 3 quarters from now.
Growth. No SaaS company wants to operate at steady state; sales targets go up every year. Thus if we generate only enough MQLs to hit this-quarter’s target we will invariably come up short because those MQLs are working to support a (presumably larger) target 3 quarters in the future.
In order to solve these problems we will start with the inverted funnel model from the previous post and do three things:
Quarter-ize it. Instead of just showing one steady-state quarter (or a single year), we are going to stretch the model out across quarters.
Phase shift it. If SALs take two quarters to close and MQLs take 1 quarter to become SALs, we will reflect this in the model by saying 4Q20 deals need to come from SALs generated in 2Q20, which in turn come from MQLs generated in 1Q20.
Extend it. Because of the three-quarter phase shift, the vast majority of the MQLs we’ll be generating in 2020 are actually there to support 2021 business, so we need to extend the model into 2021 (with a growth assumption) in order to determine how big of a business we need to support.
Here’s what the model looks like when you do this:
You can see that this model generates a varying demandgen budget based on the future sales targets and if you play with the drivers, you can see the impact of growth. At 50% new ARR growth, we need a $1.47M demandgen budget in 2020, at 0% we’d need $1.09M, and at 100% we’d need $1.85M.
Rather than walk through the phase-shifting with words, let me activate Excel’s trace-precedents feature so you can see how things flow:
With these corrections, we have transformed the inverted funnel into a pretty realistic tool for modeling MQL requirements of the company’s future growth plan.
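The phase-shift logic can be sketched in a few lines. The conversion rates are the ones from part I of this series, but the smooth growth curve and quarter-izing here are my simplifications, so the resulting budget won’t match the downloadable spreadsheet exactly:

```python
# MQLs generated in quarter q support the new-ARR target in quarter q+3
# (1 quarter MQL->SAL, 2 quarters SAL->close). Simplified for illustration.
asp = 75.0            # $K average sales price
sql_close = 0.20      # SQL-to-close rate
sal_sql = 0.80        # SAL-to-SQL rate
mql_sal = 0.10        # MQL-to-SAL rate
cost_per_mql = 0.250  # $K

# quarterly new-ARR targets ($K), growing 50% year-over-year
targets = {q: 2000.0 * 1.5 ** (q / 4) for q in range(8)}

# MQLs needed in quarter q to support the target three quarters out
mqls = {q: targets[q + 3] / (asp * sql_close * sal_sql * mql_sal)
        for q in range(5)}
budget = sum(mqls[q] for q in range(4)) * cost_per_mql  # $K, first 4 quarters
print(f"demandgen budget, year one: ${budget:,.0f}K")
```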
In reality, your business may consist of multiple funnels with different assumption sets.
Partner-sourced deals are likely to have smaller deal sizes (due to margin given to the channel) but faster conversion timeframes and higher conversion rates. (Because we will learn about deals later in the cycle, hear only about the good ones, and the partner may expedite the evaluation process.)
Upsell business will almost certainly have smaller deal sizes, faster conversion timeframes, and much higher conversion rates than business to entirely new customers.
Corporate (or inside) sales is likely to have a materially different funnel from enterprise sales. Using a single funnel that averages the two might work, provided your mix isn’t changing, but it is likely to leave corporate sales starving for opportunities (since they do much smaller deals, they need many more opportunities).
How many of these funnels you need is up to you. Because the model is particularly sensitive to deal size (given a constant set of conversion rates), I would say that if a certain type of business has a very different ASP from the main business, then it likely needs its own funnel. So instead of building one funnel that averages everything across your company, you might build three — e.g.,
A new business funnel
An upsell funnel
A channel funnel
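A sketch of what summing separate funnels looks like; the per-segment ARR splits, ASPs, and close rates below are invented:

```python
# Per-segment funnels summed to a company total; all drivers are invented.
funnels = {
    # segment: (quarterly new ARR $, ASP $, SQL-to-close rate)
    "new business": (1_500_000, 75_000, 0.20),
    "upsell":       (400_000,   25_000, 0.40),
    "channel":      (600_000,   50_000, 0.30),
}

sqls = {seg: arr / asp / close for seg, (arr, asp, close) in funnels.items()}
total_sqls = sum(sqls.values())
print(sqls)  # SQLs needed per segment, given its own ASP and close rate
```

Note how the small-ASP segments need far more opportunities per dollar of target, which is exactly why an averaged single funnel can starve them.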
In part III of this series, we’ll discuss how to combine the idea of the inverted funnel with time-based close rates to create an even more accurate model of your demand funnel.
The spreadsheet I made for this series of posts is available here.
Does my company spend too much on marketing? Too little? How I do know? What is the right level of marketing spend at an enterprise software startup? I get asked these questions all the time by startup CEOs, CMOs, marketing VPs, and marketing directors.
You can turn to financial benchmarks, like the KeyBanc Annual SaaS Survey for some great high-level answers. You can subscribe to SiriusDecisions for best practices and survey data. Or you can buy detailed benchmark data from OPEXEngine. These are all great sources and I recommend them heartily to anyone who can afford them.
But, in addition to sometimes being too high-level, there is one key problem with all these forms of benchmark data: they’re not about you. They’re not based on your operating history. While I certainly recommend that executives know their relevant financial benchmarks, there’s a difference between knowing what’s typical for the industry and what’s typical for you.
So, if you want to know if your company is spending enough on marketing, the first thing you should do is to make an inverted demand generation (aka, demandgen) funnel to figure out if you’re spending enough on demandgen. It’s quite simple and I’m frankly surprised how few folks take the time to do it.
Here’s an inverted demandgen funnel in its simplest form:
Let’s walk through the model. Note that all orange cells are drivers (inputs) and the white cells are calculations (outputs). This model assumes a steady-state situation where the company’s new ARR target is $2,000,000 each quarter. From there, we simply walk up the funnel using historical deal sizes and conversion rates.
With an average sales price (ASP) of $75,000, the company needs to close 27 opportunities each quarter.
With a 20% sales qualified lead (SQL) to close rate we will need 133 SQLs per quarter.
If marketing is responsible for generating 80% of the sales pipeline, then marketing will need to generate 107 of those SQLs.
If our sales development representatives (SDRs) can output 2.5 opportunities per week then we will need 5 SDRs (rounding up).
With an 80% SAL to SQL conversion rate, marketing will need to generate 133 SALs per quarter.
With a 10% MQL to SAL conversion rate we will need 1,333 MQLs per quarter.
With a cost of $250 per MQL, we will need a demandgen budget  of $333,333 per quarter.
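The whole walk-up is just division up the funnel. Here it is in code using the post’s own figures (the 13-week quarter used to convert SDR weekly output to quarterly output is my assumption):

```python
import math

# Walk up the inverted demandgen funnel using the figures from the post.
target = 2_000_000    # quarterly new ARR target
asp = 75_000          # average sales price
sql_to_close = 0.20
mktg_share = 0.80     # share of the pipeline marketing must source
sal_to_sql = 0.80
mql_to_sal = 0.10
cost_per_mql = 250
sdr_output = 2.5 * 13  # oppties per SDR per quarter (13 weeks: my assumption)

closes = target / asp                 # ~27 closed deals
sqls = closes / sql_to_close          # ~133 SQLs
mktg_sqls = sqls * mktg_share         # ~107 SQLs from marketing
sals = mktg_sqls / sal_to_sql         # ~133 SALs
mqls = sals / mql_to_sal              # ~1,333 MQLs
budget = mqls * cost_per_mql          # $333,333 per quarter
sdrs = math.ceil(sqls / sdr_output)   # 5 SDRs, rounding up
print(round(budget), sdrs)            # 333333 5
```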
The world’s simplest way to calculate the overall marketing budget at this point would be to annualize demandgen to $1.3M and then double it, assuming the traditional 50/50 people/programs ratio.
Not accounting for phase lag or growth (which will be the subjects of part II and part III of this post), let’s improve our inverted funnel by adding benchmark and historical data.
Let’s look at what’s changed. I’ve added two columns, one with 2019 actuals and one with benchmark data from our favorite source. I’ve left the $2M target in both columns because I want to compare funnels to see what it would take to generate $2M using either last year’s or our benchmark’s conversion rates. Because I didn’t want to change the orange indicators (of driver cells) in the left column, when we have deviations from the benchmark I color-coded the benchmark column instead. While our projected 20% SQL-to-close rate is an improvement from the 18% rate in 2019, we are still well below the benchmark figure of 25% — hence I coded the benchmark red to indicate a problem in this row. Our 10% MQL-to-SQL conversion rate in the 2020 budget is a little below the benchmark figure of 12%, so I coded it yellow. Our $250 cost/MQL is well below the benchmark figure of $325 so I coded it green.
Finally, I added a row to show the relative efficiency improvement of the proposed 2020 budget compared to last year’s actuals and the benchmark. This is critical — this is the proof that marketing is raising the bar on itself and committed to efficiency improvement in the coming year. While our proposed funnel is overall 13% more efficient than the 2019 funnel, we still have work to do over the next few years because we are 23% less efficient than we would be if we were at the benchmark on all rates.
However, because we can’t count on fixing everything at once, we are taking a conservative approach where we show material improvement over last year’s actuals, but not overnight convergence to the benchmark — which could take us from kaizen-land to fantasy-land and result in a critical pipeline shortage downstream.
Moreover because this approach shows not only a 13% overall efficiency improvement but precisely where you expect it to come from, the CEO can challenge sales and marketing leadership:
Why are we expecting to increase our ASP by $5K to $75K?
Why do you think we can improve the SQL-to-close rate from 18% to 20% — and what are you doing to drive that improvement?
What are we doing to improve the MQL-to-SAL conversion rate?
How are we going to improve our already excellent cost per MQL by $25?
In part II and part III of this post, we’ll discuss two ways of modeling phase-lag, modeling growth, and the separation of the new business and upsell funnels.
You can download my spreadsheet for this post, here.
 For marketing or virtually anything else.
 i.e., looking at either S&M aggregated or even marketing overall.
 The other two pillars of marketing are product marketing and communications. The high-level benchmarks can help you analyze spend on these two areas by subtracting your calculated demandgen budget from the total marketing budget suggested by a benchmark to see “what’s left” for the other two pillars. Caution: sometimes that result is negative!
 The astute reader will instantly see two problems: (a) phase-lag introduced by both the lead maturation (name to MQL) and sales (SQL to close) cycles and (b) growth. That is, in a normal high-growth startup, you need enough leads not to generate this quarter’s new ARR target but the target 3-4 quarters out, which is likely to be significantly larger. Assuming a steady-state situation gets rid of both these problems and simplifies the model. See part II and part III of this post for how I like to manage that added real-world complexity.
 Hint: if you’re not tracking these rates, the first good thing about this model is that it will force you to do so.
 When I say demandgen budget, I mean money spent on generating leads through marketing campaigns. Sometimes that’s very direct (e.g., adwords). Other times it’s a bit indirect (e.g., an SEO program). I do not include demandgen staff because I am trying to calculate the marginal cost of generating an extra MQL. That is, I’m not trying to calculate what the company spends, in total, on demandgen activities (which would include salary, benefits, stock-based comp, etc. for demandgen staff) but instead the marketing programs cost to generate a lead (e.g., in case we need to figure out how much to budget to generate 200 more of them).
 In an increasingly tech-heavy world where marketing needs to invest a lot in infrastructure as well, I have adapted the traditional 50/50 people/programs rule to a more modern 45/45/10 people/programs/infrastructure rule, or even an infrastructure-heavy split of 40/40/20.
 Better closing tools, an ROI calculator, or a new sales training program could all be valid explanations for assuming an improved close rate.
Slowly and steadily, over the past decade, the industry has evolved from a mentality of “all salesreps must do everything” – including some percent of their time prospecting — to one of specialization. We, with the help of books like Predictable Revenue, have collectively decided that in-bound lead processing is different from outbound lead prospecting is different from low-end, velocity sales is different from high-end, enterprise sales.
Despite the old-school, almost-character-building emphasis on prospecting, we have collectively realized that having our top hunters dialing for dollars and digging through inbound leads isn’t, well, the best use of their time.
Industrialization typically involves specialization and the industrialization of once purely artisanal software sales has been no exception. As part of this specialization the sales development representative (SDR) role has risen to prominence. In this post, we’ll do a quick review of what SDRs typically do and discuss the relative merits of having them report into sales vs. marketing.
“Everyone under 25 in San Francisco is an SDR.” – Anonymous startup CEO
SDRs Bridge the Two
SDRs typically form the bridge between sales and marketing. A typical SDR job is to take inbound leads from marketing, perform some basic BANT-style qualification on them, and then pass them to sales if indicated. While SDRs typically have activity quotas (e.g., 50 calls/day), they should be primarily measured on the number of opportunities they create per week. In enterprise software, that quota is typically 2-3 oppties/week.
As companies get bigger they tend to separate SDRs into two groups:
Inbound SDRs, those who only process in-bound leads
Outbound SDRs, those who primarily do targeted outreach over the phone or email
Being an SDR is a hard job.
Typical SDR challenges include:
Adhering to service-level agreements for all leads (i.e., touches with timeframes)
Contacting prospects in an increasingly spam-hostile, call-hostile environment
Figuring out which leads to work on the hardest (e.g., which merit homework to customize the message and which don’t)
Remembering that their job is to sell meetings and not product 
Supporting multiple salespeople with often conflicting priorities 
Managing the conflict between supporting salespeople and executing the process
Getting salespeople to show up at the hand-off meeting 
Avoiding burnout in a high-pressure environment
To Which Department Should SDRs Report: Sales or Marketing?
Historically, SDRs reported to sales. That’s probably because sales first decided to fund SDR teams as a way of getting inbound lead management out of the hands of salespeople. Doing so would:
Enable the company to consistently respond in a timely manner to all inquiries
Free up sales to spend more time on selling
Avoid the problem of individual reps not processing new leads once they are “full up” on opportunities
The problem is that most enterprise software sales VPs are not particularly process-oriented, because they grew up in a pre-industrialized era of sales. In fact, nothing drives me crazier than an old-school, artisanal, deal-person CRO insisting on owning the SDR organization despite a total inability to manage it. They rationalize: “Oh, I can hire someone process-oriented to manage it.” And I think: “but what can that person learn from you about how to manage it?” And the answer is nothing. Your desire to own it is either pure ego or simply a ploy to enrich your resume.
I’ll say again because it drives me crazy: do not be the VP of Sales who insists on owning the SDR organization in the annual planning meeting but then shows zero interest in it for the rest of the year. You’re not helping anyone!
As mentioned in a footnote in a prior post, I greatly prefer SDRs reporting to marketing versus sales. Why?
Marketing leadgen and nurture people are metrics- and process-oriented animals, naturally suited to manage a process-oriented department.
It provides a simple, clear conceptual model: marketing is the opportunity creation factory and sales is the opportunity closing machine.
In short, marketing’s job is to make opportunities. Sales’ job is to close them.
# # #
 BANT = budget, authority, need, time-frame.
 Most early- and mid-stage startups put SDRs in their regular sales training sessions, which I think does them a disservice. Normal sales training is about selling products/solutions. SDRs “sell” meetings. They should not attempt to build business value or differentiation. Training them to do so tempts them to try, even when it is not their job.
 A typical QCR:SDR ratio is 3-4:1, though I’ve seen as low as 1:1 and as high as 6:1.
 Believe it or not, this sometimes happens (typically when your reps are already carrying a lot of oppties). Few things reflect worse on the company than a last-minute rescheduling of the meet-your-salesperson call. You don’t get a second chance to make a first impression.
 Most early models had wide bypass rules – e.g., “leads with VP titles at this list of key accounts get passed directly to reps for qualification” – reflecting a lack of trust in marketing beyond dropping leaflets from airplanes.
 That problem could still exist at hand-off (i.e., opportunity creation) time but at least we have combed through the leads to find the good ones, and reports can easily identify overloaded reps.
 While they may be process-oriented when it comes to the sales process for a deal moving across stages during a quarter, that is not quite the same thing as a velocity mentality driven by daily or weekly goals with tracking metrics. If you will, there’s process-oriented and Process-Oriented.
 One simple test: if your sales org doesn’t have a monthly cadence (e.g., goals, forecasts), then your sales VP is probably not capital-P Process-Oriented.
 On the theory you should always build organizations where people can learn from their managers.
Most startups today use some variation on the now fairly standard terms SAL (sales accepted lead) and SQL (sales qualified lead). Below, see the classic lead funnel model from marketing bellwether Sirius Decisions that defines them.
One great thing about working as an independent board member and consultant is that you get to work with lots of companies. In doing this, I’ve noticed that while virtually everyone uses the terminology SQL and SAL, some people define SQL before SAL and others define SAL before SQL.
Why’s that? I think the terminology was poorly chosen and confusing. After all, which sounds like it comes first: sales accepting a lead or sales qualifying a lead? A lot of folks would say, “well, you need to accept it before you can qualify it.” But others would say, “you need to qualify it before you can accept it.” And therein lies the problem.
The correct answer, as seen above, is that SAL comes before SQL. I have a simple way of remembering this: A comes before Q in the alphabet, and SAL comes before SQL in the funnel. Until I came up with that I was perpetually confused.
More importantly, I think I also have a way of explaining it. Start by remembering two things:
This model was defined at a time when sales development reps (SDRs) generally reported to sales, not marketing.
This model was defined from the point of view of marketing.
Thus, sales accepting the lead didn’t mean a quota-carrying rep (QCR) accepted the lead – it meant an SDR, who works in the sales department, accepted the lead. So it’s sales accepting the lead in the sense that the sales department accepted it. Think: we, marketing, passed it to sales.
After the SDR worked on the lead, if they decided to pass it to a QCR, the QCR would do an initial qualification call, and then the QCR would decide whether to accept it. So it’s a sales qualified lead, in the sense that a salesperson has qualified it and decided to accept it as an opportunity.
Think: accepted by an SDR, qualified by a salesrep.
Personally, I prefer to avoid the semantic swamp and just say “stage 1 opportunity” and “stage 2 opportunity” to keep things simple and clear.
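For readers who like to pin such conventions down in code, the stage ordering can be captured in a few lines. This is a purely illustrative sketch; the stage names other than SAL and SQL are assumptions drawn from the classic demand waterfall, not definitions from this post:

```python
# Illustrative funnel ordering; only SAL-before-SQL is the point here.
FUNNEL = ["inquiry", "MQL", "SAL", "SQL", "win"]

def comes_before(a, b, funnel=FUNNEL):
    """True if stage `a` precedes stage `b` in the funnel."""
    return funnel.index(a) < funnel.index(b)

print(comes_before("SAL", "SQL"))  # SAL precedes SQL in the funnel
```

A comes before Q in the alphabet, and `"SAL"` comes before `"SQL"` in the list.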
# # #
 This model has since been replaced with a newer demand unit waterfall model that nevertheless still uses the term SQL but seems to abandon SAL.
 I greatly prefer SDRs reporting to marketing for two reasons: [a] unless you are running a pure velocity sales model, your sales leadership is more likely to be deal-people than process-people – and running the SDRs is a process-oriented job – and [b] it eliminates a potential crack in the funnel created by passing leads to sales “too early.” When SDRs report to marketing, you have a clean conceptual model: marketing is the opportunity creation factory and sales is the opportunity closing factory.
In my last post, I made the case that the simplest, most intuitive metric for understanding whether you have too much, too little, or just the right amount of pipeline is opportunities/salesrep, calculated for both the current-quarter and the all-quarters pipeline.
This post builds upon the prior one by examining potential (and usually inevitable) problems with pipeline distribution. If the problem uncovered by the first post was that “ARR hides weak opportunity count,” the problem uncovered by this post is that “averages hide uneven distributions.”
In reality, the pipeline is almost never evenly distributed:
Despite the salesops team’s best efforts to create equal territories at the start of the year, opportunities invariably end up unevenly distributed across them.
If you view marketing as dropping leads from airplanes, the odds that those leads fall evenly over your territories is zero. In some cases, marketing can control where leads land (e.g., a local CFO event in Chicago), but in most cases they cannot.
Tenured salesreps (who have had more time to develop their territories) usually have more opportunities than junior ones.
Warm territories tend to have more opportunities than cold ones.
High-activity salesreps tend to have more opportunities than their more average-activity counterparts.
The result is that even my favorite pipeline metric, opportunities/salesrep, can be misleading because it’s a mathematical average and a single average can be produced by very different distributions. So, much as I generally prefer tables of numbers to charts, here’s a case where we’re going to need a chart to get a look at the distribution.
Here’s an example:
Let’s say this company thinks its salesreps need 7 this-quarter and 16 all-quarters opportunities in order to be successful. The averages here, shown by the blue and orange dotted lines respectively, say they’re in great shape — the average this-quarter opportunities/salesrep is 7.1 and the average all-quarters is 16.6.
But behind that lies a terrible distribution: only 4 salesreps (reps 2, 7, 10, and 13) have more than 7 opportunities in the current quarter. The other 11 are all starving to various degrees with 5 reps having 4 or fewer opportunities.
The all-quarters pipeline is somewhat healthier. There are 8 reps above the target of 16, but nevertheless, certain reps are starving on both a this-quarter and all-quarters basis (reps 4, 11, 12, and 14) and have little chance at either short- or mid-term success.
Now that we’ve used this chart to highlight the problem, let’s examine three ways to solve it.
Generate more opportunities, ideally in a super-targeted way to help the starving reps without further burying the loaded reps. Sales loves to ask for this solution. In practice, it’s hard to execute and inherently phase-lagged.
Reduce the number of reps. If reps 4, 11, and 12 have been at the company for a long time and have continuously struggled to hit their numbers, we can “Lord of the Flies” them and reassign their opportunities to some of the surviving reps. The problem here is that you’re reducing sales quota capacity — it’s a potentially good short-term fix that hurts long-term growth.
Reallocate opportunities from loaded reps to starving reps. Sales management usually loathes this “Robin Hood” approach because there are few things more difficult than taking an opportunity from a salesrep. (Think: you can pry it from my cold dead fingers.) This is a real problem because reallocation is the best solution — there is no way that reps 7 and 13 can actively service all their opportunities, and the company is likely losing deals it could have won because of it.
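Purely to illustrate the mechanics of the “Robin Hood” option, here is a toy greedy sketch that shifts opportunity counts from loaded reps to starving ones. The rep names and counts are hypothetical, and counts are all it moves; a real reallocation must weigh deal stage, territory fit, account relationships, and rep buy-in:

```python
def rebalance(opps, target, cap):
    """Greedy sketch: move opportunity counts from reps above `cap`
    to reps below `target`. Illustrative only -- real reallocation
    involves far more than counts."""
    opps = dict(opps)  # don't mutate the caller's data
    donors = [r for r, n in opps.items() if n > cap]
    takers = [r for r, n in opps.items() if n < target]
    for taker in takers:
        for donor in donors:
            while opps[taker] < target and opps[donor] > cap:
                opps[donor] -= 1
                opps[taker] += 1
    return opps

# Hypothetical counts for the loaded (7, 13) and starving (4, 12) reps
print(rebalance({"rep4": 3, "rep7": 14, "rep12": 2, "rep13": 15},
                target=7, cap=10))
# → {'rep4': 7, 'rep7': 10, 'rep12': 7, 'rep13': 10}
```

Note the total opportunity count is preserved; the loaded reps are capped at a serviceable load while the starving reps reach the target.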
You can download the spreadsheet for this post, here.
# # #
 The distinction here is whether the territory has been continuously and actively covered (warm) vs. either totally uncovered or partially covered by another rep who did not actively manage it (cold).
 Yes, David C., if you’re reading this while doing a demo from the back seat of your car that someone else is driving on the NJ Turnpike, you are the archetype!
 It’s also a bad solution if they are proven salesreps simply caught in a pipeline crunch, perhaps after having had a blow-out result in the prior quarter.
 Other solutions include negotiating with the reps — e.g., “if you hand off these four opportunities, I’ll uplift the commissions twenty percent and you’ll split it with the salesrep I assign them to — 60% of something is a lot more than 100% of zero, which is what you’ll get if you can’t put enough time into the deal.”
 Better yet, in anticipation of the inevitable opportunity distribution problem, sales management can and should leave fallow (i.e., unmapped) territories, so they can do dynamic rebalancing as opportunities are created without enduring the painful “taking” of an opportunity from a salesrep who thinks they own it.
I’m Dave Kellogg, consultant, independent director, advisor, and blogger focused on enterprise software startups.
I bring a unique perspective to startup challenges having 10 years’ experience at each of the CEO, CMO, and independent director levels across 10+ companies ranging in size from zero to over $1B in revenues.
From 2012 to 2018, I was CEO of cloud enterprise performance management vendor Host Analytics, where we quintupled ARR while halving customer acquisition costs in a competitive market, ultimately selling the company in a private equity transaction.
Previously, I was SVP/GM of Service Cloud at Salesforce and CEO at NoSQL database provider MarkLogic, which we grew from zero to $80M in run-rate revenues during my tenure. Before that, I was CMO at Business Objects for nearly a decade as we grew from $30M to over $1B. I started my career in technical and product marketing positions at Ingres and Versant.
I love disruption, startups, and Silicon Valley and have had the pleasure of working in varied capacities with companies including Cyral, FloQast, Fortella, GainSight, Kelda, MongoDB, Plannuh, Recorded Future, and Tableau. I currently sit on the boards of Alation (data catalogs), Nuxeo (content management) and Profisee (master data management). I previously sat on the boards of agtech leader Granular (acquired by DuPont for $300M) and big data leader Aster Data (acquired by Teradata for $325M).
I periodically speak to strategy and entrepreneurship classes at the Haas School of Business (UC Berkeley) and Hautes Études Commerciales de Paris (HEC).