
How to Make and Use a Proper Sales Bookings Productivity and Quota Capacity Model

I’ve seen numerous startups try numerous ways to calculate their sales capacity.  Most are too back-of-the-envelope and too top-down for my taste.  Such models are, in my humble opinion, dangerous because the combination of relatively small errors in ramping, sales productivity, and sales turnover (with associated ramp resets) can result in a relatively big mistake in setting an operating plan.  Building off quota, instead of productivity, is another mistake for many reasons [1].

Thus, to me, everything needs to begin with a sales productivity model that is Einsteinian in the sense that it is as simple as possible but no simpler.

What does such a model need to take into account?

  • Sales productivity, measured in ARR/rep, and at steady state (i.e., after a rep is fully ramped).  This is not quota (what you ask them to sell), this is productivity (what you actually expect them to sell) and it should be based on historical reality, with perhaps incremental, well justified, annual improvement.
  • Rep hiring plans, measured by new hires per quarter, which should be realistic in terms of your ability to recruit and close new reps.
  • Rep ramping, typically a vector of the percentage of steady-state productivity expected in the rep’s first, second, third, and fourth quarters [2].  This should be based on historical data as well.
  • Rep turnover, the annual rate at which sales reps leave the company for either voluntary or involuntary reasons.
  • Judgment, the model should have the built-in ability to let the CEO and/or sales VP manually adjust the output and provide analytical support for so doing [3].
  • Quota over-assignment, the extent to which you assign more quota at the “street” level (i.e., sum of the reps) beyond the operating plan targets.
  • For extra credit and to help maintain organizational alignment — while you’re making a bookings model, with a little bit of extra math you can set pipeline goals for the company’s core pipeline generation sources [4], so I recommend doing so.

If your company is large or complex you will probably need to create an overall bookings model that aggregates models for the various pieces of your business.  For example, inside sales reps tend to have lower quotas and faster ramps than their external counterparts, so you’d want to make one model for inside sales, another for field sales, and then sum them together for the company model.

In this post, I’ll do two things:  I’ll walk you through what I view as a simple-yet-comprehensive productivity model and then I’ll show you two important and arguably clever ways in which to use it.

Walking Through the Model

Let’s take a quick walk through the model.  Cells in Excel “input” format (orange and blue) are either data or drivers that need to be entered; uncolored cells are either working calculations or outputs of the model.

You need to enter data into the model for 1Q20 (let’s pretend we’re making the model in December 2019) by entering what we expect to start the year with in terms of sales reps by tenure (column D).  The “first/hired quarter” row represents our hiring plans for the year.  The rest of this block is a waterfall that ages the reps downward as we move across quarters.  Next to that block is the ramp assumption, which expresses, as a percentage of steady-state productivity, how much we expect a rep to sell as their tenure with the company increases.  I’ve modeled a pretty slow ramp that takes five quarters to get to 100% productivity.

To the right of that we have more assumptions:

  • Annual turnover, the annual rate at which sales reps leave the company for any reason.  This drives attriting reps in row 12, which silently assumes that every departing rep was at steady state, a tacit and fairly conservative assumption in the model.
  • Steady-state productivity, how much we expect a rep to actually sell per year once they are fully ramped.
  • Quota over-assignment.  I believe it’s best to start with a productivity model and uplift it to generate quotas [5]. 

The next block down calculates ramped rep equivalents (RREs), a very handy concept that far too few organizations use: it converts a ramping sales force into the equivalent number of fully ramped reps.  The steady-state row shows the number of fully ramped reps, a row that board members and investors will frequently ask about, particularly if you’re not proactively showing them RREs.
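To make the RRE concept concrete, here’s a minimal sketch in Python of how the headcount waterfall, ramp vector, and steady-state productivity combine into RREs and quarterly productivity capacity.  The ramp vector, productivity, and turnover numbers below are illustrative assumptions, not the ones in the downloadable spreadsheet.

```python
# Minimal sketch of ramped rep equivalents (RREs) and productivity capacity.
# All numbers are illustrative assumptions; the real model lives in the spreadsheet.

RAMP = [0.0, 0.25, 0.50, 0.75, 1.00]    # % of steady state by quarter of tenure (5-quarter ramp)
STEADY_STATE_ARR = 800_000              # illustrative annual productivity of a fully ramped rep
ANNUAL_TURNOVER = 0.20                  # illustrative annual rep turnover

def rres(headcount_by_tenure):
    """headcount_by_tenure[i] = reps in their (i+1)th quarter; the last bucket is fully ramped."""
    return sum(n * RAMP[min(i, len(RAMP) - 1)] for i, n in enumerate(headcount_by_tenure))

def quarterly_productivity_capacity(headcount_by_tenure):
    # Quarterly capacity = RREs x (annual steady-state productivity / 4).
    return rres(headcount_by_tenure) * STEADY_STATE_ARR / 4

def age_one_quarter(headcount_by_tenure, new_hires):
    """Waterfall: hires enter the first bucket, everyone else ages one bucket, and
    turnover is (conservatively) taken entirely out of the fully ramped bucket."""
    aged = [new_hires] + headcount_by_tenure[:-2] + [headcount_by_tenure[-2] + headcount_by_tenure[-1]]
    aged[-1] = max(0, aged[-1] - round(sum(aged) * ANNUAL_TURNOVER / 4))
    return aged

start_of_quarter = [2, 1, 1, 1, 15]                        # 20 reps, 5 of them still ramping
print(rres(start_of_quarter))                              # 16.5 RREs
print(quarterly_productivity_capacity(start_of_quarter))   # 3300000.0 -- ~$3.3M of quarterly capacity
```

Re-running age_one_quarter with each quarter’s planned hires, then summing the four quarterly capacities, gives you the annual productivity capacity line before judgment and over-assignment.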

After that we calculate “productivity capacity,” which is a mouthful, but I want to disambiguate it from quota capacity, so it’s worth the extra syllables.  After that, I add a critical row called judgment, which allows the Sales VP or CEO to play with the model so that they’re not potentially signing up for targets that are straight model output, but instead also informed by their knowledge of the state of the deals and the pipeline.  Judgment can be negative (reducing targets), positive (increasing targets) or zero-sum where you have the same annual target but allocate it differently across quarters.

The section in italics, linearity and growth analysis, is there to help the Sales VP analyze the results of using the judgment row.  After changing targets, he/she can quickly see how the target is spread out across quarters and halves, and how any modifications affect both sequential and quarterly growth rates. I have spent many hours tweaking an operating plan using this part of the sheet, before presenting it to the board.

The next row shows quota capacity, which uplifts productivity capacity by the over-assignment percentage assumption higher up in the model.  This represents the minimum quota the Sales VP should assign at street level to have the assumed level of over-assignment.  Ideally this figure dovetails into a quota-assignment model.

Finally, while we’re at it, we’re only a few clicks away from generating the day-one pipeline coverage / contribution goals from our major pipeline sources: marketing, alliances, and outbound SDRs.  In this model, I start by assuming that sales or customer success managers (CSMs) generate the pipeline for upsell (i.e., sales to existing customers).  Therefore, when we’re looking at coverage, we really mean coverage of the newbiz ARR target (i.e., new ARR from new customers).  So, we first reduce the ARR goal by a percentage, then multiply it by the desired pipeline coverage ratio, and then allocate the result across the pipeline sources by presumably agreed-to percentages [6].
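Continuing the sketch, here’s that pipeline-goal math in the same spirit.  The upsell percentage, coverage ratio, and source mix below are illustrative assumptions you’d replace with your own agreed-to numbers.

```python
# Minimal sketch of day-one pipeline goals from a quarterly ARR target.
# The upsell %, coverage ratio, and source mix are illustrative assumptions.

def pipeline_goals(arr_target, upsell_pct=0.30, coverage=3.0, source_mix=None):
    """Upsell pipeline is assumed to come from sales/CSMs, so coverage applies
    only to the new-business (newbiz) portion of the ARR target."""
    source_mix = source_mix or {"marketing": 0.60, "alliances": 0.15, "outbound_sdr": 0.25}
    newbiz_target = arr_target * (1 - upsell_pct)
    required_pipeline = newbiz_target * coverage
    return {source: required_pipeline * share for source, share in source_mix.items()}

print(pipeline_goals(3_000_000))
# {'marketing': 3780000.0, 'alliances': 945000.0, 'outbound_sdr': 1575000.0}
```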

Building the next-level models to support pipeline generation goals is beyond the scope of this post, but I have a few relevant posts on the subject including this three-part series, here, here, and here.

Two Clever Ways to Use the Model

The sad reality is that this kind of model gets a lot of attention at the end of a fiscal year (while you’re making the plan for next year) and then typically gets thrown in the closet and ignored until it’s planning season again.

That’s too bad because this model can be used both as an evaluation tool and a predictive tool throughout the year.

Let’s show that via an all-too-common example.  Let’s say we start 2020 with a new VP of Sales whom we just hired in November 2019, with the hiring and performance targets in our original model (above) and with judgment set to zero, so the plan is equal to the capacity model.

Our “world-class” VP immediately proceeds to drive out a large number of salespeople.  While he hires 3 “all-star” reps during 1Q20, all 5 reps hired by his predecessor in the past 6 months leave the company along with, worse yet, two fully ramped reps.  Thus, instead of ending the quarter with 20 reps, we end with 12.  Worse still, the VP delivers new ARR of $2,000K vs. a target of $3,125K, 64% of plan.  Realizing she has a disaster on her hands, the CEO “fails fast” and fires the newly hired VP of Sales after 5 months.  She then appoints the RVP of Central, Joe, to acting VP of Sales on 4/2.  Joe proceeds to deliver 59%, 67%, and 75% of plan in 2Q20, 3Q20, and 4Q20.

Our question:  is Joe doing a good job?

At first blush, he appears more zero than hero:  59%, 67%, and 75% of plan is no way to go through life.

But we cannot reasonably answer this question by evaluating Joe relative to the original operating plan.  He was handed a demoralized organization that was about 60% of its target size on 4/2.  To evaluate Joe’s performance, we need to compare it not to the original operating plan, but to the capacity model re-run with the actual rep hiring and aging at the start of each quarter.

When you do this you see, for example, that while Joe is consistently underperforming plan, he is also consistently outperforming the capacity model, delivering 101%, 103%, and 109% of model capacity in 2Q through 4Q.
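To see the mechanics of that comparison, here’s a continuation of the earlier capacity sketch.  The headcounts and dollar figures below are invented to roughly reproduce the percentages in this story; the real numbers live in the spreadsheet.

```python
# Re-run the capacity model each quarter with the *actual* start-of-quarter headcount
# by tenure, then judge the sales leader against capacity rather than the original plan.
# All figures below are made up for illustration.

actual_headcount = {                       # hypothetical reps by tenure bucket at quarter start
    "2Q20": [3, 3, 0, 0, 6],
    "3Q20": [4, 3, 3, 0, 6],
    "4Q20": [5, 4, 3, 3, 6],
}
actual_bookings = {"2Q20": 1_360_000, "3Q20": 1_700_000, "4Q20": 2_350_000}
plan            = {"2Q20": 2_300_000, "3Q20": 2_550_000, "4Q20": 3_150_000}

for q, heads in actual_headcount.items():
    capacity = quarterly_productivity_capacity(heads)    # from the earlier sketch
    print(q, f"{actual_bookings[q] / plan[q]:.0%} of plan,",
          f"{actual_bookings[q] / capacity:.0%} of re-run capacity")
# 2Q20 59% of plan, 101% of re-run capacity
# 3Q20 67% of plan, 103% of re-run capacity
# 4Q20 75% of plan, 109% of re-run capacity
```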

If you looked at Joe the way most companies look at key metrics, he’d be fired.  But if you read this chart to the bottom you finally get the complete picture.  Joe is running a significantly smaller sales organization at above-model efficiency.  While Joe got handed an organization that was 8 heads under plan, he more than doubled the organization to 26 heads and consistently outperformed the capacity model.  Joe is a hero, not a zero.  But you’d never know it if you didn’t look at his performance relative to the actual sales capacity he was managing.

Second, the other clever way to use a capacity model is as a forecasting tool.  I have found that a good capacity model, re-run at the start of the quarter with then-current sales hiring/aging, is a very valuable predictive tool, often predicting the quarterly sales result better than my VP of Sales.  Along with rep-level, manager-level, and VP-level forecasts and stage-weighted and forecast-category-weighted expected pipeline values, you can use the re-run sales capacity model as a great tool to triangulate on the sales forecast.

You can download the four-tab spreadsheet model I built for this post, here.

# # #

Notes

[1] Starting with quota starts you in the wrong mental place — what you want people to do, as opposed to productivity (what they have historically done). Additionally, there are clear instances where quotas get assigned even though there is little to no corresponding productivity assumption (e.g., a second-quarter rep typically has zero productivity but will nevertheless be assigned some partial quota). Sales most certainly has a quota-allocation problem, but that should be a separate, second exercise after building a corporate sales productivity model on which to base the operating plan.

[2] A typical such vector might be (0%, 25%, 50%, 100%) or (0%, 33%, 66%, 100%), reflecting the percentage of steady-state productivity reps are expected to achieve in their first, second, third, and fourth quarters of employment.

[3] Without such a row, the plan is either de-linked from the model or is the pure output of the model without any human judgment attached. This row is typically used to re-balance the annual number across quarters and/or to either add or subtract cushion relative to the model.

[4] Back in the day at Salesforce, we called pipeline generation sources “horsemen,” I think (in a rather bad joke) because there were four of them (marketing, alliances, sales, and SDRs/outbound). That term was later dropped, probably both because of the apocalypse reference and its non-gender-neutrality. However, I’ve never known what to call them since, other than the rather sterile “pipeline sources.”

[5] Many salesops people do it the reverse way — I think because they see the problem as allocating quota whereas I see the problem as building an achievable operating plan. Starting with quota poses several problems, from the semantic (lopping 20% off quota is not 20% over-assignment, it’s actually 25% because over-assignment is relative to the smaller number; e.g., cutting a $125K quota by 20% yields $100K of productivity, and that $25K gap is 25% of $100K) to the mathematical (first-quarter reps get assigned quota but we can realistically expect a 0% yield) to the procedural (quotas should be custom-tailored based on the known state of the territory and this cannot really be built into a productivity model).

[6] One advantage of having those percentages here is that they are placed front-and-center in the company’s bookings model, which will force discussion and agreement. Otherwise, if not documented centrally, they will end up in different models across the organization with no real idea of whether they either foot to the bookings model or even sum to 100% across sources.

Ten Questions Founder CEOs Should Always Be Able to Answer About Their Startups

I’m working with more early-stage companies these days (e.g., pre-seed, seed, seed-plus [1]) and one of the things I’ve noticed is that many founders cannot clearly, succinctly, and confidently answer some basic questions about their businesses.  I decided to write this post to help entrepreneurs ensure they have their bases covered when speaking to angel investors, seed firms, or venture capitalists.

Note that Silicon Valley is the land of strong convictions, weakly held, so it’s better in most cases to be clear, confident, and wrong than it is to waffle, equivocate, and be right.  I often have to remind people of this — particularly founders recently out of PhD programs — because Sand Hill Road is about the dead opposite of graduate school when it comes to this philosophy [2].

Here are ten questions that early-stage founder/CEOs should be able to answer clearly, succinctly, and confidently — along with a few tips on how to best answer them.

1. Who is the target customer?  Be precise, ideally right down to a specific job title in an organization.  It’s great if the answer will broaden over time as the company grows and its strategy naturally expands, but up-front I’d name the people you are targeting today.  Wrong:  “The Office of the CIO in IT organizations in F5000 enterprises around the world.”  Right:  “VPs of financial planning and analysis in 250-1000 employee Services firms in North America.” 

I’m admittedly fanatical about this, but I want to know what it says on the target buyer’s business card [3].  I can’t tell you the number of times that I’ve heard “we sell to the CIO,” only to be introduced to someone whose business card said “director of data warehousing.”  If you don’t know who you’re selling to, you’re going to have trouble targeting them.

2. What problem do you solve for them?  When you meet one of these people, what do you tell them?  Right:  “We sell a solution that prevents spear phishing.”  Wrong:  “We sell a way to improve security culture at your organization” [4]. The latter answer is wrong because while an improvement in security culture may be a by-product of using your solution, it is not the primary benefit.

First-order benefit:  our solution stops spear phishing.  Second-order benefit:  that means you avoid data breaches and/or save millions in ransomware and other breach-related costs.  Third-order benefit:  that means you protect your company’s reputation and your valuable brand.  Fourth-order benefit:  using our solution ends up increasing security culture and awareness.  People generally go shopping for the first-order benefit — they may buy into higher-order benefits, they may say they like your company’s approach and/or vision — but budgets and shopping lists get made on the first-order.  Don’t be selling security culture when customers are buying anti-spear-phishing.

3. How do they solve that problem today?  The majority of startups solve a problem that is already being solved in some way today.  Be realistic about this. Unless you are solving a brand-new problem (e.g., orchestrating containers at the dawn of the container revolution), then somehow the problem is either being solved today (e.g., in Excel, a legacy app, a homegrown system) or the buyer has deliberately decided not to solve it, likely because they think it’s unsolvable (e.g., baldness cures [5]).

If they are already solving the problem in some way, your new solution more likely represents an optimization than a breakthrough.  And even breakthrough companies, such as VMware [6], solved very practical problems early on (e.g., providing multiple environments on a laptop without having to physically change hard drives). 

As another example: even if you’re using advanced machine learning technology to automate trouble ticket resolution (and, technically speaking, customers aren’t doing that today), they certainly are handling trouble tickets, and the alternative to automatic resolution is generally a combination of human work and case deflection.

4. Why is your solution superior to the status quo?  Once you can clearly describe how customers solve the problem today, then you should be able to clearly answer why your solution is superior to the status quo.  Note that I’m not asking how your technology works or why it’s superior — I’m asking why it provides a better solution for the customer. Sticking with the trouble ticket example:  “our solution is superior to human resolution because it’s faster (often by hours if not days), cuts ticket resolution cost by 90%, and results in greatly superior end-user satisfaction ratings.”  That’s a benefits-driven explanation of why it’s superior.

5. Why is your technology different from that offered by other suppliers? Marketers call this differentiation and it’s not really just about why your technology is different from alternatives, it’s about why it’s better. The important part here is not to deep dive into how the technology works. That’s not the question; the question is why your technology is better than the alternatives. The most common incorrect answer to this question is a long speech about how the technology works. (See this post for tips on how to build a feature, function, benefit marketing message.)

Example 1: traditional databases were built for and work well at storing structured data, but they have little or no capability for handling unstructured data. Unlike traditional databases, our technology is built using a hybrid of database and search engine technology and thus provides excellent capabilities for storing, indexing, and rapidly querying both structured and unstructured data.

Example 2: many planning systems require you to throw out the tool that most people use for planning today — Excel. Unlike those systems, our product integrates and leverages Excel as part of the solution; we use Excel formula language, Excel formatting conventions, and provide an Excel add-in interface that preserves and leverages your existing Excel knowledge. We don’t throw the baby out with the bathwater.

6. How many target customers have you spoken to — and what was their reaction to your presentation?  First, you means you, the founder/CEO.  It doesn’t mean your salesperson or co-founder.  The answer to the first part of the question is best measured in scores; investors want to know that you are in the market, talking with customers, and listening to their feedback.  They assume that you can sell the technology [7]; the strategic question for later is the transferability of that skill.  They also want to know how target customers react to your presentation and how many of them convert into trials or purchases.

7. Who’s using your product and why did they select it? It’s not hard to sell government labs and commercial advanced research divisions one of pretty much anything. It’s also not hard, in brand new categories, to sell your software to people who probably shouldn’t have purchased it — i.e., people who, not knowing all their options in the nascent market, picked the wrong one. And that’s not to mention the other customers you can get for the wrong reason — because a board member had a friend on the executive staff, because someone was a big donor, etc. Customers “buy” (and I use air quotes because sometimes these early “customers” didn’t pay anything at all) the wrong software all the time, particularly in the early days of a market.

So the question isn’t who downloaded or tried your product, the question is who’s using it — and when they selected it did they know all their options and still choose you? Put differently, the question is “who’s not an accidental customer” and why did that set of non-accidental customers pick you over the alternative? So don’t give a list of company brand names who may or may not be active users. Instead tell a few deep stories of active customers (who they could ask to call), why they picked the software, and how it’s benefiting them.

8.  What is the TAM for solving this problem?   There are a lot of great posts about how to build a total available market (TAM) analysis, so I won’t explain how to do it here. I will say you should have a model that calculates an answer and be able to explain the hopefully simple assumptions behind that model. While I’m sure every VC said in b-school that “getting 1% of a $10B market is a bad strategy,” when they got into the workplace something changed. They all love big TAMs [8]. Telling a VC you’re aiming for 50% of an $800M TAM will not get you very far. Your TAM better be in the billions if not the tens of them.

9.  Why are you and your team the best people to invest in? Most interesting ideas attract several startups so, odds are, you have fairly direct competitors pretty much from inception. And, particularly if you’re talking with a VC at a larger firm, they have probably researched every company in the nascent space and met most of them [9]. So the question here is: (of all the teams I’ve met in this space) why are you the folks who are going to win?

I’d expect most startups in your space have smart people with strong educations, with great backgrounds at the right companies. That’s become the table stakes. The real question is thus why is your team of smart, well educated, and appropriately experienced people better than the others [10]:

  • A lot of this is confidence: “of course, we’re the right folks, because we’re the ones who are going to win.” Some people feel like they’re doing a homework assignment while others feel like they’re building a winning company. Be the latter. We know the stakes, we know the second prize is a set of steak knives, and we are going to win or die trying. #swagger
  • Drivers vs. passengers. Big successful enterprise software companies have definitionally employed a lot of people. So if you’re doing a sales-related category it’s not hard to find companies full of ex-Siebel and ex-Salesforce people. The real question thus becomes: what did your people do at those prior companies? Were they drivers (who drove what) or were they passengers just along for the ride? If they drove, emphasize the amazing things they did, not just the brand names of where they worked.
  • Completeness. Some startups have relatively complete teams while others have only a CEO and CTO and a few functional directors. The best answer is a fairly complete team that’s worked together before. That takes a lot of hiring and on-boarding risk off the table. Think: give us money and we can start executing right away.
  • Prior exactly-relevant experience. Saying Mary was VP of ProductX Sales carrying a $500M number at BigCo is quite different from saying Mary just scaled sales at her last startup from $10M to $100M and is ready to do the exact same thing here. The smaller the gap between what people just did and what you’re asking them to do, the better.
  • Finally, and this is somewhat tongue in cheek, remember my concentric circles of fundraising from this post. How VCs see founders and entrepreneurs:

10.  If I give you money what are you going to do with it? The quantitative part of this answer should already be in the three-year financial model you’ve built, so don’t be afraid to reference that to remind people that your plan and financial model are aligned [11]. But then drill down and give the detail on where the money is planned to be spent. For extra credit, talk about milestone- or ARR-based spend triggers instead of dates. For example, say once we have 3 sales reps hitting their numbers we will go out and hire two more. The financial plan has that happening in July, but if July comes and we haven’t passed that milestone we won’t pull the trigger. Ditto for most hiring across the company. And ditto for marketing: e.g., we’ve got a big increase in programs budget in the second half of next year but we won’t release that money until we’re sure we’ve correctly identified the right marketing programs in which to invest.

It’s also very important that you demonstrate knowledge of a key truth of VC-backed startups: each round is about teeing up the next one. So the key goal of the Series A round should be to put the company in a position to successfully raise a Series B. And so on. Discuss the milestones you’re aiming to achieve that should support that tee-up process. And don’t forget the SaaStr napkin for getting a rough idea of what typical rounds look like by series.

Bonus: origin story. If I were to add one question it would be: tell me how you came to found your company? Or, using the more modern vernacular: tell me about your origin story? If yours is good and your founders are personable and videogenic, then I’d even make it into a short video, like the founders of Hashicorp did. You’re going to get asked this question a lot, so why not work on building the optimal answer and then videoing it.

# # #

Notes

[1] My, how things have changed.  The net result is that the new choke-point is series A (prediction 9).  Seed and angel money seems pretty easy to raise; A-rounds seem pretty hard — if you’ve already raised and spent $2M in seed capital then you should have something to show for it. 

[2] Most of the graduate student types I meet tend to be quite circumspect in their replies.  “Well, it could be this, but we don’t really know so it could be that.  Here are some arguments in favor of this and some against.”  In business, it’s better to be seen as decisive and take a clear stand.  As long as you are also perceived as open-minded and responsive to data, you can always change your mind later.  But you don’t want to be seen as a fence-sitter, endlessly equivocating and waiting for more data before making a decision.

[3] Or the more modern equivalent: an email footer or LinkedIn profile.

[4] Unless a company is shopping for training to improve security culture.  In which case, it’s a first-order benefit.

[5] Reminder that I have moral authority to talk about this :-). This type of problem is often called “latent pain” in sales, because it’s a pain the buyer is unaware they have because they don’t believe there is a solution. Ergo, they just get used to it. Thus, the first job of sales and marketing is to awaken the buyer to this latent pain.

[6] Yes, I know that virtual machines predate VMware considerably, particularly IBM’s VM/CMS operating system, so it wasn’t the creation of the virtual machine that I’d call a breakthrough, but using it to virtualize Microsoft and later Linux servers.

[7] If you can’t, it’s hard to assume that someone else will be able to.  Perhaps you’re not a natural-born seller, but if you were passionate enough about your idea to quit your job and found a company that should generally compensate.  Authenticity works.

[8] Most probably on the logic that they don’t want 1% of a $5B market, they want 40%. That is, they want both: big share and big TAM. And, if you mess up, there’s probably a safer landing net in the $5B market than the $500M one. Quoting the VC adage: great markets make great companies.

[9] This is the big difference between angels and funds. Angels typically meet one team with one idea, evaluate both and make a decision. Early-stage funds meet a company then research every company in the space and then pick a winner.

[10] I’m doing this in the abstract; it’s much easier in the concrete if you make a table and line up some key attributes of your team members vs. those of the competition. You use that table to come up with the arguments, but you don’t ever use that table externally with investors and others.

[11] I’m surprised how many folks dive into answering this question while completely ignoring the fact that they’ve likely already put a three-year financial model in front of the investor that provides the high-level allocation of spend. While it doesn’t seem to slow down some entrepreneurs, I think it far better to be a founder who refers to his plan a bit too much than a founder who acts as if the financial plan doesn’t even exist.

The Evolution of Software Marketing: Hey Marketing, Go Get [This]!

As loyal readers know, I’m a reductionist, always trying to find the shortest, simplest way of saying things even if some degree of precision gets lost in the process and even if things end up more subtle than they initially appear.

For example, my marketing mission statement of “makes sales easier” is sometimes misinterpreted as relegating marketing to a purely tactical role, when it actually encompasses far more than that.  Yes, marketing can make sales easier through tactical means like lead generation and sales support, but marketing can also make sales easier through more leveraged means such as competitive analysis and sales enablement, or even more leveraged means such as influencer relations and solutions development, or the most leveraged means of picking which markets the company competes in and (with product management) designing products to be easily salable within them.

“Make sales easier” does not just mean lead generation and tactical sales support.

So, in this reductionist spirit, I thought I’d do a historical review of the evolution of enterprise software marketing by looking at its top objective during the thirty-odd years (or should I say thirty odd years) of my career, cast through a fill-in-the-blank lens of, “Hey Marketing, go get [this].”

Hey Marketing, Go Get Leads

In the old days, leads were the focus.  They were tracked on paper and the goal was as big a pile as possible.  These were the days of tradeshow models and free beer:  do anything to get people to come by the booth – regardless of whether they have any interest in or ability to buy the software.  Students, consultants, who cares?  Run their card and throw them in the pile.  We’ll celebrate the depth of the pile at the end of the show.

Hey Marketing, Go Get Qualified Leads

Then somebody figured out that all those students, consultants, self-employed people, and people who worked at companies way outside the company’s target customer size range couldn’t actually buy our software.  So the focus changed to getting qualified leads.  Qualified at first basically meant not unqualified:

  • It couldn’t be garbage, illegible, or duplicate
  • It couldn’t be the self-employed, students, or consultants
  • It couldn’t be other people who clearly can’t buy the software (e.g., in the wrong country, at too small a company, in a non-applicable industry)

Then people realized that not all not-unqualified leads were the same. 

Enter lead scoring.  The first systems were manual and arbitrarily defined:  e.g., let’s give 10 points for target companies, 10 points for a VP title, and 15 points if they checked buying-within-6-months on the lead form.  Later systems got considerably more sophisticated, adding both firmographic and behavioral criteria (e.g., downloaded the Evaluation Guide).  They’d even have decay functions where downloading a white paper got you 10 points, but you’d lose a point every week thereafter if you had no further activity.
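For the curious, here’s a toy version of that kind of scoring in Python.  The point values, activities, and decay rule are illustrative, not a recommendation of how to score leads.

```python
# Toy lead score: firmographic points plus behavioral points that decay weekly.
# Point values and the decay rule are illustrative only.

from datetime import date

FIRMOGRAPHIC_POINTS = {"target_company": 10, "vp_title": 10, "buying_in_6_months": 15}
BEHAVIORAL_POINTS = {"white_paper": 10, "evaluation_guide": 20, "webinar": 5}

def lead_score(firmographics, activities, as_of=None):
    """firmographics: set of flags; activities: list of (activity, date) tuples.
    Each behavioral score decays by one point per week since the activity."""
    as_of = as_of or date.today()
    score = sum(FIRMOGRAPHIC_POINTS.get(flag, 0) for flag in firmographics)
    for activity, when in activities:
        weeks_old = (as_of - when).days // 7
        score += max(0, BEHAVIORAL_POINTS.get(activity, 0) - weeks_old)
    return score

# A VP at a target company who downloaded the evaluation guide three weeks ago
print(lead_score({"target_company", "vp_title"},
                 [("evaluation_guide", date(2020, 1, 1))],
                 as_of=date(2020, 1, 22)))   # 10 + 10 + (20 - 3) = 37
```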

The problem was, of course, that no one ever did any regressions to see if A leads actually were more likely to close than B leads and so on.  At one company I ran, our single largest customer was initially scored a D lead because the contact downloaded a white paper using his Yahoo email address.  Given such stories and a general lack of faith in the scoring system, operationally nobody ever treated an A lead differently from a D lead – they’d all get “6×6’ed” (6 emails and 6 calls) anyway by the sales development reps (SDRs).  If the score didn’t differentiate the likelihood of closing and the SDR process was score-invariant, what good was scoring? The answer: not much.

Hey Marketing, Go Get Pipeline

Since it was seemingly too hard to figure out what a qualified lead was, the emphasis shifted.  Instead of “go get leads” it became, “go get pipeline.”  After all, regardless of score, the only leads we care about are those that turn into pipeline.  So, go get that.

Marketing shifted emphasis from leads to pipeline as salesforce automation (SFA) systems were increasingly in place that made pipeline easier to track.  The problem was that nobody put really good gates on what it took to get into the pipeline.  Worse yet, incentives backfired as SDRs, who were at the time almost always mapped directly to quota-carrying reps (QCRs), were paid bonuses when leads were accepted as opportunities.  “Heck,” thinks the QCR, “I’ll scratch my SDR’s back in order to make sure he/she keeps scratching mine:  I’ll accept a bunch of unqualified opportunities, my SDR will get paid a $200 bonus on each, and in a few months I’ll just mark them no decision.  No harm, no foul.”  Except the pipeline ends up full of junk and the 3x self-fulfilling pipeline coverage prophecy develops.  Unless you have 3x coverage, your sales manager will beat you up, so go get 3x coverage regardless of whether it’s real or not.  So QCRs stuff bad opportunities into the pipeline, which in turn converts at a lower rate, which in turn increases the coverage goal – i.e., “heck, we’re only converting pipeline at 25%, so now we need 4x coverage!”  And so on.

At one point in my career I actually met a company with 100x pipeline coverage and 1% conversion rates. 

Hey Marketing, Go Get Qualified Opportunities (SQLs)

Enter the sales qualified lead (SQL). Companies realize they need to put real emphasis on someone, somewhere in the process, defining what’s real and what’s not.  That someone ends up being the QCR, and it’s now their job to qualify opportunities as they are passed over and only accept those that both look real and meet documented criteria.  Management is now focused on SQLs.  SQL-based metrics, such as cost-per-SQL or SQL-to-close-rate, are created and benchmarked.  QCRs can no longer just accept everything and no-decision it later and, in fact, there’s less incentive to do so anyway as SDRs are no longer basically working for the QCRs, but instead for “the process,” and they’re increasingly reporting into marketing to boot.  Yes, SDRs will be paid on SQLs accepted by sales, but sales is going to be held highly accountable for what happens to the SQLs they accept.

Hey Marketing, Go Get Qualified Opportunities Efficiently

At this point we’ve got marketing focused on SQL generation and we’ve built a metrics-driven inbound SDR team to process all leads. We’ve eliminated the cracks between sales and marketing and, if we’re good, we’ve got metrics and reporting in place such that we can easily see if leads or opportunities are getting stuck in the pipeline. Operationally, we’re tight.

But are we efficient? This is also the era of SaaS metrics and companies are increasingly focused not just on growth, but growth efficiency.  Customer acquisition cost (CAC) becomes a key industry metric which puts pressure on both sales and marketing to improve efficiency.  Sales responds by staffing up sales enablement and sales productivity functions. Marketing responds with attribution as a way to try and measure the relative effectiveness of different campaigns.

Until now, campaign efficiency tended to be measured on a last-touch attribution basis. So when marketers tried to calculate the effectiveness of various marketing campaigns, they’d get a list of closed deals and allocate the resultant sales to campaigns by looking at the last thing someone did before buying. The predictable result: down-funnel campaigns and tools got all of the credit and up-funnel campaigns (e.g., advertising) got none.

People pretty quickly realized this was a flawed way to look at things so, happily, marketers didn’t shoot the propellers off their marketing planes by immediately stopping all top-of-funnel activity. Instead, they kept trying to find better means of attribution.

Attribution systems, like Bizible, came along which tried to capture the full richness of enterprise sales. That meant modeling many different contacts over a long period of time interacting with the company via various mechanisms and campaigns. In some ways attribution became like search: it wasn’t whether you got the one right answer, it was whether search engine A helped you find relevant documents better than search engine B. Right was kind of out of the question. I feel the same way about attribution. Some folks feel it doesn’t work at all. My instinct is that there is no “right” answer, but with a good attribution system you can do better at assessing relative campaign efficiency than you can with the alternatives (e.g., first- or last-touch attribution).
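To illustrate why the choice of attribution model matters, here’s a toy comparison of last-touch versus linear (equal-credit) attribution for a single closed deal.  The touches and deal size are made up; real systems like Bizible model far more than this.

```python
# Toy comparison of last-touch vs. linear (equal-credit) attribution for one closed deal.
from collections import defaultdict

def last_touch(touches, amount):
    # All credit goes to the final touch before the purchase.
    return {touches[-1]: amount}

def linear(touches, amount):
    # Credit is split equally across every touch in the journey.
    credit = defaultdict(float)
    for campaign in touches:
        credit[campaign] += amount / len(touches)
    return dict(credit)

touches = ["advertising", "webinar", "white_paper", "demo_request"]   # chronological order
print(last_touch(touches, 100_000))   # {'demo_request': 100000} -- up-funnel work gets zero credit
print(linear(touches, 100_000))       # every campaign gets 25000.0
```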

After all, it’s called the marketing mix for a reason.

Hey Marketing, Go Get Qualified Opportunities That Close

After the quixotic dalliance with campaign efficiency, sales got marketing focused back on what mattered most to them. Sales knew that while the bar for becoming a SQL was now standardized, not all SQLs that cleared it were created equal. Some SQLs closed bigger, faster, and at higher rates than others. So, hey marketing, figure out which ones those are and go get more like them.

Thus was born the ideal customer profile (ICP). In seed-stage startups the ICP is something the founders imagine — based on the product and target market they have in mind, here’s who we should sell to. In growth-stage startups, say $10M in ARR and up, it’s no longer about vision, it’s about math.

Companies in this size range should have enough data to be able to say “who are our most successful customers” and “what do they have in common.” This involves doing a regression between various attributes of customers (e.g., vertical industry, size, number of employees, related systems, contract size, …) and some success criteria. I’d note that choosing the success criteria to regress against is harder than meets the eye: when we say we want to find prospects most like our successful customers, how are we defining success?

  • Where we closed a big deal? (But what if it came at really high cost?)
  • Where we closed a deal quickly? (But what if they never implemented?)
  • Where they implemented successfully? (But what if they didn’t renew?)
  • Where they renewed once? (But what if they didn’t renew again because of an uncontrollable factor such as being acquired?)
  • Where they gave us a high NPS score? (But what if, despite that, they didn’t renew?)

The Devil really is in the detail here. I’ll dig deeper into this and other ICP-related issues one day in a subsequent post. Meantime, TOPO has some great posts that you can read.
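To make the regression idea concrete, here’s a minimal sketch assuming scikit-learn and a hypothetical customer export; the column names and the choice of “renewed” as the success label are illustrative, and picking that label is exactly the hard part discussed above.

```python
# Minimal ICP regression sketch: which customer attributes predict "success"?
# Assumes pandas/scikit-learn and a hypothetical customers.csv export.

import pandas as pd
from sklearn.linear_model import LogisticRegression

customers = pd.read_csv("customers.csv")          # hypothetical export of the customer base

# Choose the success definition deliberately -- "renewed at least once" in this example.
y = customers["renewed"]
X = pd.get_dummies(customers[["vertical", "employee_band", "region", "related_system"]])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Attributes with the largest positive weights describe the ideal customer profile.
weights = pd.Series(model.coef_[0], index=X.columns).sort_values(ascending=False)
print(weights.head(10))
```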

Once you determine what an ideal customer looks like, you can then build a target list of them and enter into the world of account-based marketing (ABM).

Hey Marketing, Go Get Opportunities that Turn into Customers Who Renew

While sales may be focused simply on opportunities that close bigger and faster than the rest, what the company actually wants is happy customers (to spread positive word of mouth) who renew. Sales is typically compensated on new orders, but the company builds value by building its ARR base. A $100M ARR company with a CAC ratio of 1.5 and churn rate of 20% needs to spend $30M on sales and marketing just to refill the $20M lost to churn. (I love to multiply dollar-churn by the CAC ratio to figure out the real cost of churn.)

What the company wants is customers who don’t churn, i.e., those that have a high lifetime value (LTV). So marketing should orient its ICP (i.e., define success in terms of) not just likelihood to {close, close big, close fast} but around likelihood to renew, and potentially not just once. Defining different success criteria may well produce a different ICP.

Hey Marketing, Go Get Opportunities that Turn into Customers Who Expand

In the end, the company doesn’t just want customers who renew, even if for a long time. To really build the value of the ARR base, the company wants customers who (1) are won relatively easily (win rate) and relatively quickly (average sales cycle), (2) not only renew multiple times, but (3) expand their contracts over time.

Enter net dollar expansion rate (NDER), the metric that is quickly replacing churn and LTV, particularly with public SaaS companies. In my upcoming SaaStr 2020 talk, Churn is Dead, Long Live Net Dollar Expansion Rate, I’ll go into why this is happening and why companies should increasingly focus on this metric when it comes to thinking about the long-term value of their ARR base.
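For readers who haven’t seen the metric, NDER is commonly computed as the ARR you have today from the cohort of customers you had a year ago, divided by that cohort’s ARR a year ago, so expansion, downgrades, and churn all show up in one number while new logos are excluded.  A minimal sketch, with made-up customers:

```python
# Minimal sketch of net dollar expansion rate (NDER) for a customer cohort.
# New logos are excluded; expansion, downgrades, and churn are all included.

def nder(arr_year_ago, arr_today):
    """Both dicts map customer -> ARR; arr_year_ago defines the cohort."""
    cohort = arr_year_ago.keys()
    then = sum(arr_year_ago.values())
    now = sum(arr_today.get(c, 0) for c in cohort)   # churned customers contribute 0
    return now / then

year_ago = {"acme": 100_000, "globex": 50_000, "initech": 50_000}
today    = {"acme": 150_000, "globex": 70_000}       # initech churned; the others expanded
print(f"{nder(year_ago, today):.0%}")                # 110%
```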

In reality, the ultimate ICP is built around customers who meet the three above criteria: we can sell them fairly easily, they renew, and they expand. That’s what marketing needs to go get!

Kellblog's 10 Predictions for 2020

As I’ve been doing every year since 2014, I thought I’d take some time to write some predictions for 2020, but not without first doing a review of my predictions for 2019.  Lest you take any of these too seriously, I suggest you look at my batting average and disclaimers.

Kellblog 2019 Predictions Review

1.  Fred Wilson is right, Trump will not be president at the end of 2019.  PARTIAL.  He did get impeached after all, but that’s a long way from removed or resigned. 

2.  The Democratic Party will continue to bungle the playing of its relatively simple hand.  HIT.  This is obviously subjective and while I think they got some things right (e.g., delaying impeachment), they got others quite wrong (e.g., Mueller Report messaging), and continue to play more left than center which I believe is a mistake.

3.  2019 will be a rough year for the financial markets.  MISS.  The Dow was up 22% and the NASDAQ was up 35%.  Financially, maybe the only thing that didn’t work in 2019 was over-hyped IPOs.  Note to self:  avoid quantitative predictions if you don’t want to risk ending up very wrong.  I am a big believer in regression to the mean, but nailing timing is the critical (and virtually impossible) part.  Nevertheless, I do use tables like these to try and eyeball situations where it seems a correction is needed.  Take your own crack at it.

4.  VC tightens.  MISS.  Instead of tightening, VC financing hit a new record.  The interesting question here is whether mean reversion is relevant.  I’d argue it’s not – the markets have changed structurally such that companies are staying private far longer and thus living off venture capital (and/or growth-stage private equity) in ways not previously seen.  Mark Suster did a great presentation on this, Is VC Still a Thing, where he explains these and other changes in VC.  A must read.

5. Social media companies get regulated.  PARTIAL.  While “history may tell us the social media regulation is inevitable,” it didn’t happen in 2019.  However, the movement continued to gather steam with many Democratic presidential candidates calling for reform and, more notably, none other than Facebook investor Roger McNamee launching his attack on social media via his book Zucked: Waking Up To The Facebook Catastrophe.  As McNamee says, “it’s an issue of ‘right vs. wrong,’ not ‘right vs. left.’”


6. Ethics make a comeback.  HIT.  Ethics have certainly been more discussed than ever and related to the two reasons I cited:  the current administration and artificial intelligence.  The former forces ethics into the spotlight on a daily basis; the latter provokes a slew of interesting questions, from questions of accidental bias to the trolley car problem.  Business schools continue to increase emphasis on ethics.  Marc Benioff has led a personal crusade calling for what he calls a new capitalism.

7.  Blockchain, as an enterprise technology, fades away.  HIT.  While I hate to find myself on the other side of Ray Wang, I’m personally not seeing much traction for blockchain in the enterprise.  Maybe I’m running with the wrong crowd.  I have always felt that blockchain was designed for one purpose (to support cybercurrency), hijacked to another, and ergo became a vendor-led technology in search of a business problem.  McKinsey has written a sort of pre-obituary, Blockchain’s Occam Problem, which was McKinsey Quarterly’s second most-read article of the year.  The 2019 Blockchain Opportunity Summit’s theme was Is Blockchain Dead?  No. Industry Experts Join Together to Share How We Might Not be Using it Right, which also seems to support my argument.

8.  Oracle enters decline phase and is increasingly seen as a legacy vendor.  HIT.  Again, this is highly subjective and some people probably concluded it years ago.  My favorite support point comes from a recent financial analyst note:  “we believe Oracle can sustain ~2% constant currency revenue growth, but we are dubious that Oracle can improve revenue growth rates.”  That pretty much says it all.

9.  ServiceNow and/or Splunk get acquired.  MISS.  While they’re both great businesses and attractive targets, they are both so expensive only a few could make the move – and no one did.  Today, Splunk is worth $24B and ServiceNow a whopping $55B.

10.  Workday succeeds with its Adaptive Insights agenda.  HIT.  Changing general ledgers is a heart transplant while changing planning systems is a knee replacement.  By acquiring Adaptive, Workday gave itself another option – and a far easier entry point – to get into corporate finance departments.  While most everyone I knew scratched their head at the enterprise-focused Workday acquiring a more SMB-focused Adaptive, Workday has done a good job simultaneously leaving Adaptive alone enough to not disturb its core business while working to get the technology more enterprise-ready for its customers.  Whether that continues I don’t know, but for the first 18 months at least, they haven’t blown it.  This remains highly visible at Workday, as evidenced by former Adaptive CEO (and now Workday EVP of Planning) Tom Bogan’s continued attendance on Workday’s quarterly earnings calls.

With the dubious distinction of having charitably self-scored a 6.0 on my 2019 predictions, let’s fearlessly roll out some new predictions for 2020.

Kellblog 2020 Predictions

1.  Ongoing social unrest. The increasingly likely trial in the Senate will be highly contentious, only to be followed by an election that will be highly contentious as well.  Beyond that, one can’t help but wonder if a defeated Trump would even concede, which could lead to a Constitutional Crisis of the next level. Add to all that the possibility of a war with Iran.  Frankly, I am amazed that the Washington, DC continuous distraction machine hasn’t yet materially damaged the economy.  Like many in Silicon Valley, I’d like Washington to quietly go do its job and let the rest of us get back to doing ours.  The reality TV show in Washington is getting old and, happily, I think many folks are starting to lose interest and want to change the channel.

2.  A desire for re-unification.  I remain fundamentally optimistic that your average American – Republican, Democrat, or the completely under-discussed 38% who are Independents — wants to feel part of a unified, not a divided, America.  While politicians often try to leverage the most divisive issues to turn people into single-issue voters, the reality is that far more things unite us as Americans than divide us.  Per this recent Economist/YouGov wide-ranging poll, your average American looks a lot more balanced and reasonable than our political party leaders.  I believe the country is tired of division, wants unification, and will therefore elect someone who will be seen as able to bring people together.  We are stronger together.

3.  Climate change becomes the new moonshot.  NASA’s space missions didn’t just get us to the moon; they produced over 2,000 spin-off technologies that improve our lives every day – from emergency “space” blankets to scratch-resistant lenses to Teflon-coated fabrics.  Instead of seeing climate change as a hopeless threat, I believe in 2020 we will start to reframe it as the great opportunity it presents.  When we mobilize our best and brightest against a problem, we will not only solve it, but we will create scores to hundreds of spin-off technologies that will benefit our everyday lives in the process.  See this article for information on 10 startups fighting climate change, this infographic for an overview of the kinds of technologies that could alleviate it, or this article for a less sanguine view on the commitment required and extent to which we actually can de-carbonize the air. Or check out this startup which makes “trees” that consume the pollution of 275 regular trees.

4.  The strategic chief data officer (CDO).  I’m not a huge believer in throwing an “O” at every problem that comes along, but the CDO role is steadily becoming mainstream – in 2012 just 12% of F1000 companies reported having a CDO; in 2018 that’s up to 68%.  While some of that growth was driven by defensive motivations (e.g., compliance), increasingly I believe that organizations will define the CDO more strategically, more broadly, and holistically as someone who focuses on data, its cleanliness, where to find it, where it came from, its compliance with regulations as to its usage, its value, and how to leverage it for operational and strategic advantage.   These issues are thorny, technical, and often detail-oriented and the CIO is simply too busy with broader concerns (e.g., digital transformation, security, disruption).  Ergo, we need a new generation of chief data officers who want to play both offense and defense, focused not just tactically on compliance and documentation, but strategically on analytics and the creation of business value for the enterprise. This is not a role for the meek; only half of CDOs succeed and their average tenure is 2.4 years.  A recent Gartner CDO study suggests that those who are successful take a more strategic orientation, invest in a more hands-on model of supporting data and analytics, and measure the business value of their work.

5.  The ongoing rise of DevOps.   Just as agile broke down barriers between product management and development, so has DevOps broken down walls between development and operations.  The cloud has driven DevOps to become one of the hottest areas of software in recent years with big public company successes (e.g., Atlassian, Splunk), major M&A (e.g., Microsoft acquiring GitHub), and private high-flyers (e.g., HashiCorp, Puppet, CloudBees).  A plethora of tools, from configuration management to testing to automation to integration to deployment to multi-cloud to performance monitoring, are required to do DevOps well.  All this should make for a $24B DevOps TAM by 2023 per a recent Cowen & Company report.  Ironically though, each step forward in deployment is often a step backward in developer experience, which is one reason why I decided to work with Kelda in 2019.

6. Database proliferation slows.  While 2014 Turing Award winner Mike Stonebraker was right over a decade ago when he argued in favor of database specialization (One Size Fits All:  An Idea Whose Time Has Come and Gone), I think we may now have too much of a good thing.   DB Engines now lists 350 different database systems of 14 different types (e.g., relational, graph, time series, key-value). Crunchbase lists 274 database (and database-related) startups.  I believe the database market is headed for consolidation.  One of the first big indicators of a resurgence in database sanity was the failure of the (Hadoop-based) data lake, which happened in 2018-2019 and was the closest thing I’ve seen to déjà vu in my professional career – it was as if we learned nothing from the Field of Dreams enterprise data warehouse of the 1990s (“build it and they will come”).  Moreover, after a decade of developer-led database selection, developers are now re-realizing what database people knew all along – that a lot of the early NoSQL movement was akin to throwing out the ACID transaction baby with the tabular schema bathwater.

7.  A new, data-layer approach to data loss prevention (DLP).  I always thought DLP was a great idea, especially the P for prevention.  After all, who wants tools that can help with forensics after a breach if you could prevent one from happening at all — or at least limit one in progress?  But DLP doesn’t seem to work:  why is it that data breaches always seem to be measured not in rows, but in millions of rows?  For example, Equifax was 143M and Marriott was 500M.  DLP has many known limitations.  It’s perimeter-oriented in a hybrid cloud world of dissolving perimeters and it’s generally offline, scanning file systems and database logs to find “misplaced data.”  Wouldn’t a better approach be to have real-time security monitored and enforced at the data layer, just the same way as it works at the network and application layer?  Then you could use machine learning to understand normal behavior, detect anomalous behavior, and either report it — or stop it — in real time.  I think we’ll see such approaches come to market in 2020, especially as cloud services like Snowflake, RDS, and BigQuery become increasingly critical components of the data layer.
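As a toy illustration of the idea (not a description of any existing product), the sketch below learns each user’s normal query footprint from history and flags reads that fall far outside it; a real data-layer control would model many more signals and act in-line rather than just print.

```python
# Toy anomaly check at the data layer: learn each user's normal rows-returned-per-query
# and flag reads far outside that baseline. Purely illustrative; real systems would use
# richer features (tables touched, time of day, destination, etc.) and act in real time.

from statistics import mean, stdev

def build_baselines(history):
    """history: {user: [rows returned per query during normal activity, ...]}"""
    return {user: (mean(rows), stdev(rows)) for user, rows in history.items()}

def is_anomalous(user, rows_returned, baselines, sigmas=4):
    mu, sd = baselines.get(user, (0.0, 1.0))
    return rows_returned > mu + sigmas * max(sd, 1.0)

history = {"analyst_1": [120, 300, 250, 90, 410, 180]}
baselines = build_baselines(history)

print(is_anomalous("analyst_1", 500, baselines))         # False -- within the normal range
print(is_anomalous("analyst_1", 2_000_000, baselines))   # True  -- looks like bulk exfiltration
```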

8. AI/ML continue to see success in highly focused applications.  I remain skeptical of vendors with broad claims around “enterprise AI” and remain highly supportive of vendors applying AI/ML to specific problems (e.g., Moveworks and Astound, who both provide AI/ML-based trouble-ticket resolution).  In the end, AI and ML are features, not apps, and while both technologies can be used to build smart applications, they are not applications unto themselves.  In terms of specificity, the No Free Lunch Theorem reminds us that any two optimization techniques perform equivalently when averaged across all possible problems – meaning that no one modeling technique can solve everything and thus that AI/ML is going to be about lots of companies applying different techniques to different problems.   Think of AI/ML more as a toolbox than a platform.  There will not be one big winner in enterprise AI as there was in enterprise applications or databases.  Instead, there will be lots of winners, each tackling specific problems.  The more interesting battles will be those between systems of intelligence (e.g., Moveworks) and systems of record (e.g., ServiceNow), with the systems-of-intelligence vendors running Trojan Horse strategies against systems-of-record vendors (first complementing but eventually replacing them) while the system-of-record vendors try to either build or acquire systems of intelligence alongside their current offerings.

9.  Series A rounds remain hard.  I think many founders are surprised by the difficulty of raising A rounds these days.  Here’s the problem in a nutshell:

  • Seed capital is readily available via pre-seed and seed-stage investments from angel investors, traditional early-stage VCs, and increasingly, seed funds.  Simply put, it’s not that hard to raise seed money.
  • Companies are staying in the seed stage longer (a median of 1.6 years), increasingly extending seed rounds, and ergo raising more money during seed stage (e.g., $2M to $4M).
  • Such that companies are now expected to really have achieved something in order to raise a Series A.  After all, if you have been working for 2 years and spent $3M, you had better have an MVP product, a handful of early customers, and some ARR to show for it – not just a slide deck talking about a great opportunity.

Moreover, you should be making progress roughly in line with what you said at the outset and, if you took seed capital from a traditional VC, then they better be prepared to lead your round otherwise you will face signaling risk that could imperil your Series A.

Simply put, Series A is the new chokepoint.  Or, as Suster likes to say, the Series A and B funnel hasn’t really changed – we’ve just inserted a new seed funnel atop it that is 3 times larger than it used to be.

10.  Autonomy’s former CEO gets extradited.  Silicon Valley is generally not a place of long memories, but I saw the unusual news last month that the US government is trying to extradite Autonomy founder and former CEO Mike Lynch from the UK to face charges.  You might recall that HP, in the brief era under Leo Apotheker, acquired enterprise search vendor Autonomy in August, 2011 for a whopping $11B only to write off about $8.8B under subsequent CEO Meg Whitman a little more than a year later in November, 2012.  Computerworld provides a timeline of the saga here, including a subsequent PR war, US Department of Justice probe, UK Serious Fraud Office investigation (later dropped), shareholder lawsuits, proposed settlements, more lawsuits including Lynch’s suing HP for $150M for reputation damages, and HP’s spinning-off the Autonomy assets.  Subsequent to Computerworld’s timeline, this past May Autonomy’s former CFO was sentenced to five years in prison.  This past March, the US added criminal charges of securities fraud, wire fraud, and conspiracy against Lynch.  Lynch continues to deny all wrongdoing, blames the failed acquisition on HP, and even maintains a website to present his point of view on the issues.  I don’t have any special legal knowledge or specific knowledge of this case, but I do believe that if the US government is still fighting this case, still adding charges, and now seeking extradition, that they aren’t going to give up lightly, so my hunch is that Lynch does come to the US and face these charges. 

More broadly, regardless of how this particular case works out, in a place so prone to excess, where so much money can be made so quickly, frauds will periodically happen – and fraud is probably the most under-reported class of story in Silicon Valley.  Even this potentially huge headline case – the proposed extradition of a British billionaire tech mogul – never seems to make page-one news.  Hey, let’s talk about something positive like Loft’s $175M Series C instead.

To finish this up, I’ll add a bonus prediction:  Dave doesn’t get a traditional job in 2020.  While I continue to look at CEO opportunities at VC-backed startups and PE-backed companies, I am quite enjoying my current mix of board seats, advisory relationships, and consulting gigs.  I remain interested in great CEO opportunities, but I am also interested in adding a few more boards to my roster, taking on stimulating consulting projects, and building a few more advisory relationships.

I wish everyone a happy, healthy, and above-plan 2020.

Why I'm Advising Kelda

A few months ago I signed up to be an advisor to Kelda, and I thought I’d do a quick post to talk about the company and why I decided to do so.

What is Kelda?

Kelda provides developer sandboxes in a customer’s cloud within their Kubernetes cluster. Why does this matter?

  • The world is moving to cloud computing at a rapid pace.
  • Cloud computing is moving away from virtual machines as the unit of abstraction and towards containers, microservices, and serverless architectures.
  • The exact technologies that make microservices powerful in production environments have made the development experience worse.

In short, nobody was thinking much about developers when they started migrating to these new architectures.

Think for a minute about being a developer building a microservices-based application. Then think about testing it. Your code has dependencies on scores or hundreds of microservices, which in turn have dependencies on other microservices. Any or all of these microservices are themselves changing over time. How are you supposed to find a stable test-bed on which to test your code?

Unlike production environments, which are run by DevOps teams on sophisticated CI/CD platforms, development environments are often primitive. Tools for collecting dependencies are not robust. Developers often have to test on their own laptops, running all the required microservices locally, which elongates test cycles because of slow performance. Moreover, debugging is potentially complicated by non-deterministic interactions among microservices.

Kelda solves all that by effectively spinning up a private, stable, server-based Kubernetes cluster where developers can test their code. If that sounds pretty practical, well, it is. If that sounds pedestrian, remember that one of VMware’s top early use-cases was … stable test environments for QA teams across different versions of operating systems, middleware, and databases. Pragmatic solutions often generalize way beyond their initial landing point.
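To make the idea a bit more concrete: this is not Kelda’s API (their white paper covers the real mechanics), just a hypothetical sketch of the underlying pattern – giving each developer an isolated slice of a shared cluster – using the standard Kubernetes Python client:

```python
# Hypothetical sketch only -- not Kelda's API -- of per-developer isolation
# in a shared Kubernetes cluster, using the official kubernetes Python client.
from kubernetes import client, config

def create_dev_sandbox(developer: str) -> str:
    """Create a per-developer namespace in a shared dev cluster; the app's
    microservices get deployed there instead of onto the developer's laptop."""
    config.load_kube_config()   # authenticate with the developer's local kubeconfig
    ns_name = f"dev-{developer}"
    ns = client.V1Namespace(metadata=client.V1ObjectMeta(name=ns_name))
    client.CoreV1Api().create_namespace(ns)
    return ns_name

if __name__ == "__main__":
    print(create_dev_sandbox("alice"))   # => dev-alice
```

The point isn’t the namespace plumbing; it’s that the heavy lifting – running scores of microservices against stable dependencies – moves off the laptop and into the cluster.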

For more technical information on Kelda, here’s a link where you can download their white paper. And here’s an excerpt that sums things up quite nicely:

Why Did I Sign Up to Advise Kelda?

There are always many reasons behind such a decision, so in no particular order:

  • The awesome founder, Ethan Jackson, who put his Berkeley computer science PhD on the back burner in order to create the company. I like that this isn’t his first corporate rodeo (he worked at Nicira –> VMware for five years). I also like the burn-the-ships level of commitment.
  • The practical logic behind the product idea. Remember the famous William Gibson quote: “the future is already here — it’s just not very evenly distributed.” When you’re working at the cutting edge, the next step looks kind of obvious. So while this looks very high-tech to me, it looks pretty obvious to Ethan and, in my humble opinion, a lot of people have been very successful doing the next pretty-obvious thing (e.g., from PeopleSoft building apps atop Oracle to NetSuite taking financials to the cloud to Palo Alto Networks doing application-based firewalls).
  • The trends driving the company. Kelda is dead center of the movement to containers and microservices-based architectures in the cloud. The technology elite can use all these technologies today. Kelda makes them more accessible to the typical corporate development shop.

Should SDRs Report to Sales or Marketing?

Slowly and steadily, over the past decade, the industry has evolved from a mentality of “all salesreps must do everything” – including spending some percentage of their time prospecting – to one of specialization.  We, with the help of books like Predictable Revenue, have collectively decided that in-bound lead processing is different from outbound lead prospecting is different from low-end, velocity sales is different from high-end, enterprise sales.

Despite the old-school, almost-character-building emphasis on prospecting, we have collectively realized that having our top hunters dialing for dollars and digging through inbound leads isn’t, well, the best use of their time.

Industrialization typically involves specialization, and the industrialization of once purely artisanal software sales has been no exception.  As part of this specialization, the sales development representative (SDR) role has risen to prominence.  In this post, we’ll do a quick review of what SDRs typically do and discuss the relative merits of having them report into sales vs. marketing.

“Everyone under 25 in San Francisco is an SDR.” – Anonymous startup CEO

SDRs Bridge the Two Departments

SDRs typically form the bridge between sales and marketing.  A typical SDR job is to take inbound leads from marketing, perform some basic BANT-style [1] qualification on them, and then pass them to sales if indicated. While SDRs typically have activity quotas (e.g., 50 calls/day), they should be primarily measured on the number of opportunities they create per week. In enterprise software, that quota is typically 2-3 oppties/week.
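As a quick, back-of-the-envelope illustration of how that measurement rolls up (only the 2-3 oppties/week figure comes from the text above – the team size, attainment, and selling weeks are hypothetical inputs):

```python
# Hypothetical SDR capacity math; only the 2-3 oppties/week quota is from the post.
sdr_count = 4                    # assumed team size
oppty_quota_per_week = 2.5       # midpoint of the 2-3 oppties/week range
attainment = 0.8                 # assume actuals land somewhat below quota
selling_weeks_per_quarter = 12   # assume ~a week lost to holidays/PTO

expected_opptys = sdr_count * oppty_quota_per_week * attainment * selling_weeks_per_quarter
print(f"Expected SDR-sourced opportunities per quarter: {expected_opptys:.0f}")   # ~96
```

Running that math in reverse is a handy sanity check, too: if the pipeline model needs more opportunities than the SDR team can plausibly create, something has to give.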

As companies get bigger they tend to separate SDRs into two groups:

  • Inbound SDRs, those who only process in-bound leads, and
  • Outbound SDRs, those who primarily do targeted outreach over the phone or email

Being an SDR is a hard job.  Typical SDR challenges include:

  • Adhering to service-level agreements for all leads (i.e., touches with timeframes)
  • Contacting prospects in an increasingly spam-hostile, call-hostile environment
  • Figuring out which leads to work on the hardest (e.g., which merit homework to customize the message and which don’t)
  • Remembering that their job is to sell meetings and not product [2]
  • Supporting multiple salespeople with often conflicting priorities [3]
  • Managing the conflict between supporting salespeople and executing the process
  • Getting salespeople to show up at the hand-off meeting [4]
  • Avoiding burnout in a high-pressure environment

To Which Department Should SDRs Report:  Sales or Marketing?

Historically, SDRs reported to sales.  That’s probably because sales first decided to fund SDR teams as a way of getting inbound lead management out of the hands of salespeople [5].  Doing so would:

  • Enable the company to consistently respond in a timely manner to all inquiries
  • Free up sales to spend more time on selling
  • Avoid the problem of individual reps not processing new leads once they are “full up” on opportunities [6]

The problem is that most enterprise software sales VPs are not particularly process-oriented [7], because they grew up in a pre-industrialized era of sales [8].  In fact, nothing drives me crazier than an old-school, artisanal, deal-person CRO insisting on owning the SDR organization despite the total inability to manage it.  They rationalize:  “Oh, I can hire someone process-oriented to manage it.”  And I think:  “but what can that person learn from you [9] about how to manage it?”  And the answer is nothing.  Your desire to own it is either pure ego or simply a ploy to enrich your resume.

I’ll say it again because it drives me crazy:  do not be the VP of Sales who insists on owning the SDR organization in the annual planning meeting but then shows zero interest in it for the rest of the year.  You’re not helping anyone!

As mentioned in a footnote in a prior post, I greatly prefer SDRs reporting to marketing versus sales.  Why?

  • Marketing leadgen and nurture people are metrics- and process-oriented animals, naturally suited to manage a process-oriented department.
  • It provides a simple, clear conceptual model:  marketing is the opportunity creation factory and sales is the opportunity closing machine.

In short, marketing’s job is to make opportunities.  Sales’ job is to close them.

# # #

Notes

[1] BANT = budget, authority, need, time-frame.

[2] Most early- and mid-stage startups put SDRs in their regular sales training sessions, which I think does them a disservice.  Normal sales training is about selling products/solutions.  SDRs “sell” meetings.  They should not attempt to build business value or differentiation; training them to do so tempts them to try – even when it is not their job.

[3] A typical QCR:SDR ratio is 3-4:1, though I’ve seen as low as 1:1 and as high as 6:1

[4] Believe it or not, this sometimes happens (typically when your reps are already carrying a lot of oppties).  Few things reflect worse on the company than a last-minute rescheduling of the meet-your-salesperson call. You don’t get a second chance to make a first impression.

[5] Although most early models had wide bypass rules – e.g., “leads with VP title at this list of key accounts will get passed directly to reps for qualification” – reflecting a lack of trust in marketing beyond dropping leaflets from airplanes.

[6] That problem could still exist at hand-off (i.e., opportunity creation) time but at least we have combed through the leads to find the good ones, and reports can easily identify overloaded reps.

[7] While they may be process-oriented when it comes to the sales process for a deal moving across stages during a quarter, that is not quite the same thing as a velocity mentality driven by daily or weekly goals with tracking metrics.  If you will, there’s process-oriented and Process-Oriented.

[8] One simple test:  if your sales org doesn’t have monthly cadence (e.g., goals, forecasts) then your sales VP is probably not capital P process-oriented.

[9] On the theory you should always build organizations where people can learn from their managers.

A Historical Perspective on Why SAL and SQL Appear to be Defined Backwards

Most startups today use some variation on the now fairly standard terms SAL (sales accepted lead) and SQL (sales qualified lead).  Below see the classic [1] lead funnel model from marketing bellwether Sirius Decisions that defines this.

One great thing about working as an independent board member and consultant is that you get to work with lots of companies. In doing so, I’ve noticed that while virtually everyone uses the terminology SQL and SAL, some people define SQL before SAL and others define SAL before SQL.

Why’s that?  I think the terminology was poorly chosen and is confusing.  After all, what sounds like it comes first:  sales accepting a lead or sales qualifying a lead?  A lot of folks would say, “well you need to accept it before you can qualify it.”  But others would say “you need to qualify it before you can accept it.”  And therein lies the problem.

The correct answer, as seen above, is that SAL comes before SQL.  I have a simple way of remembering this:  A comes before Q in the alphabet, and SAL comes before SQL in the funnel. Until I came up with that I was perpetually confused.

More importantly, I think I also have a way of explaining it.  Start by remembering two things:

  • This model was defined at a time when sales development reps (SDRs) generally reported to sales, not marketing [2].
  • This model was defined from the point of view of marketing.

Thus, sales accepting the lead didn’t mean a quota-carrying rep (QCR) accepted the lead – it meant an SDR, who works in the sales department, accepted the lead.  So it’s sales accepting the lead in the sense that the sales department accepted it.  Think: we, marketing, passed it to sales.

After the SDR worked on the lead, if they decided to pass it to a QCR, the QCR would do an initial qualification call, and then the QCR would decide whether to accept it.  So it’s a sales qualified lead, in the sense that a salesperson has qualified it and decided to accept it as an opportunity.

Think: accepted by an SDR, qualified by a salesrep.
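If you want to make the ordering impossible to argue about, you can even encode it. Here’s a toy sketch (the surrounding stage names reflect the classic waterfall as I recall it – only the SAL-before-SQL ordering matters here):

```python
# Toy sketch: encode the funnel order so SAL-vs-SQL debates end quickly.
# SAL = accepted by an SDR; SQL = qualified (and accepted) by a salesrep.
FUNNEL_STAGES = ["Inquiry", "MQL", "SAL", "SQL", "Closed-Won"]

def comes_before(a: str, b: str) -> bool:
    """True if stage a precedes stage b in the funnel."""
    return FUNNEL_STAGES.index(a) < FUNNEL_STAGES.index(b)

assert comes_before("SAL", "SQL")   # A before Q in the alphabet, SAL before SQL in the funnel
```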

Personally, I prefer to avoid the semantic swamp and just say “stage 1 opportunity” and “stage 2 opportunity” in order to keep things simple and clear.

# # #

Notes

[1] This model has since been replaced with a newer demand unit waterfall model that nevertheless still uses the term SQL but seems to abandon SAL.

[2] I greatly prefer SDRs reporting to marketing for two reasons:  [a] unless you are running a pure velocity sales model, your sales leadership is more likely to be deal-people than process-people – and running the SDRs is a process-oriented job – and [b] it eliminates a potential crack in the funnel created by passing leads to sales “too early”.  When SDRs report to marketing, you have a clean conceptual model:  marketing is the opportunity creation factory and sales is the opportunity closing factory.