
Why Every Startup Needs an Inverted Demand Generation Funnel, Part III

In part I of this three-part series I introduced the idea of an inverted funnel whereby marketing can derive a required demand generation budget using the sales target and historical conversion rates.  In order to focus on the funnel itself, I made the simplifying assumption that the company’s new ARR target was constant each quarter. 

In part II, I made things more realistic both by quarterizing the model (with increasing quarterly targets) and accounting for the phase lag between opportunity generation and closing that’s more commonly known as “the sales cycle.”  We modeled that phase lag using the average sales cycle length.  For example, if your average sales cycle is 90 days, then opportunities generated in 1Q19 will be modeled  as closing in 2Q19 [1].

There are two things I dislike about this approach:

  • Using the average sales cycle loses information contained in the underlying distribution.  While deals on average may close in 90 days, some deals close in 30 while others may close in 180. 
  • Focusing only on the average often leads marketing to a sense of helplessness. I can’t count the number of times I have heard, “well, it’s week 2 and the pipeline’s light but with a 90-day sales cycle there is nothing we can do to help.”  That’s wrong.  Some deals close more quickly than others (e.g., upsell), so what can we do to find more of them, fast? [2]

As a reminder, time-based close rates come from doing a cohort analysis where we take opportunities created in a given quarter and then track not only what percentage of them eventually close, but when they close, by quarter after their creation. 

This allows us to calculate average close rates for opportunities in different periods (e.g., in-quarter, in 2 quarters, or cumulative within 3 quarters) as well as an overall (in this case, six-quarter) close rate, i.e., the cumulative sum.  In this example, you can see an overall close rate of 18.7%, meaning that, on average, within 6 quarters we close 18.7% of the opportunities that sales accepts.  This is well within what I consider the standard range of 15 to 22%.
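As a sketch, the cohort arithmetic looks like this. The cohort size and close timings below are hypothetical, chosen only to land on the 18.7% overall rate mentioned above:

```python
# Time-based close rates via cohort analysis (hypothetical data).
# For a cohort of opportunities created in one quarter, track what
# fraction closed 0, 1, 2, ... quarters after creation, then cumulate.

cohort_size = 1000
closed_by_age = [30, 55, 40, 28, 20, 14]  # closes in quarters 0..5 after creation

close_rate_by_age = [c / cohort_size for c in closed_by_age]

# Cumulative close rate within k+1 quarters of creation
cumulative = []
running = 0.0
for rate in close_rate_by_age:
    running += rate
    cumulative.append(running)

overall = cumulative[-1]  # the six-quarter close rate
print(f"in-quarter: {close_rate_by_age[0]:.1%}, overall: {overall:.1%}")
```

The in-quarter rate is what a marketer scrambling in week 2 cares about; the cumulative row is what the planner cares about.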

Previously, I argued this technique can be quite useful for forecasting; it can also be quite useful in planning.  At the risk of over-engineering, let’s use the concept of time-based close rates  to build an inverted funnel for our 2020 marketing demand generation plan.

To walk through the model, we start with our sales targets and average sales price (ASP) assumptions in order to calculate how many closed opportunities we will need per quarter.  We then drop to the opportunity sourcing section where we use historical opportunity generation and historical time-based close rates to estimate how many closed opportunities we can expect from the existing (and aging) pipeline that we have already generated.  Then we can plug our opportunity generation targets from our demand generation plan into the model (i.e., the orange cells).  The model then calculates a surplus or (gap) between the number of closed opportunities we need and those the model predicts. 

I didn’t do it in the spreadsheet, but to turn that opportunity creation gap into ARR dollars just multiply by the ASP.  For example, in 2Q20 this model says we are 1.1 opportunities short, and thus we’d forecast coming in $137.5K (1.1 * $125K) short of the new ARR plan number.  This helps you figure out if you have the right opportunity generation plan, not just overall, but with respect to timing and historical close rates.
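The pipeline-sourcing arithmetic can be sketched as follows. The cohort counts, close rates, and needed-closes figure here are illustrative stand-ins, not the spreadsheet's values; only the $125K ASP comes from the example above:

```python
# Expected closed opportunities in a target quarter from aging pipeline
# cohorts, using time-based close rates (illustrative numbers).

ASP = 125_000  # average sales price from the post's 2Q20 example

# close_rate_by_age[n] = fraction of a cohort that closes n quarters
# after its creation quarter
close_rate_by_age = [0.030, 0.055, 0.040, 0.028, 0.020, 0.014]

# Opportunities created in the six quarters up to and including the
# target quarter, oldest cohort first (hypothetical counts)
opps_created = [180, 190, 200, 210, 220, 230]

# A cohort created k quarters ago contributes via the age-k close rate
expected_closes = sum(
    created * close_rate_by_age[age]
    for age, created in zip(range(5, -1, -1), opps_created)
)

needed_closes = 45.0  # hypothetical: target new ARR divided by ASP
gap = needed_closes - expected_closes
print(f"expected closes: {expected_closes:.1f}, gap: {gap:.1f}")
print(f"ARR at risk: ${gap * ASP:,.0f}")
```

A positive gap times the ASP is the forecast new ARR shortfall, exactly as in the 1.1-opportunity example above.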

When you discover a gap there are lots of ways to fix it.  For example, in the above model, while we are generating enough opportunities in the early part of the year to largely achieve those targets, we are not generating enough opportunities to support the big uptick in 4Q20.  The model shows us coming in 10.8 opportunities short in 4Q20 – i.e., anticipating a new ARR shortfall of more than $1.3M.  That’s not good enough.  In order to achieve the 4Q20 target we are going to need to generate more opportunities earlier in the year.

I played with the drivers above to do just that, generating an extra 275 opportunities across the year and creating surpluses in 1Q20 and 3Q20 that more than offset the small gaps in 2Q20 and 4Q20.  If everything happened exactly according to the model, we’d get ahead of plan in 1Q20 and 3Q20 and then fall back to it in 2Q20 and 4Q20 – though, in reality, the company would likely backlog deals in some way [3] if it found itself ahead of plan nearing the end of one quarter with a slightly light pipeline the next. 

In concluding this three-part series, I should be clear that while I often refer to “the funnel” as if it’s the only one in the company, most companies don’t have just one inverted funnel.  The VP of Americas marketing will be building and managing one funnel that may look quite different from the VP of EMEA marketing’s.  Within the Americas, the VP may need to break sales into two funnels:  one for inside/corporate sales (with faster cycles and smaller ASPs) and one for field sales (with slower sales cycles, higher ASPs, and often higher close rates).  In large companies, General Managers of product lines (e.g., the Service Cloud GM at Salesforce) will need to manage their own product-specific inverted funnels that cut across geographies and channels. There’s a funnel for every key sales target in a company, and someone needs to manage each of them.

You can download the spreadsheet used in this post, here.

Notes

[1] Most would argue there are two phase lags: the one from new lead to opportunity and the one from opportunity (SQL) creation to close. The latter is the sales cycle.

[2] As another example, inside sales deals tend to close faster than field sales deals.

[3] Doing this could range from taking (e.g., co-signing) the deal one day late to, if policy allows, refusing to accept the order to, if policy enables, taking payment terms that require pushing the deal one quarter back.  The only thing you don’t want to do is have the customer fail to sign the contract, because you never know if your sponsor quits (or gets fired) on the first day of the next quarter.  If a deal is on the table, take it.  Work with sales and finance management to figure out how to book it.

The Evolution of Software Marketing: Hey Marketing, Go Get [This]!

As loyal readers know, I’m a reductionist, always trying to find the shortest, simplest way of saying things even if some degree of precision gets lost in the process and even if things end up more subtle than they initially appear.

For example, my marketing mission statement of “makes sales easier” is sometimes misinterpreted as relegating marketing to a purely tactical role, when it actually encompasses far more than that.  Yes, marketing can make sales easier through tactical means like lead generation and sales support.  But marketing can also make sales easier through more leveraged means such as competitive analysis and sales enablement; through even more leveraged means such as influencer relations and solutions development; or through the most leveraged means of all:  picking which markets the company competes in and (with product management) designing products to be easily salable within them.

“Make sales easier” does not just mean lead generation and tactical sales support.

So, in this reductionist spirit, I thought I’d do a historical review of the evolution of enterprise software marketing by looking at its top objective during the thirty-odd years (or should I say thirty odd years) of my career, cast through a fill-in-the-blank lens of, “Hey Marketing, go get [this].”

Hey Marketing, Go Get Leads

In the old days, leads were the focus.  They were tracked on paper and the goal was as big a pile as possible.  These were the days of tradeshow models and free beer:  do anything to get people to come by the booth – regardless of whether they had any interest in, or ability to buy, the software.  Students, consultants, who cares?  Run their card and throw them in the pile.  We’ll celebrate the depth of the pile at the end of the show.

Hey Marketing, Go Get Qualified Leads

Then somebody figured out that all those students, consultants, self-employed people, and people at companies way outside the target customer size range couldn’t actually buy our software.  So the focus changed to getting qualified leads.  Qualified, at first, basically meant not unqualified:

  • The lead couldn’t be garbage, illegible, or a duplicate
  • The contact couldn’t be self-employed, a student, or a consultant
  • The contact couldn’t be someone who clearly can’t buy the software (e.g., in the wrong country, at too small a company, in a non-applicable industry)

Then people realized that not all not-unqualified leads were the same. 

Enter lead scoring.  The first systems were manual and arbitrarily defined:  e.g., let’s give 10 points for target companies, 10 points for a VP title, and 15 points if they checked buying-within-6-months on the lead form.  Later systems got considerably more sophisticated, adding both firmographic and behavioral criteria (e.g., downloaded the Evaluation Guide).  They’d even have decay functions where downloading a white paper got you 10 points, but you’d lose a point for every subsequent week with no further activity. 
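A minimal sketch of that kind of manual scoring model, with static firmographic points plus a behavioral score that decays with inactivity. The point values and decay rule are illustrative, in the spirit of the examples above, not a standard:

```python
# Manual lead scoring with a decay function (illustrative point values).

def score_lead(is_target_company, is_vp, downloaded_whitepaper,
               weeks_inactive):
    score = 0
    if is_target_company:
        score += 10  # firmographic: company fits the target profile
    if is_vp:
        score += 10  # firmographic: VP title
    if downloaded_whitepaper:
        # behavioral: 10 points, minus 1 per week of inactivity, floored at 0
        score += max(0, 10 - weeks_inactive)
    return score

print(score_lead(True, True, True, weeks_inactive=3))  # 10 + 10 + 7 = 27
```

The arbitrariness is the point: without a regression against actual close rates, the cutoffs between A, B, C, and D leads are guesses.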

The problem was, of course, that no one ever did any regressions to see if A leads actually were more likely to close than B leads and so on.  At one company I ran, our single largest customer was initially scored a D lead because the contact downloaded a white paper using his Yahoo email address.  Given such stories and a general lack of faith in the scoring system, operationally nobody ever treated an A lead differently from a D lead – they’d all get “6×6’ed” (6 emails and 6 calls) anyway by the sales development reps (SDRs).  If the score didn’t differentiate the likelihood of closing and the SDR process was score-invariant, what good was scoring? The answer: not much.

Hey Marketing, Go Get Pipeline

Since it was seemingly too hard to figure out what a qualified lead was, the emphasis shifted.  Instead of “go get leads” it became, “go get pipeline.”  After all, regardless of score, the only leads we care about are those that turn into pipeline.  So, go get that.

Marketing shifted emphasis from leads to pipeline as salesforce automation (SFA) systems, which made pipeline easier to track, were increasingly in place.  The problem was that nobody put really good gates on what it took to get into the pipeline.  Worse yet, incentives backfired as SDRs, who at the time were almost always mapped directly to quota-carrying reps (QCRs), were paid incentives when leads were accepted as opportunities.  “Heck,” thinks the QCR, “I’ll scratch my SDR’s back in order to make sure he/she keeps scratching mine:  I’ll accept a bunch of unqualified opportunities, my SDR will get paid a $200 bonus on each, and in a few months I’ll just mark them no decision.  No harm, no foul.”  Except the pipeline ends up full of junk and the 3x self-fulfilling pipeline coverage prophecy develops.  Unless you have 3x coverage, your sales manager will beat you up, so go get 3x coverage regardless of whether it’s real or not.  So QCRs stuff bad opportunities into the pipeline, which in turn converts at a lower rate, which in turn increases the coverage goal – i.e., “heck, we’re only converting pipeline at 25%, so now we need 4x coverage!”  And so on.
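The spiral in numbers: the coverage a sales manager demands is just the reciprocal of the pipeline conversion rate, so stuffing junk into the pipeline lowers conversion and raises the coverage bar, which invites more junk:

```python
# Required pipeline coverage as a function of pipeline conversion rate.

def required_coverage(conversion_rate):
    # To close 1x the target, you need 1/conversion_rate in pipeline
    return 1 / conversion_rate

print(required_coverage(0.33))  # roughly the classic 3x coverage
print(required_coverage(0.25))  # junk drags conversion down: now 4x
print(required_coverage(0.01))  # the 100x pathology
```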

At one point in my career I actually met a company with 100x pipeline coverage and 1% conversion rates. 

Hey Marketing, Go Get Qualified Opportunities (SQLs)

Enter the sales qualified lead (SQL). Companies realize they need to put real emphasis on someone, somewhere in the process, defining what’s real and what’s not.  That someone ends up being the QCR, and it’s now their job to qualify opportunities as they are passed over and accept only those that both look real and meet documented criteria.  Management is now focused on SQLs.  SQL-based metrics, such as cost-per-SQL or SQL-to-close rate, are created and benchmarked.  QCRs can no longer just accept everything and no-decision it later and, in fact, there’s less incentive to do so anyway as SDRs are no longer basically working for the QCRs, but instead for “the process” – and they’re increasingly reporting into marketing to boot.  Yes, SDRs will be paid on SQLs accepted by sales, but sales is going to be held highly accountable for what happens to the SQLs they accept. 

Hey Marketing, Go Get Qualified Opportunities Efficiently

At this point we’ve got marketing focused on SQL generation and we’ve built a metrics-driven inbound SDR team to process all leads. We’ve eliminated the cracks between sales and marketing and, if we’re good, we’ve got metrics and reporting in place such that we can easily see if leads or opportunities are getting stuck in the pipeline. Operationally, we’re tight.

But are we efficient? This is also the era of SaaS metrics and companies are increasingly focused not just on growth, but growth efficiency.  Customer acquisition cost (CAC) becomes a key industry metric which puts pressure on both sales and marketing to improve efficiency.  Sales responds by staffing up sales enablement and sales productivity functions. Marketing responds with attribution as a way to try and measure the relative effectiveness of different campaigns.

Until now, campaign efficiency tended to be measured on a last-touch attribution basis. So when marketers tried to calculate the effectiveness of various marketing campaigns, they’d get a list of closed deals and allocate the resultant sales to campaigns by looking at the last thing someone did before buying. The predictable result: down-funnel campaigns and tools got all of the credit and up-funnel campaigns (e.g., advertising) got none.

People pretty quickly realized this was a flawed way to look at things so, happily, marketers didn’t shoot the propellers off their marketing planes by immediately stopping all top-of-funnel activity. Instead, they kept trying to find better means of attribution.

Attribution systems, like Bizible, came along that tried to capture the full richness of enterprise sales. That meant modeling many different contacts over a long period of time interacting with the company via various mechanisms and campaigns. In some ways attribution became like search: it wasn’t whether you got the one right answer, it was whether search engine A helped you find relevant documents better than search engine B. Right was kind of out of the question. I feel the same way about attribution. Some folks feel it doesn’t work at all. My instinct is that there is no “right” answer, but with a good attribution system you can do better at assessing relative campaign efficiency than you can with the alternatives (e.g., first- or last-touch attribution).

After all, it’s called the marketing mix for a reason.

Hey Marketing, Go Get Qualified Opportunities That Close

After the quixotic dalliance with campaign efficiency, sales got marketing focused back on what mattered most to them. Sales knew that while the bar for becoming a SQL was now standardized, not all SQLs that cleared it were created equal. Some SQLs closed bigger, faster, and at higher rates than others. So, hey marketing, figure out which ones those are and go get more like them.

Thus was born the ideal customer profile (ICP). In seed-stage startups the ICP is something the founders imagine — based on the product and target market they have in mind, here’s who we should sell to. In growth-stage startups, say $10M in ARR and up, it’s no longer about vision, it’s about math.

Companies in this size range should have enough data to be able to say “who are our most successful customers” and “what do they have in common.” This involves doing a regression between various attributes of customers (e.g., vertical industry, size, number of employees, related systems, contract size, …) and some success criteria. I’d note that choosing the success criteria to regress against is harder than meets the eye: when we say we want to find prospects most like our successful customers, how are we defining success?

  • Where we closed a big deal? (But what if it came at really high cost?)
  • Where we closed a deal quickly? (But what if they never implemented?)
  • Where they implemented successfully? (But what if they didn’t renew?)
  • Where they renewed once? (But what if they didn’t renew again because of an uncontrollable factor such as being acquired?)
  • Where they gave us a high NPS score? (But what if, despite that, they didn’t renew?)

The Devil really is in the detail here. I’ll dig deeper into this and other ICP-related issues one day in a subsequent post. Meantime, TOPO has some great posts that you can read.
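A minimal sketch of the “math, not vision” exercise: pick a success flag (here, renewed at least once) and see which attribute values over-index among successful customers. The customer records and attributes below are hypothetical:

```python
# Which attribute values do our successful customers have in common?
# (Hypothetical data; "renewed" stands in for the chosen success criterion.)

from collections import defaultdict

customers = [
    {"vertical": "finserv", "size": "enterprise", "renewed": True},
    {"vertical": "finserv", "size": "mid",        "renewed": True},
    {"vertical": "retail",  "size": "mid",        "renewed": False},
    {"vertical": "finserv", "size": "enterprise", "renewed": True},
    {"vertical": "retail",  "size": "enterprise", "renewed": False},
]

def success_rate_by(attribute):
    wins, totals = defaultdict(int), defaultdict(int)
    for c in customers:
        totals[c[attribute]] += 1
        wins[c[attribute]] += c["renewed"]  # True counts as 1
    return {value: wins[value] / totals[value] for value in totals}

print(success_rate_by("vertical"))  # finserv renews far more often
```

Change the success flag (closed big, implemented, high NPS, renewed twice) and the resulting profile can change with it, which is exactly the point of the bulleted questions above.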

Once you determine what an ideal customer looks like, you can then build a target list of them and enter into the world of account-based marketing (ABM).

Hey Marketing, Go Get Opportunities that Turn into Customers Who Renew

While sales may be focused simply on opportunities that close bigger and faster than the rest, what the company actually wants is happy customers (to spread positive word of mouth) who renew. Sales is typically compensated on new orders, but the company builds value by building its ARR base. A $100M ARR company with a CAC ratio of 1.5 and churn rate of 20% needs to spend $30M on sales and marketing just to refill the $20M lost to churn. (I love to multiply dollar-churn by the CAC ratio to figure out the real cost of churn.)
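The “real cost of churn” multiplication from the paragraph above, using its own numbers:

```python
# Dollars churned times the CAC ratio = S&M spend needed just to
# replace the lost ARR (numbers from the example above).

arr = 100_000_000
churn_rate = 0.20
cac_ratio = 1.5  # S&M dollars spent per dollar of new ARR

churned_arr = arr * churn_rate          # $20M lost to churn
cost_to_refill = churned_arr * cac_ratio
print(f"${cost_to_refill:,.0f}")        # $30,000,000
```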

What the company wants is customers who don’t churn, i.e., those that have a high lifetime value (LTV). So marketing should orient its ICP (i.e., define success in terms of) not just likelihood to {close, close big, close fast} but around likelihood to renew, and potentially not just once. Defining different success criteria may well produce a different ICP.

Hey Marketing, Go Get Opportunities that Turn into Customers Who Expand

In the end, the company doesn’t just want customers who renew, even if for a long time. To really build the value of the ARR base, the company wants customers who (1) are won relatively easily (win rate) and sold relatively quickly (average sales cycle), (2) not only renew multiple times, but (3) expand their contracts over time.

Enter net dollar expansion rate (NDER), the metric that is quickly replacing churn and LTV, particularly with public SaaS companies. In my upcoming SaaStr 2020 talk, Churn is Dead, Long Live Net Dollar Expansion Rate, I’ll go into why this is happening and why companies should increasingly focus on this metric when it comes to thinking about the long-term value of their ARR base.
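One common way to compute a net-dollar metric for a cohort is to take that cohort's ARR a year later, after churn, contraction, and expansion, and divide by its starting ARR. The dollar figures below are illustrative:

```python
# Net dollar expansion for a customer cohort (illustrative numbers).

starting_arr = 10_000_000   # cohort ARR at the start of the period
expansion = 2_500_000       # upsells within the cohort
contraction = 300_000       # downgrades within the cohort
churned = 700_000           # ARR from customers who left entirely

nder = (starting_arr + expansion - contraction - churned) / starting_arr
print(f"{nder:.0%}")  # 115%: the base grows even before new logos
```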

In reality, the ultimate ICP is built around customers who meet the three above criteria: we can sell them fairly easily, they renew, and they expand. That’s what marketing needs to go get!

Why Every Startup Needs an Inverted Demand Generation Funnel, Part II

In the previous post, I introduced the idea of an inverted demand generation (demandgen) funnel which we can use to calculate a marketing demandgen budget given a sales target, an average sales price (ASP), and a set of conversion rates along the funnel. This is a handy tool, isn’t hard to make, and will force you into the very good habit of measuring (and presumably improving) a set of conversion rates along your demand funnel.

In the previous post, as a simplifying assumption, we assumed a steady-state situation where a company had a $2M new ARR target every quarter. The steady-state assumption allowed us to ignore two very real factors that we are going to address today:

  • Time. There are two phase-lags along the funnel. MQLs might take a quarter to turn into SALs and SALs might take two quarters to turn into closed deals. So any MQL we generate now won’t likely become a closed deal until 3 quarters from now.
  • Growth. No SaaS company wants to operate at steady state; sales targets go up every year. Thus if we generate only enough MQLs to hit this-quarter’s target we will invariably come up short because those MQLs are working to support a (presumably larger) target 3 quarters in the future.

In order to solve these problems we will start with the inverted funnel model from the previous post and do three things:

  • Quarter-ize it. Instead of just showing one steady-state quarter (or a single year), we are going to stretch the model out across quarters.
  • Phase shift it. If SALs take two quarters to close and MQLs take one quarter to become SALs, we will reflect this in the model by saying 4Q20 deals need to come from SALs generated in 2Q20, which in turn come from MQLs generated in 1Q20.
  • Extend it. Because of the three-quarter phase shift, the vast majority of the MQLs we’ll be generating in 2020 are actually to support 2021 business, so we need to extend the model into 2021 (with a growth assumption) in order to determine how big a business we need to support.

Here’s what the model looks like when you do this:

You can see that this model generates a varying demandgen budget based on the future sales targets and if you play with the drivers, you can see the impact of growth. At 50% new ARR growth, we need a $1.47M demandgen budget in 2020, at 0% we’d need $1.09M, and at 100% we’d need $1.85M.

Rather than walk through the phase-shifting with words, let me activate Excel’s trace-precedents feature so you can see how things flow:

With these corrections, we have transformed the inverted funnel into a pretty realistic tool for modeling MQL requirements of the company’s future growth plan.
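The phase-shifting mechanics can be sketched in a few lines. The targets, rates, and three-quarter lag below are illustrative, not the spreadsheet's values:

```python
# Quarter-ized, phase-shifted inverted funnel: closes in quarter q come
# from SALs generated in q-2, which come from MQLs generated in q-3.
# Rates and targets are illustrative.

ASP = 75_000
SQL_TO_CLOSE = 0.20
SAL_TO_SQL = 0.80
MQL_TO_SAL = 0.10
COST_PER_MQL = 250

# New ARR targets for eight quarters (1Q20..4Q21), growing over time
targets = [2.0e6, 2.2e6, 2.5e6, 3.0e6, 3.0e6, 3.3e6, 3.75e6, 4.5e6]

mqls_needed = [0.0] * len(targets)
for q, target in enumerate(targets):
    closes = target / ASP
    sals = closes / (SQL_TO_CLOSE * SAL_TO_SQL)  # SALs needed in q-2
    mqls = sals / MQL_TO_SAL                     # MQLs needed in q-3
    if q - 3 >= 0:
        mqls_needed[q - 3] += mqls

# MQLs generated in 2020 (quarters 0..3) mostly support 2021 targets
budget_2020 = sum(mqls_needed[0:4]) * COST_PER_MQL
print(f"2020 demandgen budget: ${budget_2020:,.0f}")
```

Note how the 2020 budget is driven almost entirely by the 2021 targets, which is why the model has to be extended a year beyond the plan year.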

Other Considerations

In reality, your business may consist of multiple funnels with different assumption sets.

  • Partner-sourced deals are likely to have smaller deal sizes (due to margin given to the channel) but faster conversion timeframes and higher conversion rates. (Because we will learn about deals later in the cycle, hear only about the good ones, and the partner may expedite the evaluation process.)
  • Upsell business will almost certainly have smaller deal sizes, faster conversion timeframes, and much higher conversion rates than business to entirely new customers.
  • Corporate (or inside) sales is likely to have a materially different funnel from enterprise sales. Using a single funnel that averages the two might work, provided your mix isn’t changing, but it is likely to leave corporate sales starving for opportunities (since they do much smaller deals, they need many more opportunities).

How many of these funnels you need is up to you. Because the model is particularly sensitive to deal size (given a constant set of conversion rates), I would say that if a certain type of business has a very different ASP from the main business, then it likely needs its own funnel. So instead of building one funnel that averages everything across your company, you might build three – e.g.,

  • A new business funnel
  • An upsell funnel
  • A channel funnel
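Summing separate funnels is then straightforward; the ASPs and conversion rates below are hypothetical, chosen only to show how different the three funnels' economics can be:

```python
# One demandgen budget per funnel, then summed (hypothetical inputs).

def funnel_budget(new_arr, asp, sql_to_close, sal_to_sql,
                  mql_to_sal, cost_per_mql):
    closes = new_arr / asp
    mqls = closes / (sql_to_close * sal_to_sql * mql_to_sal)
    return mqls * cost_per_mql

total = (
    funnel_budget(6e6, 75_000, 0.20, 0.80, 0.10, 250)    # new business
    + funnel_budget(2e6, 25_000, 0.40, 0.90, 0.15, 150)  # upsell
    + funnel_budget(2e6, 50_000, 0.30, 0.85, 0.12, 200)  # channel
)
print(f"${total:,.0f}")
```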

In part III of this series, we’ll discuss how to combine the idea of the inverted funnel with time-based close rates to create an even more accurate model of your demand funnel.

The spreadsheet I made for this series of posts is available here.

Why Every Startup Needs an Inverted Demand Generation Funnel, Part I

Does my company spend too much on marketing? Too little? How do I know? What is the right level of marketing spend at an enterprise software startup? I get asked these questions all the time by startup CEOs, CMOs, marketing VPs, and marketing directors.

You can turn to financial benchmarks, like the KeyBanc Annual SaaS Survey for some great high-level answers. You can subscribe to SiriusDecisions for best practices and survey data. Or you can buy detailed benchmark data [1] from OPEXEngine. These are all great sources and I recommend them heartily to anyone who can afford them.

But, in addition to sometimes being too high-level [2], there is one key problem with all these forms of benchmark data: they’re not about you. They’re not based on your operating history. While I certainly recommend that executives know their relevant financial benchmarks, there’s a difference between knowing what’s typical for the industry and what’s typical for you.

So, if you want to know if your company is spending enough on marketing [3], the first thing you should do is to make an inverted demand generation (aka, demandgen) funnel to figure out if you’re spending enough on demandgen. It’s quite simple and I’m frankly surprised how few folks take the time to do it.

Here’s an inverted demandgen funnel in its simplest form:

Inverted demandgen funnel

Let’s walk through the model. Note that all orange cells are drivers (inputs) and the white cells are calculations (outputs). This model assumes a steady-state situation [4] where the company’s new ARR target is $2,000,000 each quarter. From there, we simply walk up the funnel using historical deal sizes and conversion rates [5].

  • With an average sales price (ASP) of $75,000, the company needs to close 27 opportunities each quarter.
  • With a 20% sales qualified lead (SQL) to close rate we will need 133 SQLs per quarter.
  • If marketing is responsible for generating 80% of the sales pipeline, then marketing will need to generate 107 of those SQLs.
  • If our sales development representatives (SDRs) can output 2.5 opportunities per week then we will need 5 SDRs (rounding up).
  • With an 80% SAL to SQL conversion rate we will need 133 SALs per quarter.
  • With a 10% MQL to SAL conversion rate we will need 1,333 MQLs per quarter.
  • With a cost of $250 per MQL, we will need a demandgen budget [6] of $333,333 per quarter.
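The walk-up above can be reproduced in a few lines, keeping fractional intermediate values the way a spreadsheet does (which is why 27 closed opportunities yield 133 SQLs rather than 135):

```python
# Inverted demandgen funnel from the walk-through above.

new_arr_target = 2_000_000
asp = 75_000
sql_to_close = 0.20
marketing_share = 0.80  # share of pipeline marketing must source
sal_to_sql = 0.80
mql_to_sal = 0.10
cost_per_mql = 250

closes = new_arr_target / asp             # ~27 opportunities
sqls = closes / sql_to_close              # ~133 SQLs
marketing_sqls = sqls * marketing_share   # ~107 marketing-sourced SQLs
sals = marketing_sqls / sal_to_sql        # ~133 SALs
mqls = sals / mql_to_sal                  # ~1,333 MQLs
budget = mqls * cost_per_mql              # ~$333,333 per quarter

print(f"MQLs: {mqls:.0f}, demandgen budget: ${budget:,.0f}")
```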

The world’s simplest way to calculate the overall marketing budget at this point would be to annualize demandgen to $1.3M and then double it, assuming the traditional 50/50 people/programs ratio [7].

Not accounting for phase lag or growth (which will be the subjects of part II and part III of this post), let’s improve our inverted funnel by adding benchmark and historical data.

Let’s look at what’s changed. I’ve added two columns, one with 2019 actuals and one with benchmark data from our favorite source. I’ve left the $2M target in both columns because I want to compare funnels to see what it would take to generate $2M using either last year’s or our benchmark’s conversion rates. Because I didn’t want to change the orange indicators (of driver cells) in the left column, when we have deviations from the benchmark I color-coded the benchmark column instead. While our projected 20% SQL-to-close rate is an improvement from the 18% rate in 2019, we are still well below the benchmark figure of 25% — hence I coded the benchmark red to indicate a problem in this row. Our 10% MQL-to-SQL conversion rate in the 2020 budget is a little below the benchmark figure of 12%, so I coded it yellow. Our $250 cost/MQL is well below the benchmark figure of $325 so I coded it green.

Finally, I added a row to show the relative efficiency improvement of the proposed 2020 budget compared to last year’s actuals and the benchmark. This is critical — this is the proof that marketing is raising the bar on itself and committed to efficiency improvement in the coming year. While our proposed funnel is overall 13% more efficient than the 2019 funnel, we still have work to do over the next few years because we are 23% less efficient than we would be if we were at the benchmark on all rates.
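One way to make such an efficiency comparison concrete is to compute the demandgen cost per dollar of sourced new ARR under each rate set. The rates below use the figures quoted in this post where available and are otherwise hypothetical, so the resulting percentages won't match the spreadsheet's 13%/23% exactly:

```python
# Demandgen cost per dollar of sourced new ARR under a given rate set
# (rates partly from this post, partly hypothetical).

def cost_per_arr_dollar(asp, sql_to_close, sal_to_sql, mql_to_sal,
                        cost_per_mql):
    mqls_per_close = 1 / (mql_to_sal * sal_to_sql * sql_to_close)
    return cost_per_mql * mqls_per_close / asp

plan_2020 = cost_per_arr_dollar(75_000, 0.20, 0.80, 0.10, 250)
benchmark = cost_per_arr_dollar(75_000, 0.25, 0.80, 0.12, 325)
print(f"2020 plan: {plan_2020:.3f}, benchmark: {benchmark:.3f}")
```

A lower number is better; here the benchmark's superior conversion rates more than pay for its higher cost per MQL.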

However, because we can’t count on fixing everything at once, we are taking a conservative approach where we show material improvement over last year’s actuals, but not overnight convergence to the benchmark — which could take us from kaizen-land to fantasy-land and result in a critical pipeline shortage downstream.

Moreover because this approach shows not only a 13% overall efficiency improvement but precisely where you expect it to come from, the CEO can challenge sales and marketing leadership:

  • Why are we expecting to increase our ASP by $5K to $75K?
  • Why do you think we can improve the SQL-to-close rate from 18% to 20% — and what are you doing to drive that improvement? [8]
  • What are we doing to improve the MQL-to-SAL conversion rate?
  • How are we going to improve our already excellent cost per MQL by $25?

In part II and part III of this post, we’ll discuss two ways of modeling phase-lag, modeling growth, and the separation of the new business and upsell funnels.

You can download my spreadsheet for this post, here.

Notes

[1] For marketing or virtually anything else.

[2] i.e., looking at either S&M aggregated or even marketing overall.

[3] The other two pillars of marketing are product marketing and communications. The high-level benchmarks can help you analyze spend on these two areas by subtracting your calculated demandgen budget from the total marketing budget suggested by a benchmark to see “what’s left” for the other two pillars. Caution: sometimes that result is negative!

[4] The astute reader will instantly see two problems: (a) phase-lag introduced by both the lead maturation (name to MQL) and sales (SQL to close) cycles and (b) growth. That is, in a normal high-growth startup, you need enough leads to generate not this quarter’s new ARR target but the target 3-4 quarters out, which is likely to be significantly larger. Assuming a steady-state situation gets rid of both these problems and simplifies the model. See part II and part III of this post for how I like to manage that added real-world complexity.

[5] Hint: if you’re not tracking these rates, the first good thing about this model is that it will force you to do so.

[6] When I say demandgen budget, I mean money spent on generating leads through marketing campaigns. Sometimes that’s very direct (e.g., AdWords); other times it’s a bit more indirect (e.g., an SEO program). I do not include demandgen staff because I am trying to calculate the marginal cost of generating an extra MQL. That is, I’m not trying to calculate what the company spends, in total, on demandgen activities (which would include salary, benefits, stock-based comp, etc. for demandgen staff) but instead the marketing programs cost to generate a lead (e.g., in case we need to figure out how much to budget to generate 200 more of them).

[7] In an increasingly tech-heavy world where marketing needs to invest a lot in infrastructure as well, I have adapted the traditional 50/50 people/programs rule to a more modern 45/45/10 people/programs/infrastructure rule, or even an infrastructure-heavy split of 40/40/20.

[8] Better closing tools, an ROI calculator, or a new sales training program could all be valid explanations for assuming an improved close rate.

A Historical Perspective on Why SAL and SQL Appear to be Defined Backwards

Most startups today use some variation on the now fairly standard terms SAL (sales accepted lead) and SQL (sales qualified lead).  See below the classic [1] lead funnel model from marketing bellwether SiriusDecisions that defines these terms.

One great thing about working as an independent board member and consultant is that you get to work with lots of companies. In doing this, I’ve noticed that while virtually everyone uses the terminology SQL and SAL, some people define SQL before SAL and others define SAL before SQL.

Why’s that?  I think the terminology was poorly chosen and is confusing.  After all, what sounds like it comes first:  sales accepting a lead or sales qualifying a lead?  A lot of folks would say, “well you need to accept it before you can qualify it.”  But others would say “you need to qualify it before you can accept it.”  And therein lies the problem.

The correct answer, as seen above, is that SAL comes before SQL.  I have a simple way of remembering this:  A comes before Q in the alphabet, and SAL comes before SQL in the funnel. Until I came up with that I was perpetually confused.

More importantly, I think I also have a way of explaining it.  Start by remembering two things:

  • This model was defined at a time when sales development reps (SDRs) generally reported to sales, not marketing [2].
  • This model was defined from the point of view of marketing.

Thus, sales accepting the lead didn’t mean a quota-carrying rep (QCR) accepted the lead – it meant an SDR, who works in the sales department, accepted the lead.  So it’s sales accepting the lead in the sense that the sales department accepted it.  Think: we, marketing, passed it to sales.

After the SDR worked on the lead, if they decided to pass it to a QCR, the QCR would do an initial qualification call, and then the QCR would decide whether to accept it.  So it’s a sales qualified lead, in the sense that a salesperson has qualified it and decided to accept it as an opportunity.

Think: accepted by an SDR, qualified by a salesrep.

Personally, I prefer to avoid the semantic swamp and just say “stage 1 opportunity” and “stage 2 opportunity” in order to keep things simple and clear.

# # #

Notes

[1] This model has since been replaced with a newer demand unit waterfall model that nevertheless still uses the term SQL but seems to abandon SAL.

[2] I greatly prefer SDRs reporting to marketing for two reasons:  [a] unless you are running a pure velocity sales model, your sales leadership is more likely to be deal-people than process-people – and running the SDRs is a process-oriented job and [b] it eliminates a potential crack in the funnel caused by passing leads to sales “too early.”  When SDRs report to marketing, you have a clean conceptual model:  marketing is the opportunity creation factory and sales is the opportunity closing factory.

I’ve Got a Crazy Idea:  How About We Focus on Next-Quarter’s Pipeline?

I’m frankly shocked by how many startups treat pipeline as a monolith.

Sample CMO:  “we’re in great shape because we have a total pipeline of $32M covering a forward-four-quarter (F4Q) sales target of $10M, so 3.2x coverage.  Next slide, please.”

Regardless of your view on the appropriate magic pipeline coverage number (e.g., 2x, 3x, 4x), I’ve got a slew of serious problems with this.  What do I think when someone says this?

“Wait, hang on.  How is that pipeline distributed by quarter?  By stage?  By forecast category?  By salesrep?  You can’t just look at it as a giant lump and declare that you’re in great shape because you have 3x the F4Q coverage.  That’s lazy thinking.  And, by the way, you probably don’t even need 3x the F4Q target, but you sure as hell need 3x this quarter’s coverage [1] and had better be building to start next quarter with 3x as well.  You do understand that sales can starve to death and we can go out of business – the whole time with 3x pipeline coverage – if it’s all pipeline that’s 3 and 4 quarters out?”

I’ve got a crazy idea.  How about as a first step, we stop looking at annual pipeline [2] and start looking at this-quarter pipeline and, most importantly, next-quarter pipeline?

What people tell me when I say this:  “No, no, Dave.  We can’t do that.  That’s myopic.  You need to look further out.  You can’t drive looking at the hood ornament.  Plus, with a 90-day average sales cycle (ASC) there’s nothing we can do anyway about the short term.  You need to think big picture.”

I then imagine the CMO talking to the head of demandgen:  “Yep, it’s week 1 and we only have 2.1x pipeline coverage.  But with a 90-day sales cycle, there’s nothing we can do.  Looks like we’re going to hit the iceberg.  At least we made our 3x coverage OKR on a rolling basis.  Hey, let’s go grab a flat white.”

I loathe this attitude for several reasons:

  • It’s parochial. The purpose of marketing OKRs is to enable sales to hit sales OKRs.  Who cares if marketing hit its pipeline OKR but sales is nevertheless flying off a cliff?  Marketing just had a poorly chosen OKR.
  • It’s defeatist. If “when the going gets tough, the tough get a flat white” is your motto, you shouldn’t work in startup marketing.
  • It’s wrong. The A in ASC stands for average.  Your average sales cycle.  It’s not your minimum sales cycle.  If your average sales cycle is 90 days [3] then you have lots of deals that close faster than 90 days, so instead of getting a flat white marketing should be focused on finding a bunch of those, pronto [4].

Here’s my crazy idea.  Never look at rolling F4Q pipeline again.  It doesn’t matter.  What you really need to do is start every quarter with 3.0x [5] pipeline.  After all, if you started every quarter with 3.0x pipeline coverage wouldn’t that mean you are teed up for success every quarter?  Instead of focusing on the long-term and hoping the short-term works out, let’s continually focus on the short-term and know the long-term will work out.

This brings to mind Kellogg’s fourth law of startups:  you have to survive short-term in order to exist long-term.

This-Quarter Pipeline
This process starts by looking at the this-quarter (aka, current-quarter) pipeline.  While it’s true that in many companies marketing will have a limited ability to impact the current-quarter pipeline — especially once you’re 5-6 weeks in — you should nevertheless always be looking at current-quarter pipeline and current-quarter pipeline coverage calculated on a to-go basis.  You don’t need 3x the plan number every single week; you need 3x coverage of the to-go number to get to plan.  To-go pipeline coverage provides an indicator of confidence in your forecast (think “just how lucky do we have to get”) and over time the ratio can be used as an alternative forecasting mechanism [6].
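The to-go calculation is just remaining pipeline divided by what’s left to close.  A minimal sketch, with hypothetical week-6 numbers:

```python
def to_go_coverage(open_pipeline, plan, closed_to_date):
    """To-go coverage = open current-quarter pipeline / amount left to close to hit plan."""
    to_go = plan - closed_to_date
    return open_pipeline / to_go

# Hypothetical week-6 snapshot (all figures in $K): $2,500K plan,
# $1,000K already closed, $4,200K still open in current-quarter pipeline.
print(round(to_go_coverage(4200, 2500, 1000), 2))  # 4200 / 1500 = 2.8
```

Note the ratio naturally rises as the quarter progresses if you’re closing on plan, which is why it should be tracked week over week rather than read once.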

[Figure: this-quarter to-go pipeline coverage, by week]

In the above example, we can see a few interesting patterns.

  • We start the quarter with high coverage, but it quickly becomes clear that’s because the pipeline has not yet been cleaned up. Because salespeople are usually “animals that think in 90-day increments” [7], next quarter is effectively eternity from the point of view of most salesreps, so they tend to dump troubled deals in next-quarter [8] regardless of whether they actually have a next-quarter natural close date.
  • Between weeks 1 and 3, we see $2,250K of current-quarter pipeline vaporize as part of sales’ cleanup. Note that $250K was closed – the best way for dollars to exit the pipeline!  I always do my snapshot pipeline analytics in week 3 to provide enough time for sales to clean up before trying to analyze the data.  (And if it’s not clean by week 3, then you have a different conversation with sales [9].)
  • Going forward, we burn off more pipeline to fall into the 2.6 to 2.8 coverage range, but from weeks 5 to 9 we are generally closing and burning off pipeline [10] at the same rate – hence the coverage ratio runs in a stable, if somewhat tight, range.

Next-Quarter Pipeline
Let’s now look at next-quarter pipeline.  While I think sales needs to be focused on this-quarter pipeline and closing it, marketing needs to be primarily focused on next-quarter pipeline and generating it.  Let’s look at an example:

[Figure: next-quarter pipeline, by week]

Now we can see that next-quarter plan is $3,250K and we start this quarter with $3,500K in next-quarter pipeline or 1.1x coverage.  The 1.1x is nominally scary but do recall we have 12 weeks to generate more next-quarter pipeline before we want to start next quarter with 3x coverage, or a total pipeline of $9,750K.  Once you start tracking this way and build some history, you’ll know what your company’s requirements are.  In my experience, 1.5x next-quarter coverage in week 3 is tight but works [11].
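The arithmetic behind those figures, using the example’s numbers ($K throughout):

```python
next_qtr_plan = 3250        # $K, next-quarter new ARR plan (from the example)
next_qtr_pipeline = 3500    # $K, next-quarter pipeline on hand today
target_coverage = 3.0       # desired coverage on day 1 of next quarter

coverage_today = next_qtr_pipeline / next_qtr_plan       # ~1.1x
needed_at_start = target_coverage * next_qtr_plan        # $9,750K
still_to_generate = needed_at_start - next_qtr_pipeline  # $6,250K over ~12 weeks

print(round(coverage_today, 2), needed_at_start, still_to_generate)
```

Dividing the $6,250K to-generate number by the weeks remaining gives marketing a concrete weekly pipeline-generation target rather than a vague coverage aspiration.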

The primary point here is that given:

  • Your knowledge of history and your pipeline coverage requirements
  • Your marketing plans for the current quarter
  • The trends you’re seeing in the data
  • Normal spillover patterns

marketing should be able to forecast next quarter’s starting pipeline coverage.  So, pipeline coverage isn’t just an iceberg that marketing thinks we’ll hit or miss.  It’s something marketing can forecast.  And if you can forecast it, then you can adjust your plans accordingly to do something about it.

Let’s stick with our example and make a forecast for next-quarter starting pipeline [12]:

  • Note that we are generating about $250K of net next-quarter pipeline per week from weeks 4 to 9.
  • Assume that we continue, at steady state, the programs generating that pipeline, so over the next four weeks we’ll generate another $1M.
  • Assume we are doing a big webinar that we think will generate another $750K in next-quarter pipeline.
  • Assume that 35% of the surplus this-quarter pipeline slips to next-quarter [13].

If you do this in a spreadsheet, you get the following.  Note that in this example we are forecasting a shortfall of $93K in starting next-quarter pipeline coverage.  Were we forecasting a significant gap, we might divert marketing money into demand generation in order to close the gap.

[Figure: spreadsheet forecast of next-quarter starting pipeline]
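The spreadsheet logic can be sketched in a few lines.  The week-9 next-quarter pipeline and the surplus this-quarter pipeline below are assumed values, chosen only to reproduce the post’s $93K shortfall; the plan, webinar, run-rate, and slip-rate figures come from the example:

```python
# Hypothetical week-9 inputs (all figures in $K)
next_qtr_pipeline_now = 6500   # next-quarter pipeline as of week 9 (assumed)
steady_state_gen = 4 * 250     # 4 remaining weeks at ~$250K/week of net generation
webinar = 750                  # expected pipeline from the big webinar
surplus_this_qtr = 4020        # this-quarter pipeline beyond what plan needs (assumed)
slip_rate = 0.35               # share of surplus that slips into next quarter

forecast = (next_qtr_pipeline_now + steady_state_gen
            + webinar + slip_rate * surplus_this_qtr)
target = 3.0 * 3250            # 3x next-quarter plan = $9,750K

print(target - forecast)       # ~93, i.e., a $93K forecast shortfall
```

If the resulting gap were large rather than $93K, that’s the early-warning signal to divert budget into demandgen while there’s still time to act.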

All-Quarters Pipeline
Finally, let’s close with how I think about all-quarters pipeline.

[Figure: all-quarters pipeline, by week]

While I don’t think it’s the primary pipeline metric, I do think it’s worth tracking for several reasons:

  • So you can see if pipeline is evaporating or sloshing. When a $1M forecast deal is lost, it comes out of both current-quarter and all-quarters pipeline.  When it slips, however, current-quarter goes down by $1M but all-quarters stays the same.  By looking at current-quarter, next-quarter, and all-quarters at the same time in a compact space you can get a sense of what is happening overall to your pipeline.  There’s nowhere to hide when you’re looking at all-quarters pipeline.
  • So you can get a sense of the size of opportunities in your pipeline.  Note that if you create opportunities with a placeholder value then there’s not much purpose in doing this (which is just one reason why I don’t recommend creating opportunities with a placeholder value) [14].
  • So you can get a sense of your salesreps’ capacity. The very first number I look at when a company is missing its numbers is opportunities/rep.  In my experience, a typical rep can handle 8-12 current-quarter and 15-20 all-quarters opportunities [15].  If your reps are carrying only 5 opportunities each, I don’t know how they can make their numbers.  If they’re carrying 50, I think either your definition of opportunity is wrong or you need to transfer some budget from marketing to sales and hire more reps.

The spreadsheet I used in this post is available for download here.

# # #

Notes

[1] Assuming you’re in the first few weeks of the quarter, for now.

[2] Which is usually done using forward four quarters.

[3] And ASC follows a normal distribution.

[4] Typically, they are smaller deals, or deals at smaller companies, or upsells to existing customers.  But they’re out there.

[5] Or, whatever your favorite coverage ratio is.  Debating that is not the point of this post.

[6] Once you build up some history you can use coverage ratios to predict sales as a way of triangulating on the forecast.

[7] As a former board member always told me — a quote that rivals “think of salespeople as single-celled organisms driven by their comp plan” in terms of pith.

[8] Or sometimes, fourth-quarter which is another popular pipeline dumping ground.  (As is first-quarter next year for the truly crafty.)

[9] That is, one about how they are going to get their shit together and manage the pipeline better, the first piece of which is getting it clean by week 3, often best accomplished by one or more pipeline scrub meetings in weeks 1 and 2.

[10] Burning off takes one of three forms:  closed/won, lost or no-decision, or slipping to a subsequent quarter.  It’s only really “burned off” from the perspective of the current-quarter in the last case.

[11] This depends massively on your specific business (and sales cycle length) so you really need to build up your own history.

[12] Technically speaking, I’m making a forecast for day-1 pipeline, not week-3 pipeline.  Once you get this down you can use any patterns you want to correct it for week 3, if desired.  In reality, I’d rather uplift from week 3 to get day-1 so I can keep marketing focused on generating pipeline for day-1, even though I know a lot will be burned off before I snapshot my analytics in week 3.

[13] Surplus in the sense that it’s leftover after we use what we need to get to plan.  Such surplus pipeline goes three places:  lost/no-decision, next-quarter, or some future quarter.  I often assume one-third goes to each as a rule of thumb.

[14] As a matter of principle I don’t think an opportunity should have a value associated with it until a salesrep has socialized a price point with the customer.  (Think:  “you do know it costs about $150K per year to subscribe to this software, right?”)  Perversely, some folks create opportunities in stage 1 with a placeholder value only to later exclude stage 1 opportunities in all pipeline analytics. Doing so gets the same result analytically but is an inferior sales process in my opinion.

[15] Once you’re looking at opportunities/rep, you need to not stop with the average but make a histogram.  An 80-opportunity world where 10 reps have 8 opportunities each is a very different world from one where 2 reps have 30 opportunities each and the other 8 have an average of 2.5.
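The 80-opportunity example in this note can be sketched directly; both teams below are hypothetical, but they share the same average while telling very different stories:

```python
from collections import Counter

def opp_distribution(opps_per_rep):
    """Return the average and a histogram of opportunities per rep."""
    avg = sum(opps_per_rep) / len(opps_per_rep)
    return avg, Counter(opps_per_rep)

# Two hypothetical 10-rep teams, both carrying 80 total opportunities:
balanced = [8] * 10                           # every rep carries 8
skewed = [30, 30] + [2, 3, 2, 3, 2, 3, 2, 3]  # 2 overloaded reps, 8 starved ones

print(opp_distribution(balanced))  # average 8.0, all reps at 8
print(opp_distribution(skewed))    # same 8.0 average, very different histogram
```

The average (8.0 in both cases) looks healthy either way; only the histogram reveals that in the second team, eight reps lack the pipeline to make their numbers.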

Ten Ways to Get the Most out of Conferences

I can’t tell you the number of times, as we were tearing down our booth after having had an epic show, that we overheard the guy next door calling back to corporate saying that the show was a “total waste of time” and that the company shouldn’t do it again next year.  Of course, he didn’t say that he:

  • Staffed the booth only during scheduled breaks and went into the hallway to take calls at other times.
  • Sat inside the booth, safely protected from conference attendees by a desk.
  • Spent most of his time looking down at his phone, even during the breaks when attendees were out and about.
  • Didn’t use his pass to attend a single session.
  • Measured the show solely by qualified leads for his territory, discounting company visibility and leads for other territories to zero.

[Image: Slack’s booth at Dreamforce]

Does this actually happen, you think?  Absolutely.  All the time.  (And it makes you think twice when you’re on the other end of that phone call – was the show bad or did we execute it poorly?)

I’m a huge believer in live events and an even bigger believer that you get back what you put into them.  The difference between a great show and a bad show is often, in a word, execution.  In this post, I’ll offer up 10 tips to ensure you get the best out of the conferences you attend.

Ten Ways to Get the Most out of Conferences and Tradeshows

1. Send the right people.  Send folks who can answer questions at the audience’s level or one level above.  Send folks who are impressive.  Send folks who are either naturally extroverts or who can “game face” it for the duration of the show.  Send folks who want to be there either because they’re true believers who want to evangelize the product or because they believe in karma [1].  Send senior people (e.g., founders, C-level) [2] so they can both continue to refine the message and interact with potential customers discussing it.

2. Speak.  Build your baseline credibility in the space by blogging and speaking at lesser conferences.  Then, do your homework on the target event and what the organizers are looking for, and submit a great speaking proposal.  Then push for it to be accepted.  Once it’s accepted, study the audience hard and then give the speech of your life to ensure you get invited back next year.  There’s nothing like being on the program (or possibly even a keynote) to build credibility for you and your company.  And the best part is that speaking at a conference is, unlike most everything else, free.

3. If you can afford a booth/stand, get one.  Don’t get fancy here.  Get the cheapest one and then push hard for good placement [3].  While I included a picture of Slack’s Dreamforce booth, which is very fancy for most early-stage startup situations, imagine what Slack could have spent if they wanted to.  For Slack, at Dreamforce, that’s a pretty barebones booth.  (And that’s good — you’re going to get leads and engage with people in your market, not win a design competition.)

4. Stand in front of your booth, not in it.  Expand like an alfresco restaurant onto the sidewalk in spring.  This effectively doubles your booth space.

5. Think guerilla marketing.  What can make the biggest impact at the lowest cost?  I love stickers for this because a clever sticker can get attention and end up on the outside of someone’s laptop generating ongoing visibility.  At Host Analytics, we had great success with many stickers, including this one, which finance people (our audience) simply loved [4].

[Image: “I LOVE EBITDA” sticker]

While I love guerilla marketing, remember my definition:  things that get maximum impact at minimum cost.  Staging fake protests or flying airplanes with banners over the show may impress others in the industry, but they’re both expensive and I don’t think they impress customers who are primarily interested not in vendor politics, but in solving business problems.

6. Work the speakers.  Don’t just work the booth (during and outside of scheduled breaks); go to sessions.  Ask questions that highlight your issues (but not specifically your company).  Talk to speakers after their sessions to tee up a subsequent follow-up call.  Talk to consultant speakers to try to build partnerships and/or fish for referrals.  Perhaps try to convince the speakers to incorporate parts of your message into their speeches [5].

7. Avoid “Free Beer Here” Stunts.  If you give away free beer in your booth you’ll get a huge list of leads from the show.  However, this is dumb marketing because you not only buy free beer for lots of unqualified people but worse yet generate a giant haystack of leads that you need to dig through to find the qualified ones — so you end up paying twice for your mistake.  While it’s tempting to want to leave the show with the most card swipes, always remember you’re there to generate visibility, have great conversations, and leave with the most qualified leads — not, not, not the longest list of names.

8. Host a Birds of a Feather (BoF).  Many conferences use BoFs (or equivalents) as a way for people with common interests to meet informally.  Set up via either an online or old-fashioned cork message board, anyone can organize a BoF by posting a note that says “Attention:  All People Interested in Deploying Kubernetes at Large Scale — Let’s Meet in Room 27 at 3PM.”  If your conference doesn’t have BoFs either ask the organizers to start them, or call a BoF anyway if they have any general messaging facility.

9. Everybody works. If you’re big enough to have an events person or contractor, make sure you define their role properly.  They don’t just set up the booth and go back to their room all day.  Everybody works.  If your events person self-limits by saying “I don’t do content,” then I’d suggest finding another events person.

10.  No whining.  Whenever two anglers pass along a river and one says “how’s the fishing?” the universal response is “good.”  Not so good that they’re going to ask where you’ve been fishing, and not so bad that they’re going to ask what you’ve been using.  Just good.  Be the same way with conferences.  If asked how it’s going, say “good.”  Ban all discussion and/or whining about the conference until after the conference.  If it’s not going well, whining about it isn’t going to help.  If it is going well, you should be out executing, not talking about how great the conference is.  From curtain-up until curtain-down all you should care about is execution.  Once the curtain’s down, then you can debrief — and do so more intelligently having complete information.

Notes

[1] In the sense that, “if I spend time developing leads that might land in other reps’ territories today, what goes around will come around tomorrow.”

[2] In order to avoid title intimidation or questions about “why is your CEO working the booth” you can have a technical cofounder say “I’m one of the architects of the system” or your CEO say “I’m on the leadership team.”

[3] Build a relationship with the organizers.  Do favors for them and help them if they need you.  Politely ask if anyone has moved, upgraded, or canceled their space.

[4] Again note where execution matters — if the Host Analytics logo were much larger on the sticker, I doubt it would have been so successful.  It’s the sticker’s payload, so the logo has to be there.  Too small and it’s illegible, but too big and no one puts the sticker on their laptop because it feels like a vendor ad and not a clever sticker.

[5] Not in the sense of a free ad, but as genuine content.  Imagine you work at Splunk back in the day and a speaker just gave a talk on using log files for debugging.  Wouldn’t it be great if you could convince her next time to say, “and while there is clearly a lot of value in using log files for debugging, I should mention there is also a potential goldmine of information in log files for general analytics that basically no one is exploiting, and that certain startups, like Splunk, are starting to explore that new and exciting use case.”