
The Four Sources of Pipeline and The Balance Across Them

I’ve mentioned this idea a few times of late (e.g., my previous post, my SaaStock EMEA presentation) [1] and I’ve had some follow-up questions from readers, so I thought I’d do a quick post on the subject.

Back in the day at Salesforce, we called pipeline sources “horsemen,” a flawed term both for its embedded gender pronoun and its apocalyptic connotation.  Nevertheless, for me it did serve one purpose — I always remembered there were four of them.

Today, I call them “pipeline sources” but I’ve also heard them referred to as “pipegen sources” (as in pipeline generation) and even “revenue engines,” which I think is an over-reach, if a well-intentioned one [2].

While you can define them in different ways, I think a pretty standard way of defining the pipeline sources is as follows:

  • Marketing, also known as “marketing/inbound.”  Opportunities generated as a result of people responding to marketing campaigns [3].
  • SDRs, also known as “SDR/outbound,” to differentiate these truly SDR-generated oppties from marketing/inbound oppties that are also processed by SDRs, but not generated by them [4].
  • Alliances [5].  Opportunities referred to the company by partners, for example, when a regional system integrator brings the company into a deal as a solution for one of its customers.
  • Sales, also known as “sales/outbound,” when a quota-carrying salesrep does their own prospecting, typically found in named-account territory models, and develops an opportunity themselves.

Product-led growth (PLG) companies should probably have a fifth source, product, but I won’t drill into PLG in this post [5A].

Attribution issues (i.e., who gets credit when an opportunity is developed through multiple touches with multiple contacts over multiple quarters [6] [7]) are undoubtedly complex.  See note [8] not for the answer to the attribution riddle, but for my advice on best dealing with the fact that it’s unanswerable.

Now, for the money question:  what’s the right allocation across sources?  I think the following are reasonable targets for the mix of oppties generated by each source at a circa $50M enterprise SaaS company (all targets are plus-or-minus 10%):

  • Marketing:  60%
  • SDR/outbound:  10%
  • Alliances:  20%
  • Sales/outbound:  10%

Now, let’s be clear.  This can vary widely.  I’ve seen companies where marketing generates 95% of the pipeline and those where it generates almost none.  SDR/outbound makes the most sense in a named-account sales model, so I personally wouldn’t recommend doing outbound for outbound’s sake [9] [10].  Alliances is often under 20%, because the CEO doesn’t give them a concrete oppty-generation goal (or because they’re focused more on managing technology alliances).  Sales/outbound only makes sense for sellers with named-account territories, despite old-school sales managers’ tendency to want everyone prospecting as a character-building exercise.

And let’s not get so focused on the mix that we forget about the point:  cost-effective opportunity generation (ultimately revealed in the CAC ratio) with broad reach into the target market.

Now, for a few pro tips:

  • Assign the goal as a number of oppties, not a percentage.  For example, if you want 60% from marketing and have an overall goal of 100 oppties, do not set marketing’s goal at 60%; tell them you want 60 oppties.  Why?  Because if the company only generates 50 oppties during the quarter and marketing generates 35 of those, then marketing is popping champagne for generating 70% of the oppties (beating the 60% goal), while they are 25 oppties short of what the company actually needed.
  • Use overallocation when spinning up new pipeline sources.  Say you’ve just created an RSI alliances team and want them generating 10% of oppties.  By default, you’ll drop marketing’s target from 70% to 60% and marketing will build a budget to generate 60% (of say 100) oppties, so 60 oppties.  If they need $3K worth of marketing to generate an oppty, then they’ll ask for $180K of demandgen budget.  But what if alliances flames out?  Far better to tell marketing to generate 70 oppties, give them $210K in budget to do so and effectively over-assign oppty generation to an overall goal of 110 when you need 100.  This way, you’re covered when the new and presumably unpredictable pipeline generation source is coming online [11].
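The budget math in that second tip can be sketched in a few lines of Python (a minimal sketch using the example’s illustrative figures only: a 100-oppty company goal, $3K of demandgen per oppty, and marketing’s share dropping from 70% to 60% to make room for alliances):

```python
# Overallocation when spinning up a new pipeline source.
# Illustrative figures from the example above; real cost-per-oppty varies widely.
opptys_needed = 100       # company-level oppty goal for the quarter
cost_per_oppty = 3_000    # demandgen dollars to generate one marketing oppty

# Naive: cut marketing from 70% to 60% the moment alliances takes its 10%.
naive_marketing_opptys = opptys_needed * 60 // 100        # 60 oppties
naive_budget = naive_marketing_opptys * cost_per_oppty    # $180K

# Overallocated: leave marketing at 70 oppties until alliances proves out.
overalloc_marketing_opptys = opptys_needed * 70 // 100    # 70 oppties
overalloc_budget = overalloc_marketing_opptys * cost_per_oppty  # $210K

# Oppty generation is now over-assigned: 110 oppties against a need of 100.
total_assigned = opptys_needed + (overalloc_marketing_opptys - naive_marketing_opptys)
```

The $30K delta between the two budgets is the insurance premium you pay against the new source flaming out.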

# # #

Notes

[1] Video forthcoming if I can get access to it.

[2]  The good intentions are to keep everyone focused on revenue.  The over-reach is they’re not really engines, more fuel sources.  I am a big believer in the concept of “revenue engines,” but I use the term to refer to independent business units that have an incremental revenue target and succeed or fail in either an uncoupled or loosely coupled manner.  For example, I’d say that geographic units (e.g., Americas, EMEA), channels (e.g., OEM, VAR, enterprise sales, corporate sales), or even product lines (depending on the org) are revenue engines.  The point of having revenue engines is diversification; as with airplane engines, they can sputter (or flame out) independently.  (As one aviation pioneer was reputed to have said:  “why do I only fly four-engine planes across the Atlantic?  Because they don’t make five-engine planes.”)

[3]  I will resist the temptation to deep dive into the rabbit hole of attribution and say two things:  (a) you likely have an attribution mechanism in place today and (b) that system is invariably imperfect so you should make sure you understand how it works and understand its limitations to avoid making myopic decisions.  For example, if an oppty is created after several people downloaded a white paper, a few attended a webinar, an SDR had been doing outreach in the account, the salesperson met a contact on the train, and a  partner was trying to win business in the account, who gets the credit?  It’s not obvious how to do this correctly and if your system is “one oppty, one source” (as I’d usually recommend over some point allocation system), there will invariably be internal jockeying for the credit.

[4]  SDRs are often split inbound vs. outbound not only to ease the tracking but because the nature of the work is fundamentally different.  Hybrid SDR roles are difficult for this reason, particularly in inbound-heavy environments where there is always more inbound work to do.

[5]  My taxonomy is that there are two types of “partners” — “channels” who sell our software and “alliances” who do not.  In this case (where we’re talking about pipeline generation for our direct salesforce), I am speaking of alliance partners, who typically work in a co-sell relationship and bring the company into oppties as a result.  In the case of channels, the question is one of visibility:  are the channels giving us visibility into their oppties (e.g., in our CRM) as you might find with RSIs, or are they simply forecasting a number and mailing us a royalty check as you might find with OEMs?

[5A]  Product meaning trials (or downloads in open source land), which effectively become the majority top-of-funnel lead source for PLG companies.  This begs the question:  who drives people to do those trials?  (Typically marketing and/or word of mouth.)

[6]  One simple, common example:  a person downloads a white paper they found via a search advertisement five quarters ago, ends up in our database, receives our periodic newsletter, and then is developed by an SDR through an outreach sequence.  Who gets the credit for the opportunity?  Marketing (for finding them in the first place and providing a baseline nurture program via the newsletter) or SDR/outbound (for developing them into an oppty)?   Most folks would say SDR in this case, but if your company practices “management by reductio ad absurdum” then someone might want to shut down search advertising because it’s “not producing” whereas the SDRs are.  Add some corporate politics where perhaps sales is trying to win points for showing how great they are at managing SDRs after having taken them from marketing and things can get … pretty icky.

[7] Another favorite example:  marketing sponsors a booth at the Snowflake user conference and we find a lead that develops into an opportunity.  Does marketing get the credit (because it’s a marketing program) or alliances (because Snowflake’s a partner)?  Add some politics where the alliances team has been seen as underperforming and really needs the credit, and things can again get yucky and confusing, leading you away from the semi-obvious right answer:  marketing, because they ran a tradeshow booth and got a lead.  If you don’t credit marketing here, you are disincenting them from spending money at partner conferences (all I, no RO.)  The full answer here is, IMHO, to credit marketing with being the source of the oppty, to track influenced ARR by partner so we know how much of our business happens with which partners, and to not incent the technology alliances group with opportunity creation targets.  (Oppty creation, however, should be an important goal for the regional and/or global system integrator alliances teams.)

[8]  My recommended solution here is two-fold:  (a) use whatever attribution mechanism you want, ensuring you understand its limitations, and (b) perform a win-touch analysis at every QBR where a reasonably neutral party like salesops presents the full touch history for a set of representative (and/or large) deals won in the prior quarter.  This pulls everyone’s heads out of their spreadsheets and back into reality — and should ease political tensions as well.

[9]  Having an SDR convince someone to take a meeting usually results in a higher no-show rate and a lower overall conversion rate than setting up meetings with people who have engaged with our marketing or our partners already.

[10]  Put differently, you should stalk customers only when you’re quite sure they should buy from you, but they haven’t figured that out yet.

[11] And yes there’s no free lunch here.  Your CAC will increase because you’re paying to generate 110 oppties when you only need 100.  But far better to have the CAC kick up a bit when you’re starting a new program than to miss the number because the pipeline was insufficient.

Fortella Webinar: Crisis Mode — I Need More Pipeline Now!

Please join me and Fortella founder Rahul Sachdev for a webinar this Thursday (6/24/21) at 10am Pacific entitled Crisis Mode — I Need More Pipeline Now!

Fortella, which I’ve served as an advisor over the past year or so, makes a revenue intelligence platform.  The company recently published an interesting survey report entitled The State of B2B Marketing:  What Sets the Best Marketers Apart?  Rahul is super passionate about marketing accountability for revenue and the use of AI and advanced analytics in so doing, which is what drew me to want to work with him in the first place.  He’s also an avid Kellblog reader, to the point where he often reminds me of things I’ve said but forgotten!

In this webinar we’ll drive a discussion primarily related to two Kellblog posts:

Among other things, I expect we’ll discuss:

  • That pipeline isn’t a monolith and that we need to look inside the pipeline to see things by opportunity type (e.g., new vs. expansion), customer type (e.g., size segment, industry segment) and by source (e.g., inbound vs. partners).  We also need to remember that certain figures we burn into our heads (e.g., sales cycle length) are merely the averages of a distribution and not impenetrable hard walls.
  • By decomposing pipeline we can identify that some types close faster (and/or at a higher conversion rate) than others, and ergo focus on those types when we are in a pinch.
  • How to think about pipeline coverage ratios, including to-go coverage, the target coverage ratio, and remembering to look not just at ARR dollar coverage but opportunities/rep.
  • The types of campaigns one can and should run when you are in a pipeline pinch
  • How we can avoid getting into pipeline pinches through planning (e.g., an inverted funnel model) and forecasting (e.g., next quarter pipeline).

I hope to see you there.  Register here.

Using This/Next/All-Quarter Analysis To Understand Your Pipeline

This is the third in a three-post series focused on forecasting and pipeline.  Part I examined triangulation forecasts to improve forecast accuracy and enable better conversations about the forecast.  After a review of pipeline management fundamentals, part II discussed the use of to-go pipeline coverage to provide clarity on how your pipeline is evolving across the weeks of the quarter.  In this, part III, we’ll introduce what I call this/next/all-quarter pipeline analysis as a way of looking at the entire pipeline that is superior to annual or rolling four-quarter pipeline analysis.

Let’s start by unveiling the last block on the sheet we’ve been using in the previous two posts.  Here’s the whole thing:

You’ll see two new sections added:  next-quarter pipeline and all-quarters [1] pipeline.  Here’s what we can do when we see all three of them, taken together:

  • We can see slips.  For example, in week 3 while this-quarter pipeline dropped by $3,275K, next-quarter pipeline increased by $2,000K and all-quarters only dropped by $500K.  While there are many moving parts [2], this says to me that pipeline is likely sloshing around between quarters and not being lost.
  • We can see losses.  Similarly, when this-quarter drops, next-quarter is flat, and all-quarters drop, we are probably looking at deals lost from the pipeline [3].
  • We can see wins.  When you add a row at the bottom with quarter-to-date booked new ARR, if that increases, this-quarter pipeline decreases, next-quarter pipeline stays flat, and all-quarters pipeline decreases, we are likely looking at the best way of reducing pipeline:  by winning deals!
  • We can see how we’re building next-quarter’s pipeline.  This keeps us focused on what matters [4].  If you start every quarter with 3.0x coverage you will be fine in the long run without the risk of a tantalizing four-quarter rolling pipeline where overall coverage looks sufficient, but all the closeable deals are always two to four quarters out [5].

Tantalus and his pipeline where all the closeable deals are always two quarters out

  • We can develop a sense how next-quarter pipeline coverage develops over time and get better at forecasting day-1 next-quarter pipeline coverage, which I believe marketing should habitually do [6].
  • We can look at whether we have enough total pipeline to keep our salesreps busy by not just looking at the total dollar volume, but the total count of oppties.  I think this is the simplest and most intuitive way to answer that question.  Typically 15 to 20 all-quarters oppties is the maximum any salesrep can possibly juggle.
  • Finally, there’s nowhere to hide.  Companies that only examine annual or rolling four-quarter pipeline inadvertently turn their 5+ quarter pipeline into a dumping ground full of fake deals, losses positioned as slips, long-term rolling hairballs [7], and oppties used for account squatting.
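Taken together, the first three reads above amount to a simple first-order classifier on the three deltas.  A rough heuristic sketch (the function and its thresholds are my own illustration; as the notes say, salesops should still do the deeper inflow/outflow analysis):

```python
def classify_pipeline_move(d_this, d_next, d_all, d_booked=0):
    """First-order read of week-over-week pipeline deltas (in $K).

    d_this, d_next, d_all: change in this-quarter, next-quarter, and
    all-quarters pipeline; d_booked: change in quarter-to-date booked New ARR.
    """
    if d_booked > 0 and d_this < 0 and d_next == 0:
        return "wins"      # the best way to reduce pipeline: closing deals
    if d_this < 0 and d_next > 0 and abs(d_all) < abs(d_this):
        return "slips"     # pipeline sloshing between quarters, not lost
    if d_this < 0 and d_next == 0 and d_all < 0:
        return "losses"    # deals lost or no-decisioned out of the pipeline
    return "mixed"         # needs the deeper salesops inflow/outflow analysis

# Week 3 from the example: this-quarter -3,275, next-quarter +2,000,
# all-quarters -500 -> reads as slips.
```

In practice the deltas rarely decompose this cleanly, which is exactly why the first-order read should prompt, not replace, the full analysis.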

I hope you’ve enjoyed this three-part series on forecasting and pipeline.  The spreadsheet used in the examples is available here.

# # #

Notes

[1] Apologies for inconsistencies in calling this all-quarter vs. all-quarters pipeline.  I may fix it at some point, but first things first.  Ditto for the inconsistency on this-quarter vs. current-quarter.

[2] You can and should have your salesops leader do the deeper analysis of inflows (including new pipegen) and outflows, but I love the first-order simplicity of saying, “this-quarter dropped by $800K, next-quarter increased by $800K and all-quarters was flat, ergo we are probably sloshing” or “this-quarter dropped by $1M, next-quarter was flat, and all-quarters dropped by $1M, so we probably lost $1M worth of deals.”

[3] Lost here in the broad sense meaning deal lost or no decision (aka, derail).  In the former case, someone else wins the deal; in the latter case, no one does.

[4] How do you make 32 quarters in a row?  One at a time.

[5] Tantalus was a figure in Greek mythology, famous for his punishment:  standing for eternity in a pool of water below a fruit tree where each time he ducked to drink the water it would recede and each time he reached for a fruit it was just beyond his grasp.

[6] Even though most companies have four different pipeline sources (marketing/inbound, SDR/outbound, sales/outbound, and partners), marketing should, by default, consider themselves the quarterback of the pipeline as they are usually the majority pipeline source and the most able to take corrective actions.

[7] By my definition a normal rolling hairball always sits in this quarter’s pipeline and slips one quarter every quarter.  A long-term rolling hairball is thus one that sits just beyond your pipeline opportunity scrutiny window (e.g., 5 quarters out) and slips one quarter every quarter.

 

Using To-Go Coverage to Better Understand Pipeline and Improve Forecasting

This is the second in a three-part series focused on forecasting and pipeline.  In part I, we examined triangulation forecasts with a detailed example.  In this, part II, we’ll discuss to-go pipeline coverage, specifically using it in conjunction with what we covered in part I.  In part III, we’ll look at this/next/all-quarter pipeline analysis as a simple way to see what’s happening overall with your pipeline.

Pipeline coverage is a simple enough notion:  take the pipeline in play and divide it by the target and get a coverage ratio.  Most folks say it should be around 3.0, which isn’t a bad rule of thumb.

Before diving in further, let’s quickly remind ourselves of the definition of pipeline:

Pipeline for a period is the sum of the value of all opportunities with a close date in that period.

This begs questions around definitions for opportunity, value, and close date which I won’t review here, but you can find discussed here.  The most common mistakes I see in thinking about the pipeline are:

  • Turning 3.0x into a self-fulfilling prophecy by bludgeoning reps until they have 3.0x coverage, instead of using coverage as an unmanaged indicator
  • Not periodically scrubbing the pipeline according to a defined process and rules, deluding yourself into thinking “we’re always scrubbing the pipeline” (which usually means you never are).
  • Applying hidden filters to the pipeline, such as “oh, sorry, when we say pipeline around here we mean stage-4+ pipeline.”  Thus executives often don’t even understand what they’re analyzing and upstream stages turn into pipeline landfills full of junk opportunities that are left unmanaged.
  • Pausing sales hiring until the pipeline builds, effectively confusing cause and effect in how the pipeline gets built [1].
  • Creating opportunities with placeholder values that pollute the pipeline with fake news [1A], instead of creating them with $0 value until a salesrep socializes price with the customer [2].
  • Conflating milestone-based and cohort-based conversion rates in analyzing the pipeline.
  • Doing analysis primarily on either an annual or rolling four-quarter pipeline, instead of focusing first on this-quarter and next-quarter pipeline.
  • Judging the size of the all-quarter pipeline by looking at dollar value instead of opportunity count and the distribution of oppties across reps [2A].

In this post, I’ll discuss another common mistake, which is not analyzing pipeline on a to-go basis within a quarter.

The idea is simple:

  • Many folks run around thinking, “we need 3.0x pipeline coverage at all times!”  This is ambiguous and begs the questions “of what?” and “when?” [3]
  • With a bit more rigor you can get people thinking, “we need to start the quarter with 3.0x pipeline coverage” which is not a bad rule of thumb.
  • With even a bit more rigor you can get people thinking, “at all times during the quarter I’d like to have 3.0x coverage of what I have left to sell to hit plan.” [4]

And that is the concept of to-go pipeline coverage [5].  Let’s look at the spreadsheet in the prior post with a new to-go coverage block and see what else we can glean.

Looking at this, I observe:

  • We started this quarter with $12,500 in pipeline and a pretty healthy 3.2x coverage ratio.
  • We started last quarter in a tighter position at 2.8x and we are running behind plan on the year [6].
  • We have been bleeding off pipeline faster than we have been closing business.  To-go coverage has dropped from 3.2x to 2.2x during the first 9 weeks of the quarter.  Not good.  [7]
  • I can easily reverse engineer that we’ve sold only $750K in New ARR to date [8], which is also not good.
  • There was a big drop in the pipeline in week 3 which makes me start to wonder what the gray shading means.

The gray shading is there to remind us that sales management is supposed to scrub the pipeline in weeks 2, 5, and 8 so that the pipeline data presented in weeks 3, 6, and 9 is scrubbed.  The benefits of this are:

  • It eliminates the “always scrubbing means never scrubbing” problem.
  • It draws a deadline for how long sales has to clean up after the end of a quarter:  the end of week 2.  That’s enough time to close out the quarter, take a few days rest, and then get back at it.
  • It provides a basis for snapshotting analytics.  Because pipeline conversion rates vary by week things can get confusing fast.  Thus, to keep it simple I base a lot of my pipeline metrics on week 3 snapshots (e.g., week 3 pipeline conversion rate) [9]
  • It provides an easy way to see if the scrub was actually done.  If the pipeline is flat in weeks 3, 6, and 9, I’m wondering if anyone is scrubbing anything.
  • It lets you see how dirty things got.  In this example, things were pretty dirty:  we bled off $3,275K in pipeline during the week 2 scrub which I would not be happy about.

Thus far, while this quarter is not looking good for SaaSCo, I can’t tell what happened to all that pipeline and what that means for the future.  That’s the subject of the last post in this three-part series.

A link to the spreadsheet I used in the example is here.

# # #

Notes

[1]  In enterprise SaaS at least, you should look at it the other way around:  you don’t build pipeline and then hire reps to sell it, you hire reps and then they build the pipeline, as the linked post discusses.

[1A]  The same is true of close dates.  For example, if you create opportunities with a close date that is 18+ months out, they can always be moved into the more current pipeline.  If you create them 9 months out and automatically assign a $150K value to each, you can end up with a lot of air (or fake news/data) in your pipeline.

[2]  For benchmarking purposes, this creates the need for “implied pipeline” which replaces the $0 with a segment-appropriate average sales price (ASP) as most people tend to create oppties with placeholder values.  I’d rather see the “real” pipeline and then inflate it to “implied pipeline” — plus it’s hard to know if $150K is assigned to an oppty as a placeholder that hasn’t been changed or if that’s the real value assigned by the salesrep.

[2A] If you create oppties with a placeholder value then dollar pipeline is a proxy for the oppty count, but a far less intuitive one — e.g., how much dollar volume of pipeline can a rep handle?  Dunno.  How many oppties can they work on effectively at one time?  Maybe 15-20, tops.

[3] “Of what” meaning of what number?  If you’re looking at all-quarters pipeline you may have oppties that are 4, 6, or 8+ quarters out (depending on your rules) and you most certainly don’t have an operating plan number that you’re trying to cover, nor is coverage even meaningful so far in advance.  “When” means when in the quarter?  3.0x plan coverage makes sense on day 1; it makes no sense on day 50.

[4] As it turns out, 3.0x to-go coverage is likely an excessively high bar as you get further into the quarter.  For example, by week 12, the only deals still forecast within the quarter should be very high quality.  So the rule of thumb is always 3.0x, but you can and should watch how it evolves at your firm as you get close to quarter’s end.

[5]  In times when the forecast is materially different from the plan, separating the concepts of to-go to forecast and to-go to plan can be useful.  But, by default, to-go should mean to-go to plan.

[6] I know this from the extra columns presented in the screenshot from the same sheet in the previous post.  We started this quarter at 96% of the ARR plan and while the sheet never explicitly lists our prior-quarter plan performance, it seems a safe guess.

[7]  If to-go coverage increases, we are closing business faster than we are losing it.  If to-go coverage decreases, we are “losing” (broadly defined as slip, lost, no decision) business faster than we are closing it.  If the ratio remains constant, we are closing business at the same rate at which we started the quarter.

[8]  A good sheet will list this explicitly, but you can calculate it pretty fast.  If you have a pipeline of $7,000, a plan of $3,900, and a displayed coverage of 2.2x, then 7,000/2.2 ≈ 3,180 to go (close to the true 3,150, since the displayed ratio is itself rounded), which against a plan of 3,900 means you have sold roughly 750.
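In Python, the forward and reverse calculations look like this (figures from the example; note the displayed 2.2x is itself rounded, so the reverse calculation lands near, not exactly on, the true numbers):

```python
# To-go coverage: pipeline divided by what's left to sell to hit plan.
pipeline = 7_000   # $K, this-quarter pipeline
plan = 3_900       # $K, New ARR plan
sold = 750         # $K, booked New ARR quarter-to-date

to_go = plan - sold                  # 3,150 left to sell
to_go_coverage = pipeline / to_go    # ~2.22x, displayed as 2.2x

# Reverse-engineering sold from the rounded, displayed 2.2x:
implied_to_go = pipeline / 2.2       # ~3,182, close to the true 3,150
implied_sold = plan - implied_to_go  # ~718, close to the true 750
```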

[9] An important metric that can be used as an additional triangulation forecast is New ARR / Week 3 Pipeline.
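A sketch of that metric as an extra forecasting bearing (all figures hypothetical; the historical quarters and this quarter’s week-3 snapshot are invented for illustration):

```python
# Week 3 pipeline conversion rate: New ARR ultimately booked in a quarter
# divided by that quarter's week-3 pipeline snapshot.
# Hypothetical history, $K: (week-3 pipeline, New ARR booked)
history = [(11_000, 2_750), (12_000, 3_000), (10_500, 2_625)]

week3_conversion = sum(arr for _, arr in history) / sum(pipe for pipe, _ in history)

# Applied to this quarter's week-3 snapshot as a triangulation forecast:
week3_pipeline = 12_500
conversion_forecast = week3_conversion * week3_pipeline
```

Snapshotting at a fixed week is what makes the historical rates comparable quarter over quarter.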

 

Using Triangulation Forecasts For Improved Forecast Accuracy and Better Conversations

Ever been in this meeting?

CEO:  What’s the forecast?
CRO:  Same as before, $3,400K.
Director 1:  How do you feel about it?
CRO:  Good.
Director 2:  Where will we really land?
CRO:  $3,400K.  That’s why that’s the forecast.
Director 1:  But best case, where do we land?
CRO:  Best case, $3,800K.
Director 2:  How do you define best case?
CRO:  If the stars align.

Not very productive, is it?

I’ve already blogged about one way to solve this problem:  encouraging your CRO to think probabilistically about the forecast.  But that’s a big ask.  It’s not easy to change how sales leaders think, and it’s not always the right time to ask.  So, somewhat independent of that, in this series I’ll introduce three concepts that help ensure that we have better conversations about the forecast and ultimately forecast better as a result:  triangulation forecasts, to-go pipeline coverage, and this/next/all-quarter pipeline analysis.  In this post, we’ll cover triangulation forecasts.

Triangulation Forecasts

The simplest way to have better conversations about the forecast is to have more than one forecast to discuss.  Towards that end, much as we might take three or four bearings to triangulate our position when we’re lost in the backcountry, let’s look at three or four bearings to triangulate our position on the new annual recurring revenue (ARR) forecast for the quarter.

In this example [1] we track the forecast and its evolution along with some important context such as the plan and our actuals from the previous and year-ago quarters.  We’ve placed the New ARR forecast in its leaky bucket context [2], in bold so it stands out.  Just scanning across the New ARR row, we can see a few things:

  • We sold $3,000K in New ARR last quarter, $2,850K last year, and the plan for this quarter is $3,900K.
  • The CRO is currently forecasting $3,400K, or 87% of the New ARR plan.  This is not great.
  • The CRO’s forecast has been on a steady decline since week 3, from its high of $3,800K.  This makes me nervous.
  • The CRO is likely pressuring the VP of Customer Success to cut the churn forecast to protect Net New ARR [3].
  • Our growth is well below planned growth of 37% and decelerating [4].

I’m always impressed with how much information you can extract from that top block alone if you’re used to looking at it.  But can we make it better?  Can we enable much more interesting conversations?  Yes.  Look at the second block, which includes four rows:

  • The sum of the sales reps’ forecasts [5]
  • The sum of the sales managers’ forecasts [6]
  • The stage-weighted expected value (EV) of the pipeline [7] [8]
  • The forecast category-weighted expected value of the pipeline [9]

Each of these tells you something different.

  • The rep-level forecast tells you what you’d sell if every rep hit their current forecast.  It tends to be optimistic, as reps tend to be optimistic.
  • The manager-level forecast tells you how much we’d sell if every CRO direct report hit their forecast.  This tends to be the most accurate [10] in my experience.
  • The stage-weighted expected value tells you the value of pipeline when weighted by probabilities assigned to each stage. A $1M pipeline consisting of 10 stage 2 $100K oppties has a much lower EV than a $1M pipeline with 10 stage 5 $100K oppties — even though they are both “$1M pipelines.”
  • The forecast category-weighted expected value tells you the value of pipeline when weighted by probabilities assigned to each forecast category, such as commit, forecast, or upside.

These triangulation forecasts provide different bearings that can help you understand your pipeline better, know where to focus your efforts, and improve the accuracy of predicting where you’ll land.

For example, if the rep- and manager-level forecasts are well below the CRO’s, it’s often because the CRO knows about some big deal they can pull forward to make up any gap.  Or, more sinisterly, because the CRO’s expense budget is automatically cut to preserve a target operating margin and thus they are choosing to be “upside down” rather than face an immediate expense cut [11].

If the stage-weighted forecast is much lower than the others, it indicates that while we may have the right volume of pipeline, it’s not far enough along in its evolution, and ergo we should focus on velocity.

Now, looking at our sample data, let’s make some observations about the state of the quarter at SaaSCo.

  • The reps are calling $3,400K vs. a $3,900K plan and their aggregate forecast has been fairly consistently deteriorating.  Not good.
  • The managers, who we might notice called last quarter nearly perfectly ($2,975K vs. $3,000K) have pretty consistently been calling $3,000K, or $900K below plan.  Worrisome.
  • The stage-weighted EV was pessimistic last quarter ($2,500K vs. $3,000K) and may need updated probabilities.  That said, it’s been consistently predicting around $2,600K which, scaled by last quarter’s 1.2x miss, suggests a result of roughly $3,120K [12].
  • The forecast category-weighted expected value, which was a perfect predictor last quarter, is calling $2,950K.  Note that it’s jumped up from earlier in the quarter, which we’ll get to later.

Just by these numbers, if I were running SaaSCo I’d be thinking that we’re going to land between $2,800K and $3,200K [13].  But remember our goal here:  to have better conversations about the forecast.  What questions might I ask the CRO looking at this data?

  • Why are you upside-down relative to your manager’s forecast?
  • In other quarters was the manager-level forecast the most accurate, and if so, why are you not heeding it now?
  • Why is the stage-weighted forecast calling such a low number?
  • What’s happened since week 5 such that the reps have dropped their aggregate forecast by over $600K?
  • Why is the churn forecast going down?  Was it too high to begin with, are we getting positive information on deals, or are we pressuring Customer Success to help close the gap?
  • What big/lumpy deals are in these numbers that could lead to large positive or negative surprises?
  • Why has your forecast been moving so much across the quarter?  Just 5 weeks ago you were calling $3,800K and now you’re calling $3,400K, headed in the wrong direction.
  • Have you cut your forecast sufficiently to handle additional bad news, or should I expect it to go down again next week?
  • If so, why are you not following the fairly standard rule that when you must cut your forecast you cut it deeply enough so your next move is up?  You’ve broken that rule four times this quarter.

In our next post in the series we’ll discuss to-go pipeline coverage.  A link to the spreadsheet used in the example is here.

# # #

Notes

[1] This is the top of the weekly sheet I recommend CEOs use to start their weekly staff meeting.

[2] A SaaS company is conceptualized as a leaky bucket of ARR.

[3] I cheated and looked one row down to see the churn forecast was $500K in weeks 1-6 and only started coming down (i.e., improving) as the CRO continued to cut their New ARR forecast.  This makes me suspicious, particularly if the VP of Customer Success reports to the CRO.

[4] I cheated and looked one row up to see starting ARR growing at 58% which is not going to sustain if New ARR is only growing at ~20%.  I also had to calculate planned growth (3900/2850 = 1.37) as it’s not done for me on the sheet.

[5] Assumes a world where managers do not forecast for their reps and/or otherwise cajole reps into forecasting what the manager thinks is appropriate, instead preferring for managers to make their own forecast, loosely coupling rep-level and the manager-level forecasts.

[6]  Typically, the sum of the forecasts from the CRO’s direct reports.  An equally, if perhaps not more, interesting measure would be the sum of the first-line managers’ forecasts.

[7] Expected value is math-speak for probability * value.  For example, if we had one $100K oppty with a 20% close probability, then its expected value would be $100K * 0.2 = $20K.

[8] A stage-weighted expected value of the (current quarter) pipeline is calculated by summing the expected value of each opportunity in the pipeline, using probabilities assigned to each stage.  For example, if we had only three stages (e.g., prospect, short-list, and vendor of choice) and assigned a probability to each (e.g., 10%, 30%, 70%) and then multiplied the new ARR value of each oppty by its corresponding probability and summed them, then we would have the stage-weighted expected value of the pipeline.  Note that in a more advanced world those probabilities are week-specific (and, due to quarterly seasonality, maybe week-within-quarter specific) but we’ll ignore that here for now.  Typically, one way I sidestep some of that hassle is to focus my quarterly analytics by snapshotting week 3, creating in effect week 3 conversion rates which I know will work better earlier in the quarter than later.  In the real world, these are often eyeballed initially and then calculated from regressions later on — i.e., in the last 8 quarters, what % of week 3, stage 2 oppties closed?
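The calculation in this note, as a sketch (stages, probabilities, and oppty values are the illustrative ones from the example above; swap in forecast categories and their probabilities for the category-weighted version):

```python
# Stage-weighted expected value of the current-quarter pipeline, using the
# three-stage example above (prospect 10%, short-list 30%, vendor of choice 70%).
stage_prob = {"prospect": 0.10, "short-list": 0.30, "vendor-of-choice": 0.70}

# (stage, New ARR value in $K) for each oppty in the current-quarter pipeline
pipeline = [("prospect", 100), ("short-list", 200), ("vendor-of-choice", 150)]

stage_weighted_ev = sum(stage_prob[stage] * value for stage, value in pipeline)
# 0.1*100 + 0.3*200 + 0.7*150 = 175 ($K)
```

Note how the EV of this “$450K pipeline” is well under half its face value, which is the whole point of the weighting.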

[9]  The forecast category-weighted expected value of the pipeline is the same as the stage-weighted one, except instead of using stage we use forecast category as the basis for the calculation.  For example, if we have forecast categories of upside, forecast, and commit, we might assign probabilities of 0.3, 0.7, and 0.9 to each oppty in that respective category.

[10] Sometimes embarrassingly so for the CRO whose forecast thus ends up a mathematical negative value-add!

[11] This is not a great practice IMHO and thus CEOs should not inadvertently incent inflated forecasts by hard-coding expense cuts to the forecast.

[12] The point being there are two ways to fix this problem.  One is to revise the probabilities via regression.  The other is to apply a correction factor to the calculated result.  (Methods with consistent errors are good predictors that are just miscalibrated.)
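The second method, as a sketch (the $2,500K predicted vs. $3,000K actual figures are last quarter’s from the example; the corrected figure illustrates the mechanism rather than restating the post’s own forecast):

```python
# A consistently-miscalibrated EV predictor can be fixed with a simple
# multiplicative correction factor derived from prior quarters.
last_q_predicted = 2_500   # $K, stage-weighted EV last quarter
last_q_actual = 3_000      # $K, what actually closed

correction = last_q_actual / last_q_predicted     # 1.2x

this_q_predicted = 2_600   # $K, this quarter's stage-weighted EV
this_q_corrected = correction * this_q_predicted  # ~3,120
```

With more history you’d fit the factor (or re-fit the stage probabilities) over several quarters rather than one.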

[13]  In what I’d consider an 80% confidence interval — i.e., 10% chance we’re below $2,800K and 10% chance we’re above $3,200K.