# Using Triangulation Forecasts For Improved Forecast Accuracy and Better Conversations

Ever been in this meeting?

CEO:  What’s the forecast?
CRO:  Same as before, \$3,400K.
Director 1:  How do you feel about it?
CRO:  Good.
Director 2:  Where will we really land?
CRO:  \$3,400K.  That’s why that’s the forecast.
Director 1:  But best case, where do we land?
CRO:  Best case, \$3,800K.
Director 2:  How do you define best case?
CRO:  If the stars align.

Not very productive, is it?

I’ve already blogged about one way to solve this problem:  encouraging your CRO to think probabilistically about the forecast.  But that’s a big ask.  It’s not easy to change how sales leaders think, and it’s not always the right time to ask.  So, somewhat independent of that, in this series I’ll introduce three concepts that help ensure that we have better conversations about the forecast and ultimately forecast better as a result:  triangulation forecasts, to-go pipeline coverage, and this/next/all-quarter pipeline analysis.  In this post, we’ll cover triangulation forecasts.

## Triangulation Forecasts

The simplest way to have better conversations about the forecast is to have more than one forecast to discuss.  Towards that end, much as we might take three or four bearings to triangulate our position when we’re lost in the backcountry, let’s look at three or four bearings to triangulate our position on the new annual recurring revenue (ARR) forecast for the quarter.

In this example [1] we track the forecast and its evolution along with some important context such as the plan and our actuals from the previous and year-ago quarters.  We’ve placed the New ARR forecast in its leaky bucket context [2], in bold so it stands out.  Just scanning across the New ARR row, we can see a few things:

• We sold \$3,000K in New ARR last quarter, \$2,850K last year, and the plan for this quarter is \$3,900K.
• The CRO is currently forecasting \$3,400K, or 87% of the New ARR plan.  This is not great.
• The CRO’s forecast has been on a steady decline since week 3, from its high of \$3,800K.  This makes me nervous.
• The CRO is likely pressuring the VP of Customer Success to cut the churn forecast to protect Net New ARR [3].
• Our growth is well below planned growth of 37% and decelerating [4].

I’m always impressed with how much information you can extract from that top block alone if you’re used to looking at it.  But can we make it better?  Can we enable much more interesting conversations?  Yes.  Look at the second block, which includes four rows:

• The sum of the sales reps’ forecasts [5]
• The sum of the sales managers’ forecasts [6]
• The stage-weighted expected value (EV) of the pipeline [7] [8]
• The forecast category-weighted expected value of the pipeline [9]

Each of these tells you something different.

• The rep-level forecast tells you what you’d sell if every rep hit their current forecast.  It tends to be optimistic, as reps tend to be optimistic.
• The manager-level forecast tells you what you’d sell if every CRO direct report hit their forecast.  This tends to be the most accurate [10] in my experience.
• The stage-weighted expected value tells you the value of pipeline when weighted by probabilities assigned to each stage. A \$1M pipeline consisting of 10 stage 2 \$100K oppties has a much lower EV than a \$1M pipeline with 10 stage 5 \$100K oppties — even though they are both “\$1M pipelines.”
• The forecast category-weighted expected value tells you the value of pipeline when weighted by probabilities assigned to each forecast category, such as commit, forecast, or upside.
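
The two weighted expected values can be made concrete with a minimal sketch.  The stage and category probabilities below are illustrative assumptions, not prescribed values:

```python
# Illustrative probabilities by stage and by forecast category (assumed, not prescriptive).
STAGE_PROB = {2: 0.10, 3: 0.30, 4: 0.50, 5: 0.70}
CATEGORY_PROB = {"upside": 0.3, "forecast": 0.7, "commit": 0.9}

def stage_weighted_ev(oppties):
    """Sum of each oppty's New ARR value times its stage probability."""
    return sum(value * STAGE_PROB[stage] for value, stage, _ in oppties)

def category_weighted_ev(oppties):
    """Sum of each oppty's New ARR value times its forecast-category probability."""
    return sum(value * CATEGORY_PROB[cat] for value, _, cat in oppties)

# Two "$1M pipelines": ten $100K oppties each, one all stage 2, one all stage 5.
early_pipeline = [(100_000, 2, "upside")] * 10
late_pipeline = [(100_000, 5, "commit")] * 10

print(stage_weighted_ev(early_pipeline))  # 100000.0
print(stage_weighted_ev(late_pipeline))   # 700000.0
```

Note how the two “\$1M pipelines” produce very different expected values ($100K vs. \$700K) once stage is taken into account.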

These triangulation forecasts provide different bearings that can help you understand your pipeline better, know where to focus your efforts, and improve the accuracy of predicting where you’ll land.

For example, if the rep- and manager-level forecasts are well below the CRO’s, it’s often because the CRO knows about some big deal they can pull forward to make up any gap.  Or, more sinisterly, because the CRO’s expense budget is automatically cut to preserve a target operating margin and thus they are choosing to be “upside down” rather than face an immediate expense cut [11].

If the stage-weighted forecast is much lower than the others, it indicates that while we may have the right volume of pipeline, it’s not far enough along in its evolution, and ergo we should focus on velocity.

Now, looking at our sample data, let’s make some observations about the state of the quarter at SaaSCo.

• The reps are calling \$3,400K vs. a \$3,900K plan and their aggregate forecast has been fairly consistently deteriorating.  Not good.
• The managers, who we might notice called last quarter nearly perfectly (\$2,975K vs. \$3,000K) have pretty consistently been calling \$3,000K, or \$900K below plan.  Worrisome.
• The stage-weighted EV was pessimistic last quarter (\$2,500K vs. \$3,000K) and may need updated probabilities.  That said, it’s been consistently predicting around \$2,600K which, if it’s 20% low (as it was last quarter), suggests a result of \$3,240K [12].
• The forecast category-weighted expected value, which was a perfect predictor last quarter, is calling \$2,950K.  Note that it’s jumped up from earlier in the quarter, which we’ll get to later.

Just by these numbers, if I were running SaaSCo I’d be thinking that we’re going to land between \$2,800K and \$3,200K [13].  But remember our goal here:  to have better conversations about the forecast.  What questions might I ask the CRO looking at this data?

• Why are you upside-down relative to your managers’ aggregate forecast?
• In other quarters, was the manager-level forecast the most accurate, and if so, why are you not heeding it now?
• Why is the stage-weighted forecast calling such a low number?
• What’s happened since week 5 such that the reps have dropped their aggregate forecast by over \$600K?
• Why is the churn forecast going down?  Was it too high to begin with, are we getting positive information on deals, or are we pressuring Customer Success to help close the gap?
• What big/lumpy deals are in these numbers that could lead to large positive or negative surprises?
• Why has your forecast been moving so much across the quarter?  Just 5 weeks ago you were calling \$3,800K and now you’re calling \$3,400K and headed in the wrong direction.
• Have you cut your forecast sufficiently to handle additional bad news, or should I expect it to go down again next week?
• If so, why are you not following the fairly standard rule that when you must cut your forecast you cut it deeply enough so your next move is up?  You’ve broken that rule four times this quarter.

In our next post in the series we’ll discuss to-go pipeline coverage.  A link to the spreadsheet used in the example is here.

# # #

Notes

[1] This is the top of the weekly sheet I recommend CEOs use to start their weekly staff meeting.

[2] A SaaS company is conceptualized as a leaky bucket of ARR.

[3] I cheated and looked one row down to see the churn forecast was \$500K in weeks 1-6 and only started coming down (i.e., improving) as the CRO continued to cut their New ARR forecast.  This makes me suspicious, particularly if the VP of Customer Success reports to the CRO.

[4] I cheated and looked one row up to see starting ARR growing at 58%, which is not going to be sustainable if New ARR is only growing at ~20%.  I also had to calculate planned growth (3,900 / 2,850 = 1.37) as it’s not done for me on the sheet.

[5] Assumes a world where managers do not forecast for their reps and/or otherwise cajole reps into forecasting what the manager thinks is appropriate, instead preferring for managers to make their own forecast, loosely coupling rep-level and the manager-level forecasts.

[6] Typically, the sum of the forecasts from the CRO’s direct reports.  An equally, if not more, interesting measure would be the sum of the first-line managers’ forecasts.

[7] Expected value is math-speak for probability * value.  For example, if we had one \$100K oppty with a 20% close probability, then its expected value would be \$100K * 0.2 = \$20K.

[8] A stage-weighted expected value of the (current quarter) pipeline is calculated by summing the expected value of each opportunity in the pipeline, using probabilities assigned to each stage.  For example, if we had only three stages (e.g., prospect, short-list, and vendor of choice) and assigned a probability to each (e.g., 10%, 30%, 70%) and then multiplied the new ARR value of each oppty by its corresponding probability and summed them, then we would have the stage-weighted expected value of the pipeline.  Note that in a more advanced world those probabilities are week-specific (and, due to quarterly seasonality, maybe week-within-quarter specific) but we’ll ignore that here for now.  Typically, one way I sidestep some of that hassle is to focus my quarterly analytics by snapshotting week 3, creating in effect week 3 conversion rates which I know will work better earlier in the quarter than later.  In the real world, these are often eyeballed initially and then calculated from regressions later on — i.e., in the last 8 quarters, what % of week 3, stage 2 oppties closed?
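
The “regression” mentioned above can start as simple counting.  Here’s a minimal sketch, assuming hypothetical week-3 snapshot data where each oppty is tagged with its stage at the time and whether it ultimately closed-won that quarter:

```python
from collections import defaultdict

# Hypothetical history: week-3 snapshots of oppties across prior quarters,
# each recording the stage at week 3 and whether the oppty eventually closed-won.
history = [
    {"stage": 2, "won": False}, {"stage": 2, "won": True},
    {"stage": 5, "won": True},  {"stage": 5, "won": True},
    {"stage": 5, "won": False},
]

def empirical_stage_probs(snapshots):
    """For each stage, the fraction of week-3 oppties that eventually closed-won."""
    won, total = defaultdict(int), defaultdict(int)
    for o in snapshots:
        total[o["stage"]] += 1
        won[o["stage"]] += o["won"]  # True counts as 1, False as 0
    return {stage: won[stage] / total[stage] for stage in total}

print(empirical_stage_probs(history))  # {2: 0.5, 5: 0.6666666666666666}
```

With eight quarters of real snapshots, these counts replace the eyeballed probabilities.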

[9] The forecast category-weighted expected value of the pipeline is the same as the stage-weighted one, except that instead of stage we use forecast category as the basis for the calculation.  For example, if we have forecast categories of upside, forecast, and commit, we might assign probabilities of 0.3, 0.7, and 0.9 to each oppty in the respective category.

[10] Sometimes embarrassingly so for the CRO whose forecast thus ends up a mathematical negative value-add!

[11] This is not a great practice IMHO and thus CEOs should not inadvertently incent inflated forecasts by hard-coding expense cuts to the forecast.

[12] The point being there are two ways to fix this problem.  One is to revise the probabilities via regression.  The other is to apply a correction factor to the calculated result.  (Methods with consistent errors are good predictors that are just miscalibrated.)
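
The correction-factor approach can be sketched in a few lines.  The numbers below are hypothetical, purely to show the mechanics:

```python
def corrected_ev(current_ev, last_q_actual, last_q_ev):
    """Scale this quarter's EV reading by last quarter's actual/predicted ratio."""
    return current_ev * (last_q_actual / last_q_ev)

# Hypothetical example (in $K): last quarter the weighted EV called 2,000 vs. an
# actual of 2,400, a correction factor of 1.2; apply it to this quarter's 2,200.
print(corrected_ev(2200, 2400, 2000))  # 2640.0
```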

[13] In what I’d consider an 80% confidence interval — i.e., 10% chance we’re below \$2,800K and 10% chance we’re above \$3,200K.
