Win Rates, Close Rates and Milestone vs. Flow Analysis

Hey, what’s your win rate?

It’s another seemingly simple question.  But, like most SaaS metrics, when you dig deeper you find it’s not.  In this post we’ll take a look at how to calculate win rates and use win rates to introduce the broader concept of milestone vs. flow analysis that applies to conversion rates across the entire sales funnel.

Let’s start with some assumptions.  Once an opportunity is accepted by sales (known as a sales-accepted lead, or SAL), it will eventually end up in one of three terminal states:

  • Won
  • Lost
  • Other (derailed, no decision)

Some people don’t like “other” and insist that opportunities should be exclusively either won or lost and that other is an unnecessary form of lost which should be tracked with a lost reason code as opposed to its own state.  I prefer to keep other, and call it derailed, because a competitive loss is conceptually different from a project cancellation, major delay, loss of sponsor, or a company acquisition that halts the project.  Whether you want to call it other, no decision, or derailed, I think having a third terminal state is warranted from first principles.  However, it can make things complicated.

For example, you’ll need to calculate win rates two ways:

  • Win rate, narrow = wins / (wins + losses)
  • Win rate, broad = wins / (wins + losses + derails)

Your narrow win rate tells you how good you are at beating the competition.  Your broad win rate tells you how good you are at closing deals (that come to a terminal state).

Narrow win rate alone can be misleading.  If I told you a company had a 66% win rate, you might be tempted to say “time to add more salespeople and scale this thing up.”  If I told you they got that 66% win rate by derailing 94 of every 100 opportunities they generated, winning 4, and losing the other 2, then you’d say “not so fast.”  This, of course, would show up in the broad win rate of 4%.

This brings up the important question of timing.  Both of these win rate calculations ignore deals that push out of a quarter.  So another degenerate case is a situation where you win 4, lose 2, derail 4, and push 90 opportunities.  In this case, narrow win rate = 66% and broad win rate = 40%.  Neither is shining a light on the problem (which, if it happens continuously, I call a rolling hairball problem).
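To make the two definitions concrete, here’s a minimal Python sketch using the hypothetical counts from the degenerate case above:

```python
def win_rates(won, lost, derailed):
    """Compute narrow and broad win rates from terminal-state counts."""
    narrow = won / (won + lost)            # how well you beat the competition
    broad = won / (won + lost + derailed)  # how well you close terminal deals
    return narrow, broad

# Degenerate case from above: win 4, lose 2, derail 4 -- and push 90,
# which neither rate even sees.
narrow, broad = win_rates(won=4, lost=2, derailed=4)
print(f"narrow: {narrow:.1%}, broad: {broad:.1%}")  # narrow: 66.7%, broad: 40.0%
```

Note that the 90 pushed opportunities never appear in either denominator, which is exactly the blind spot.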

The issue is that thus far we’ve been performing what I call a milestone analysis.  In effect, we put observers by the side of the road at various milestones (created, won, lost, derailed) and ask them to count the number of opportunities that pass by each quarter.  The problem, especially at companies with long sales cycles, is that you have no idea of progression.  You don’t know if the opportunities that passed “win” this quarter came from the opportunities that passed “created” this quarter, or if they came from last quarter, the quarter before that, or even earlier.

Milestone analysis has two key advantages:

  • It’s easy — you just need to count opportunities passing milestones
  • It’s instant — you don’t have to wait to see how things play out to generate answers

The big disadvantage is it can be misleading, because the opportunities hitting a terminal state this quarter were generated in many different time periods.  For a company with an average 9-month sales cycle, the opportunities hitting a terminal state in quarter N were generated primarily in quarter N-3, but with some coming in quarters N-2 and N-1 and some coming in quarters N-4 and N-5.  Across that period very little was constant; marketing programs and messages, for example, changed.  So a marketing effectiveness analysis would be very difficult when approached this way.

For those sorts of questions, I think it’s far better to do a cohort-based analysis, which I call a flow analysis.  Instead of looking at all the opportunities that hit a terminal state in a given time period, you go back in time, grab a cohort of opportunities (e.g., all those generated in 4Q16) and then see how they play out over time.  You go with the flow.

For marketing program effectiveness, this is the only way to do it.  Instead of a time-based cohort, you’d take a programs-based cohort (e.g., all the opportunities generated by marketing program X), see how they play out, and then compare various programs in terms of effectiveness.
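To see the mechanical difference between the two analyses, here’s a minimal sketch, assuming a simple opportunity log with one row per opportunity (the data below is hypothetical):

```python
import pandas as pd

# Hypothetical opportunity log: when each opportunity was created and, once
# resolved, the quarter and type of its terminal state (None = still open).
opps = pd.DataFrame({
    "created_qtr":  ["1Q17", "1Q17", "1Q17", "2Q17", "2Q17"],
    "program":      ["X",    "X",    "Y",    "X",    "Y"],
    "terminal_qtr": ["3Q17", "4Q17", None,   "4Q17", "1Q18"],
    "outcome":      ["won",  "lost", None,   "won",  "derailed"],
})

# Milestone analysis: stand at the terminal-state milestone and count what
# passes by each quarter, regardless of when those opportunities were created.
milestone_counts = opps.groupby(["terminal_qtr", "outcome"]).size()

# Flow (cohort) analysis: grab everything created in one quarter and see how
# that cohort plays out, whenever the resolutions happen.
cohort = opps[opps["created_qtr"] == "1Q17"]
close_rate = (cohort["outcome"] == "won").mean()  # wins / all generated

# A programs-based cohort works the same way, keyed on program instead:
program_close = opps.groupby("program")["outcome"].apply(lambda o: (o == "won").mean())
```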

The big downside of flow analysis is you end up analyzing ancient history.  For example, if you have a 9-month average sales cycle with a wide distribution around the mean, you may need to wait 15-18 months before the vast majority of the opportunities hit a terminal state.  If you analyze too early, too many opportunities are still open.  But if you put off the analysis, you may get important information too late to act on it.

You can compress the time window by measuring program effectiveness not against sales outcomes but against important steps along the funnel.  That way you could compare two programs on the basis of their ability to generate MQLs or SALs, but you still wouldn’t know whether, and at what relative rate, they generate actual customers.  So you could end up doubling down on a program that generates a lot of interest, but not a lot of deals.

Back to our original topic, the same concept comes up in analyzing win rates.  Regardless of which win rate you’re calculating, at most companies you’re calculating it on a milestone basis.  I find milestone-based win rates more volatile and less accurate than a flow-based SAL-to-close rate.  For example, if I were building a marketing funnel to determine how many deals I need to hit next year’s number, I’d want to use a SAL-to-close rate, not a win rate, to do so (see the sketch after this list).  Why?  SAL-to-close rates:

  • Are less volatile because they’re damped by using long periods of time.
  • Are more accurate because they track what you actually care about: if I get 100 opportunities, how many close within a given time period.
  • Automatically factor in derails and slips (the former are ignored in the narrow win rate and the latter ignored in both the narrow and broad win rates).
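For instance, here’s the kind of funnel math I mean, as a small sketch (the target, deal size, and close rate are hypothetical; the 25% close rate matches the benchmark range mentioned at the end of this post):

```python
def sals_needed(new_arr_target, avg_deal_size, sal_to_close_rate):
    """How many SALs must the funnel produce to hit a new-ARR target?"""
    deals_needed = new_arr_target / avg_deal_size
    return deals_needed / sal_to_close_rate

# Hypothetical plan: $10M in new ARR at a $100K average deal size needs 100
# deals; at a 25% SAL-to-close rate, that means 400 SALs.
print(sals_needed(10_000_000, 100_000, 0.25))  # 400.0
```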

Let’s look at an example.  Here’s a chart that tracks 20 opportunities, 10 generated in 1Q17 and 10 generated in 2Q17, through their entire lifetime to a terminal state.

[Chart: opportunity tracking for the 20 opportunities]

In reality things are a lot more complicated than this picture because you have opportunities still being generated in 3Q17 through 4Q18 and you’ll have opportunities that are still in play generated in numerous quarters before 1Q17.  But to keep things simple, let’s just analyze this little slice of the world.  Let’s do a milestone-based win/loss analysis.

[Chart: milestone-based win/loss rates by quarter]

First, you can see the milestone-based win/loss rates bounce around a lot.  Here it’s due in part to the law of small numbers, but I do see similar volatility in real life — in my experience win rates bounce within a fairly broad zone — so I think it’s a real issue.  Regardless of that, what’s indisputable is that in this example, this is how things will look to the milestone-based win/loss analyzer.  Not a very clear picture — and a lot to panic about in 4Q17.

Let’s look at what a flow-based cohort analysis produces.

[Chart: flow-based cohort analysis of the 1Q17 and 2Q17 cohorts]

In this case, we analyze the cohort of opportunities generated in the year-ago quarter.  Since we only generate opportunities in two quarters, 1Q17 and 2Q17, we have only two cohorts to analyze and get only two sets of numbers.  The thin blue box in the opportunity tracking chart shows the data summarized in the 1Q18 column, and the thin orange box shows the data for the 2Q18 column.  Both boxes show that 3 opportunities in each cohort are still open at the end of the analysis period (imagine you did the 1Q18 analysis in 1Q18) and haven’t come to final resolution.  The cohorts both produce a 50% narrow win rate, a 43% vs. 29% broad win rate, and a 30% vs. 20% close rate.  How good are these numbers?

Well, in our example, we have the luxury of finding the true rates by letting the six open opportunities close out over time.  By doing a flow-based analysis in 4Q18 of the 1H17 cohort, we can see that our true narrow win rate is 57%, our true broad win rate is 40%, and our close rate is also 40% (which, once everything has arrived at a terminal state, is definitionally identical to the broad win rate).

[Chart: flow-based analysis of the full 1H17 cohort, run in 4Q18]
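As a minimal sketch, here are those three rates computed from terminal-state counts, using counts consistent with the rates quoted above:

```python
def cohort_rates(won, lost, derailed, still_open):
    """Narrow/broad win rates and close rate for one cohort of opportunities."""
    generated = won + lost + derailed + still_open
    return {
        "narrow_win": won / (won + lost),
        "broad_win":  won / (won + lost + derailed),
        "close":      won / generated,  # wins over everything generated, open or not
    }

# 1Q17 cohort as analyzed in 1Q18: 3 won, 3 lost, 1 derailed, 3 still open.
print(cohort_rates(won=3, lost=3, derailed=1, still_open=3))  # 50%, ~43%, 30%

# The full 1H17 cohort analyzed in 4Q18, with nothing left open: the close
# rate and broad win rate converge, by definition.
print(cohort_rates(won=8, lost=6, derailed=6, still_open=0))  # ~57%, 40%, 40%
```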

Hopefully this post has helped you think about your funnel differently by introducing the concept of milestone- vs. flow-based analysis and by demonstrating how the same business situation results in very different rates depending on both the choice of win rate and analysis type.

Please note that the math in this example backed me into a 40% close rate, which is about double what I believe is the benchmark in enterprise software — I think 20 to 25% is a more normal range.

 

Just Effing Demo

I remember one time reading a win/loss report that went something like this.

“We were interested in buying Host and it made our short list.  When we invited you in for a demo with our team and the CFO, things went wrong.  After 20 minutes, your sales team was still talking about the product so the CFO left the meeting and didn’t want to evaluate your solution anymore.”

Huh?  What!  We spend a few hundred dollars to get a lead, maybe a few thousand to get it converted to a sales opportunity, we give it to our sales team and then they ‘show up and throw up’ on a prospect, talking for so long that the key decision maker leaves?

Yes, salespeople love to talk, but this can’t happen.  I remember another time a prospect called me.

“Look, I’ve been using EPM systems for 25 years.  I’ve used Hyperion, Essbase, TM1, and BPC.  I’ve been in FP&A my entire career.  I have an MBA from Columbia.  I am fully capable of determining my own needs and don’t want to play Twenty Questions with some 20-something SDR and then play it again with some sales consultant before I can get a live demo of your software.  Can we make that happen or not?”

Ouch.  In this case, our well-defined and valued sales process (which required “qualification” and then “discovery”) was getting in the way of what the eminently qualified prospect wanted.

In today’s world, prospects both have and want more control over the sales process than ever before.  Yes, we might want to understand your requirements so we can put proper emphasis on different parts of the demonstration, but when a prospect — who clearly knows both what they’re doing and what they want — asks us for a demo, what should we do?  One thing:

Just effing demo  —  and then ask about requirements along the way

Look, I’m not trying to undo all the wisdom of learning how to do deep discovery and give customized demos, espoused by world-class sales trainers like Barry Rhein or in books like Just F*ing Demo (from whose title I derived the title of this post [1]).  These are all great ideas.  They should be your standard procedure.

But you need to remember to be flexible.  I always say don’t be a slave to metrics.  Don’t be a slave to process, either.

Here’s what I’ve learned from these situations:

Avoid triple-qualifying prospects with an SDR, then a rep, then an SC. Make SDR qualification quick and light.  Combine rep and SC qualification/discovery whenever possible. Don’t make the prospect jump through hoops just to get things started.

Intelligently adapt your process. If the prospect says they’re an expert, wants to judge for themselves, and just wants a quick look at your standard demo, don’t try to force a deep discovery call so you can customize – even if that’s your standard process.  Recognize that you’re in a non-standard situation, and just show up and do what they want.

Set expectations appropriately. There is a difference between a “Product Overview” and “Demonstration.”  If you think the right meeting is 30 minutes of slides to frame things and then a 30-minute demo, tell the prospect that, get their feedback, and if everyone agrees, then write “Product Overview” (not “Demonstration”) on the agenda.

Don’t make them wait. If you say the presentation is a one-hour demo, you should be demoing software within the first 5-7 minutes.  While brief personnel introductions are fine, anything else you do up-front should tee-up the demo.  This is not the time to talk about your corporate values, venture investors, or where the founder went to school.  Do that later, if indeed at all.

# # #

[1] A great book, by the way.  My favorite quote:  “in short, I stopped trying to deliver the perfect demo for my product and started trying to deliver the perfect demo for my audience.”

Don’t Let Product Management Turn Into “The Roadmap Guys”

At many enterprise software companies product management (PM) ends up defaulting into a role that I can’t stand:  The Roadmap Guys*.

Like a restaurant with one item on the menu, the company defaults into ordering one thing from product management:  a roadmap pitch.

  • “The VP of PM is in Boston and Providence this week, can she visit some customers and do a few roadmap presentations?”
  • “Hey, there’s a local user group in NY this week; can PM do a roadmap pitch?”
  • “There’s a big customer in the executive briefing center today; can the PM do a roadmap?”
  • “As part of our sales cycle with prospect X, we’d love to get PM in to discuss the roadmap.”
  • “We’ve got a SAS day with Gartner next week, can PM come in and present the roadmap?”

You hear it all the time.  And I hate it.  Why?

From a sales perspective, roadmap presentations are the anti-sales pitch:  a well-organized presentation of all the things your products don’t do.  Great, let’s spend lots of time talking about that.

From a competitive perspective, you’re broadcasting your plans.  If you’re presenting roadmap to every prospect who comes through the briefing center and at every local user group meeting, your competition is going to learn your roadmap, and fast.  Then they can copy it and/or blunt it.

But what irks me the most is what happens from a product management perspective:  you turn PM into “the talking guys” instead of “the listening guys.”  Given enough time, PM starts to view itself as the folks who show up and pitch roadmaps.

But that’s not their job.

PM should be the listening folks, not the talking folks.  Just like sales, PM should remember the adage:  we have two ears and one mouth; use them in proportion.

Wouldn’t the world be a better place if we changed the five previous bullets as follows?

  • “The VP of PM is in Boston and Providence this week, can she visit some customers and observe how people actually use the product?”
  • “Hey, there’s a local user group in NY this week; can PM break off a small focus group to ask customers about how they use the product?”
  • “There’s a big customer in the executive briefing center today; can PM come in and interview them about their impressions on evaluating the product?”
  • “As part of our sales cycle with prospect X, we’d love to get PM in to discuss what specifically they are trying to accomplish and how the product can do that?”
  • “We’ve got a SAS day with Gartner next week, can PM come in and hear from Gartner about what they’re seeing in the market and in their interactions with customers?”

So every time you hear the word “roadmap” in the same sentence as “product management,” stop, pause, and think of a better way to use the PM team.  Sure, there are certainly times when a roadmap presentation is in order.  But don’t default to it.  Keep your PM team listening instead of talking.

# # #

* I’m using “guys” here in a gender-neutral sense like “folks.”

Can You Solution Sell without Selling Solutions?

Yes.  And for those who get the distinction, I might add, somewhat obviously.

But too many people don’t get it.  Too many folks equate “solution selling” with “selling solutions.”  In fact, they’re quite different.  So, in this post, we’ll try to make the world a better place by explaining the difference between selling solutions and solution selling [1].

What is Solution Selling?

First and foremost, Solution Selling is a book [2].  And it’s a book written by a guy, Michael Bosworth, who, if memory serves, was trying to sell Knowledge Management Software in the 1980s.  Never forget this.  Solution Selling wasn’t written by a guy selling easy-to-sell products in a hot category, such as (at the time) Oracle database or PeopleSoft applications.  Solution Selling was written by a guy trying to sell in a tough category. Look at the subtitle of the book:  “Creating Buyers in Difficult Selling Markets.”

Necessity, as they say, is the mother of invention.

When you’re selling in a hot category [3], this is what you hear from the market.

“Yes, we’re going to buy a business intelligence tool and Gartner tells us it should be one of Cognos, Brio, and you — so you’re going to need to explain why we should pick you over the other two.”

Nothing about value.  Nothing about problems.  Nothing about ROI.  We’ve already decided we’re going to buy one and you need to convince us why to buy yours.  [4]

When you’re selling in a cold category, the conversation goes something like this:

“A what?  An XML database system?  Wait, didn’t Gartner call that ‘the market that never was’ about two years ago — why in the world would anyone ever buy one of those?” [5]

In the first case, the sales cycle is all about differentiation.  In the second case, it’s all about value.  In the first case, it’s why buy one from me.  In the second, it’s why buy one at all.

Solution selling is the process of identifying a business problem that the product solves, finding the business owner of that problem, and selling them on the value of solving that problem and your ability to do so.

To use my favorite marketing analogy [6], solution selling is the process of selling the value of a ¼” hole.  Product selling is talking all about the wonderful titanium that’s in the ¼” drill bit.

For example, at MarkLogic we sold the world’s finest XML database system and XQuery processing engine.  In terms of market interest, that plus $3 will get you a tall latte.  That is, no one cared.  You could call up IT people and database architects and database administrators all day and tell them you had the world’s finest XQuery engine and no one would care.  They weren’t interested in the category.

Certain businesspeople, however, were quite interested in what you could do with it.

  • If you called the SVP of K-12 Education at Pearson and talked about solving the tricky problem of customizing textbooks to meet many and varied state regulations, you’d get a call back.
  • If you called an intelligence officer at your favorite three-letter agency and talked about gathering, enriching, and querying open source content to build next-generation OSINT systems, you’d get a call back.
  • If you called the SVP of Digital Strategy at McGraw-Hill and talked overall about how the industry needed to separate content from the container in building next-generation products in response to the massive threat to media caused by Google, you’d get a call back.

Simply put, if you called a person about an important problem that they needed to solve, they’d call you back.  Whether they’d buy from you would come down to the extent they believed you could solve the problem, based on several factors including a technology assessment, conversations with reference customers for whom you’d solved the problem before, the cost/benefit associated with the project, and whether they wanted to work with you. [7]

What is Selling Solutions?

Geoffrey Moore refers to an important concept called “whole product” in Crossing the Chasm.  It’s the idea that you’re not just selling a technology platform to your beachhead market, you’re selling the fact that you know how to solve problems with it.  Solving those problems might require hundreds of hours of consulting services, integration with complementary third-party software packages, and data integration with existing core systems.

But nobody said the “whole product” had to be packaged up as, for example, a set of customizable templates that help accelerate the process of solving the problem.  This is the zone of “solutions.”

Many companies, early in their lifecycle for focus reasons or late in their lifecycle to increase the size of a saturating market [8], decide they want to package up a solution after repeatedly solving a problem in a certain area.  This often starts out as leftover consulting-ware and over time can evolve into a set of full-blown applications.

At most software companies, particularly bigger ones, when you start talking about packaged solutions, this is what you mean:  the combination of know-how and leftover intellectual property (IP) from prior engagements not licensed as software product but nevertheless used to both accelerate the time it takes to build the solution and reduce the risk of failure in so doing.

For example, during my time at MarkLogic, we often debated whether and to what extent we should create a packaged custom publishing solution or simply think of custom publishing as a focus area, something that we had a lot of know-how in, and re-use whatever leftover IP we could from prior gigs without glorifying it as a packaged solution.  Because the assignments were so different (publishers used us as the platform to build their products), we never opted to do so.  Had we been selling a business-support application as opposed to a product development platform, we probably would have.

The Difference Between Solution Selling and Selling Solutions

Solution selling is an approach to (and a complete methodology for) the sales process.  Selling solutions means selling packaged, typically application-layer, know-how built into a series of templates and frameworks that help accelerate the process of solving a given problem.

They are different ideas.

You can solution sell without a single packaged solution in your product line.  To again answer the question posed by the title of this post:  Yes, you can solution sell without selling solutions.

Solution selling is simply an approach to how you sell your product.  Certainly it can be easier to solution sell when you are selling solutions.  But it is not required and one is not tantamount to the other.

# # #

Notes

[1] In fact, rather perversely, you can sell solutions without solution selling.  If your company built a custom-tailored solution to solve a specific business problem and you sold it emphasizing the features of the solution (i.e., “feeds and speeds”) without trying to understand the customer’s specific business problem and its impacts, then you’d be guilty of product-selling a solution.  See the end of the post.

[2] Which has largely been replaced by the author’s next book, Customer Centric Selling, but which – like many classics – was better before it was “improved” in my humble opinion.

[3] Which leads to one of my favorite sayings:  “if you have to ask if you’re working in a hot category, you’re not.”  If you were, two things would be different:  first, you’d know and second, you’d be too busy to ask.  QED.

[4] Which results in what I call an “axe battle” sales process, reminiscent of knights in heavy armor swinging axes at each other, where each blow can be thought of as a feature.  “We have aggregate awareness, boom.”  “We have dynamic microcubes, boom.”  And so on.

[5] Gartner did, in fact, say precisely this about the XML database market, but that didn’t stop us from building MarkLogic from $0 to an $80M revenue run-rate during my six years there.  It did, however, provide a huge clue that we needed to adopt a solution-selling methodology (and bowling-alley strategy) in so doing.

[6] “Purchasing agents buy ¼” holes, not ¼” bits.”  Theodore Levitt.

[7] Because a startup can only develop this fluency and experience in a small number of solutions, you should cross the chasm by focusing on an initial beachhead and then build out into other markets through adjacencies (aka, bowling alley strategy) as described in Inside the Tornado.  In many ways, the solution selling sales methodology goes hand in hand with these strategy books by Geoffrey Moore.

[8] Geoffrey Moore calls these “+1” additions; they help grow the market as the once-hot core technology market saturates and you need to switch back to a solution focus if you wish to increase the market size.

A Look at the Tintri S-1

Every now and then I take a dive into an S-1 to see what clears the current, ever-changing bar for going public.  After a somewhat rocky IPO process, Tintri went public June 30 after cutting the IPO offering price and has traded roughly flat since then.

Let’s read an excerpt from this Business Insider story before taking a look at the numbers.

Before going public, Tintri had raised $260 million from venture investors and was valued at $800 million.

With the performance of this IPO, the company is now valued at about $231 million, based on $7.50 a share and its roughly 31 million outstanding shares (if the IPO’s bankers don’t buy their optional, additional roughly 1.3 million shares).

In other words, this IPO killed a good $570 million of the company’s value.

In other words, Tintri looks like a “down-round IPO” (or an “IPO of last resort”) — something that frankly almost never happened before the mid/late-stage private valuation bubble of the past 4 years.

Let’s look at some numbers.

[Table: Tintri P&L summary]

Of note:

  • $125M in FY2017 revenue.  (They have scale, but this is not a SaaS company, so the revenue is mostly non-recurring, which makes it easier to grow quickly and makes the revenue worth less because only the support/maintenance component renews each year.)
  • 45% YoY total revenue growth.  (On the low side, especially given that they have a traditional license/maintenance model and recognize revenue on shipment.)
  • 65% gross margins  (Low, but they do seem to sell flash memory hardware as part of their storage solutions.)
  • 87% of revenue spent on S&M (High, again particularly for a non-SaaS company.)
  • 43% of revenue spent on R&D  (High, but usually seen as a good thing if you view the R&D money as well spent.)
  • -81% operating margins (Low, particularly for a non-SaaS company.)
  • -$70.4M in cash flow from operating activities in 2017 (~$17.6M average quarterly cash burn from operations)
  • Incremental S&M / incremental product revenue = 73%, so they’re buying $1 worth of incremental (YoY) revenue for an incremental 73 cents in S&M.  Expensive but better than some.

Overall, my impression is of an on-premises (and to a lesser extent, hardware) company in SaaS clothing — i.e., Tintri’s metrics look like a SaaS company’s, but Tintri isn’t one, so its metrics should look better.  SaaS company metrics typically look worse than traditional software companies’ for two reasons:  (1) revenue growth is depressed by the need to amortize revenue over the course of the subscription and (2) subscription companies are willing to spend more on S&M to acquire a customer because of the recurring nature of a subscription.

Concretely, if you compare two 100-unit customers, the SaaS customer is worth roughly twice as much as the license/maintenance customer over 5 years.

[Table: 5-year value of a SaaS customer vs. a license/maintenance customer]
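The table itself isn’t reproduced here, but a back-of-envelope version of the comparison looks something like this.  The pricing is hypothetical (a $100K license carrying 20% annual maintenance vs. an $80K/year subscription), chosen to land near the 2x figure; the exact ratio obviously depends on your pricing assumptions:

```python
years = 5

# Hypothetical license/maintenance customer: pay once, then 20% maintenance
# per year; only the maintenance recurs.
license_price, maintenance_rate = 100_000, 0.20
license_value = license_price + maintenance_rate * license_price * years  # $200K

# Hypothetical SaaS customer: the whole fee recurs every year.
saas_annual = 80_000
saas_value = saas_annual * years  # $400K

print(saas_value / license_value)  # 2.0 -- roughly twice as much over 5 years
```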

Moreover, even if Tintri were a SaaS company, it is quite out of compliance with the Rule of 40, which says growth rate + operating margin >= 40%.  In Tintri’s case, we get -36% (45% growth plus -81% operating margin), so they’re 76 points off the rule.
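As a quick sketch of that arithmetic:

```python
def rule_of_40(growth_pct, operating_margin_pct):
    """Rule of 40 score and the gap to the 40% threshold."""
    score = growth_pct + operating_margin_pct
    return score, 40 - score

print(rule_of_40(45, -81))  # (-36, 76): 76 points short of the rule
```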

Other Notes

  • 1250+ customers
  • 21 of the Fortune 100
  • 527 employees as of 1/31/17
  • CEO 2017 cash compensation $525K
  • CFO 2017 cash compensation $330K
  • Issued special retention stock grants in May 2017 that vest in the two years following an IPO
  • Did option repricing in May 2017 to $2.28/share down from weighted average exercise price of $4.05.
  • $260M in capital raised prior to IPO
  • Loans to CFO and CEO to exercise stock options at 1.6% to 1.9% interest in 2013
  • NEA 22.7% ownership prior to the offering
  • Lightspeed 14.5% ownership
  • Insight Venture Partners 20.2% ownership
  • Silver Lake 20.4% ownership
  • CEO 3.8% ownership
  • CFO 0.7% ownership
  • $48.9M in long-term debt
  • $13.8M in 2017 stock-based compensation expense

Overall (and see my disclaimers), this is one that I’ll be passing on.

 

The Dogshit Bar: A Memorable Market Research Concept

I can’t tell you the number of times I’ve seen market research that suffers from one key problem.  It goes something like this:

  • What do you think of PRODUCT’s user interface?
  • Do you think PRODUCT should be part of suite or a standalone module?
  • Is the value of PRODUCT best measured per-user or per-bite?
  • Is the PRODUCT’s functionality best delivered as a native application or via a browser?
  • Would you like PRODUCT priced per-user or per-consumption?
  • Rank the importance of features 1-4 in PRODUCT?

The problem is, of course, that you’ve never asked the one question that actually matters — would you buy this product — and are pre-supposing the need for the product and that someone would pay something to fulfill that need.

So try this:  substitute “Dogshit Bar” (i.e., a candy bar made of dog shit) for every instance of PRODUCT in one of your market research surveys and see what happens.  Very quickly, you’ll realize that you’re asking questions equivalent to:

  • Should the Dogshit Bar be delivered in a paper or plastic wrapper?
  • Would you prefer to buy the Dogshit Bar in a 3, 6, or 9 oz size?
  • Should the Dogshit Bar be priced by ounce or some other metric?

So before drilling into all the details that product management can obsess over, step back and ask some fundamental questions first:

  • Does the product solve a problem faced by your organization?
  • How high a priority is that problem?  (Perhaps ranked against a list of high-level priorities for the buyer.  It’s not enough that it solves a problem; it needs to solve an important problem.)
  • What would be the economic value of solving that problem?  (That is, how much value can this product provide.)
  • Would you be willing to pay for it and, if so, how much?  (Which starts to factor in not just  value but the relative cost of alternative solutions.)

So why do people make this mistake?

I believe there’s some feeling that it’s heretical to ask the basic questions about the startup’s core product or the big company’s new strategic initiative that the execs dreamed up at an offsite.  While the execs can dream up new product ideas all day long, there’s one thing they can’t do:  force people to buy them.

That’s why you need to ask the most basic, fundamental questions in market research first, before proceeding on to analyzing packaging, interface, feature trade-offs, platforms, etc.  You can generate lots of data to go analyze about whether people prefer paper or plastic packaging or the 3, 6, or 9 ounce size.  But none of it will matter.  Because no one’s going to buy a Dogshit Bar.

Now, before wrapping this up, we need to be careful of the Bradley Effect in market research, an important phenomenon in live research (as opposed to anonymous polls) and one of several reasons why pollsters generally called Trump vs. Clinton incorrectly in the 2016 Presidential election.

I’ll apply the Bradley Effect to product research as follows:  while there are certain exception categories where people will say they won’t buy something that they actually will (e.g., pornography), in general:

  • If someone says they won’t buy something, then they won’t
  • If someone says they will buy something, then they might

Why?  Perhaps they’re trying to be nice.  Perhaps they do see some value, but just not enough.  Perhaps there is a social stigma associated with saying no.

I first learned about this phenomenon reading Ogilvy on Advertising, a classic marketing text by the father of advertising, David Ogilvy.  Early in his career Ogilvy got lucky and learned an important lesson.  While working for George Gallup he was assigned to do polling about a movie entitled Abe Lincoln in Illinois.  While the research determined the movie was going to be a roaring success, the film ended up a flop.  Why?  The participants lied.  After all, who wants to sound unpatriotic and tell a pollster that you won’t go see a movie about Abe Lincoln?  Here’s a picture of Ogilvy doing that research.  Always remember it.

[Photo: David Ogilvy conducting survey research for George Gallup]

The Opportunity Cost of Debating Facts

I read this New York Times editorial this morning, How the Truth Got Hacked, and it reminded me of a situation at work, back when I first joined Host Analytics some four years ago.  This line, in particular, caught my attention:

Imagine the conversation we’d be having if we weren’t debating facts.

Back when I joined Host Analytics, we had an unfortunate but not terribly unusual dysfunction between product management (PM) and Engineering (ENG).  By the time the conflict got to my office, it went something like this:

PM:  “ENG said they’d deliver X, Y, and Z in the next release and now they’re only delivering X and half of Y.  I can’t believe this, and what am I going to tell the customers and analysts who I told we were delivering …”

ENG:  “PM is always asking us to deliver too much and we never actually committed to deliver all of Y and we certainly didn’t commit to deliver Z.”

(For extra fun, compound this somewhat normal level of dysfunction with American vs. Indian communication style differences – including a quite subtle way of saying “no” – and you’ll see the real picture.)

I quickly found myself in a series of “he said, she said” meetings that were completely unproductive.  “We don’t write down commitments because we’re agile,” was one refrain.  In fact, while I agree that the words “commitment” and “agile” generally don’t belong in the same sentence, we were anything but agile at the time, so I viewed the statement more as a convenient excuse than an expression of true ideological conflict.

But the thing that bugged me the most was that we had endless meetings where we couldn’t even agree on basic facts.  After all, we either had a planning problem, a delivery problem, or both, and unless we could establish what we’d actually agreed to deliver, we couldn’t determine where to focus our efforts.  The meetings were a waste of time.  I had no way of knowing who said what to whom, we didn’t have great tracking systems, and I had no interest in email forensics to try and figure it out.  Worse yet, it seemed that two people could leave the same meeting not even agreeing on what was decided.

Imagine the conversation we’d be having if we weren’t debating facts.

In the end, it was clear that we needed to overhaul the whole process, but that would take time.  The question was, in the short term, could we do something that would end the unproductive meetings, get the basic facts into evidence, and then let us have a productive debate at the next level?  You know, to try and make some progress on solving our problems?

I created a document called the Release Scorecard and Commitments document that contained two tables, each structured like this.

[Table: release scorecard template]

At the start of each release, we’d list the major stories that we were trying to include and we’d have Engineering score their confidence in delivering each one of them.  Then, at the end of every release, PM would score how the delivery went, and the team could provide a comment.  Thus, at every post-release roadmap review, we could see how we did on the prior release and agree on priorities for the next one.  Most importantly, when it came to reviewing the prior release, we had a baseline off which we could have productive discussions about what did or did not happen during the cycle.
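To make the structure concrete, here’s what a few rows might look like (the stories and scores are hypothetical, not from the actual document):

```
Story                    ENG confidence (start)   PM delivery score (end)   Comment
Multi-currency rollups   High                     Delivered                 Shipped as planned
Workflow redesign        Medium                   Partially delivered       Phase 2 slipped a release
Report builder v2        Low                      Not delivered             Descoped mid-release
```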

Suddenly, by taking the basic facts out of dispute, the meetings changed overnight.  First, they became productive.  Then, after we fully transitioned to agile, they became unnecessary.  In fact, I’ve since repeatedly said that I don’t need the document anymore because it was a band-aid artifact of our pre-agile world.  Nevertheless, the team still likes producing it for the simple clarity it provides in assessing how we do at laying out priorities and then delivering against them.

So, if you find yourself in a series of unproductive, “he said, she said” meetings, learn this lesson:  do something to get basic facts into evidence so you can have a meaningful conversation at the next level.

Because there is a massive opportunity cost when all you do is debate what should be facts.