Handling Conflict with the “Disagree and Commit” and “New Information” Principles

In every executive team there are going to be times when people don’t agree on certain important strategic or operational decisions.  Some examples:

  • Should we split SDRs inbound vs outbound?
  • Should we map SCs to reps or pool them?
  • How should we split upsell vs new business focus in mid-market reps?
  • Should CSMs get paid on upsell or only renewals?
  • Should we put the new buzzword (e.g., AI, ML, social) into the release plan?
  • Should we change the company logo?

The purpose of this post is to provide a framework to get decisions made and executed, without certain decisions becoming a form of weekly nagging at the e-staff meeting, a topic of discussion at every board meeting, or worst of all, a standing joke among the team.

The Disagree-and-Commit Principle

The first time I heard disagree-and-commit I thought it was corporate doublespeak garbage.  What the heck did it mean?  I’m supposed to go to a meeting, say that I believe we should go left, get overrun by the group who eventually decides to go right, and then I’m supposed to say “sure, everybody, just kidding, let’s go right.”  How disingenuous — everybody knows I wanted to go left.  How controlling of the establishment.  How manipulative.  This is thought control!

“You may disagree, but you must conform … (wait, was that our outside voice) …  you must commit.”

(Recall my first professional job was at a company we referred to as The People’s Republic of Ingres.)

Let’s just say I missed the point.  My older, wiser self now thinks it’s a great, but often misunderstood, rule.  (And that’s not just because now I am the establishment.)

Here’s a nice definition of disagree-and-commit from The Amazon Way via this blog post.

Leaders are obligated to respectfully challenge decisions when they disagree, even when doing so is uncomfortable or exhausting. Leaders have conviction and are tenacious. They do not compromise for the sake of social cohesion. Once a decision is determined, they commit wholly.

I always missed two things:

  • I took commit to mean change your mind (or “get your mind right” in the Cool Hand Luke sense). It actually means committing to execute the decision wholly, i.e., as if it were the one you had voted for.  You can’t undermine or sabotage the decision just to prove yourself right.  This is a great rule.  People aren’t always going to agree, but if you want to work at the company, you must execute our decisions wholeheartedly once they are made.  There is no other option.


  • The obligation to disagree.  I love this part because some people lack the courage to speak up in the meeting, and then want to passive-aggressively work against the decision and/or attempt a pocket veto by going to the person who was in charge of the meeting and saying, “well, I didn’t feel comfortable saying this in the meeting, but, ….” Such behavior creates a potential paradox for the executive in charge — particularly if she agrees with the pocket veto argument.  Does she overrule the group decision based on the new argument (and reward dysfunctional behavior), or does she stick with a decision she no longer prefers in order to avoid incenting pocket vetoes?  In my opinion, in 95% of cases you want to say, “Sorry Joe, I wish you’d said something in the meeting because that’s an interesting point, but the decision stands.”  Worst case, call another meeting.  Never, ever just overrule the decision.

Explicitly embracing the disagree-and-commit principle is one great way to end endless, nagging disagreements:  we met to discuss the issue, we came to a conclusion, I know you didn’t agree with it, but you need to commit to execute it wholeheartedly.  (Else we’re going to have a conversation about insubordination.)  We want a rational culture.  We debate ideas.  But we need to make and execute decisions, and you’re not going to agree with every one.

The New Information Principle

But what if the issue keeps coming up anyway?  Perhaps via periodic serious requests to reconsider the decision.  Perhaps through a series of objections coming from someone not responsible for executing the decision (so “commit” is less relevant) — but who just can’t stand the idea.  Or maybe someone has a personal ax to grind (e.g., I know we’ve talked about this before, but can we please relocate the office) and just won’t take no for an answer.

The problem is that if you always shut down these requests, you risk hurting corporate agility.  On one hand you want to shut down the constant nagging about adding data mining capabilities from the data mining zealot.  On the other hand, you don’t want to make the subject taboo, because maybe your top competitor launched a new data-mining addition last month and it’s hurting you in sales.

So, the principle is simple:  if you want to re-open discussion on something we’ve already decided, do you have any new information that wasn’t available at the time we made the decision?

If the answer is no, we’re not re-opening it here, and we can do so at either next quarter’s ops review or next year’s strategy offsite (pending prioritization against other topics).

If the answer is yes, find out what the new information is, and then decide if it warrants an immediate or deferred re-examination of the decision.

With this principle you can keep a firm hand against those who won’t give up on an issue while still staying open to new information that might warrant a valid re-examination of it.

Putting the A Back in FP&A with Automated, Integrated Planning

I was reading this blog post on Continuous Planning by Rob Kugel of Ventana Research the other day and it reminded me of one of my (and Rob’s) favorite sayings:

We need to put the A back in FP&A

This means that the financial planning and analysis (FP&A) team at many companies is so busy doing other things that it doesn’t have time to focus on what it does best and where it can add the most value:  analysis.

This begs the question:  where did the A go?  What are the other things that are taking up so much time?  The answer:  data prep and spreadsheet jockeying.  These tasks suck both the time and the soul out of the FP&A function.

[Chart: how FP&A spends its time, data-related tasks vs. analysis]

Data-related tasks — such as finding, integrating, and preparing data — take up more than 2/3rds of FP&A’s time.  Put differently, FP&A spends twice as much time getting ready to analyze data as it does analyzing it.  It might even be worse, depending on whether periodic and ad hoc reporting is included in data-related tasks or further carved out of the 28% of time remaining for analytics, as I suspect it is.

[Chart: spreadsheets remain the dominant planning tool across the business]

It’s not just finance who loves spreadsheets.  The business does too:  salesops, marketingops, supply chain planners, professional services ops, and customer support all love spreadsheets.  When I worked at Salesforce, we had one of the most sophisticated sales strategy and planning teams I’ve ever seen.  Their tool of choice?  Excel.

This comes back to haunt finance in three ways:

  • Warring models, for example, when the salesops new bookings model doesn’t foot to the finance one because they make different ramping and turnover assumptions.  These waste time and can spark endless fights.
  • Non-integrated models.  Say sales and finance finally agree on a bookings target and to hire 5 more salespeople to support it.  Now we need to call marketing to update their leadgen model to ensure there’s enough budget to support them, customer service to ensure we’re staffed to handle the incremental customers they sign, professional services to ensure we have adequate consulting resources, and on and on.  Forget any of these steps and you’ll start the year out of balance, with unattainable targets somewhere.  (See the sketch after this list for how these drivers chain together.)
  • Excel inundation.  FP&A develops battle fatigue dealing with and integrating so many different versions of so many spreadsheets, often late at night and under deadline pressure.  Mistakes get made.
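To make the non-integrated models point concrete, here is a minimal, hypothetical sketch of how those planning drivers chain together. Every ratio below (quota per rep, cost per SAL, CSM coverage, and so on) is an assumption for illustration, not anyone's real model; the point is that when all the downstream plans key off the same bookings driver, changing the target updates everything at once instead of triggering a round of phone calls.

```python
# Minimal sketch of a driver-based planning model.  All ratios are hypothetical;
# the point is that every downstream plan keys off the same bookings driver.

def plan(new_bookings_target):
    quota_per_rep = 800_000       # hypothetical annual quota per salesperson
    avg_deal_size = 50_000        # hypothetical new ARR per deal
    sals_per_deal = 5             # hypothetical SALs needed to produce one deal
    cost_per_sal = 1_500          # hypothetical marketing cost per SAL
    customers_per_csm = 40        # hypothetical CSM coverage ratio

    reps_needed = new_bookings_target / quota_per_rep
    deals_needed = new_bookings_target / avg_deal_size
    marketing_budget = deals_needed * sals_per_deal * cost_per_sal
    csms_needed = deals_needed / customers_per_csm

    return {
        "reps": round(reps_needed, 1),
        "deals": round(deals_needed),
        "demand_gen_budget": round(marketing_budget),
        "csms": round(csms_needed, 1),
    }

# Bump the bookings target and everything downstream moves with it.
print(plan(8_000_000))
print(plan(10_000_000))
```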

So how can you prevent FP&A from being run over by these forces?  The answer is to automate, automate, and integrate.

  • Automate data integration and preparation.  Let’s free up time by using software that lets you “set and forget” data refreshes.  You should be able to set up a connector to a data source once, and then have it automatically run at periodic intervals going forward.  No more mailing spreadsheets around.
  • Automate periodic FP&A tasks.  Use software where you can invest in building the perfect monthly board pack, monthly management reports, quarterly ops review decks, and quarterly board reports once, and then automatically refresh them every period through those templates.  This not only frees up time and reduces drudgery; it eliminates plenty of mistakes as well.
  • Integrate planning across the organization.  Move to a cloud-based enterprise performance platform (like Host Analytics) that not only accomplishes the prior two goals, but also offers a modeling platform that can be used across the organization to put finance, salesops, marketingops, professional services, supply chain, HR, and everyone else across the organization on a common footing.

Since the obligatory groundwork in FP&A is always heavy, you’re not going to succeed in putting the A back in FP&A simply by working harder and later.  The only way to put the A back in FP&A is to create time.  And you can do that with two doses of automation and one of integration.

Using Pipeline Conversion Rates as Triangulation Forecasts

In this post we’ll examine how to use pipeline conversion rates as early indicators of your business performance.

I call such indicators triangulation forecasts because they give the CEO and CFO data points, in addition to the official VP of Sales forecast, that help triangulate where the company is going to land.  Here are some additional triangulation forecasts you can use.

  • Salesrep-level forecast (aggregate of every salesperson’s forecast)
  • Manager-level forecast (aggregate of every sales manager’s forecast)
  • Stage-weighted expected value of the pipeline, which takes each opportunity and multiplies it by a stage- and ideally time-specific weight (e.g., week 6 stage 4 conversion rate)
  • Forecast-category-weighted expected value of the pipeline, which does the same thing relying on forecast category rather than stage (e.g., week 7 upside category conversion rate)

With these triangulation forecasts you can, as the old Russian proverb goes, trust but verify what the VP of sales is telling you.  (A good VP of sales uses them as part of making his/her forecast as well.)
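For the two weighted-pipeline forecasts above, here is one minimal way to sketch the math in Python. The weights and opportunities are hypothetical; in practice, you would derive the weights from your own history (e.g., the conversion rate of week 6 stage 4 opportunities, or week 7 upside-category opportunities).

```python
# Minimal sketch of stage- and forecast-category-weighted pipeline expected value.
# Weights and opportunities are hypothetical; real weights come from your own history.

stage_weights_week6 = {1: 0.05, 2: 0.15, 3: 0.35, 4: 0.60, 5: 0.85}                      # hypothetical
category_weights_week7 = {"pipeline": 0.10, "upside": 0.35, "forecast": 0.75, "commit": 0.90}  # hypothetical

pipeline = [
    {"name": "Acme",    "arr": 120_000, "stage": 4, "category": "forecast"},
    {"name": "Globex",  "arr":  80_000, "stage": 2, "category": "upside"},
    {"name": "Initech", "arr": 200_000, "stage": 3, "category": "pipeline"},
]

stage_weighted = sum(o["arr"] * stage_weights_week6[o["stage"]] for o in pipeline)
category_weighted = sum(o["arr"] * category_weights_week7[o["category"]] for o in pipeline)

print(f"Stage-weighted expected value:    ${stage_weighted:,.0f}")
print(f"Category-weighted expected value: ${category_weighted:,.0f}")
```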

Before looking at pipeline conversion rates, let me remind you that pipeline analysis is a castle built on a quicksand foundation if your pipeline is not built up from:

  • A consistent, documented, enforced set of rules for how opportunities are entered into the pipeline including, e.g., stage definitions and valuation rules.
  • A consistent, documented, enforced process for how that pipeline is periodically scrubbed to ensure its cleanliness. [1]

Once you have such a pipeline, the first thing you should do is to analyze how much of it you convert each quarter.

[Table: week 3 current-quarter pipeline, conversion rates, and trailing averages by quarter]

This not only helps you determine your ideal pipeline coverage ratio (the inverse of the conversion rate, or about 4.0x in this case), but also gives you a triangulation forecast on the current quarter.  If we’re in 4Q17 and we had $25,000K in new ARR pipeline at week 3, then using our trailing seven-quarter (T7Q) average conversion rate of about 25%, we can forecast landing at $6,305K in new ARR.
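Here is a minimal sketch of that calculation. The trailing conversion rates are hypothetical, chosen so their average works out to roughly 25%, consistent with the ~4.0x coverage ratio and the forecast above.

```python
# Minimal sketch of the week 3 triangulation forecast.  The trailing conversion
# rates are hypothetical; their average is roughly 25%, implying ~4.0x coverage.

trailing_week3_conversion = [0.27, 0.24, 0.26, 0.23, 0.25, 0.26, 0.255]   # hypothetical T7Q history
t7q_avg = sum(trailing_week3_conversion) / len(trailing_week3_conversion)

week3_pipeline = 25_000          # $K of new ARR pipeline at week 3 of the quarter
forecast = week3_pipeline * t7q_avg
coverage_ratio = 1 / t7q_avg

print(f"T7Q average conversion: {t7q_avg:.1%}")
print(f"Implied coverage ratio: {coverage_ratio:.1f}x")
print(f"Triangulation forecast: ${forecast:,.0f}K new ARR")
```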

Some folks use different conversion rates for forecasting — e.g., those in seasonal businesses with a lot of history might use the average of the last three years’ fourth-quarter conversion rates.  A company that brought in a new sales VP five quarters ago might use an average conversion rate, but only from the five quarters in her era.

This technique isn’t restricted to this quarter’s pipeline.  One great way to get sales to focus on cleaning next quarter’s pipeline is to do the same analysis on next-quarter pipeline conversion as well.

[Table: week 3 next-quarter pipeline and conversion rates by quarter]

This analysis suggests we’re teed up to do $6,818K in 1Q18, useful to know as an early indicator at week 3 of 4Q17 (i.e., mid/late October).

At most companies the $6,305K prediction for 4Q17 new ARR will be pretty accurate.  However, a strange thing happens at some companies:  while you end up closing around $6,300K in new ARR, a fairly large chunk of the closed deals can’t be found in the week 3 pipeline.  While some sales managers view this as normal, better ones view this as a sign of a potentially large problem.  To understand the extent to which this is happening, you need to perform this analysis:

[Table: where the quarter’s closed deals came from relative to the week 3 pipeline]

In this example, you can see a pretty disturbing fact — while the company “converted” the week 3 ARR pipeline at the average rate, more than half of the opportunities that closed during the quarter (30 out of 56) were not present in the week 3 pipeline [2].  Of those, 5 were created after week 3 and closed during the quarter, which is presumably good.  However, 25 were pulled in from next quarter, or the quarter after that, which suggests that close dates are being sandbagged in the system.
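If you want to automate that check, here is a minimal sketch of the bucketing logic. The field names, IDs, and the idea of a saved week 3 snapshot are hypothetical assumptions for illustration.

```python
# Minimal sketch of sourcing deals closed during the quarter against the week 3 pipeline.
# Field names and IDs are hypothetical; week3_pipeline_ids is a snapshot saved at week 3.

week3_pipeline_ids = {"opp-101", "opp-102", "opp-103"}

# week_created is the week of the current quarter the opp was created in;
# None means it already existed before the quarter began.
closed_won = [
    {"id": "opp-101", "week_created": None},   # was in the week 3 pipeline
    {"id": "opp-250", "week_created": 7},      # created after week 3, closed in-quarter
    {"id": "opp-310", "week_created": None},   # existed, but was dated for a later quarter
]

buckets = {"in week 3 pipeline": 0, "created after week 3": 0, "pulled in from a later quarter": 0}
for opp in closed_won:
    if opp["id"] in week3_pipeline_ids:
        buckets["in week 3 pipeline"] += 1
    elif opp["week_created"] is not None and opp["week_created"] > 3:
        buckets["created after week 3"] += 1
    else:
        buckets["pulled in from a later quarter"] += 1

print(buckets)
```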

Notes

[1] I am not a big believer in some sales managers’ “always be scrubbing” philosophy for two reasons:  “always scrubbing” all too often translates to “never scrubbing,” and “always scrubbing” can also translate to “randomly scrubbing,” which makes it very hard to do analytics.  I believe sales should formally scrub the pipeline prior to weeks 3, 6, and 9.  This gives them enough time to clean up after the end of a quarter and provides three solid anchor points on which we can do analytics.

[2] Technically the first category, “closed already by week 3,” won’t appear in the week 3 pipeline, so there is an argument, particularly in companies where week 1-2 sales are highly volatile, to do the analysis on a to-go basis.

Using Time-Based Close Rates to Align Marketing Budgets with Sales Targets

This post builds on my prior post, Win Rates, Close Rates, and Milestone vs. Flow Analysis.  In it, I will take the ideas in that post, expand on them a bit, and then apply them to the difficult problem of ensuring you have enough marketing demand generation budget to hit your sales targets.

Let’s pretend it’s 4Q17 and that we need to model 2018 sales based solely on marketing-generated SALs (sales-accepted leads).  To do that, we need to decompose our close rate over time, because knowing that we eventually close 40% of SALs is less useful than knowing when those closes typically happen.

[Table: close rates decomposed by quarters elapsed since SAL creation, by cohort]

In a perfect world, we’d have 6-8 cohorts, not two.  The goal is to produce the last line: the average in-quarter, first-quarter, second-quarter (and so on) close rates for a SAL.

Using these time-based average close rates, we can build a waterfall that takes historical, forecast (for the current quarter), and planned 2018 SALs and converts them into deals.

[Table: SAL-to-deal conversion waterfall by quarter]

This analysis suggests that with the currently planned SALs you can support an ARR number of $16.35M.  If sales needs more than that, you either need to assume an improvement in close rates or an increase in SAL generation.
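Here is a minimal sketch of that waterfall in Python. The SAL counts, deal size, and decomposed close rates are all hypothetical (the rates sum to a 40% lifetime close rate, per the example above); each quarter's SAL cohort converts to deals over the following quarters at the age-specific rates.

```python
# Minimal sketch of the SAL-to-deal waterfall.  SAL counts, deal size, and the
# decomposed close rates are hypothetical; the rates sum to a 40% lifetime close rate.

close_rate_by_age = [0.05, 0.15, 0.12, 0.06, 0.02]   # in-quarter, +1Q, +2Q, +3Q, +4Q
sals_by_quarter = {                                   # historical, forecast, and planned SALs
    "1Q17": 200, "2Q17": 220, "3Q17": 240, "4Q17": 260,
    "1Q18": 280, "2Q18": 300, "3Q18": 320, "4Q18": 340,
}
avg_deal_size = 50_000                                # hypothetical new ARR per deal

quarters = list(sals_by_quarter)
deals_closed = {q: 0.0 for q in quarters}
for i, cohort in enumerate(quarters):
    for age, rate in enumerate(close_rate_by_age):
        if i + age < len(quarters):                   # deals land 'age' quarters after the cohort
            deals_closed[quarters[i + age]] += sals_by_quarter[cohort] * rate

supported_2018_arr = sum(d for q, d in deals_closed.items() if q.endswith("18")) * avg_deal_size
print({q: round(d, 1) for q, d in deals_closed.items()})
print(f"Supported 2018 new ARR: ${supported_2018_arr:,.0f}")
```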

Once you’ve established the required number of SALs, you can then back into a total demand-generation budget by knowing your cost/SAL, and then building out a marketing mix of programs (each with its own cost/SAL) that generates the requisite SALs at the targeted overall cost.
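And a minimal sketch of that budget back-calculation, with hypothetical programs, mix, and cost/SAL figures:

```python
# Minimal sketch of backing into a demand-gen budget from a required SAL count.
# Programs, mix shares, and cost/SAL figures are all hypothetical.

required_sals = 1_200
program_mix = {                     # program: (share of SALs, cost per SAL)
    "paid search":  (0.40, 1_600),
    "webinars":     (0.35, 1_100),
    "field events": (0.25, 1_500),
}

budget = sum(required_sals * share * cost for share, cost in program_mix.values())
blended_cost_per_sal = budget / required_sals
print(f"Demand-gen budget: ${budget:,.0f} (blended ${blended_cost_per_sal:,.0f}/SAL)")
```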

Win Rates, Close Rates and Milestone vs. Flow Analysis

Hey, what’s your win rate?

It’s another seemingly simple question.  But, like most SaaS metrics, when you dig deeper you find it’s not.  In this post we’ll take a look at how to calculate win rates and use win rates to introduce the broader concept of milestone vs. flow analysis that applies to conversion rates across the entire sales funnel.

Let’s start with some assumptions.  Once an opportunity is accepted by sales (known as a sales-accepted opportunity, or SAL), it eventually will end up in one of three terminal states:

  • Won
  • Lost
  • Other (derailed, no decision)

Some people don’t like “other” and insist that opportunities should be exclusively either won or lost and that other is an unnecessary form of lost which should be tracked with a lost reason code as opposed to its own state.  I prefer to keep other, and call it derailed, because a competitive loss is conceptually different from a project cancellation, major delay, loss of sponsor, or a company acquisition that halts the project.  Whether you want to call it other, no decision, or derailed, I think having a third terminal state is warranted from first principles.  However, it can make things complicated.

For example, you’ll need to calculate win rates two ways:

  • Win rate, narrow = wins / (wins + losses)
  • Win rate, broad = wins / (wins + losses + derails)

Your narrow win rate tells you how good you are at beating the competition.  Your broad rate tells you how good you are at closing deals (that come to a terminal state).

Narrow win rate alone can be misleading.  If I told you a company had a 66% win rate, you might be tempted to say “time to add more salespeople and scale this thing up.”  If I told you they got the 66% win rate by derailing 94 out of every 100 opportunities they generated, winning 4, and losing the other 2, then you’d say “not so fast.”  This, of course, would show up in the broad win rate of 4%.

This brings up the important question of timing.  Both these win rate calculations ignore deals that push out of a quarter.  So another degenerate case is a situation where you win 4, lose 2, derail 4, and push 90 opportunities.  In this case, narrow win rate = 66% and broad win rate = 40%.  Neither is shining a light on the problem (which, if it happens continuously, I call a rolling hairball problem.)
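Here is a minimal sketch of the two calculations on that degenerate example. The third rate, which counts pushed deals in the denominator, is not one of the two win rates defined above; it is only there to show what both of them are hiding.

```python
# Minimal sketch of narrow vs. broad win rates on the degenerate example above.
# The third rate isn't a standard win rate; it just shows what both are hiding.

wins, losses, derails, pushes = 4, 2, 4, 90

narrow_win_rate = wins / (wins + losses)                                  # 4/6   ≈ 66%
broad_win_rate = wins / (wins + losses + derails)                         # 4/10  = 40%
share_of_everything_in_play = wins / (wins + losses + derails + pushes)   # 4/100 = 4%

print(f"narrow: {narrow_win_rate:.0%}, broad: {broad_win_rate:.0%}, "
      f"of everything in play: {share_of_everything_in_play:.0%}")
```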

The issue here is that thus far we’ve been performing what I call a milestone analysis.  In effect, we put observers by the side of the road at various milestones (created, won, lost, derailed) and ask them to count the number of opportunities that pass by each quarter.  The problem, especially with companies that have long sales cycles, is that you have no idea of progression.  You don’t know if the opportunities that passed “win” this quarter came from the opportunities that passed “created” this quarter, or if they came from last quarter, the quarter before that, or even earlier.

Milestone analysis has two key advantages:

  • It’s easy — you just need to count opportunities passing milestones
  • It’s instant — you don’t have to wait to see how things play out to generate answers

The big disadvantage is that it can be misleading, because the opportunities hitting a terminal state this quarter were generated in many different time periods.  For a company with an average 9-month sales cycle, the opportunities hitting a terminal state in quarter N were generated primarily in quarter N-3, but with some coming in quarters N-2 and N-1 and some coming in quarters N-4 and N-5.  Across that period very little was constant; for example, marketing programs and messages changed.  So a marketing effectiveness analysis would be very difficult when approached this way.

For those sorts of questions, I think it’s far better to do a cohort-based analysis, which I call a flow analysis.  Instead of looking at all the opportunities that hit a terminal state in a given time period, you go back in time, grab a cohort of opportunities (e.g., all those generated in 4Q16) and then see how they play out over time.  You go with the flow.

For marketing program effectiveness, this is the only way to do it.  Instead of a time-based cohort, you’d take a program-based cohort (e.g., all the opportunities generated by marketing program X), see how they play out, and then compare various programs in terms of effectiveness.

The big downside of flow analysis is you end up analyzing ancient history.  For example, if you have a 9 month average sales cycle with a wide distribution around the mean, you may need to wait 15-18 months before the vast majority of the opportunities hit a terminal state.  If you analyze too early, too many opportunities are still open.  But if you put off analysis then you may get important information, but too late.
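Here is a minimal sketch of a flow (cohort) analysis, assuming hypothetical opportunity records: grab one cohort, let it play out, and compute the rates only over what that cohort eventually did, while keeping track of what is still open.

```python
# Minimal sketch of a flow (cohort) analysis: grab one cohort of opportunities and
# see how they play out, rather than counting milestone passings per quarter.
# Field names and data are hypothetical.

opps = [
    {"id": 1, "cohort": "4Q16", "outcome": "won"},
    {"id": 2, "cohort": "4Q16", "outcome": "lost"},
    {"id": 3, "cohort": "4Q16", "outcome": "derailed"},
    {"id": 4, "cohort": "4Q16", "outcome": "open"},     # still in play at analysis time
    {"id": 5, "cohort": "1Q17", "outcome": "won"},
]

cohort = [o for o in opps if o["cohort"] == "4Q16"]
wins = sum(o["outcome"] == "won" for o in cohort)
losses = sum(o["outcome"] == "lost" for o in cohort)
derails = sum(o["outcome"] == "derailed" for o in cohort)
still_open = sum(o["outcome"] == "open" for o in cohort)

narrow = wins / (wins + losses)
broad = wins / (wins + losses + derails)
close_rate = wins / len(cohort)            # wins over everything generated in the cohort
print(f"4Q16 cohort: narrow {narrow:.0%}, broad {broad:.0%}, close {close_rate:.0%}, "
      f"{still_open} still open")
```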

You can compress the time window by analyzing program effectiveness not against sales outcomes but against important steps along the funnel.  That way you could compare two programs on the basis of their ability to generate MQLs or SALs, but you still wouldn’t know whether, and at what relative rate, they generate actual customers.  So you could end up doubling down on a program that generates a lot of interest, but not a lot of deals.

Back to our original topic, the same concept comes up in analyzing win rates.  Regardless of which win rate you’re calculating, at most companies you’re calculating it on a milestone basis.  I find milestone-based win rates more volatile and less accurate than a flow-based SAL-to-close rate.  For example, if I were building a marketing funnel to determine how many deals I need to hit next year’s number, I’d want to use a SAL-to-close rate, not a win rate, to do so.  Why?  SAL-to-close rates:

  • Are less volatile because they’re damped by using long periods of time.
  • Are more accurate because they actually track what you care about — if I get 100 opportunities, how many close within a given time period.
  • Automatically factor in derails and slips (the former are ignored in the narrow win rate and the latter ignored in both the narrow and broad win rates).

Let’s look at an example.  Here’s a chart that tracks 20 opportunities, 10 generated in 1Q17 and 10 generated in 2Q17, through their entire lifetime to a terminal state.

[Chart: 20 opportunities tracked from creation to terminal state]

In reality things are a lot more complicated than this picture because you have opportunities still being generated in 3Q17 through 4Q18 and you’ll have opportunities that are still in play generated in numerous quarters before 1Q17.  But to keep things simple, let’s just analyze this little slice of the world.  Let’s do a milestone-based win/loss analysis.

[Table: milestone-based win/loss rates by quarter]

First, you can see the milestone-based win/loss rates bounce around a lot.  Here it’s due in part to the law of small numbers, but I do see similar volatility in real life — in my experience win rates bounce within a fairly broad zone — so I think it’s a real issue.  Regardless of that, what’s indisputable is that in this example, this is how things will look to the milestone-based win/loss analyzer.  Not a very clear picture — and a lot to panic about in 4Q17.

Let’s look at what a flow-based cohort analysis produces.

[Table: flow-based cohort analysis of the 1Q17 and 2Q17 cohorts at the one-year mark]

In this case, we analyze the cohort of opportunities generated in the year-ago quarter.  Since we only generate opportunities in two quarters, 1Q17 and 2Q17, we only have two cohorts to analyze, and we get only two sets of numbers.  The thin blue box in the opportunity tracking chart shows the data summarized in the 1Q18 column, and the thin orange box shows the data for the 2Q18 column.  Both boxes depict how 3 opportunities in each cohort are still open at the end of the analysis period (imagine you did the 1Q18 analysis in 1Q18) and haven’t come to final resolution.  The cohorts both produce a 50% narrow win rate, a 43% vs. 29% broad win rate, and a 30% vs. 20% close rate.  How good are these numbers?

Well, in our example, we have the luxury of finding the true rates by letting the six open opportunities close out over time.  By doing a flow-based analysis in 4Q18 of the 1H17 cohort, we can see that our true narrow win rate is 57%, our true broad win rate is 40%, and our close rate is also 40% (which, once everything has arrived at a terminal state, is definitionally identical to the broad win rate).

[Table: flow-based cohort analysis of the 1H17 cohort after all opportunities reach a terminal state]

Hopefully this post has helped you think about your funnel differently by introducing the concept of milestone- vs. flow-based analysis and by demonstrating how the same business situation can produce very different rates depending on both the choice of win rate and the type of analysis.

Please note that the math in this example backed me into a 40% close rate, which is about double what I believe is the benchmark in enterprise software — I think 20 to 25% is a more normal range.


Just Effing Demo

I remember one time reading a win/loss report that went something like this.

“We were interested in buying Host and it made our short list.  When we invited you in for a demo with our team and the CFO, things went wrong.  After 20 minutes, your sales team was still talking about the product so the CFO left the meeting and didn’t want to evaluate your solution anymore.”

Huh?  What!  We spend a few hundred dollars to get a lead, maybe a few thousand to get it converted to a sales opportunity, we give it to our sales team and then they ‘show up and throw up’ on a prospect, talking for so long that the key decision maker leaves?

Yes, salespeople love to talk, but this can’t happen.  I remember another time a prospect called me.

“Look, I’ve been using EPM systems for 25 years.  I’ve used Hyperion, Essbase, TM1, and BPC.  I’ve been in FP&A my entire career.  I have an MBA from Columbia.  I am fully capable of determining my own needs and don’t want to play Twenty Questions with some 20-something SDR and then play it again with some sales consultant before I can get a live demo of your software.  Can we make that happen or not?”

Ouch.  In this case, our well-defined and valued sales process (which required “qualification” and then “discovery”) was getting in the way of what the eminently qualified prospect wanted.

In today’s world, prospects both have and want more control over the sales process than ever before.  Yes, we might want to understand your requirements so we can put proper emphasis on different parts of the demonstration, but when a prospect — who clearly knows both what they’re doing and what they want — asks us for a demo, what should we do?  One thing:

Just effing demo  —  and then ask about requirements along the way

Look, I’m not trying to undo all the wisdom of learning how to do deep discovery and give customized demos, espoused by world-class sales trainers like Barry Rhein or in books like Just F*ing Demo (from whose title I derived the title of this post [1]).  These are all great ideas.  They should be your standard procedure.

But you need to remember to be flexible.  I always say don’t be a slave to metrics.  Don’t be a slave to process, either.

Here’s what I’ve learned from these situations:

Avoid triple-qualifying prospects with an SDR, then a rep, then an SC. Make SDR qualification quick and light.  Combine rep and SC qualification/discovery whenever possible. Don’t make the prospect jump through hoops just to get things started.

Intelligently adapt your process. If the prospect says they’re an expert, wants to judge for themselves, and just wants a quick look at your standard demo, don’t try to force a deep discovery call so you can customize – even if that’s your standard process.  Recognize that you’re in a non-standard situation, and just show up and do what they want.

Set expectations appropriately. There is a difference between a “Product Overview” and “Demonstration.”  If you think the right meeting is 30 minutes of slides to frame things and then a 30-minute demo, tell the prospect that, get their feedback, and if everyone agrees, then write “Product Overview” (not “Demonstration”) on the agenda.

Don’t make them wait. If you say the presentation is a one-hour demo, you should be demoing software within the first 5-7 minutes.  While brief personnel introductions are fine, anything else you do up-front should tee-up the demo.  This is not the time to talk about your corporate values, venture investors, or where the founder went to school.  Do that later, if indeed at all.

# # #

[1] A great book, by the way.  My favorite quote:  “in short, I stopped trying to deliver the perfect demo for my product and started trying to deliver the perfect demo for my audience.”

Don’t Let Product Management Turn Into “The Roadmap Guys”

At many enterprise software companies product management (PM) ends up defaulting into a role that I can’t stand:  The Roadmap Guys*.

Like a restaurant with one item on the menu, the company defaults into ordering one thing from product management:  a roadmap pitch.

  • “The VP of PM is in Boston and Providence this week, can she visit some customers and do a few roadmap presentations?”
  • “Hey, there’s a local user group in NY this week; can PM do a roadmap pitch?”
  • “There’s a big customer in the executive briefing center today; can the PM do a roadmap?”
  • “As part of our sales cycle with prospect X, we’d love to get PM in to discuss the roadmap.”
  • “We’ve got a SAS day with Gartner next week, can PM come in and present the roadmap?”

You hear it all the time.  And I hate it.  Why?

From a sales perspective, roadmap presentations are the anti-sales pitch:  a well organized presentation of all the things your products don’t do.  Great, let’s spend lots of time talking about that.

From a competitive perspective, you’re broadcasting your plans.  If you’re presenting roadmap to every prospect who comes through the briefing center and at every local user group meeting, your competition is going to learn your roadmap, and fast.  Then they can copy it and/or blunt it.

But what irks me the most is what happens from a product management perspective:  you turn PM into “the talking guys” instead of “the listening guys.”  Given enough time, PM starts to view itself as the folks who show up and pitch roadmaps.

But that’s not their job.

PM should be the listening folks, not the talking folks.  Just like sales, PM should remember the adage:  we have two ears and one mouth; use them in proportion.

Wouldn’t the world be a better place if we changed the five previous bullets as follows?

  • “The VP of PM is in Boston and Providence this week, can she visit some customers and observe how people actually use the product?”
  • “Hey, there’s a local user group in NY this week; can PM break off a small focus group to ask customers about how they use the product?”
  • “There’s a big customer in the executive briefing center today; can PM come in and interview them about their impressions on evaluating the product?”
  • “As part of our sales cycle with prospect X, we’d love to get PM in to discuss what specifically they are trying to accomplish and how the product can do that?”
  • “We’ve got a SAS day with Gartner next week, can PM come in and hear from Gartner about what they’re seeing in the market and in their interactions with customers?”

So every time you hear the word “roadmap” in the same sentence as “product management,” stop, pause, and think of a better way to use the PM team.  Sure, there are certainly times when a roadmap presentation is in order.  But don’t default to it.  Keep your PM team listening instead of talking.

# # #

* I’m using “guys” here in a gender-neutral sense like “folks.”