Does your competitive analyst introduce themselves like this?
If so, you’ve got a problem. Not only because his younger brother Big is a controversial employee over in DOGE but, more importantly, because old Harvey is not defining his job correctly.
Many competitive analysts effectively define their job as product comparison. In short, to make Harvey Balls that compare products across different features, usually scored on a 1-5 scale and rendered as circular ideograms. You know, something like this:
Harvey Balls are a useful communication tool. And you can use them not only for product features, but also for non-functional product attributes (e.g., usability, performance) and even company attributes (e.g., support, viability).
I’ve got no problem with Harvey Balls in theory. In practice, however, such charts often fall apart quickly because they are entirely subjective and lack any rigorous foundation for the underlying 1-5 scoring. If you’re going to make comparisons using Harvey Balls, you must strive to maintain credibility by documenting and footnoting your scoring system, so a reader can verify the basis on which you’re assigning scores.
Otherwise, Harvey Balls are simply opinion thinly disguised as fact.
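One lightweight way to keep the scoring honest is to treat the rubric itself as an artifact. Here’s a purely illustrative sketch (the attribute names, criteria, and sources are hypothetical, not from this post) of keeping the rubric and the evidence behind each score in data, so every Harvey Ball traces back to a documented criterion:

```python
# Hypothetical sketch: a documented 1-5 rubric plus footnoted scores,
# so each Harvey Ball rating can be verified rather than taken on faith.

RUBRIC = {
    "security_controls": {
        1: "No role-based access control; shared admin account only",
        2: "Basic roles; no audit logging",
        3: "Roles plus audit logging; no SSO integration",
        4: "Roles, audit logging, and SSO; limited row-level security",
        5: "Full roles, audit logging, SSO, and row/column-level security",
    }
}

# Each score carries a footnote citing the evidence behind the rating.
SCORES = {
    "OurProduct":  {"security_controls": (4, "verified against current admin docs")},
    "CompetitorX": {"security_controls": (2, "per their public documentation")},
}

def explain(vendor: str, attribute: str) -> str:
    """Return a human-readable justification for a Harvey Ball score."""
    score, source = SCORES[vendor][attribute]
    return f"{vendor}: {score}/5 on {attribute} -- {RUBRIC[attribute][score]} ({source})"

for vendor in SCORES:
    print(explain(vendor, "security_controls"))
```

The data structure doesn’t matter; what matters is that the criteria and sources exist somewhere a skeptical reader can check.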
So what’s the real problem if your competitive analyst thinks his name is Harvey Balls?
Well, old Harvey has missed the point.
Marketing teams shouldn’t pay competitive analysts to make product comparisons. They should pay them to win deals. To quote one of my favorite movie scenes:
“I know how you feel. You don’t believe me, but I do know. I’m going to tell you something that I learned when I was your age. I’d prepared a case and old man White said to me, ‘How did you do?’ And, uh, I said, ‘Did my best.’ And he said, ‘You’re not paid to do your best. You’re paid to win.’ And that’s what pays for this office … pays for the pro bono work that we do for the poor … pays for the type of law that you want to practice … pays for my whiskey … pays for your clothes … pays for the leisure we have to sit back and discuss philosophy as we’re doing tonight. We’re paid to win the case. You finished your marriage. You wanted to come back and practice the law. You wanted to come back to the world. Welcome back.”
To reiterate for Harvey’s sake: you’re not paid to make product comparisons, you’re paid to win.
What does that mean?
- The goal of competitive is not to produce research for the sake of knowing, to support product management, or to hand information to sales so they can figure out how to use it.
- The goal is to win deals, which does require in-depth product and competitive knowledge.
- Competitive intelligence is an applied function: it must apply the knowledge gained from research to create sales plays that win deals.
And note that the research need not be limited to product: it can and should include the competition’s sales plays (i.e., what they plan to do to us and how to defeat it). And it can and should include company research (e.g., executive biographies to anticipate strategies).
Let’s elucidate this with an example. Let’s say your competitor sells a data analysis product that demos really well. Yours looks like a clunker by comparison, especially in end users’ quick reactions to demos. Let’s also say that competitor has poor governance, administration, and security controls. Yours look great by comparison. Furthermore, let’s say you know your competitor is going to run a sales play called “the end run,” where they want to leverage end users’ love for the product to effectively ram it down the throat of a resistant central data team.
Let’s contrast three approaches:
- Product comparison: create Harvey Balls that show the relative strengths and weaknesses in usability vs. administration.
- Holistic research: add a warning to your sales team to expect the end run as the competition’s standard sales play.
- Win-deals: use all that information to create a sales play called “the Heisman,” where you leverage the central data team to anticipate the end run and block the end users, avoiding the purchase of a system with insufficient security and administration. That includes reframing user sentiment from a selection criterion to a hygiene criterion (i.e., it just needs to be “good enough”).
Don’t get me wrong. Good product knowledge is critical. But it’s simply the foundation of the win-deals approach, which also factors in company- and sales-level intelligence and then applies everything to creating sales plays that win deals.
If you had to put a metric on all this, it would be win rate. Competitive’s job is not to produce reports. It’s to increase head-to-head win rate vs. chosen competitors. If they sign up for that, then the rest should follow.
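If it helps to make the metric concrete, here’s a minimal sketch (hypothetical deal records and field names, nothing from this post) of head-to-head win rate: wins divided by closed deals in which the chosen competitor actually appeared.

```python
# Hypothetical sketch: head-to-head win rate vs. a chosen competitor,
# computed only over closed deals where that competitor appeared.

from dataclasses import dataclass

@dataclass
class Deal:
    competitor: str  # primary competitor in the deal
    won: bool        # True if we won the deal

def head_to_head_win_rate(deals: list[Deal], competitor: str) -> float:
    """Win rate over closed deals contested by the given competitor."""
    contested = [d for d in deals if d.competitor == competitor]
    if not contested:
        return 0.0
    return sum(d.won for d in contested) / len(contested)

deals = [
    Deal("CompetitorX", True),
    Deal("CompetitorX", False),
    Deal("CompetitorX", True),
    Deal("CompetitorY", False),
]
# Two wins out of three CompetitorX deals -> roughly 67%
print(f"Win rate vs. CompetitorX: {head_to_head_win_rate(deals, 'CompetitorX'):.0%}")
```

Counting only contested deals keeps the focus on the chosen competitor rather than on overall win rate, which is exactly the head-to-head framing above.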

As usual, I totally agree, but with this one I would add a “Yes, and” to the article.
Great competitive teams help more than just sales teams differentiate. There are two ways to differentiate: (big-D) Differentiation, changing perceptions about what is valuable or possible (market shaping), and (little-d) differentiation, changing how we present our products (how we message).
Great competitive teams help the product, marketing, and the rest of the organization shape the market, not just deliver sales plays.
Agree with the “yes, and.” I’d argue that all of those also support the mission I stated: winning. Going from comparison charts to sales plays is step one. You’re describing step two. As always, thanks for reading and weighing in.
This is but one form of competition: buy from someone else. Don’t forget the other forms: do nothing (likely the most common), do-it-myself, and, of course, use the budget for something else.
The analysis above is valid, but to be complete, you need to do it for every form of competition.
The other thing to recognize is that not all features of the competition will be relevant in every situation and to every prospect. If you have five buying influences, some may favor one thing from the competition and others something else.
Yes, I am familiar with the notion of multiple levels of competition. This post is about direct, “ring 0” competitors. Thanks for weighing in.