Please Give Me a 10:  Interpreting Customer Satisfaction Surveys in an Era of Bias

Say you’re considering going out to dinner in a city you’ve never visited before and there are two different surveys of local restaurants that you can use to help choose a place to eat:

  • Survey 1, which is taken by randomly asking customers leaving restaurants about their experience.
  • Survey 2, which was conducted by asking every restaurant to provide 25 customers to survey.

Which would you pick?  Survey 1, every time.  Right.  It’s obvious.

Why?  Because of what they measure:

  • Survey 1 measures customer satisfaction with the restaurant in an objective way and can be used to attempt to predict your experience if you eat there. In a perfect world, you could even slice the survey results by people-like-you (e.g., people who liked the same restaurants or have similar food profiles) and then it would be an even better predictor.
  • Survey 2 measures how well the restaurant can pick, prime, and potentially bribe (e.g., “three free meals if you take the survey and give us a 10”) its top customers. It has little predictive value.  It is more a measure of how well the restaurants play the survey game than of the quality of the restaurant.  (The toy simulation after this list makes the gap concrete.)
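
To see how far apart the two surveys can land, here is a minimal sketch with made-up satisfaction scores (not real data): Survey 1 draws a random sample of diners, while Survey 2 lets the restaurant hand-pick its 25 happiest customers.

```python
import random

random.seed(42)

# Toy model: one restaurant with 1,000 past customers, each with a hidden
# 1-10 satisfaction score (the thing a prospective diner actually cares about).
scores = [min(10, max(1, round(random.gauss(6.5, 2.0)))) for _ in range(1000)]

# Survey 1: randomly intercept 25 customers as they leave the restaurant.
survey_1 = random.sample(scores, 25)

# Survey 2: the restaurant hand-picks its 25 happiest customers to respond.
survey_2 = sorted(scores, reverse=True)[:25]

print(f"True average satisfaction:                 {sum(scores) / len(scores):.2f}")
print(f"Survey 1 (random exit interviews):         {sum(survey_1) / 25:.2f}")
print(f"Survey 2 (restaurant-selected respondents): {sum(survey_2) / 25:.2f}")
```

Survey 1’s estimate lands near the true average (sampling error aside); Survey 2’s estimate is pinned near 10 no matter how good or bad the restaurant actually is.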

Would it surprise you to know that virtually every major survey in IT software is run like Survey 2?   From big-name analyst firms to respected boutiques, the vast majority of analysts run their customer satisfaction surveys like Survey 2.

Why would they do this, when it’s so obviously invalid?  Because it’s easier, particularly when you need to include a bunch of relatively small startups.  Finding a random list of Oracle and SAP customers isn’t that hard.  Try independently finding 20 customers of a startup that has only 50.  You can’t do it.

So you make do and ask vendors for a list of customers to survey.  You get a lot of data you can analyze and put into reports and/or awards.  More disturbingly, you can build your special two-by-two quadrant or concentric circle diagram leveraging the data from your survey, lending it more legitimacy.  (Typically these diagrams have one more-objective and one more-subjective dimension, and things like revenue/size/growth and customer satisfaction factor into the more-objective dimension.)

When people challenge your survey and the methodology behind it, you can typically defend yourself in one of several ways:

  • “The data is the data; I’ve got to work with what I have.” But the data is garbage because of the biased way in which it was collected, and the first rule of data is that you can’t analyze garbage – “garbage in, garbage out” as the maxim goes.
  • “It was a fair contest,” meaning that every vendor had the same opportunity to select, prime, coach, cajole, reward, and/or bribe the respondents. While this may be an excellent stiff-arm for the vendors, end users don’t care if your survey was FAIR; they care if it is VALID – i.e., can it provide a reasonable prediction of their experience.  And, back to the vendors, are such contests even fair?  A low-end vendor with 1,000 small customers can cherry-pick its customer base more easily than an enterprise vendor with 75 big ones.
  • “The results are consistent with our general experience talking to customers.” This is a weak defense because it’s both subjective and prone to confirmation bias – what analyst wants to undermine his/her own survey?  It’s also problematic because the customers who call analysts are not random.  Some analysts serve certain verticals or departments.  Some serve big IT groups.  An echo chamber is often created in that process.

In my opinion, the single best thing these surveys do is ferret out vendors that are marketing true vaporware (e.g., a mega-vendor with a new cloud product that they’ve given free to 300 customers in order to claim market success, but since no one is actually using it, they can’t even produce the 25 references).  For that these surveys work.  For everything else?  Not so much.

The whole situation reminds me of buying a car, where the dealership hits you mercilessly with:

  • “Is there anything I can do to make you more satisfied today?”
  • “Is there any reason you will not give me a 10 when corporate surveys you because my compensation will fall if I get anything other than a 10 and my lovely spouse and children (in the photo on my desk) will suffer greatly if that happens?”
  • “You don’t want to hurt my children do you? So please give me a 10!”

Now I can guarantee you that the net promoter score (NPS) produced by that survey will not be valid.  So why do companies do it?  Because, like it or not, it does force a conversation where the dealership asks some important, uncomfortable questions that might highlight correctable problems.
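
For reference, NPS is the percentage of promoters (those who score you 9 or 10) minus the percentage of detractors (those who score you 0 through 6).  A quick sketch with hypothetical numbers shows how far a little point-of-sale coaching can move that score:

```python
def nps(scores):
    """Net promoter score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical dealership: what ten buyers actually felt vs. what they
# wrote down after being begged to "please give me a 10."
honest_scores  = [10, 9, 8, 8, 7, 7, 6, 6, 5, 4]
coached_scores = [10, 10, 10, 10, 10, 9, 10, 8, 10, 7]

print(f"Honest NPS:  {nps(honest_scores):+.0f}")   # -20
print(f"Coached NPS: {nps(coached_scores):+.0f}")  # +80
```

Same buyers, same experiences; the only thing the coached number measures is how effectively the dealership worked the survey.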

If you’re trying to force a conversation between your organization and your customers, there is probably a role for the “please give me a 10” survey.  If, however, you are genuinely trying to measure satisfaction with your products, then there is not.

So what’s a buyer to do?  If you can’t trust these surveys, then what can you trust?  I think 3 things:

  • The vendor’s wheelhouse.  While most technology vendors attempt to position themselves as everything to everyone, despite that misguided marketing they nevertheless develop a reputation for having a wheelhouse (i.e., an area or segment that is their real strength).  These reputations get built over time and are usually accurate, so ask people “what is vendor X’s wheelhouse?”  Not what they say they do.  Not every area in which they have one customer — but their wheelhouse.  You should see a consistent pattern over time, and you can then compare your needs to the vendor’s wheelhouse.
  • Reference customers. While I believe you can cajole someone into giving you a 10 on a survey, it’s much harder to cajole someone into bogus answers on a live reference call.  The key with reference checking is to find customers like you in terms of size, complexity, problem solved, and general requirements.
  • Your own evaluation process. If you’ve run a good evaluation process, trust it.  Don’t let some survey upend a process where you’ve determined that product X can solve problem Y after looking at demos, possibly doing some sort of proof of concept, holding vendor presentations and discovery sessions, building a statement of work, etc.
