Think Nadia Comaneci at the 1976 Olympics or Bo Derek in the aptly named movie *10*. It is supposed to represent someone’s judgment or perception of perfection. Right? But as Lee Corso would say on ESPN’s College GameDay, “Not so fast, my friend!”
I recently leased a new car – my second with this dealership. The transaction went relatively smoothly, and I have a decent relationship with the salesman. At the end of a three-hour process, as I prepared to get in my car and drive away, we shook hands and the salesman pulled me close to offer these parting words: “You’re going to get a web survey. If you don’t give us 10s on everything, we get hammered and you’ll get a follow-up call.” Ouch!
That’s a great way to end a good experience on a sour note, especially if you’re in the business of analyzing customer experience. I’m sure it’s happened to nearly everyone reading this blog. Why do companies do this? I’m sure almost all customers either skip the survey or give all 10s, simply to avoid alienating the dealership, to say nothing of the hassle of having to participate in an additional interview about the experience. So what do they (the dealerships or the manufacturers) get out of it?
I can think of three reasons:
First, the company wants to use the high percentage of 10s in its marketing material to communicate how much customers love the experience. I’ve had direct experience with marketers who used one measure to communicate with customers and prospects and another to run the company.
- “Our customer satisfaction is 98%!”
- “We’re number one in customer experience!”
This approach may help win new customers, but the danger is that it sets expectations the actual experience may not meet.
Second, the company believes this methodology identifies, with a fairly high level of confidence, the customers who truly had a poor experience. This rationale places a preference on minimizing false positives (Type I errors) – don’t spend any effort on a customer who is not truly weak.
Many organizations focus their customer intelligence resources into:
- Identifying weaker customers and then trying to recover the relationship, and/or
- Building a process to fix whatever made those customers weak in the first place.
In markets that experience high customer turnover, this can be a worthwhile endeavor. Further, if the cost of intervention is high, this is one way to ration out resources.
What’s the risk? The mirror image: false negatives. By driving most customers either to skip the survey or to give higher evaluations than they truly feel, the process deliberately fails to identify all the weaker customers whose experiences could have been improved. Why not just ask for an unbiased evaluation, focus efforts on customers who score 6 or lower, and not penalize dealerships for results above 8?
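The trade-off can be sketched in a few lines of code. This is a hypothetical illustration (the scores and the coaching effect are made up, and the 6-or-lower threshold comes from the proposal above): when everyone is coached into reporting a 10, a simple threshold rule can no longer find the customers who genuinely need follow-up.

```python
# Illustrative sketch: how score inflation hides weak customers
# from a threshold-based recovery rule. All data is hypothetical.

def needs_follow_up(score, threshold=6):
    """Flag a customer for recovery outreach if their score is low."""
    return score <= threshold

# Hypothetical true sentiment for five customers (1-10 scale).
true_scores = [4, 5, 7, 9, 10]

# Coached responses: everyone reports a 10 to spare the dealership.
reported_scores = [10, 10, 10, 10, 10]

truly_weak = [s for s in true_scores if needs_follow_up(s)]
flagged = [s for s in reported_scores if needs_follow_up(s)]

print(len(truly_weak))  # 2 customers genuinely need recovery
print(len(flagged))     # 0 customers actually get flagged
```

Under honest responses the rule would have surfaced both weak customers; under coached responses it surfaces none, which is exactly the false-negative problem.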
Which brings me to my third reason: at some point corporate made a commitment to this process – perfection or nothing – and it became ingrained in all sorts of decisions.
One “commitment” that corporate makes is to be part of a marketing process involving rating agencies, like JD Power. These agencies use customer surveys to produce “Best of” lists, and companies use their position on a list as a marketing tool. Although the lists can be and are refined to make MANY participating firms/clients #1 in some category, you still have to produce scores. It becomes not about what the customer experience actually is, but about how to make sure you report high scores.
There are stories in the auto industry about manufacturers punishing dealers by withholding marketing dollars and “hot” cars if their scores fall below 95%. Salespeople with lower evaluations have deductions taken from their commissions. As you might expect, there are also stories about dealerships and salespeople “faking” surveys to make sure they reach the monthly target.
This research process is ugly and unproductive. It incentivizes bad information and bad behavior.
So what did I do? I gave a 10 to every question – without even reading the questions. I’ve got to go back to that dealership for three scheduled service visits!