Metrics: How to Improve Key Business Results - Martin Klubeck
My third and last argument was based on principle: measures should be meaningful to the organization before benchmarks are sought. Benchmarks should only enhance the information, not define it. The measures had to be meaningful on their own.
Figure 10-12 shows the average grade. Even though I argued that this view was less meaningful than others, once you add the expectations, even this measure becomes useful.
Figure 10-12. Average Customer Satisfaction grade
I still wanted a better representation of the measures. We tried using promoters to detractors, but since the vendor wasn't going to change to a 10-point scale, we had to translate the 5-point scale into the method Reichheld suggests for determining where a customer falls on the range of support. We ended up counting 5s as promoters and 1s, 2s, and 3s as detractors, an attempt to match Reichheld's 1s–6s (detractors), 7s and 8s (neutral), and 9s and 10s (promoters). While this was not perfect (or optimal), I believe it was valid, and if anything we were again "erring on the side of excellence." But showing this ratio (highly satisfied vs. not satisfied) proved problematic. While it was more meaningful than the average rating, it was still difficult for management to interpret.
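The translation described above is simple enough to sketch in a few lines of code. The following is an illustrative Python sketch (not from the book) of the classification scheme: 5s count as promoters, 4s as neutral, and 1s through 3s as detractors; the sample ratings are invented for illustration.

```python
from collections import Counter

def promoter_detractor_ratio(ratings):
    """Classify 5-point survey ratings using the scheme described in
    the text (5 = promoter, 4 = neutral, 1-3 = detractor) and return
    the counts plus the promoter-to-detractor ratio."""
    counts = Counter()
    for r in ratings:
        if r == 5:
            counts["promoter"] += 1
        elif r == 4:
            counts["neutral"] += 1
        else:  # 1, 2, or 3
            counts["detractor"] += 1
    p, n, d = counts["promoter"], counts["neutral"], counts["detractor"]
    ratio = p / d if d else float("inf")
    return p, n, d, ratio

# Invented sample echoing the dialogue: twelve 5s for every 1, 2, or 3
ratings = [5] * 24 + [4] * 5 + [2] * 2
p, n, d, ratio = promoter_detractor_ratio(ratings)
print(p, n, d, ratio)  # 24 4 2 12.0
```

Note that the 4s are counted but excluded from the ratio, which is exactly the point of contention in the conversation that follows.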
The conversation would go something like the following:
“So, for every 1, 2, or 3 we received, we had twelve 5s?”
“Yes, your ratio of promoters to detractors was twelve to one.”
“What about the 4s? Why aren't we counting 4s?”
“Because 4s are being considered neutral. We can't tell how they'll ‘talk’ about our service. They may say it was good or they may not.”
“I thought 3s were neutral?”
“Threes are in the middle, neither satisfied nor dissatisfied, and we believe that if someone can't say they were satisfied (4 or 5) then they will definitely talk badly about our service—they will detract from our reputation.”
“Well if we leave out 4s, we're missing data…so it's not a complete picture.”
Figure 10-13. Ratio of Promoters to Detractors
Compared to the Average Grade, where we “looked good,” this representation (see Figure 10-13) made the Service Desk look incredibly good! And the funny thing was, I believe this was a more accurate representation of just how good they were.
After two years of battling this argument, I acquiesced and found a different way to represent the data. I still believe the promoter-to-detractor story is a good (and perhaps the best) one to tell. There is an established standard of what is good that can be used as a starting point. While Reichheld uses that standard to determine potential for growth, it works fine as a benchmark of high quality. That said, the few departments that could conceptualize the promoter-to-detractor measure invariably raised that benchmark well above the standard. One client I worked with wanted a 90-to-1 ratio. As a provider of fitness classes, they felt that highly satisfying their customers was their paramount duty, and they expected that out of one hundred students they would receive 90 promoters, 9 neutrals, and only one detractor (they happily changed their 5-point scale to a 10-point scale).
We ended up with a new measure, a new way of interpreting the data. We showed the percentage "satisfied" (Figure 10-14): the number of 4s and 5s compared to the total number of respondents. It was definitely better than the average. The third-party survey vendor had no problem representing the data (ours and our industry's) in this manner; they actually produced their reports in numerous forms, including this one.
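The percentage-satisfied arithmetic can be sketched in one short function. This is an illustrative Python sketch (the function name and sample ratings are mine, not the book's), assuming a 5-point scale where 4s and 5s count as satisfied.

```python
def percent_satisfied(ratings):
    """Percentage 'satisfied' as described in the text: the count of
    4s and 5s divided by the total number of respondents, expressed
    as a percentage on a 5-point scale."""
    if not ratings:
        return 0.0
    satisfied = sum(1 for r in ratings if r >= 4)
    return 100.0 * satisfied / len(ratings)

# Invented sample: three of five respondents gave a 4 or 5
print(percent_satisfied([5, 5, 4, 3, 2]))  # 60.0
```

Unlike the promoter-to-detractor ratio, this measure accounts for every respondent, which is what resolved the "missing data" objection raised about the 4s.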
Figure 10-14. Percentage of satisfied customers
At the time of