Metrics_ How to Improve Key Business Results - Martin Klubeck [102]
Weights and Measures
With a complete set of measures in hand, our next step was to pull them together into a “metric.” To be a Report Card, we needed to roll the data up as well as provide a deeper dive into the anomalies. So far our methodology afforded some rigor and some flexibility. Let's look at each in this light.
Rigor
Each metric used triangulation; each was made up of at least three categories of information (Delivery, Usage, and Customer Satisfaction)
Information was built from as many measures as the service provider saw fit
Each measure was qualified as exceeding, meeting, or failing to meet expectations
Each measure was from the customers' points of view and fit under the rules for Service/Product Health
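The qualification rule above can be sketched in code. This is a hypothetical illustration, not the book's implementation: the function name, thresholds, and the idea of an explicit "exceeds" band are my assumptions; the only thing taken from the text is that every measure resolves to exceeding, meeting, or failing to meet expectations.

```python
# Hypothetical sketch: qualify a single measure against expectations.
# Thresholds and example values are invented for illustration.

def qualify(value, meets, exceeds, higher_is_better=True):
    """Return 'exceeding', 'meeting', or 'failing' for one measure."""
    if not higher_is_better:
        # For measures like abandonment rate, smaller is better,
        # so flip the comparison by negating everything.
        value, meets, exceeds = -value, -meets, -exceeds
    if value >= exceeds:
        return "exceeding"
    if value >= meets:
        return "meeting"
    return "failing"

# A customer-satisfaction score of 4.2 on a 5-point scale, where 3.5
# meets and 4.5 exceeds expectations:
print(qualify(4.2, meets=3.5, exceeds=4.5))  # prints "meeting"

# An abandonment rate of 3%, where 5% meets and 2% exceeds expectations:
print(qualify(0.03, meets=0.05, exceeds=0.02, higher_is_better=False))
```

Keeping the thresholds outside the function is deliberate: the service provider owns the expectations and can adjust them without touching the qualification logic.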
Flexibility
Each measure was selected by the service provider (our Service Desk department)
Each data set was built into a measure per the service provider's choice
Expectations, while representative of the customers' wants and needs, were defined by the service provider and could be adjusted to reflect what the service provider wanted the customers' perception to be. For example, if the customer was happy only when the abandonment rate was lower than what the Service Desk itself thought adequate, the expectations could be set higher.
Another important area of flexibility for the service provider was the use of weights to apportion importance to the measures. This was first used in the Delivery category. Since Delivery, one of the three Effectiveness areas of measure, was made up of multiple measures—Availability, Speed, and Accuracy—the service provider could weight these sub-categories differently.
For an organization just beginning to use metrics, weighting these factors was not a trivial task. We made it easier by offering a recommendation. For support services we suggested that speed was the most important to the customer, and for non-support services availability reigned supreme. So for the Service Desk we offered the following weights:
Availability: 35%
Speed: 50%
Accuracy: 15%
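The weighted roll-up is simple arithmetic. The sketch below uses the suggested weights from the text; the 0-to-100 sub-scores and the function name are my own illustrative assumptions.

```python
# Hypothetical sketch of the Delivery roll-up using the suggested weights.
# The sub-scores (0-100) are invented for illustration.

DELIVERY_WEIGHTS = {"availability": 0.35, "speed": 0.50, "accuracy": 0.15}

def delivery_score(scores, weights=DELIVERY_WEIGHTS):
    """Roll sub-category scores into one weighted Delivery score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100%"
    return sum(scores[name] * w for name, w in weights.items())

# Example: availability scored 90, speed 80, accuracy 95 (each 0-100):
print(delivery_score({"availability": 90, "speed": 80, "accuracy": 95}))
# prints 85.75
```

Because the service provider owns the weights, changing the emphasis (say, making accuracy matter more than availability) means editing only the weights dictionary, not the roll-up logic.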
These weights could be changed in any manner the service provider chose. The key to this (and the entire Report Card) was the ownership of the metric. Since the service provider “owned” the data, measures, information, and the metric—if they chased the data they would be “lying to themselves.” Many of the admonitions I offered preceding this chapter were to help you understand the need for accuracy, but even more the need for honesty. As David Allen, the author of Getting Things Done (Viking, 2001), has said, “You can lie to everyone else, but you can't fool yourself.” If the service provider is to use the metrics for the right reasons and in the right way, leadership can never abuse or misuse them.
Even though I had the greatest trust in this department, I still stressed the importance of “telling it like it is.” The department had to not only be “willing” to hear bad news; they had to want to hear it, if that was the way the story unfolded.
Weighting the factors can be an easy way to chase the data and make the measures tell the story you hoped for rather than the story they actually tell. One thing we do to make this less tempting is to determine the weights before looking at the data. Then, if we need to change them after seeing the results, it's much easier to resist the temptation to change the weights simply to look better.
This is another great use of the annual survey.