Of course, you can weight these factors equally.
Along with weighting the components of Delivery, we can weight the three categories—Delivery, Usage, and Customer Satisfaction. Weights should be clearly communicated to those viewing the Report Card.
Let's look at how we roll up the performance measures into a Report Card.
Rolling Up Data into a Grade
It's time to take the components we've discussed (the Answer Key categories for effectiveness, triangulation, expectations, and the translation grid) and create a final “grade.” This gives us a means of communicating the customer's view of your performance quickly and clearly to staff, managers, and leadership.
You'll need the translation grid (see Figure 10-15) as before, but with neutral coloring so that viewers are less tempted to treat “exceeds” as inherently good.
Figure 10-15. Translation Grid
The values I'm using do not reflect the information shared earlier; I chose them to make it clear how grades are rolled up and measures aggregated. Table 10-8 shows all of the measures, their expectations, their actual values, and the translation of each value into a “letter grade.” These rules can be programmed into a spreadsheet so that it calculates the grades for you.
In Table 10-8, the “grade” (shown in the Result column) has already been translated to a letter value: if the actual measure exceeded expectations, it earned an E; if it met expectations, an M; and if it failed to meet expectations, an O.
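To make that translation concrete, here is a minimal sketch of the spreadsheet logic in Python. The function name, the example values, and the assumption that an exact match counts as “meets” are mine, not the book's; measures where lower is better (like abandoned calls) flip the comparison.

```python
def letter_grade(actual, expectation, higher_is_better=True):
    """Translate a measure's actual value against its expectation into a
    letter grade: E (exceeds), M (meets), or O (fails to meet).

    Assumes an exact match counts as "meets"; real expectations are often
    ranges, which would replace the equality test below.
    """
    if not higher_is_better:
        # For measures like abandoned calls, lower is better, so negate
        # both values and reuse the same comparison.
        actual, expectation = -actual, -expectation
    if actual > expectation:
        return "E"
    if actual == expectation:
        return "M"
    return "O"

# Illustrative values only: expectation of 95 percent availability.
print(letter_grade(97, 95))  # E
print(letter_grade(95, 95))  # M
print(letter_grade(90, 95))  # O
```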
Within each item (information level), the total grade would be the result of an average using the translation grid. As mentioned, you can even use weights within the category. For example, calls abandoned in less than 30 seconds could be given a weight of 85 percent, while the total number of abandoned calls could be weighted 15 percent. Another example: overall Customer Satisfaction can be given weight (50 percent) equal to the other three satisfaction questions combined (16.6 percent each). For this example we'll go with those two weighting choices; all other measures will be weighted equally within their own information category. Table 10-9 shows the next step in the process of rolling the grades up toward a final Report Card. Note: a double asterisk beside a grade denotes an O grade at a lower level.
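Below is a minimal sketch, in Python, of how this weighted roll-up could be automated. The numeric values behind the letters (10 for E, 5 for M) come from the worked examples that follow; the value for O and the band boundaries used to translate a score back into a letter are my assumptions, so verify them against Figure 10-15.

```python
import math

# Numeric values behind the letter grades. E = 10 and M = 5 appear in the
# worked examples in the text; O = 0 is an assumption of this sketch.
GRID_VALUES = {"E": 10, "M": 5, "O": 0}

def roll_up(grades_and_weights):
    """Combine (letter, weight) pairs into a single rolled-up grade.

    Weights should sum to 1.0. Because we "err on the side of excellence,"
    the score is floored, never rounded up, before being translated back
    into a letter. The band boundaries (E at 8 and above, M at 4 through 7)
    are inferred from the examples in the text, not quoted from the grid.
    """
    score = sum(GRID_VALUES[letter] * weight
                for letter, weight in grades_and_weights)
    floored = math.floor(score)
    if floored >= 8:
        letter = "E"
    elif floored >= 4:
        letter = "M"
    else:
        letter = "O"
    return letter, score
```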
Let's look at the two weighted measures. If the Availability measures were of equal weight, the total grade would simply be the average of the two grades, 10 and 5, giving a grade of 7.5. If we rounded up, this would make it an E. But since we always choose to err on the side of excellence, we don't round up: you have to fully achieve a grade to get credit for the letter. If the weighting were switched (Abandoned Total at 85 percent and Calls Abandoned in Less Than 30 Seconds at 15 percent), you'd have an overall E, since the 10 for Abandoned Total would give you an 8.5 before you even looked at the calls abandoned in less than 30 seconds.
In the satisfaction ratings, we find that the grade is an E even though there is an M beneath it. If, instead of weighting overall satisfaction at 50 percent, we gave all four items equal weight, the grade would simply be the average of the four grades, giving us an 8.75. Still an E, but a lower grade.
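As a quick check of the arithmetic in the last two paragraphs, here is the roll_up sketch applied to those grades. Which of the three satisfaction questions earned the M is arbitrary here, since only the mix of letters affects the result.

```python
# Availability with <30 seconds at 85% and Abandoned Total at 15%:
# 0.85 * 5 + 0.15 * 10 = 5.75, floored to 5 -> M.
print(roll_up([("M", 0.85), ("E", 0.15)]))         # ('M', 5.75)

# Equal weights: (10 + 5) / 2 = 7.5, floored to 7 -> still an M.
print(roll_up([("M", 0.5), ("E", 0.5)]))           # ('M', 7.5)

# Switched weights: 0.85 * 10 = 8.5 guarantees the E on its own.
print(roll_up([("E", 0.85), ("M", 0.15)]))         # ('E', 9.25)

# Customer Satisfaction: overall at 50%, three questions at 1/6 each.
print(roll_up([("E", 0.5), ("E", 1/6), ("E", 1/6), ("M", 1/6)]))
# -> ('E', 9.166...)

# All four weighted equally: (10 + 10 + 10 + 5) / 4 = 8.75 -> still an E.
print(roll_up([("E", 0.25)] * 3 + [("M", 0.25)]))  # ('E', 8.75)
```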
Notice that Speed: Time to Respond came out as an M, Meets Expectations, but I added the double asterisk to signify there was an O hidden beneath. The Availability total grade is similarly marked because of the E hidden beneath it. This helps the viewer of the Report Card quickly see whether she should look deeper into the information. Buried Es and Os are anomalies that need to be identified.
Let's continue with these results. If we go with the weighting offered for Delivery (Speed at 50%, Availability at 35%, and Accuracy