Metrics: How to Improve Key Business Results - Martin Klubeck
Figure 10-1. Translation Grid
Looking at the Availability charts, we added the expectations so that we could tell visually where we stood in terms of the health of availability. This visual depiction happens at the measurement level, before we aggregate the grades with other measures of availability (to create a final grade for the category) and before we roll up grades into delivery. Figures 10-2 and 10-3 show the abandoned-call rate and calls abandoned in less than 30 seconds, with expectations.
Figure 10-2. Abandoned call rate with expectations
Figure 10-3. Percentage of calls abandoned in 30 seconds or less
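Grading a single measure against its expectation bands, as described above, can be sketched in a few lines of Python. The threshold values here are hypothetical placeholders for illustration, not the expectations actually set for this service desk:

```python
# Sketch: grading the abandoned-call rate against expectation bands.
# Lower is better for this measure. The thresholds are assumptions,
# not the book's actual expectation values.

def grade_abandoned_rate(rate: float,
                         exceeds_below: float = 0.03,
                         meets_below: float = 0.08) -> str:
    """Return a health grade for an abandoned-call rate (0.0 to 1.0)."""
    if rate < exceeds_below:
        return "Exceeds Expectations"
    if rate < meets_below:
        return "Meets Expectations"
    return "Opportunity for Improvement"

print(grade_abandoned_rate(0.05))  # Meets Expectations
```

Drawing the two threshold lines on the chart itself, as in Figures 10-2 and 10-3, is the visual equivalent of this grading.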
Speed
Speed wasn't as simple as Availability. We could measure how many cases were responded to (or resolved) faster than expected, within expectations, or slower than expected. The problem was determining what that meant. What was good? We could have said any case that fell outside the Meets Expectations range (above or below) was an anomaly and should be investigated. That sounds logical, but with thousands of cases it was not practical. And when I interviewed the department, it was clear that anomalies would happen from time to time. Sometimes the Analysts would take longer to respond than expected; other times they would pick up on the first ring. This was a natural byproduct of the nature of the work, the environmental factors that influenced performance, and the workload.
So for these cases, we decided to determine what was expected by the customer at a second level. We asked the following:
What percentage of cases does the customer feel should exceed expectations?
What percentage of cases does the customer feel should meet expectations?
What percentage of cases does the customer feel can acceptably fail to meet expectations?
So we looked to define the expectations in terms of time to respond and time to resolve.
Time to Respond
Exceed: Responds in less than 5 seconds
Meets: Responds in 6 to 30 seconds
Opportunity for Improvement: Responds in greater than 30 seconds
Time to Resolve
Exceed: Resolved in one hour or less
Meets: Resolved in 24 hours or less
Opportunity for Improvement: Resolved in five days or less
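Classifying a single case against these bands can be sketched directly from the definitions above. Note that the bands as stated leave a small gap (5 to 6 seconds for response) and an open end (beyond five days for resolution); the sketch resolves both with the assumptions noted in the comments:

```python
def classify_respond(seconds: float) -> str:
    """Band a case by time to respond, per the stated expectations."""
    # Bands from the text: Exceed < 5 s, Meets 6-30 s, OFI > 30 s.
    # The text leaves 5-6 seconds unstated; treated here as Meets.
    if seconds < 5:
        return "Exceed"
    if seconds <= 30:
        return "Meets"
    return "Opportunity for Improvement"

def classify_resolve(hours: float) -> str:
    """Band a case by time to resolve, per the stated expectations."""
    # Bands from the text: Exceed <= 1 h, Meets <= 24 h, OFI <= 5 days.
    # Anything past 24 hours falls into OFI here, including beyond 5 days.
    if hours <= 1:
        return "Exceed"
    if hours <= 24:
        return "Meets"
    return "Opportunity for Improvement"

print(classify_respond(3))    # Exceed
print(classify_resolve(30))   # Opportunity for Improvement
```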
For each, we needed to determine the customers' expectations: what percentage of the cases would the customer expect to fall into each of the categories listed? The answers are shown in Table 10-2.
Figure 10-4 shows the percentage of cases resolved in less than one hour. While this is a good measure, all three are necessary to get the full picture. Looking at only this measure would give a skewed view of how healthy the service was (in terms of speed).
Figure 10-4. Percentage of calls resolved in less than one hour
This second level of expectations allowed us to use percentages, and to look at anomalies only when they added up to a significant (as defined by the expectations) number of cases. It's worth noting that the third measure for Speed, Time to Respond, moves in the opposite direction of the other measures. This will also be the case with Rework, where less is better.
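The second-level check described here, comparing the actual percentage of cases in each band against the percentage the customer expects, with the comparison inverted for measures where less is better, might look like this sketch. The expected percentages used below are illustrative assumptions, not the department's real figures:

```python
# Sketch of the second-level expectation check. The "expected" percentages
# are illustrative assumptions, not the department's actual figures.
from collections import Counter

def band_percentages(grades):
    """Percentage of cases that landed in each band."""
    counts = Counter(grades)
    total = len(grades)
    return {band: 100.0 * n / total for band, n in counts.items()}

def within_expectation(actual_pct, expected_pct, less_is_better=False):
    # For the Exceed and Meets bands, more cases is better; for bands (or
    # measures such as Rework) where less is better, the comparison flips.
    if less_is_better:
        return actual_pct <= expected_pct
    return actual_pct >= expected_pct

grades = ["Exceed", "Meets", "Meets", "Meets", "Opportunity for Improvement"]
pcts = band_percentages(grades)
# Hypothetical customer expectations: at least 10% Exceed, at most 10% OFI.
print(within_expectation(pcts.get("Exceed", 0.0), 10.0))
print(within_expectation(pcts.get("Opportunity for Improvement", 0.0),
                         10.0, less_is_better=True))
```

This keeps the focus on the aggregate, so individual anomalous cases only trigger attention when they add up to a percentage outside what the customer said was acceptable.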
Accuracy
Rework turned out to be the best measure of accuracy for the service desk. Figure 10-5 shows Rework as a percentage of cases.
Figure 10-5. Percentage of Rework
It may be worth noting that the picture or impression viewers of your metric get can be affected by the way you present it. Let's look quickly at a couple of different representations of exactly the same Rework data. Figures 10-6 and 10-7 show the same data as Figure 10-5, but I've reversed the background colors in the first and changed the scale in the second.
Figure 10-6. Percentage of Rework background colors reversed
Figure