As with Availability, the service provider decided that part of the story was missing. Besides the time it takes to complete the work, the customer also cared about how long it took before they were able to talk to a living, breathing analyst instead of listening to a recording. So we needed to collect data on Time to Respond. Table 9-5 shows the breakdown for Time to Respond. It's a good example of a measure that requires multiple data points to build.
During the definition of this measure, it became clear that there were other components of Time to Respond. Besides the length of time before an analyst picked up the phone, there were also callbacks for customers who left voicemail. Since the Service Desk was not open on weekends or after hours, customers leaving voicemail was a common occurrence. So Time to Respond needed to include the time it took to call the customer back (and make contact). The expectations were based on work hours, not simply the clock time since the message was left. If the customer left a message on a Friday at 5:15 p.m., the expectation wasn't that he'd be called back at 8:00 a.m. on Monday. The expectation, as always, would be a range: that the Service Desk would attempt to call him back within three work hours, for example. The tricky part was determining whether the callback had to be successful or whether leaving a message on the customer's voicemail constituted contact, and therefore a response.
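To make the work-hours arithmetic concrete, here is a minimal sketch of how that elapsed time might be computed. It assumes, purely for illustration, a weekday schedule of 8:00 a.m. to 5:00 p.m. (the 8:00 a.m. opening comes from the example above; the 5:00 p.m. close and the function name are my assumptions, not the Service Desk's actual configuration).

```python
from datetime import datetime, time, timedelta

# Assumed Service Desk hours; the 8:00 a.m. start comes from the text,
# the 5:00 p.m. close is a stand-in for the real schedule.
OPEN = time(8, 0)
CLOSE = time(17, 0)

def work_hours_elapsed(start: datetime, end: datetime) -> float:
    """Count only open-hours time between start and end, skipping
    weekends and after-hours, so a Friday-evening voicemail answered
    Monday morning counts as a few work hours, not a whole weekend."""
    total = timedelta()
    day = start.date()
    while day <= end.date():
        if day.weekday() < 5:  # Monday=0 .. Friday=4
            day_open = datetime.combine(day, OPEN)
            day_close = datetime.combine(day, CLOSE)
            # Clip this day's business window to the [start, end] interval.
            lo = max(start, day_open)
            hi = min(end, day_close)
            if hi > lo:
                total += hi - lo
        day += timedelta(days=1)
    return total.total_seconds() / 3600

# A voicemail left Friday 5:15 p.m., called back Monday 10:30 a.m.
left = datetime(2011, 4, 1, 17, 15)         # a Friday, after closing
called_back = datetime(2011, 4, 4, 10, 30)  # the following Monday
print(work_hours_elapsed(left, called_back))  # -> 2.5 work hours
```

Against a three-work-hour expectation, this callback would count as on time, even though more than 65 clock hours passed.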
Time to Respond was added much later in the development of the metric; it wasn't used the first year. Flexibility is one of the keys to a meaningful metric program.
Accuracy
Traditionally, the defects produced in and by the system are examined. Those caught before distribution are part of “efficiency.” Those that reach the customer (and that the customer is aware of) are part of the measures we were developing. We needed ways to represent faulty production or service delivery. In the case of the Service Desk, I looked for a simple measure of accuracy.
Rework was an easy choice. Using the trouble-call tracking system, we could track the number of cases that were reopened after the customer thought they were resolved. As long as the customer saw the reopening of the case as rework, it would be counted. We found that occasionally cases were prematurely closed, either by the Service Desk or by second-level support. Later the customer would call the Service Desk with the same problem that was believed to have been resolved. The analysts were doing an admirable job of reopening the case (rather than opening a new case, which would have made their total cases-handled numbers look better and kept accuracy from taking a hit). This honest accounting allowed the Service Desk to see themselves as their customers saw them. Table 9-6 shows the breakdown for Rework. You should notice that the analysis is very simple in all of these cases. In the appendix on tools, I will briefly discuss the role statistics plays.
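The arithmetic behind the Rework measure is as simple as the analysis suggests: reopened cases divided by total cases handled. The following sketch shows that calculation; the field names ("case_id", "reopened") are hypothetical stand-ins for whatever the trouble-call tracking system actually exports.

```python
# Hypothetical case records exported from the tracking system.
cases = [
    {"case_id": 101, "reopened": False},
    {"case_id": 102, "reopened": True},   # closed prematurely; customer called back
    {"case_id": 103, "reopened": False},
    {"case_id": 104, "reopened": False},
]

reopened = sum(1 for case in cases if case["reopened"])
rework_pct = 100.0 * reopened / len(cases)
print(f"Rework: {rework_pct:.1f}% of cases reopened")  # -> Rework: 25.0% of cases reopened
```

Note that the honest accounting described above matters here: every reopened case raises this percentage, while quietly logging the same callback as a new case would lower it.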
Figure 9-4 shows Rework. You may have noticed by now that the charts all look somewhat alike. If you look more closely, you'll see they look exactly alike. The only differences so far have been the data (values) and the titles. This consistency should benefit you, since those reviewing the measures get used to the presentation method and how to read them.
Figure 9-4. Percentage of cases reopened
Now is as good a time as any to point out how triangulation helps deter “chasing data.” Organizations often find themselves chasing data to try to put a positive spin on everything, to have only good news. The analysts could have taken every legitimate rework callback and logged it as a new case. Doing this could nearly (if not totally) eliminate rework. But it would artificially inflate other data. The number of calls handled would increase. The number of cases worked would also increase. And customer satisfaction with the knowledge of the analyst would likely drop. If the customer