Metrics_ How to Improve Key Business Results - Martin Klubeck [137]
“And find three, right?”
“Right!” I could tell he was happy with his clear direction.
I did as asked (I wanted to keep my job). I looked through the consulting firm's data, without any specific question in mind, trying to find three measures we could use as benchmarks. Unfortunately, their data didn't relate to our problem areas. We had specific areas we were trying to improve; these were not unique problems, but they weren't issues the consulting firm had previously researched. The firm had covered the general areas of interest, just not the ones where we were having trouble. The three measures I settled on (server availability, abandoned-call rate, and customer satisfaction ratio) were defined differently than we would have defined them.
For example, they defined availability on a 24-hours-a-day, 7-days-a-week basis, while we scheduled downtime during low-usage periods (our customers did not expect 24/7 availability).
Their abandoned-call counts did not take into account how long a caller stayed on the line before hanging up. We always played a message callers had to listen to before a technician would pick up. This message was updated daily (at a minimum) and informed the caller of any current issues. For example, if we had a problem with e-mail, the message might say, “We are experiencing e-mail connectivity issues; we hope to have them resolved by 1:00 p.m.” If the purpose of the call was to let us know about that issue, the caller could hang up, confident that we already knew about it and were working on it. Such a call, in our opinion, shouldn't be counted as abandoned. The research didn't differentiate by the amount of time spent on the line before the caller disconnected, so it didn't actually match our information.
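The counting rule described above can be sketched in a few lines of code. This is a hypothetical illustration, not the consulting firm's or the help desk's actual logic; the field names and the idea of using the status message's length as the cutoff are assumptions drawn from the story.

```python
# Hypothetical sketch of the "abandoned call" rule described in the text:
# a hang-up only counts as abandoned if the caller waited past the daily
# status message, i.e. heard it and still wanted a technician.

def is_abandoned(hold_seconds: float, message_seconds: float) -> bool:
    """A hang-up is abandoned only if the caller outlasted the message."""
    return hold_seconds > message_seconds

def abandoned_rate(calls, message_seconds):
    """calls: list of (hold_seconds, answered) tuples.
    Returns the fraction of all calls counted as abandoned."""
    if not calls:
        return 0.0
    hangups = [hold for hold, answered in calls if not answered]
    abandoned = [hold for hold in hangups
                 if is_abandoned(hold, message_seconds)]
    return len(abandoned) / len(calls)
```

Under this rule, a caller who hangs up ten seconds into a thirty-second message (perhaps reassured that the outage is known) is not counted against the metric, while the consulting firm's research would have counted every hang-up the same way.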
It was easy to see the disconnect in the customer satisfaction measures. The consultants' research used a four-point scale; we had used a five-point scale for the last three years. Their questions were not an exact match either. Finally, they assigned a value of “1” to “very satisfied” and a “4” to “very dissatisfied”; our scale ran in the opposite direction.
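The scale mismatch can at least be made numerically comparable by mapping both scales onto a common range. This is a minimal sketch under assumptions the text doesn't spell out (that our five-point scale ran from 1 to 5 with 5 meaning “very satisfied”), and it can't repair the deeper problem that the questions themselves differed.

```python
def normalize(score: float, lo: float, hi: float,
              higher_is_better: bool = True) -> float:
    """Map a rating onto [0, 1], where 1.0 means most satisfied."""
    frac = (score - lo) / (hi - lo)
    return frac if higher_is_better else 1.0 - frac

# Consultants' 4-point scale: 1 = very satisfied, so lower is better.
consultant = normalize(1, 1, 4, higher_is_better=False)  # -> 1.0
# Our (assumed) 5-point scale: 5 = very satisfied.
ours = normalize(4, 1, 5)                                # -> 0.75
```

Rescaling like this puts both data sets on the same footing for direction and range, but as the chapter argues, it still only makes the comparison less wrong, not right.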
Determined to have comparison data, my boss made me use the data anyway, even though it wasn't an exact match. He would say, “It may not be Macintosh to Macintosh, but it's still apples to apples.”
Finding a cache of data can mislead you into force-fitting your questions to align with the available answers.
To stop the madness, you'll need to admit to your boss that you can't do what he's asked. You can't perform the research because you don't have the time, skills, or energy to chase the unknown. Instead, you need his help. You need his help to direct your efforts and allow you to be more efficient and effective. You need his help to ensure you're productive.
What's Wrong with Research?
You should have already formed your questions first and then sought out the answers via existing research or standards. Of course, if you're struggling with the questions, reviewing existing research data may help you, or then again, it may lead you deeper into the woods. The chief risk is that you may settle for what is available rather than what you actually need. The following are a few precautions to consider when using others' research:
Research data may not match your needs exactly. As my boss said, you may have apples-to-apples, but yours might be a Red Delicious while mine is a Granny Smith. While you may use others' data to compare, you have to note where it doesn't match your situation.
It's too easy to skip the question identification phase when you have reams of data to choose from. You end up just building pictures from what's available instead of determining the proper questions.
Worse than skipping the question altogether, when you have research results, you may be driven to create questions that fit the answers in the metrics. This makes it nearly impossible to convince management to re-evaluate their needs.
Research data are still merely indicators; they are not truth.