Metrics for Trying Times


Key Takeaways

  • Backing off on strategic planning and process improvement in the face of economic woes is a mistake – organizational improvement is essential in a crisis, not a luxury.
  • Real, measurable, process improvement reaps benefits that can turn your institution’s fate around.
  • Metrics can show where improvement is needed, help track improvement efforts, and show the likely return on investment of improvement initiatives.

Statistics on the current economic recession show the United States hemorrhaging businesses and jobs at an alarming rate: 28,000 businesses failed last year, and 68,000 are expected to fail this coming year.1 According to U.S. News and World Report, the U.S. had lost more than 5 million jobs as of April 2009.2

The nation’s economic woes have not bypassed educational institutions. Moratoriums on travel, job postings pulled from the web, and budget cuts reflect a harsh reality in many universities today. Resources are tight everywhere. So, where and how should higher education institutions spend money when funds are so tight?

Most organizations hunker down in times of fiscal crisis. They look for ways to save money and jobs. They refrain from filling positions that become vacant through natural attrition, and they cut programs. Strategic planning, metrics collection, and process improvement – efforts that enable organizational improvement – are set aside. That is a mistake. According to preliminary research by Ranjay Gulati, Jaime and Josefina Chua Tiampo Professor of Business Administration at the Harvard Business School, 40 percent of companies fail during a recession, 54 percent just survive, and only 6 percent take the opportunity to improve – “achieving breakaway success” compared to their peers.3

Do More with Less — How?

The mantra “do more with less” isn’t new. Unfortunately, this demand reappears when staff are expected to work longer hours or pick up the workload of team members who have left the organization. How, other than by working double shifts, are the same people, doing the same things in the same ways, supposed to do more with less?

Rarely does a crisis prompt process improvement efforts, which make it possible to do more with less. In most cases, organizational improvement initiatives are among the first things dropped. Improvement itself is considered a nonessential activity despite the likelihood of providing a positive return on investment (ROI).

In trying times, leaders look to cut costs and limit expenditures. The predictable outcome is that nothing changes – or things get worse. Although organizations normally react this way, this approach is not logical and not the best course of action.

How We SHOULD React

What if, instead of reacting by cutting – jobs, programs, and improvement initiatives – the organization invests money into improving processes? Adversity should catalyze improvement and innovation. What if we in higher education IT look at the crisis as an opportunity, specifically by finding savings through improving IT processes? The improvements would continue to pay dividends even when the economy stabilizes.

This isn’t a new concept. The premise behind quality improvement has always been the payoff – savings in time, savings in money, or improved quality.

  • Fewer defects = more revenue, happier customers, and better reputation
  • Quicker delivery = more deliveries = more revenue, happier customers, and better reputation
  • Lower costs = more funds to invest in further improvements = more revenue, happier customers, and better reputation

So, why isn’t everyone doing this? Why are improvement efforts ignored when needed most? I believe we in IT have failed to demonstrate the value of improvement efforts. If an organization is serious about improvement, it will measure its current situation and determine what needs improvement and how much. The data will help determine where to find the best ROI.

Views of Organizational Health

There are four major viewpoints to take when looking at an organization’s health and potential for the future:

  • Effectiveness – how well we satisfy our customers’ needs
  • Efficiency – how well we are doing as stewards of the institution’s resources
  • Employee satisfaction – how well we satisfy our employees’ needs
  • Project visibility – how well we manage our projects

Most organizations focus on the first two areas, effectiveness and efficiency, because they are the most directly related to visible success.

How Metrics Can Help

Proper use of metrics can:

  • Reveal where improvement is needed
  • Track improvement efforts
  • Demonstrate ROI

Metrics are useful tools, helping us see a clear and total picture – when we design them that way. First we need to select the areas to improve. Then we must consider which processes to improve, what training is needed, and what programs to develop.

To determine where improvements are needed requires collecting some basic data:

  • What processes – on the largest scale – do we already implement?
  • How efficient are those processes? In other words:
    • How much do they cost?
    • How long do they take?
    • How well do we do them?
    • How important are they to our business?
  • Do we have fully defined, clear process definitions?
  • Do we all follow the processes the same way?
  • Do the processes require special skill sets?

Answering these questions might reveal savings right away. For example, what if half the server administrators don’t follow the established procedures for maintenance? Simply getting all staff to follow the procedures created for that purpose gains efficiencies.
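
Even a few lines of Python (or a spreadsheet) can hold this basic inventory. The sketch below is purely illustrative – the record fields, process names, and numbers are invented – but it shows one way to capture the answers and flag the likeliest first candidates for improvement.

    # Hypothetical process inventory; field names and values are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class ProcessRecord:
        name: str                      # what process do we implement?
        annual_cost: float             # how much does it cost?
        cycle_time_hours: float        # how long does it take?
        defect_rate: float             # how well do we do it? (0.0 to 1.0)
        business_importance: int       # how important is it? (1 = low, 5 = high)
        has_written_definition: bool   # is the process fully defined?
        followed_consistently: bool    # do we all follow it the same way?

    inventory = [
        ProcessRecord("Server maintenance", 42_000, 3.0, 0.10, 5, True, False),
        ProcessRecord("Lab computer imaging", 18_000, 6.5, 0.25, 4, False, False),
    ]

    # Flag important processes that lack a definition or are not followed consistently.
    candidates = [p.name for p in inventory
                  if p.business_importance >= 4
                  and not (p.has_written_definition and p.followed_consistently)]
    print(candidates)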

If a process definition (prose and diagrams) is not in place, start there. While defining a process (how we do it), identify the probable junction points for collecting measures of the process’s efficiency. Just capturing the process will reap benefits – diagramming the process often elicits insights the process owner normally overlooks.

Managing business processes doesn’t just mean changing processes; it also means measuring the outcomes of those changes. When we challenge the workforce to develop efficiency improvements – ways to do things faster, cheaper, or better with the same set of resources – they’ll need a way to show that the improvements actually result in positive ROI. This requires measuring the standard areas of efficiency:

  • Cost – what it costs to do what we do
  • Time – how long it takes to do it
  • Quality/rework – how well we do it the first time, or how much effort is wasted on defective products

The priority when capturing these measures is to determine improvements in the processes, not to evaluate staff. So, if computer labs use an imaging process to set up the computers, how long does it take to prepare the image? How long does it take to put the image on the lab computers? What other costs are involved? How many defects affect the image, and how much rework is involved?
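
As a concrete (and entirely hypothetical) illustration of capturing those measures, the short Python sketch below tallies the labor cost of one imaging cycle before and after an improvement; the hourly rate, hours, and machine counts are invented placeholders.

    # Hypothetical efficiency measures for the lab-imaging example. Invented numbers.
    HOURLY_RATE = 35.0  # assumed loaded labor cost per hour

    def imaging_cost(prep_hours, deploy_hours_per_machine, machines, rework_hours):
        """Total labor cost to prepare, deploy, and rework one image cycle."""
        total_hours = prep_hours + deploy_hours_per_machine * machines + rework_hours
        return total_hours * HOURLY_RATE

    before = imaging_cost(prep_hours=16, deploy_hours_per_machine=0.5,
                          machines=120, rework_hours=20)
    after = imaging_cost(prep_hours=10, deploy_hours_per_machine=0.25,
                         machines=120, rework_hours=5)
    savings = before - after
    print(f"Before: ${before:,.0f}  After: ${after:,.0f}  Savings: ${savings:,.0f}")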

We collect this knowledge to accomplish two basic goals:

  1. Identify possible areas of improvement: Where are the largest costs? What processes take the longest time? Where are the biggest deficiencies in quality? Of course, this must be balanced against the relative importance of the processes being measured – it might make more sense to improve some processes than others.
  2. Identify how well we’ve improved our processes: We need to show how much money, resources, and time we’ve saved and how much the quality has improved.

If we target improvement – not evaluation – we need to measure more than overall efficiency levels, such as how long a process took before and after improvement. To really improve, we need to analyze the processes themselves so we can see where and how to improve them.

Process metrics have to be designed in conjunction with process definitions. When we lay out a process, step by step, and look at it under a bright light, we invariably improve it. When we add metrics at the correct points, we can improve it exponentially.

We usually know what the initial input is (a request, a date, or completion of another process) and we know the expected final outcome – but we have absolutely no idea what goes on inside the process (see Figure 1). The only things we can measure initially are the inputs (were the requests entered on time, did we start on the date planned, did the other process have all of its deliverables?) and the outputs (did we deliver what we said we would?).


Figure 1. Inputs and Outputs

These rudimentary measures can be useful in telling us where to expend our energy.

From this level of process understanding we can determine if an overall process seems to take longer than expected. We can determine if the process produces the expected outcomes. If the inputs seem to meet expectations but the outputs fail to live up to our needs, this process might be one of the first we choose to analyze.

The first step is simply to understand the bigger process steps, which means that identifying the handoffs is often critical.

Each of the boxes in Figure 2 can represent a process carried out by a different department (or person). Each arrow may represent a handoff between these process owners. With this slightly deeper view, we can measure the inputs and outputs between each step.


Figure 2. Bigger Process Steps and Handoffs
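
One hedged way to begin measuring at this handoff level is sketched below in Python. The step names and hours are invented; in practice the timestamps would come from a ticketing or workflow tool. The point is simply to see which step holds the work longest before handing it off.

    # Hypothetical handoff log: (step name, hours from receipt of input to handoff).
    handoff_log = [
        ("Intake request", 2),
        ("Departmental review", 40),
        ("Build and configure", 16),
        ("Deliver to customer", 4),
    ]

    total = sum(hours for _, hours in handoff_log)
    slowest_step, slowest_hours = max(handoff_log, key=lambda step: step[1])

    print(f"End-to-end time: {total} hours")
    print(f"Deeper-dive candidate: {slowest_step} ({slowest_hours} hours, "
          f"{slowest_hours / total:.0%} of the total)")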

The reason we work stepwise from a high level toward a fully detailed view is that we may save a lot of effort by NOT focusing on areas that don’t require scrutiny. This stepwise process also allows management time to become accustomed to working with data. If we decide that one (or more) of the boxes warrants a deeper dive, we’ll start looking at the subprocesses contained within that box (Figure 3). Each set of processes generates the possibility of measures that might be useful for improving the step, procedure, or overall process.


Figure 3. Subprocesses

Finally we can break down the process into its various branches and the details within each (Figure 4). Again, we only want to expend the energy to measure the components of the process that warrant our attention. Most of us don’t have time to conduct large-scale research or analysis efforts on our IT services, departments, and processes. We have to be more selective up front. No wasted effort. No wasted data.


Figure 4. Processes, Subprocesses, and Branches

Conclusion

So, you don’t have time to read the entire article, and you’ve jumped to the end hoping for a simple summary of the pertinent points. Well, I aim to please.

When the next economic crisis hits, use it as an opportunity to leapfrog ahead of your peers. Rather than follow the “normal” organizational behavior of making cuts across the board to survive the rough times, be daring and take the risk of championing improvement within your organization. Push for real process improvement that can reap benefits you can measure. Invest in improvement, in your workforce, and in your business.

The key steps for leadership to follow are:

  • Take process improvement seriously. Attack the change effort as you would any other problem. Don’t whitewash the fence if it needs to be torn down and rebuilt.
  • Don’t look for the easy way out – don’t simply cut things to save money. In the long run, the most you can hope for is to not fall too far back. This is the equivalent of playing to “not lose.” Even if you succeed, you won’t “win.”
  • Don’t settle for just tearing down the fence unless the need for it is gone. In other words, you will have to risk improving the large, cumbersome, expensive processes, not just the small, easy ones.
  • Measure before, during, and after the improvement efforts. These efforts should turn your institution’s fate around, creating new jobs, attracting new customers, and increasing revenue. These efforts have to become your top priority. Leadership should be watching progress closely.
  • Expect real improvement. You get what you expect, so expect the best possible outcome. Demand it. Support it. Reward it.

So, rather than striving to survive, take a risk and strive to thrive!

Endnotes
1. Richard Milne and Anousha Sakoui, “Company Crashes Set to Hit Record Next Year,” Financial Times.com, December 7, 2008.
2. Amanda Ruggeri, “Unemployment Rate Jumps to 8.5 Percent in March,” U.S. News and World Report, April 3, 2009.
3. Personal e-mail communication with the author, June 19, 2009.

© 2009 Martin Klubeck. The text of this article is licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 license.

Applying a Metrics Report Card: Evaluating the IT department from the customers’ point of view determines strengths and weaknesses in IT performance

“So, how are you doing?”

If the president of your university asked you this question about the health of the IT department, what would you answer? And, more importantly, how would you know?

Students can check their transcripts at the end of each term to see how they’re doing. The overall GPA quickly communicates overall academic performance, and they can see the grades received for each class.

Called a report card in grade school, the transcript highlights a student’s strengths and weaknesses. Using a grading rubric, the student can determine if he has specific areas of weakness. The grading rubric for each class combines multiple items: tests, quizzes, participation, homework, a final exam, and projects. If a student did poorly in one of these areas, he can determine where he has a problem. Did he do poorly on tests because he didn’t know the material or because he does poorly on tests in general? How did he do on each test? How did he do on tests in other classes?

Grades are not prescriptive—you can’t know what caused the poor test results by looking at the score—they are diagnostic. The complete report card (transcript and breakdown of grades) reflects the student’s academic “health” and helps in identifying problem areas. If the professor shares results of the graded components during the semester, the student can adjust accordingly. The most beneficial aspect of grading is frequent feedback.

The same holds true for simply, clearly, and regularly communicating the health of the IT department to university leadership, the IT membership, and our customers. Providing a report card enables the IT department to check its progress and overall performance—and adjust accordingly. A report card won’t tell us how efficiently the IT department functions, but it gives the insight needed for identifying areas for improvement.

Balanced Scorecard versus Report Card

So, what’s wrong with using the balanced scorecard (BSC) for this purpose? Isn’t the report card the same thing? Not really.

The BSC is based on quadrants for financial, customer, internal business processes, and learning and growth.1 The quadrants can be tailored to align more directly with each organization’s needs, however. Where most explanations of the BSC fall short is in the specific metrics used to develop each view.

The four quadrants are very different. While this gives a wide view of the organization, looking at each quadrant might require also considering the environmental criteria for that quadrant. In addition, unique metrics make up the score for each quadrant, complicating use of the BSC even more.

Instead of the BSC quadrants, the report card is based on four categories derived from how we measure rather than what we measure:

  • Effectiveness—the customer’s view of our organization
  • Efficiency—the business view of our organization
  • Human resources—the worker’s view
  • Visibility—management’s need for more insight into the organization

Each of the four categories provides a different view of the organization. The report card uses the effectiveness category exclusively because it offers the greatest return for the investment required. The organization should strive to reach a state of maturity that allows it to measure all four categories, but effectiveness is an excellent place to start. If you ignore customers and lose their support, it won’t matter how efficient you are, how happy your workers are, or how much insight your management has. Even in the monopolistic academic IT environment, we have to please our customers first and foremost.

The report card lets us look at ourselves through our customers’ eyes, focusing on our services and products. For each key service the organization provides, we receive a grade. That grade is made up of components like any rubric. In the case of effectiveness metrics, the graded components are:

  • Delivery of service (quality of product)
    • Availability
    • Speed
    • Accuracy
  • Usage
  • Customer satisfaction

This grading rubric is a good standard partly because it allows for customization. Each component is worth a portion of the final grade for the service or product. The values can be weighted for each key service, and the aggregate of the grades becomes the organization’s GPA for the term. The organization might be failing at one facet of effectiveness while excelling in others. Even with a decent overall grade, we would know which areas needed attention. The weak area might require working harder, getting additional help, or dropping that service/product completely. It should be an option to drop a service if we are failing at it or if we realize it is not a core part of our business.
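
To make the arithmetic concrete, here is a hypothetical Python sketch of the rollup: each component carries a weight and a score on a 0–4 scale, components combine into a grade per service, and the grades average into the organization’s GPA. The service names, weights, and scores are invented.

    # Hypothetical report-card calculation. component: (weight, score on a 0-4 scale);
    # the weights for each service sum to 1.0.
    services = {
        "Email": {"delivery": (0.5, 3.6), "usage": (0.2, 3.9), "satisfaction": (0.3, 3.2)},
        "Help desk": {"delivery": (0.4, 2.8), "usage": (0.3, 3.5), "satisfaction": (0.3, 2.4)},
    }

    def service_grade(components):
        return sum(weight * score for weight, score in components.values())

    grades = {name: service_grade(parts) for name, parts in services.items()}
    gpa = sum(grades.values()) / len(grades)

    for name, grade in grades.items():
        print(f"{name}: {grade:.2f}")
    print(f"Organizational GPA: {gpa:.2f}")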

An Example of the Report Card Applied

A human resource office (HRO) of a large organization decided to use the report card as a first step in implementing a metrics program. The HRO offers many services, but it is important to identify the credit-earning items, or key services and products. The HRO, with the help of its customers, selected the following as essential:

  • Provide training
  • Counsel on benefits
  • Counsel on hiring
  • Counsel on firing
  • Provide assistance through a help desk

For each of these services, the HRO diligently identified metrics for each graded component. Let’s look closer at one of the services, “provide training.”

Delivery (Quality) of Training

For the key service of providing training, the HRO identified metrics for determining how well it delivered its service. They asked the following questions:

  • Was training available when wanted? When needed? (Availability)
  • Was training delivered in a timely manner? How long did it take to go from identification of the training need to development of the course and actual presentation? (Speed)
  • Was it accurate? How many times did the HRO have to adjust, change, or update the course because it wasn’t done correctly the first time? (Accuracy)

Before going on with the example, it’s worth repeating that these measures are only indicators. In this case, it might not be essential for the HRO to achieve perfection in each training offering. Perhaps it is acceptable to update and improve the offering with each presentation. The “Accuracy” metric is not intended to seek perfection the first time out. Accuracy should be used for identifying how much effort, time, and resources go into reworking a task. This approach offers possible savings in the form of improvements along the way.
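
As a hypothetical illustration of treating accuracy as a rework indicator rather than a perfection score, the short sketch below reports what share of total course effort went into revisions after the first delivery; the hours are invented.

    # Hypothetical effort log for one training course (hours). Invented values.
    initial_development_hours = 80
    revision_hours_after_first_delivery = [6, 4, 10]  # per subsequent offering

    rework_hours = sum(revision_hours_after_first_delivery)
    total_hours = initial_development_hours + rework_hours
    print(f"Rework: {rework_hours} of {total_hours} hours "
          f"({rework_hours / total_hours:.0%})")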

Usage of Training Offerings

For the next measurable area, usage, the HRO asked the following questions:

  • How many potential customers are using other means of training?
  • How many people are using the training? Of the potential audience, how many are attending?
  • How many departments are asking for customized courses?

If no one attends the training, the training is ineffective, no matter how well delivered. The time, money, and effort spent in developing the training were wasted. In essence, usage is the customers’ easiest way of communicating their perceived benefit from the service. Regardless of what they say in customer satisfaction questionnaires, usage sends an empirical message about the value of the service.

Customer Satisfaction with Training

While usage is a good indicator of customer happiness, we still should ask about their satisfaction with our offerings. Why? What if your offering is the only one available? Even given a choice, customers choose services and service providers for more reasons than satisfaction with the given service. Customers might tolerate poor service if other factors make it logical to do so.

For customer satisfaction the HRO asked:

  • How important to you are the training offerings?
  • How satisfied are you with the offerings?
  • Would you recommend our offerings to a coworker?

These questions were all asked in a survey with a numeric scale. We’re deliberately not recommending a specific tool or scale—the important differentiator for the tool you choose is simply belief in the answers. Find a scale that you can believe in.
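
A minimal sketch of rolling up such survey answers follows, assuming a 1-to-5 scale (5 is best) and a handful of invented responses; any scale you believe in would work the same way.

    # Hypothetical survey rollup on an assumed 1-to-5 scale (5 = best).
    responses = [
        {"importance": 5, "satisfaction": 4, "recommend": 5},
        {"importance": 4, "satisfaction": 3, "recommend": 4},
        {"importance": 5, "satisfaction": 2, "recommend": 2},
    ]

    def average(field):
        return sum(r[field] for r in responses) / len(responses)

    for field in ("importance", "satisfaction", "recommend"):
        print(f"Average {field}: {average(field):.1f}")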

A Warning

The three factors of delivery, usage, and customer satisfaction can tell us how effective we are, but many times the recipient of the data doesn’t believe the data are accurate. Management decides the data must be wrong because they do not match preconceived notions of what the answers should be. Management rarely asks to see data to identify the truth; they normally ask to see data to prove they were right in the first place. If (and when) the data don’t match their beliefs, the following accusations might be made:

  • The data must be wrong.
  • The data weren’t collected properly.
  • The wrong people were surveyed.
  • Not enough people were surveyed.
  • The analysis is faulty.

Actually, challenging the validity of the data is a good thing. Challenging the results pushes us to ensure that we collected the data properly, used the proper formulas, analyzed the data correctly, and came to the correct conclusions. Many times, a good leader’s intuition will be right on the mark. But, once we’ve double (and triple) checked the numbers, we have to stand behind our work. If, once the data are proven accurate, management still refuses to accept the results, it might mean the leadership (and the organization) is not ready for a metrics program. This possibility is another good reason to start with effectiveness metrics rather than trying to develop a more comprehensive scorecard.

Conclusion

Balanced scorecards are very useful in helping organize and communicate metrics. The report card takes the BSC approach a step further (by doing less) and simplifies the way we look at our data. Rather than grouping the metrics by type of data, we look at our services in light of the customers’ view of our performance.

The report card communicates the IT organization’s overall GPA, including the grades for each key service or product, and allows drilling down into the rubric to see the graded components. This provides a simple, meaningful, and comprehensive picture of the health of the IT organization from the customers’ viewpoint. It makes a good starting point for a metrics program. It also allows the IT organization to appreciate the value and benefits metrics can provide, especially as a means of communication, before embarking fully into more threatening data.

Endnote
1. Robert S. Kaplan and David P. Norton, “The Balanced Scorecard—Measures That Drive Performance,” Harvard Business Review (January/February 1992), pp. 71–79.

© 2008 Martin Klubeck and Michael Langthorne. The text of this article is licensed under the Creative Commons Attribution-NonCommercial-No Derivative Works 3.0 license (http://creativecommons.org/licenses/by-nc-nd/3.0/).

by Martin Klubeck and Michael Langthorne, Published on Monday, May 5, 2008, EDUCAUSE Quarterly, vol. 31, no. 2 (April–June 2008)

Do-It-Yourself Metrics

Something new is on the horizon, and depending on your role on campus, it might be storm clouds or a cleansing shower. Either way, no matter how hard you try to avoid it, sooner rather than later you will have to deal with metrics.

Metrics don’t have to cause fear and resistance. Metrics can, and should, be a powerful tool for improvement. Before we can intelligently discuss the proper use of metrics, however, we must come to a common understanding of what a metric is and how to create one.

What Is a Metric?

Most metrics are created, collected, and reported to satisfy a leader’s request. The leader’s role is to supply clarity and direction by providing the proper questions. Middle management’s role is to answer the questions. Metrics offer a means of providing the answers so that all involved can have faith in them.

Unfortunately, leaders often don’t know exactly what they want. Chances are you have played the Guessing Game with a leader, where the data you provided wasn’t what he needed, so he asked for different data, figuring he would know the right data when he saw it. Despite repeated failures, you continued to chase data as if all the effort invested in collecting the wrong data would eventually prove worth your perseverance. There is a better way.

Speaking a Common Language

Defining a metric requires a common language. Data, measures, information, and metrics are distinctly different for our purposes. Using an IT help desk as an example, we can demonstrate those differences (a short sketch follows the list):

  • Data: The simplest/lowest unit available. Data represent “raw numbers” and are of little to no use alone.
    • Number of trouble calls
    • Number of employees
  • Measures: A little deeper view that builds on the data. Measures are rarely useful alone.
    • Number of calls per hour
    • Number of cases closed by worker
  • Information: Usually a comparison. This level of abstraction serves as a useful indicator.
    • Number of calls for each hour compared to number of workers on a shift
    • Average length of time to close a case, grouped by type
  • Metric: Tells a complete story. It incorporates information (built of measures and data) to answer a question fully. Normally, a metric is conveyed through a graphical representation and explanatory prose.
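
A hypothetical Python sketch of that help desk rollup shows the chain: raw counts (data) combine into a rate (a measure), the rate is compared with staffing and an assumed capacity (information), and the comparison plus a sentence of prose answers a question (the metric). All numbers are invented.

    # Hypothetical help desk rollup from data to metric. Invented numbers.
    # Data: raw counts for one shift.
    calls_per_hour = [12, 18, 25, 30, 22, 15]
    workers_on_shift = 4

    # Measure: average calls per hour.
    avg_calls_per_hour = sum(calls_per_hour) / len(calls_per_hour)

    # Information: calls per worker per hour, compared with an assumed capacity.
    calls_per_worker = avg_calls_per_hour / workers_on_shift
    assumed_capacity_per_worker = 6.0

    # Metric: a short story answering "Is the shift staffed to meet demand?"
    verdict = "is within" if calls_per_worker <= assumed_capacity_per_worker else "exceeds"
    print(f"An average of {calls_per_worker:.1f} calls per worker per hour {verdict} "
          f"the assumed capacity of {assumed_capacity_per_worker} calls per worker per hour.")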

To understand the relationships among the components that make up a metric, imagine a metric as an oak tree: it has a massive trunk, and the leaves and branches provide shade and comfort. Data are analogous to the leaves on the tree. Numerous and abundant, they are interesting to look at, easy to get, and serve a purpose; but by themselves they are not very useful and will not survive once removed from the branches.

The smallest and thinnest branches represent measures—they provide an essential connection between leaf and tree (between data and the metric), although not substantial or robust enough to create anything on their own. The thicker, inner branches, the limbs, are like information. Useful at times in themselves (for supporting tire swings or tree houses, for example), they die if taken away from the tree. Information without a connection to the trunk will fail to reach its potential.

The trunk of the tree, where you can determine its age and strength, represents the metric—a picture telling a complete story. The metric may consist of many pieces of information, derived from many measures and data. However, even the largest tree will wither and die without roots.

The roots of the tree represent the questions the metric is designed to answer. As with the oak, the roots define the type of tree it will become, where it will live, how strong it will be, and if it will survive a harsh environment. The roots are born of the original seed (need) and spread out, providing a life-giving foundation for the tree. Even if you cut down the tree, the roots will continue to spawn new growth. Unless the root question is no longer necessary (the tree is uprooted), you will continue to need data, measures, information, and metrics to feed the root need.

To aid in the process of properly building our metric tree from the roots up, rather than picking leaves (data) and branches (measures) off the ground trying to create a tree, we use an implementation guide. This straightforward template allows us to focus our energies on the root need.

Creating a Metric

To best use the concept of storytelling, metrics require a level of structure and rigor. The most effective and useful metrics are designed with the end in mind. Focusing your efforts up front ensures that you don’t waste time, money, or good will in collecting inappropriate data or in creating a flawed metric.

The implementation guide helps in the planning, documentation, and implementation of a metric. The guide takes basic components of a metric and lays them out for completion. It also holds the keys to building a successful metric. Continuing the mighty oak analogy, think of the implementation guide as a root stimulator. It consists of the following parts:

Executive-Level Summary: One to two paragraphs about the metric. Although it comes first in the guide you are putting together, we recommend capturing the executive summary content after you have completed the rest of the guide. It should include a definition, summary, and history of the metric (the question and the answer).

Purpose: The most important part of any metric. What question are you trying to answer? What is the root question? Why do you want to tell the story? What do you hope to achieve? This is so important that it is a go/no-go proposition. If the purpose is not clear, stop. If you cannot clearly define the root question and identify how you will use the metric, stop. An extra test is to identify how the metric will not be used. It helps if the person asking the question is open to the possibility that something other than a metric might satisfy the need more effectively or efficiently.

Success Key 1: If you don’t know the purpose of the metric, don’t collect data.
Success Key 2: If you don’t know why you’re collecting data or reporting a metric, stop.

Customer: Several possible, beginning with the “root” customer who provided the root question—the obvious recipient of the final metric. There are other customers for any metric, however. If you have people collecting data, it improves quality and accuracy when they also get to see the fruits of their labor. Don’t forget yourself—if your boss asked the question and you’re in charge of developing the metric, you should be interested in the answers, too.

Graphical Representation: A hard point to grasp for many. Rather than describe the data wanted, ask the customer to describe how they would like to view the story. Focusing on the data instead of the metric puts the emphasis on the answer wanted rather than on the question. Obviously, a focus on the answer biases the building of the metric and what it should explain. This leads to a tainted and limited view, which leads to chasing data. The graphic represents a guess at how to tell the story and at the charts or graphs to use. The graphic can be a trend, Pareto, benchmarking, bar, line, dashboard, or other type of representation.

Success Key 3: Don’t chase data: determine the question regardless of the answer.

Explanation: A prose version of the story the metric tells, explaining how to read the picture. Remember, this is only a guess. If you identified the root question well, the explanation of the answer should be evident.

Schedule: Large steps in the metric’s lifecycle. Will you start collecting data at the beginning of the school year, calendar year, or fiscal year? When will you make reports available? Finally, when will you stop collecting data? Or, asked another way, when will this metric cease to be useful?

Did we surprise you with that one? A metric has a purpose. Its original purpose can change or be overcome by events. A metric is not eternal, although your organization probably collects (and maybe reports) measures no longer used by anyone. The purpose has passed, but alas, the effort continues. Write an expected lifespan in this section. Explain how you’ll know when you can stop collecting and reporting this metric.

Measures: Time, finally, for the leaves and branches. Identify the specific data to be collected and used to develop the metric. Target the lowest-level view of the data. Nothing is eternal—the purpose, question, and data used to create the metric can, and most times should, change.

Collection: Time at last for the processes. Now you can document the processes and procedures used to actually collect the data. Be as detailed as possible—you are creating a guide for the collector to follow. Include the collector (person/role), source of the data, frequency of collecting or reporting of the data, and method of collection. Document the process for collecting the data.

It will help immeasurably if you can collect the data with as little human intervention as possible. Any time you can automate the collection process, do so. Not only do you reduce the risk of human error (inherent in anything humans do), you also minimize bias—intentional and unintentional. The less intervention, the less pain for your busy workforce and the less chance of error creeping into the collection.

Analysis: Now for the story. All the assumptions, constraints, and known flaws around the information go into the story. Document any formulas or mathematical equations needed. You might want to enlist a statistician. Many times the data will tell you how to proceed. If you’ve done the job well and identified the root questions, built a picture, and then worked on the data required, you can now allow the data to dictate—to a degree—what to do next.

Threshold and Target: The range of acceptability or expectation. Any results between the threshold and the target are acceptable. Any results below the threshold dictate further investigation to find out if the causes can be avoided or the processes improved. Any results above the target dictate further investigation to find out if the causes can be replicated or leveraged.

Lessons Learned: Time to get your money’s worth. Document your lessons learned, and plan to visit this section of the implementation guide periodically so that you don’t end up with a metric that outlives its usefulness, draining valuable resources past the need.
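
For teams that like to keep the guide in a structured form, here is a hypothetical Python sketch that mirrors the sections above and includes the threshold/target check; the field names and sample values are assumptions, not a prescribed format.

    # Hypothetical implementation-guide template mirroring the sections above.
    from dataclasses import dataclass, field

    @dataclass
    class MetricGuide:
        executive_summary: str
        purpose: str                    # the root question
        customers: list
        graphical_representation: str   # e.g., "trend chart"
        explanation: str
        schedule: str                   # start, reporting cadence, expected end of life
        measures: list                  # lowest-level data to collect
        collection: str                 # who collects, from where, how often, how
        analysis: str                   # formulas, assumptions, known flaws
        threshold: float                # minimum acceptable result
        target: float                   # expected or desired result
        lessons_learned: list = field(default_factory=list)

        def evaluate(self, result: float) -> str:
            if result < self.threshold:
                return "Below threshold: investigate causes and improve the process."
            if result > self.target:
                return "Above target: investigate causes and try to replicate them."
            return "Within the acceptable range."

    guide = MetricGuide(
        executive_summary="Tracks first-contact resolution for the help desk.",
        purpose="Are customers' problems resolved on first contact?",
        customers=["CIO (root)", "help desk manager", "help desk staff"],
        graphical_representation="monthly trend chart",
        explanation="Higher is better; seasonal dips expected at term start.",
        schedule="Collect from fiscal-year start; report monthly; review need annually.",
        measures=["tickets opened", "tickets closed on first contact"],
        collection="Automated export from the ticketing system, monthly.",
        analysis="first-contact closures / total tickets; exclude spam tickets.",
        threshold=0.70,
        target=0.85,
    )
    print(guide.evaluate(0.78))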

Proper Use of Metrics

Metrics offer a powerful tool for improvement. They can provide a vehicle for communication, insight for planning, and visibility for decision making. A well-thought-out metric can be a valuable asset for attaining goals and predicting the future. Metrics can help determine an organization’s health and whether its products and services align with the organizational mission and vision. In short, metrics can help leaders ensure they are doing the right things, the right way (preferably the first time).

With the power metrics provide comes responsibility. Let’s examine the risks. Metrics (and data to a greater degree) are extremely easy to misuse. Management could try to solve problems prematurely, making decisions without investigation. Someone can look at a metric and decide that it is the “whole truth.” Too often metrics are used to justify a personal agenda instead of answering a question. Misuse will create a culture of distrust, encourage workers to make the data inaccurate, and foster an atmosphere of secrecy and passive-aggressive behavior.

Success Key 4: It is not enough to ensure you don’t misuse data; you must convince everyone involved that you won’t misuse it.

The misuse of data can make all of your well-intentioned efforts worthless. Worse, it can make future communications, trust, and metrics difficult, if not impossible. Each step of the way, ensure that you are a good steward of the information you gather.

Metrics are only a tool, a means to an end, not the goal. Because metrics, like anything derived from information, contain a certain amount of error (variance), the only proper response to a metric is to investigate. Metrics in their fullest glory are still just indicators to help answer a question. You must be careful not to accept the metric without critical review. Before you act, before you make a decision based on a metric, investigate and ensure that it is telling you what you think it is. A metric provides additional information, in a structured format, but it is still just information—not a truth. You must investigate to find the truth and ensure that you make the right decisions.

Of course, some metrics won’t require further investigation, like a metric used to give you an additional level of comfort. Before making any important decisions, however, you should make sure that you are not only answering the question but answering it accurately and truthfully.

Success Key 5: The only valid response to data (or metrics) is to investigate!

Conclusion

We had three goals in writing this article. We wanted to:

  • Introduce the concept of metrics as a form of storytelling.
  • Promote the use of tools, specifically an implementation guide to provide structure and rigor.
  • Raise awareness of the benefits and pitfalls of metrics.

When discussing metrics, remember the mighty oak analogy; it’s an easy and powerful way to explain the differences and relationships that exist among the components that make up a metric. When done properly, a metric tells a complete story and answers one or more questions. Since the question is at the root of the metric, start there and never lose sight of it. (See the sidebar on OIT Metrics.) The goal isn’t to develop the perfect data set or metric; the goal is to answer the question.

Metrics are never an end in and of themselves—they offer a way to focus your investigation. While nothing can guarantee success, the five success keys combined with a fully prepared implementation guide will help you avoid the most dangerous pitfalls.

© 2009 Martin Klubeck. The text of this article is licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 license.