Hitting the Target may not be good enough

If you’re giving your workers “targets” to achieve each year, chances are you’ll hit the target and miss the mark!

How hard is it to NOT set targets for your workers? For some it’s extremely difficult. The problem is that the worker focuses on the bar you’ve set and not on the underlying purpose. You run the real risk that the worker will achieve the target but miss the point, and worse, not move the organization closer to the vision.

Ensure that the purpose is at the heart of all you do and that the vision is the basis for any goal or target you set. When the question comes down to priorities and reaching a target, the vision should drive the choices, NOT the need to check a box during the annual review.

It’s hard to move forward if you’re always looking behind you

“Never look back unless you are planning to go that way.”

― Henry David Thoreau

You may think that’s a flippant statement…that I’m going to recommend you “never look back” because you shouldn’t be planning to go that way. But I find myself going back many times. Why? To apologize or right a wrong I committed. Yup…I make mistakes.

But for me the secret is that we ALL make mistakes. We all have potential regrets. And these regrets will sidetrack you as effectively as any of the distractions the demon of procrastination has come up with. Regrets eat at us and make us impotent. By impotent I mean lazy, docile, resigned, defeated – unable to move forward.

So, please do go back – to right the wrong.

Remove it from your luggage (NEVER take your regrets as a carry-on!). Other than that, I’m all about not “planning” to go that way…we shouldn’t plan to mess up, or fall on our faces. We shouldn’t plan on being rude, inconsiderate or selfish. We shouldn’t plan on doing things we’ll regret. But when it happens (and it will because we’re human AND we’re pushing some really cool envelopes):

  • Acknowledge it,
  • Own it,
  • Own up to it, and
  • Move on

Remember Your Vision WILL Change the World!

Networking is a Necessity

The way of the world is meeting people through other people.

~ Robert Kerrigan

It’s not only the “way of the world” but it’s one of the major tools at your disposal for getting your vision to come true. That means being a “Connector” – part of your job as a visionary is connecting others…doers with dreamers, dreamers with other dreamers, yourself with anyone who can help your vision come true.

Remember:

  1. Enlist others. You can’t do it alone – your vision is too big for you to make happen on your own.
  2. If what they want to do helps the vision, then get out of their way…or better yet, support them. They don’t have to be doing what you want or what you would do, or the way you’d do it.
  3. Share your vision freely. It should be the answer to the common question, “so what do you do?”
  4. Always look for opportunities to share your vision. If you’re living your vision, more and more opportunities will come your way. Keep your eyes, ears, and heart open to them.

Your job as a visionary is to Share It, Preach It, Champion It, and Connect It.

So, who have you Connected this week? Come on…you should be making new connections (either for yourself or others) every DAY. So, one per week is underachieving.

I ask again, who have you Connected this week?

Remember, Your Vision Won’t Change the World Without Your Help,

Marty

Moving forward requires movement

Our biggest regrets are not for the things we have done but for the things we haven’t done

~ Chad Michael Murray

No Regrets. I never have regrets for trying and failing…only for not trying.

The question isn’t what have you done that you regret…it’s what haven’t you done – because you WILL regret it. “Get ‘Er Done” is a deeper statement than you might think at first look. You know your vision is your calling because when you ignore it for too long, you start to feel guilty. You don’t sleep well. You start lamenting the time you’re wasting…

Yes, wasting.

If you’re not working on your calling, if you’re not living your vision…what are you doing? Vegging on the couch? Playing an App on your smartphone? Reading, watching, or listening to nonsense?

OK, I’m not telling you to avoid all fun, free-time, or down-time. But I AM telling you that if you don’t follow your calling, you will regret it. You’ll eventually die unhappy. You’ll look back and regret not trying, not sharing, and not spreading your vision. You will regret not doing what you were called to do.

We learn from our mistakes…which means we learn by doing. If you are not living your vision, you won’t learn from the missteps, and you’ll wonder if you did all you could.

No need to wonder. You didn’t.

So DO IT.

Remember, Your Vision WILL Change the World!

Marty

A Metrics Workbook

The idea seems simple enough. My publisher wants me to write a companion workbook for “Metrics: How to Improve Key Business Results.”

I thought, “Hey, this should be a win/win.” Not only can it stand as a companion to the book, but I can use it when doing workshops and on-site consulting. It will have to be a good self-help tool for working through the process – with or without a facilitator.

Interested?

The main challenge is I can’t have blank sheets…like a page for “draw your metric picture here.” My editor rightly pointed out that no one wants to pay for blank pages – there’s no better deal than blank notebooks! Of course the other challenge will be wrapped around the length, but that’s my problem.

I’m an “E” (an extrovert), which means I’d rather do anything with someone than alone. Actually, I’d rather do nothing with someone than anything by myself. So, I’m looking for anyone who wants to participate in/beta test my ideas. I will walk you through the metrics development process and use our experience together to create the workbook. As I create the pages, I’ll share them with you (to use and thereby test).

Anyone need help developing metrics out there? Want to try the processes defined in my book? Besides getting a pre-copy of the material that will make up the workbook, you’ll develop metrics that you need, AND I will mention you in the book (if you allow it) when I reference our work for it. I may even use our experience for one of the stories/scenarios.

Send me an email if you’re interested.

So, first come, first served (I can’t imagine I’ll get too many takers, but just in case…I need a disclaimer).

No More Targets!

Seems like this has been the topic du jour for me! I recently presented at the Educause Midwest Regional Conference with my good friend Don Padgett – and we did our best to convince our audience to stop setting targets when using metrics for improvement. It wasn’t a hard sell since most of those in attendance were doers vs. leaders. It seems that those who are tasked with developing, collecting, analyzing and reporting metrics have no problem with this concept.

I wrote a short article for EdTech magazine on the topic (just published): http://www.edtechmagazine.com/higher/article/2013/05/incorporate-metrics-improve-it-service

I’m also finishing up a longer, more in-depth piece for the Cutter Consortium. There are exceptions of course (as there are with all things). I have no problem with people setting targets for themselves (although there are still risks and problems with this). The main point is that we have to stop looking at our metrics and then deciding that they’d be more useful if we set a target!

I’ll write soon about why metrics should NOT be seen as “actionable.” Why we shouldn’t make “data-driven decisions.” Yes, I understand I may be fighting an uphill battle and doomed for failure…but what fun would it be if everyone already agreed?

Feel free to read the article listed above and weigh in with your thoughts!

Analytics: Leaders Asking the Right Questions

On Thursday, April 18th, Don and I were lucky enough to present at the Educause IT Enterprise Leadership conference. We had a great time and met some great people. We only had 45 minutes for the entire presentation, which translates to about 40 minutes max to convey key concepts and ideas around how to do metrics right.

See video clips from the presentation!

Our focus was on how the Leader of an organization can help (or hurt) a metrics effort. The Leader has a great and unique opportunity to not only set an example, but to actually “lead” the organization through the analytics effort. In preparing the list of what we wanted the leader to do, we quickly realized that there were just as many things we wanted them to stop doing.

So we designed our presentation to offer “new norms” for behaviors we had witnessed.  The key points were:

  1. Finding the Root Question instead of building metrics from whatever data you have available.  This was the most critical lesson shared:  http://www.youtube.com/watch?v=0YA_HJfsS7w
  2. When designing your metric, think abstractly first – come up with a picture of what the answer would be before you think charts, graphs, or tables
  3. Being “Data Informed” instead of “Data Driven”
  4. Remember to start (and stay) at the Macro-view instead of getting into the weeds

Here is a 6-minute video summarizing the presentation: http://www.youtube.com/watch?v=__9hOP6ux5I

Hope you enjoy the clips!

Reaching New Heights in Business Performance Metrics: Getting the Right Answers for the Most Important Questions

In the book Metrics: How to Improve Key Business Results, Martin Klubeck introduced a simple, but powerful tool – the Answer Key. While it is simple to use, it offers a much more comprehensive starting point than we in Business Performance Measurement have had up to now. Besides being a template for developing meaningful metrics, it also enables the user to audit an existing set of metrics. The “lock” the Answer Key opens is the root (analytical) question and the treasure behind the lock is the rich set of metrics you can develop.

Evaluating an Existing Metrics Program

When you use the Answer Key to evaluate your current metrics program, you “plug” your current metrics into one of the four quadrants (see Figure 1). To the left of the four quadrants we find “Organizational Information Needs.” This represents a high-level root question such as “What is the organization’s overall driving need?” or “What is the overall health of the organization?” From the root question we can, as a rule, pick one of two tracks – Return vs. Investment (operational) or State of the Organization (strategic). This high-level split leads to more specific organizational health quadrants. The quadrants provide a view of an organization’s health from four distinct and critical perspectives. These four perspectives are the foundation of the Answer Key. Each key stakeholder contributing to an organization’s success has their own perspective:

  1. Product/Service Health (customer view),
  2. Process Health (manager view),
  3. Organizational Health (worker view), and
  4. Future Health (leadership view).

Figure 1. The four quadrants of the Answer Key

Product/Service Health – measures that can determine how well your products and services satisfy your customers. This quadrant represents the viewpoint of your organization’s most critical stakeholders: the customers. Customers are the primary determiner of the organization’s success. The customer may buy the products, use the services, or be the “purpose” of the organization. In the case of a not-for-profit, customers are the recipients of the products or services offered by the organization.

Process Health – measures how well the organization’s processes are working: how efficiently the organization can produce its deliverables or provide its services. This viewpoint is the business’s perspective. Managers take this viewpoint seriously, wanting to use metrics primarily to continuously do more with less. This tendency in itself is not a “bad” thing. Often, to become more effective (improve product/service health), the organization first improves its processes. The problem occurs when the business puts profit ahead of the customer as the real purpose for its existence. If the business believes its purpose is making money and measures success accordingly, then efficiency becomes the problem rather than the solution.

Organizational/Program Health – measures that indicate the health of the workforce. The environment, the workspace, and the all-important culture comprise this essential perspective. This perspective belongs to the workforce which is the heart and soul of the organization. The use of this quadrant brings truth to the expression – our workers are our greatest assets.

Future Health – these measures are focused on the capability of the organization to grow, prosper, and adapt in the future. This perspective involves strategic planning, goal setting and attainment, and programs and projects initiated by the organization. This viewpoint resides with the leadership of the organization.

The simple (and first) test for all measures within these quadrants is to ensure that they are viewed (reported) within the proper perspective. Let’s take availability as an example. If you look at availability from the customer’s point of view (Product/Service Health), you are only concerned with downtime in relation to the customer’s effort to access the service. If no customers attempt to access the service, then downtime is a non-player in the “customer view” quadrant. On the other hand, if you are looking at it from the “business view” (Process Health), then the amount of time down matters, even if the customer isn’t aware of it. If you have a fully redundant system, the amount of time the primary (or secondary) system is down matters to the business viewpoint, while the customer’s viewpoint would not even consider it.
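
To make the difference concrete, here is a minimal sketch (Python, with invented outage numbers) that scores the same month of downtime from both perspectives: the business view counts every minute the system was down, while the customer view counts only the downtime customers actually experienced.

```python
# Hypothetical illustration: the same outage data scored from two perspectives.
# All names and numbers are invented for the example.

MINUTES_IN_MONTH = 30 * 24 * 60  # 43,200 minutes

# Outages: (minutes down, minutes of that outage during which customers
#           actually attempted to use the service)
outages = [
    (120, 0),   # 2 a.m. maintenance window slips; nobody tried to log in
    (30, 30),   # mid-morning failure; customers were actively affected
]

total_downtime = sum(down for down, _ in outages)
customer_impacting_downtime = sum(felt for _, felt in outages)

# Business view (Process Health): every minute down counts.
business_availability = 1 - total_downtime / MINUTES_IN_MONTH

# Customer view (Product/Service Health): only downtime the customer
# experienced counts against the service.
customer_availability = 1 - customer_impacting_downtime / MINUTES_IN_MONTH

print(f"Business-view availability: {business_availability:.3%}")   # ~99.653%
print(f"Customer-view availability: {customer_availability:.3%}")   # ~99.931%
```

Same data, two different answers – which is exactly why each measure has to be reported within its proper quadrant.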

Let’s get back to assessing your current program using the Answer Key. For every metric, we look at the components of a robust program: Alignment and Comprehension. Are your measures aligned consistently with your root question? Are your measures (which comprise your metric) providing a complete answer? Are they telling the full story?

Find the perspective your metric fits within. If you have a high-level root question you may have more than one metric. Plug in what you have. Then move to the measures you collect and report for the metric. Map only one root question at a time. If your root question has more than one metric, you should map one metric at a time. Granted, this is a simplification of the effort it will take to move along the Answer Key, but the key provides a map which you should find extremely helpful.

Figure 2. The Answer Key, including measures

Plug in the measures that you have collected. Then perform each of the following audit checks:

  1. Alignment. Do all the measures flow cleanly back to the root question? Or are your measures spread among the four quadrants in a haphazard fashion?
  2. Comprehension. Does the summation of your measures provide a complete answer to the question? This includes checking the rule of triangulation (three or more measures making up a single viewpoint).

If you don’t have a metric, plug in the measures. You can still do the two checks for Alignment and Comprehension. You will need to determine (it is highly recommended) whether you have a metric – or just some disparate measures. If you believe you have a metric (a complete story for one or more perspectives), check and see if there is a root question you are answering. Perhaps it has just never been documented. If you don’t have metrics or measures, but you’ve been collecting (and perhaps analyzing) and reporting data, you are missing the context necessary to tie the data back to your root questions. You will spend most of your time “chasing data” instead of working toward solid answers to your organizational needs.
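
As a rough sketch of how the two audit checks might be mechanized (the metric names, quadrant assignments, and counts below are invented for illustration), you could flag a root question whose measures are scattered across quadrants, or whose viewpoints are backed by fewer than the three measures triangulation calls for:

```python
# Hypothetical audit of measures plugged into the Answer Key quadrants.
# Quadrant names follow the article; the questions and measures are invented.

# root question -> list of (measure name, quadrant it was plugged into)
metrics_program = {
    "How well do we satisfy our customers?": [
        ("customer satisfaction survey", "Product/Service Health"),
        ("time to resolve incidents",    "Product/Service Health"),
        ("service availability",         "Product/Service Health"),
    ],
    "How efficient is our delivery process?": [
        ("cost per ticket",        "Process Health"),
        ("employee turnover rate", "Organizational Health"),  # misaligned measure
    ],
}

for question, measures in metrics_program.items():
    quadrants_used = {quadrant for _, quadrant in measures}

    # Alignment: do all measures flow back to a single perspective for this question?
    aligned = len(quadrants_used) == 1

    # Triangulation (part of comprehension): three or more measures per viewpoint.
    triangulated = all(
        sum(1 for _, q in measures if q == quad) >= 3 for quad in quadrants_used
    )

    print(question)
    print(f"  aligned:      {aligned}")
    print(f"  triangulated: {triangulated}")
```

A script can only check the mechanics; whether the measures, taken together, actually answer the root question is still a judgment call.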

Creating Metrics from Scratch

There are many errors that can occur when we try to develop a comprehensive program of performance measures. Two prevalent ones are inadequate sampling (settling for readily available data instead of the data we need) and misaligned sampling (drawing data from disparate areas so that our story is disjointed and incomplete).

Both pitfalls are easily avoided by using a performance measurement matrix. Again, the “Answer Key” provides a simple, easy-to-follow tool for ensuring alignment, triangulation, and completeness of the measures you use to build a metric program. A very simple definition of triangulation is the use of multiple measures, multiple methods for collecting and analyzing those measures, and multiple sources, so that the picture painted by your metric is not dependent on only one viewpoint.

Performance measurement for a business, regardless of the size of the organization, falls into four distinct categories:

  1. Customer Viewpoint
  2. Business (manager’s) Viewpoint
  3. Workers’ Viewpoint
  4. Leadership’s Viewpoint

For each of these perspectives, we can measure different components of the organization’s overall health.

Customer Viewpoint

We examine the organization’s ability to deliver products and services to the customer. The question driving these measures is, “how effective are we at satisfying our customers’ needs?” We call this group of measures “effectiveness” measures.

Business Viewpoint

We examine the organization’s ability to perform. Specifically, how healthy is the organization in performing the processes necessary to deliver products and services? These measures are more commonly referred to as “efficiency” metrics.

Worker Viewpoint

How well does the organization take care of itself? These views can be used to analyze the health of a large organization or an individual. They measure effectiveness, efficiency, and overall health. Your personal health, like the health of the organization, will help determine how well you deliver services and products and how well you perform the processes to do so.

Leadership Viewpoint

Finally, we look at the organization’s potential for growth. Will the organization thrive, or is it stuck in a rut, simply trying to survive? What are the plans for the future? Are we investing properly to meet anticipated and imminent challenges? How well is the organization prepared for the future?

Figure 3. Viewpoints vs. Health

Where to Start?

It is normally best for an organization (especially one just starting to use metrics) to focus initially on the first quadrant – the Customer Viewpoint – and avoid working within the Business Viewpoint. While the workforce can understand and support measures around customer satisfaction, using measures around efficiency requires a level of trust many “immature organizations” lack. Since no business can survive without customers, it is a logical place to begin. Indeed, the workforce can more easily understand the customer’s viewpoint, and it is usually a safer (less contentious) place to start.

Another good starting point is the Worker Viewpoint. Leaders often claim that “our workers are our greatest asset.” By starting with the Worker Viewpoint, a leader affirms this statement. Besides directly checking on how well you please your customers, ensuring that you care for your workers is the next most important thing you can do to ensure success.

The key in this process is to help the organization identify root questions that drive the metrics. These root questions can be (literally or metaphorically) placed on the Answer Key (wherever they belong), giving you a clear picture of the type of measures you will need to provide the answer.

The Answer Key can help you check the quality of your work and ensure that you’re on the right track. And if (or when) you get stumped and you don’t know which direction to go, it can help you get on track.

Most metrics you design, when placed on the Answer Key, will start at the third tier and belong to one of four viewpoints:

  • The customer’s viewpoint (effectiveness)
  • The business’s viewpoint (efficiency)
  • The workers’ viewpoint
  • The leadership’s viewpoint

As you move from left to right on the Answer Key, you will transition from the strategic to the tactical aspects of Organizational Performance Measurement. Another way to look at this is that you move from the root question toward data that should answer it.

Regardless of where your metric (or root question) falls, you’ll have to move to the right to find the measures and data you need to answer the question. At the fourth tier you address:

  • Return vs. Investment
    • Product/Service Health – Customer View
    • Process Health – Business View
  • State of the Organization
    • Organizational Health – Employee View
    • Future Health – Leadership View

This tier is easily the most frequently used. It is far enough to the left that root questions starting here are worthy of metrics to answer, and far enough to the right that most organizations can readily see how to use them to improve the organization. In the fifth (and any subsequent) tier, we find mostly information and measures. If we find our root question residing here, the question is probably very tactical and may not require the upper or leftmost tiers of the matrix to be sufficiently answered. When building a Metric Development Plan, you should flesh out the metric by identifying not only the information and measures, but also documenting the individual data points needed. Please note, the Metric Development Plan is another concept that is discussed in depth in the book.

Conclusion

The reason the Answer Key leads off the practical part of the book is that it is a great shortcut tool for you to implement metrics. It helps you put your root question into a context of organizational health. If you are forced to work without a root question, it can be used to ensure you are not trying to blend incongruous measures, as well as to help you work toward a driving need.

Working from the left and moving right, you go from the strategic level to the tactical. The Answer Key will help you identify possible information and measures you can use to answer your root question.

Working from right to left, you can work from specific measures (or even data) back toward a driving need. Working in this direction will also help you ensure that your metrics are logically grouped and organized as well as aligned with your organization’s strategy. You normally don’t want information from different areas (product/service, process, organizational, or future health) mixed together into one metric, as they would not be answering a single question (unless your question sits at the highest, most strategic levels of the Answer Key).

Remember that the Answer Key, while a useful shortcut, is still only a tool for helping you develop your metrics. It’s not the whole answer, and it doesn’t relieve you of the need to follow the model for developing metrics (including the identification of the Root Question).

The Answer Key, or Performance Management Framework as it is starting to be called, is not the only useful concept introduced and discussed in the book, Metrics: How to Improve Key Business Results, by Martin Klubeck. There are many other pearls contained within its pages, including the Root Question, Triangulation, Expectations, and several real-world examples of their usage. We invite you to read the book and use its tools and concepts to build powerful, well-adopted metrics that can inspire your organization, company, or program to thrive, increase performance, and build more value within.

——-

Authors: Martin Klubeck is a strategy and planning consultant at the University of Notre Dame and a recognized expert in the field of practical metrics. He holds a master’s degree from Webster University in human resources development and a bachelor’s in computer science from Chapman University. He is coauthor of Why Organizations Struggle So Hard to Improve So Little and numerous articles on metrics. His passion for simplifying the complex has led to the development of a simple system for developing meaningful metrics which is captured in his new book, Metrics: How to Improve Key Business Results. Klubeck is also the founder of the Consortium for the Establishment of Information Technology Performance Standards, a nonprofit organization focused on providing much-needed standards for measures.

Russ Cheesman is a senior information technology professional and consultant with experience in all phases of the System Development Life Cycle. Much of his career has been devoted to enabling IT solutions for business problems and opportunities. He has served as an IT manager and practitioner in many industry sectors, including banking/financial, manufacturing, construction, retail, pharmaceutical, telecommunications, and health care. In recent years, Mr. Cheesman has been practicing business performance measurement and management within several IT and health care organizations through the use of business strategy, balanced scorecards, metrics, key performance indicators, and business analytical systems.

© 2009 Martin Klubeck. The text of this article is licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 license.

On Expectations and Metrics

Key Takeaways

  • The concept of “expectations” as a replacement for traditionally used “targets” and “stretch goals” will not be universally accepted.
  • The frequent misuse of targets/stretch goals makes them a poor motivator for people’s behavior and performance, while expectations shift the focus to processes.
  • Customers, not managers, set expectations for an organization’s performance.
  • Metrics, when used to inform, become a means for continuous improvement, and using expectations to frame the use of metrics allows them to achieve their full potential.

Using “targets” and “stretch goals” for analyzing your performance metrics is not only outdated, it’s a bad idea.

Not everyone agrees with me, though. For the past five years, in numerous online discussion groups, I have offered the concept of “expectations” as a replacement for the more traditional (some would say “time-tested”) targets and stretch goals. I have also presented the concept in seminars and presentations on metrics, and it has met with decidedly mixed reviews. When I propose the use of expectations to performance measurement and performance management experts, I generally receive cautious rebuttals. The idea of not driving behavior through the careful collection, analysis, and reporting of data goes against the accepted paradigm. In contrast, many others support the concept of expectations and are ready to drop targets and stretch goals. The make-up of these two groups is telling: those who believe metrics should be used to manage behavior, and those who believe they should be used to provide insights for improvement.

Perhaps I should start at the beginning. What are “stretch goals,” “targets,” and “expectations” and what impact do they have on applying metrics in your organization?

Stretch Goals and Targets

Goals are a great tool for improvement, on the personal or organizational level. Well-defined goals can give direction and purpose. They can be used to motivate and recognize exceptional effort and performance. Goals are the basis for any strategic plan and the foundation for almost every significant improvement.

Stretch goals are a little different. Stretch goals, when used properly, are simply goals that require us to push ourselves beyond our comfort zones. Stretch goals can be used to encourage greater and greater effort. When stretch goals are misused, the goal setter does very little to reward those who achieve the goals, believing that the goals must not have been challenging enough. If the worker achieves the goal, then the goal setter picks a tougher, more challenging goal next time. Instead of building a healthy leader/follower (manager/worker) relationship, the abuser uses stretch goals to manipulate the worker. Even with proper recognition (a rarity in cases of misuse), the worker quickly figures out that with each achievement, the difficulty level for the goals is raised. There isn’t an overarching vision to which the goals lead — instead, the stretch goals are the only thing visible.

Targets, like stretch goals, can be a good thing, especially if set with the worker’s input and realistically determined to be what the organization should achieve. The performance management experts rightly argue that targets are a great tool for managing performance and effort. When misused, however, targets become a tool for manipulating the workforce instead of communicating the desired level of performance. As a target is reached, the target-setter moves the target a little further along the continuum. This “moving” target creates an environment of distrust because the workers quickly realize that the target is arbitrary and will move upon successful completion. Rarely is meeting the target rewarded.

Imagine the perception of the worker who, when given a target or stretch goal, decides to earn her boss’s recognition by achieving the mark as fast as possible. She works extra hours, neglects her own concerns, and perhaps even improves some processes to achieve the mark months before the “deadline.” Instead of the accolades she anticipated, her boss seems a bit disappointed in his own inability to adequately challenge her. Obviously he set the bar too low. Instead of the reward she hoped for, she receives a tougher set of targets and stretch goals for the next cycle. How long will it take her to become disenchanted with this work environment?

Expectations

Expectations reflect what the customer expects from your organization. The customer in this case can be an internal customer or an external end user. Expectations are set by the customer, often in service level agreements. They work best when they represent a range.

Rather than being something to achieve, expectations are measures of what is. You meet, exceed, or fail to meet expectations. Of course you could argue that you can achieve, fail to achieve, or exceed the target (or stretch goals), and you’d be correct. But, most targets and stretch goals are a single point — either you reach it or you don’t — while expectations use a range. Expectations are also harder to misuse. They are not a tool for manipulation or encouragement; instead, they tell an organization how well it is doing. Conceptually this moves managers from trying to push workers to continuously produce more, produce faster, or produce for less to pushing them to meet customer expectations.
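
A minimal sketch of that difference, using an invented response-time measure: a target is a single point you either hit or miss, while an expectation is a range, and anything outside the range (in either direction) is simply an anomaly to investigate.

```python
# Hypothetical example: classifying a measured value against an expectation range
# rather than a single-point target. The numbers are invented.

def against_target(value: float, target: float) -> str:
    """Single-point target: you hit it or you don't."""
    return "achieved" if value <= target else "missed"

def against_expectation(value: float, low: float, high: float) -> str:
    """Expectation range: inside the range is normal; outside it is an anomaly
    worth investigating, whether above or below."""
    if value < low:
        return "exceeds expectations - investigate (anomaly)"
    if value > high:
        return "fails to meet expectations - investigate (anomaly)"
    return "meets expectations"

# Average time to resolve an incident, in hours (lower is better here).
measured = 2.5
print(against_target(measured, target=4.0))               # "achieved"
print(against_expectation(measured, low=3.0, high=6.0))   # anomaly: faster than normal
```

With the target, the 2.5-hour result is simply a win to celebrate; with the expectation, it raises a question: did a process change, or did someone work themselves ragged to get there?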

Expectations and Norms

Expectations have a strong relationship with “norms.” Service norms can help establish customer expectations. The best example I’ve found is my own expectation for how fast “fast food” really is. My town has two well-known hamburger fast-food restaurants. One is considerably slower than the other. My expectations for each differ. If the slower one was so slow that it was “unacceptable” to me, I would stop buying from them. As it stands, I go to the one which has what I crave at the time (their menus are slightly different, of course).  But if I’m in a rush, I go to the faster one. My expectations for each are built on my experience of their norms. The same will be true of your customers’ view of your services. Their expectations will be influenced by their experience with you, but if you fall well below what they accept as normal, chances are you will lose their business.

The usual argument I hear against expectations is that customers will say they expect perfection or immediate service, or 100 percent uptime. In practice I’ve never found this to be true. Customers, for the most part, are very understanding and realistic.

The performance management/measurement experts who champion targets and stretch goals posit that when used properly (and with the full and honest collaboration of workers), they are excellent tools for improvement. I don’t disagree. But, they can be — and frequently are — misused. They also tend to miss valuable insights about the health of the service provided. You can achieve targets and stretch goals and not know if you are satisfying your customers’ needs. Expectations provide direct insight into that and share that insight with your workers (building self-worth and pride in the work), management (providing insight into how well the work is being performed), and customers (demonstrating the value of the organization).

Another way expectations differ from targets or stretch goals is in their use over the long term.

Expectations as a Long-Term Tool

Expectations are first and foremost a means for tracking and communicating how well the organization is performing — from the (internal or external) customers’ point of view. Expectations also reveal how improvement efforts affect the organization’s performance. Unlike a target or stretch goal, expectations reflect long-term improvements.

Expectations are not modified unless there is a change:

  1. To processes or environment that makes performance consistently different. This avoids changes based on “working harder” and reflects “working smarter.”
  2. In resources, whether more resources (new hires) or better resources (new tools, upgraded equipment, or more skilled workers).
  3. In a customer’s expectations; this occurs only if performance has changed or if a competitor’s norm is way out of sync with yours. This will be evident in feedback, direct (surveys or comments made) or indirect (customers will switch to your competitor).

Anomalies

These changes are long-term rather than event-driven. If you find that you’ve exceeded expectations, you don’t raise the expectations for the next cycle. Exceeding expectations is not in itself a good thing, any more than failing to meet expectations is inherently a bad thing. So, if they are not good or bad, what are they?

When performance falls outside expectations — by either exceeding or failing to meet them — you have “anomalies,” which are focus points for further investigation. If you have metrics for each of the services you offer, how do you know which ones to spend time on? How do you use the metrics and your limited resources (especially your time) wisely?

Expectations tell you where to focus your time and energy. Not on “fixing” a below-expectations result or celebrating an above-expectations one, but on investigating to determine causes. The true goal of a service organization is not to exceed expectations, but to consistently meet expectations. The idea is to institutionalize processes that lead to healthy services.

The idea of continuous improvement is a good one, but not when it is defined as the continuous raising of the bar or constantly moving the target. Continuous improvement only works when you have repeatable, stable, and controlled processes. These improvements might be reflected in higher quality deliverables, faster speeds, or greater functionality, but they can also result in efficiencies the customer does not see and that give the organization a “rebate” on the resources required to provide the services.

Recap

Goals: something to achieve.

Stretch goals: by nature, something to achieve with extra effort. This extra effort can be super-human or result from improving underlying processes. Like targets, usually used as motivators.

Targets: something you aim for. Depending on how close you get to hitting it (or exceeding it), you can react differently. Best when set with the person involved. They are a tool for motivating improvement.

Expectations: used to inform and identify when there may be catalysts (internal or external to the processes) that are creating anomalies. This information indicates things needing investigation.

If you fail to meet, meet, or exceed a target or goal, you might investigate, but chances are you’ll only chastise (fail to meet) or celebrate (meet or exceed). Since targets and goals are seen as more of a motivational tool, reactions to them are distinctly different from reactions to expectations. Expectations reveal the health of your services/products and therefore your processes. This is especially useful when the expectations reflect the customers’ viewpoint.

Bottom Line

Expectations are heavily tied to how the process performs under normal circumstances. Unless the norm differs significantly from a competitor’s, the norm will strongly influence what the customer expects. Norms are not based on preference but on historical, measurable performance. Using expectations separates the reality of the processes’ expected performance levels from an organization’s hopes, dreams, or demands. This objective view allows the organization to look at the processes for delivering its services/products and determine:

  1. Whether you want to improve or modify them
  2. The presence of indicators that the process is not performing as well as expected
  3. The presence of indicators that the process is performing better than expected
  4. A baseline for measuring how changes affect the process (positively or negatively)

Perhaps the most important difference expectations bring to the party is putting the focus on the process and not on the people.

The concept of using metrics to drive behavior and motivate increased performance is a time-honored, management-preferred, and totally bad idea. Metrics should not be used to push, shove, pull, motivate, or manipulate. Metrics are best when used to inform. Used wisely, metrics become a means for improvement. Using expectations to frame the use of your metrics allows them to achieve their full potential as a tool.

Published on Wednesday, June 6, 2012

Martin Klubeck is a Strategy and Planning Consultant in the Office of Information Technologies at the University of Notre Dame.

© 2009 Martin Klubeck. The text of this article is licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 license.

Measuring the Cloud

Key Takeaways

  • Measuring cloud services will finally make it obvious that measuring effectiveness is more important than tracking possible efficiencies.
  • Obtaining the desired metrics data from cloud providers won’t necessarily be easier than measuring performance of services delivered by the institution.
  • In addition to agreeing on definitions for performance measures, IT departments need to establish a benchmark for determining the minimum level of performance expected of cloud service providers.
  • Cost, although important, is not the major factor in providing services; customer satisfaction is.

I’ve preached from the top of every soapbox I could find. I’ve conducted full-day seminars. I’ve written numerous articles for EQ.1 Who knew it would take the outsourcing of services to the cloud to finally convince management that effectiveness measures are more important than efficiency measures? Cloud computing, in one simple view, is just outsourcing. For those who choose the cloud, however, measuring the service will finally make obvious the logic of measuring the effectiveness of the service rather than any of the possible efficiencies. How well is the service delivered? How well is the service maintained? In moving services to the cloud, management will not care about the efficiencies realized (or lost) by the provider but about the effectiveness of the services provided to campus customers.

Perceptions

Perhaps the largest driving force behind moving services to the cloud is the perception of various communities that cloud services are “better” and “more reliable” than the current institutionally provided services. I’m not sure this perception reflects reality, but many leaders believe it does. A related and interesting phenomenon I’ve noticed, but can’t prove, is that customers tolerate problems with cloud services better than they do problems with institutionally provided services. Case in point: Google had an outage of their e-mail system for four hours at the end of August 2009. It seemed to me that customers did not complain much and generally accepted the outage as a relatively minor inconvenience. I doubt that customers of institutionally delivered e-mail services would have been as tolerant of a similar service failure.

This reaction hints that customer expectations change with the service provider. Perhaps one of the reasons customers seem to have tolerance for Google’s service blip is their expectation that Google services will always be available, making it easier for them to brush off the rare (albeit long) downtime as an anomaly. Meanwhile, an outage of institutionally provided e-mail is seen as another indicator of the expected lower level of performance.

Getting Service Data from the Cloud

When I think about existing third-party outsourced service contracts, it seems nearly impossible to:

  1. Obtain effectiveness metrics like availability, speed to repair, and mean time between failure (MTBF)
  2. Count on the providers to live up to their service level agreement (SLA) for repairs and replacements

If this reflects what we can expect from outsourcing to the cloud, disappointment lies ahead for those expecting good data on service performance. We should be asking cloud service providers for the same types of metrics we currently collect, analyze, and report for institutionally provided services:

  • Availability — on a 24-hour, 365 days-a-year calendar
  • Reliability — in the form of MTBF
  • Speed — time to respond and time to resolve issues
  • Accuracy — defects per _____ (fill in the blank) and rework
  • Security — number of vulnerabilities and events
  • Customer satisfaction — determined by customer surveys both annually and when trouble arises

A major factor to analyze is the actual performance compared to customer expectations, which will be a struggle until standard definitions of performance measures exist. Even with agreed upon definitions, though, we’ll need a benchmark for determining the minimum level of performance expected.

The need for metrics in the IT industry is definitely growing. This need will not lessen if, and when, specific services move to the cloud. If we agree that the informational needs will be the same, the next question is, “How will we use the data obtained?” In the institutionally provided service model, we expect the IT provider to improve the delivery of services so that eventually they meet the customer’s performance expectations. But in the cloud scenario, what happens if the provider doesn’t meet those expectations?

With the outsourcing model, your strongest bargaining chip is that you can change providers. That means you might not want to have a long-term contract that makes it difficult to switch later. The most obvious recourse for a cloud customer likewise is to move to another cloud provider with better performance. But — can you obtain statistics on a cloud provider before making that decision? Furthermore, while being able to switch between providers easily (and regularly) can help you achieve better pricing and sometimes service, it creates a level of instability that your customers might not tolerate. If your provider knows this, you may find your services held hostage — and service metrics hard to come by.

Measuring the Cloud Before and After You Buy

Regardless of the type of cloud computing you plan to use (infrastructure, platform, software as a service, or any combination), the questions of how to measure and what to measure may be secondary to the essential question of when to measure: measure before and after you buy.

Measure Before You Buy

Most providers of cloud computing will have a cache of metrics to show how much savings in person-hours, infrastructure, hardware, and funds you will get by moving to their cloud services. Although most vendors prominently display cost savings, there is more to learn. For example, Google claims “less spam, a 99.9% uptime SLA, and enhanced email security” for its Gmail for Business. Does this mean that Gmail for Business lets through less spam than other services? Less than non-business Gmail? Or less than the system and settings you currently use? (Obviously it can’t be the latter — how would they know?) A 99.9 percent uptime SLA doesn’t mean that the vendor has a record of 99.9 percent uptime, but that the SLA offers it. And, if you’ve been working with availability measures, you know that 99.9 is a lot different than 99.999 when it comes to e-mail service.
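
To see how different those numbers really are, a quick back-of-the-envelope calculation (the uptime percentages are the only inputs) converts each SLA figure into allowed downtime per year:

```python
# Allowed downtime per year implied by an uptime percentage.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

for uptime in (0.999, 0.9999, 0.99999):
    downtime_hours = (1 - uptime) * HOURS_PER_YEAR
    print(f"{uptime:.3%} uptime -> {downtime_hours:.2f} hours "
          f"({downtime_hours * 60:.1f} minutes) of downtime per year")

# 99.900% uptime -> 8.76 hours (525.6 minutes) of downtime per year
# 99.990% uptime -> 0.88 hours (52.6 minutes) of downtime per year
# 99.999% uptime -> 0.09 hours (5.3 minutes) of downtime per year
```

Three nines allows roughly a full working day of outage each year; five nines allows about five minutes.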

Granted, this is only the hook. I’m sure most vendors of cloud services can and will provide measures to prove the effectiveness of their services. The bottom-line savings is the biggest selling point and the one you can have the most confidence in. You should be able to determine the current cost of providing any of the services you are considering moving to the cloud (though many CIOs will admit difficulty in doing so accurately). You can then easily compare that to the costs via the cloud. If this is your only criterion, you can make a good, data-driven decision. If you want historical data from the cloud service provider on other factors you care about so that you can make the best choice, you may have to deal with the vagueness caused by a lack of IT performance measurement standards.

The good news is that you can focus on the more important metrics defining the effectiveness of the service. If you outsource a service to the cloud, you won’t care how many hours the service provider puts in, how efficient they are with their resources, or how much rework they create for themselves. The only metrics you should care about are the same ones that your customers have always cared about:

  • How well is the service being delivered?
    • Is it there when I want it? (Availability)
    • Does it do what it is supposed to do? (Accuracy)
    • Are issues resolved as fast as I expect? (Speed)
    • What support model comes with the service? (Customer Service)
  • How much is the service being used? (Cloud computing might be ideal if your contract only charges for what you use.)
  • Do customers like the service? (Customer Satisfaction)
  • Is the service providing the necessary level of data security?

These are the same metrics you should be collecting on services you currently provide. While economics may be the driving factor in your consideration of moving to the cloud, the evaluation of how well the service is being delivered is based on its effectiveness, not its cost.

Measure After You Buy

So you’ve decided to go to the cloud and want to demonstrate the value of that investment. Upper level management will want to see evidence that the move was the right choice. As with the hook for moving in the first place, leadership may be satisfied with seeing a cost/benefit analysis. The bottom line is probably the measure you will use most often. You can show that the institution is saving N dollars monthly, annually, and “wow, look at how much we’ll save over 10 years!” But if cost were the only consideration, you could have moved to outsourcing those services a long time ago.

As time goes on, your customers will begin to weigh in on the value of the investment. That is when you will need effectiveness measures. It is convenient (and logical) that you should measure the same things as you did “Before You Buy” because you want to show the actual value compared to the expected benefits. But there is more.

Other reasons have kept many institutions from moving control of their IT environments off campus. Along with measuring the effectiveness of the cloud services and the cost savings, you will want to measure some other key factors, which might be as nebulous as “confidence,” “trust,” and “friendliness.” Having the service provider on campus has substantial benefits, including personal recognition, loyalty to the organization, and accountability (especially when it is someone you write a performance review for). These factors might be tougher to measure, but not impossible. While most of the cost savings and effectiveness measures can be obtained from the cloud service provider, these “comfort measures” would not be available, and they require effort on your part — an effort that might be seen as unnecessary. Most leaders will probably be happy with just the bottom-line measures. Some will want to see how well the cloud service provider meets the SLAs. Others will want to ensure their customers receive good service, and they will ask for (and collect) effectiveness measures. This level of analysis should be applauded. It will be rare to find the organization that, after making the decision, wants to go the extra mile to also determine the comfort level of their customers. But there are great gains possible if we do.

Summary

As the overseer/broker/relationship manager of cloud services, IT departments will need data on the effectiveness of those services and assurances that the promises made by cloud providers will be realized. Uptime, customer service, time to resolve issues, and customer satisfaction are just a few of the measures that will determine how well the services are being delivered. Management can focus on how well the customers’ needs are being met and not on how many man-hours it takes to upgrade an application server.

And that’s the point.

From a purely metrics point of view, focused on gathering and analysis, moving to the cloud might be the biggest blessing to come along for an IT department in years. Rather than chasing the elusive efficiency measure, organizations that ship their services out to the cloud will focus on what has always been the most important measure — effectiveness.

The bottom line for any service has always been effectiveness — how well is the service delivered to the customer? How happy is the customer with the service? While price (cost) matters, it is not the most telling measure.

In fear of being outsourced, many in-house service providers have worried over the cost of providing the service. And while the leadership of the institution may require more information than ever before on the cost, the real bottom line has always been the level of customer satisfaction.

Endnote
1. See my earlier articles: “Metrics for Trying Times,” EDUCAUSE Quarterly, vol. 32, no. 2 (April-June 2009); “Applying a Metrics Report Card,” EDUCAUSE Quarterly, vol. 31, no. 2 (April-June 2008); and “Do-It-Yourself Metrics,” EDUCAUSE Quarterly, vol. 29, no. 3 (July-September 2006).

© 2009 Martin Klubeck. The text of this article is licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 license.