- Measuring cloud services will finally make it obvious that measuring effectiveness is more important than tracking possible efficiencies.
- Obtaining the desired metrics data from cloud providers won’t necessarily be easier than measuring performance of services delivered by the institution.
- In addition to agreeing on definitions for performance measures, IT departments need to establish a benchmark for determining the minimum level of performance expected of cloud service providers.
- Cost, although important, is not the major factor in providing services; customer satisfaction is.
I’ve preached from the top of every soapbox I could find. I’ve conducted full-day seminars. I’ve written numerous articles for EQ.1 Who knew it would take the outsourcing of services to the cloud to finally convince management that effectiveness measures are more important than efficiency measures? Cloud computing, in one simple view, is just outsourcing. For those who choose the cloud, however, measuring the service will finally make obvious the logic of measuring the effectiveness of the service rather than any of the possible efficiencies. How well is the service delivered? How well is the service maintained? In moving services to the cloud, management will not care about the efficiencies realized (or lost) by the provider but about the effectiveness of the services provided to campus customers.
Perhaps the largest driving force behind moving services to the cloud is the perception of various communities that cloud services are “better” and “more reliable” than the current institutionally provided services. I’m not sure this perception reflects reality, but many leaders believe it does. A related and interesting phenomenon I’ve noticed, but can’t prove, is that customers tolerate problems with cloud services better than they do problems with institutionally provided services. Case in point: Google’s e-mail system went down for four hours at the end of August 2009. It seemed to me that customers did not complain much and generally accepted the outage as a relatively minor inconvenience. I doubt that customers of institutionally delivered e-mail services would have been as tolerant of a similar service failure.
This reaction hints that customer expectations change with the service provider. Perhaps one of the reasons customers seem to have tolerance for Google’s service blip is their expectation that Google services will always be available, making it easier for them to brush off the rare (albeit long) downtime as an anomaly. Meanwhile, an outage of institutionally provided e-mail is seen as another indicator of the expected lower level of performance.
Getting Service Data from the Cloud
When I think about existing third-party outsourced service contracts, it seems nearly impossible to:
- Obtain effectiveness metrics like availability, speed to repair, and mean time between failures (MTBF)
- Count on the providers to live up to their service level agreement (SLA) for repairs and replacements
If this reflects what we can expect from outsourcing to the cloud, disappointment lies ahead for those expecting good data on service performance. We should be asking cloud service providers for the same types of metrics we currently collect, analyze, and report for institutionally provided services:
- Availability — measured 24 hours a day, 365 days a year
- Reliability — in the form of MTBF
- Speed — time to respond and time to resolve issues
- Accuracy — defects per _____ (fill in the blank) and rework
- Security — number of vulnerabilities and events
- Customer satisfaction — determined by customer surveys both annually and when trouble arises
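To see how a few of these measures fit together, here is a minimal sketch of computing availability, MTBF, and speed to repair from an outage log. The log entries and the one-year reporting period are purely illustrative, not data from any provider:

```python
from datetime import timedelta

# Hypothetical outage log for one year of a service.
outages = [
    timedelta(hours=4),       # e.g., a four-hour e-mail outage
    timedelta(minutes=30),
    timedelta(minutes=90),
]

period = timedelta(days=365)
downtime = sum(outages, timedelta())

# Availability: percent of the period the service was up.
availability = 100 * (period - downtime) / period
# Reliability: mean time between failures (uptime per failure).
mtbf = (period - downtime) / len(outages)
# Speed: mean time to repair an outage.
mttr = downtime / len(outages)

print(f"Availability: {availability:.2f}%")
print(f"MTBF: about {mtbf.days} days")
print(f"MTTR: {mttr}")
```

Whether the provider reports these numbers, or you must reconstruct them from your own trouble tickets, the definitions need to be agreed on up front, as noted above.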
A major factor to analyze is actual performance compared to customer expectations, which will be a struggle until standard definitions of performance measures exist. Even with agreed-upon definitions, though, we’ll need a benchmark for determining the minimum level of performance expected.
The need for metrics in the IT industry is definitely growing. This need will not lessen if, and when, specific services move to the cloud. If we agree that the informational needs will be the same, the next question is, “How will we use the data obtained?” In the institutionally provided service model, we expect the IT provider to improve the delivery of services so that eventually they meet the customer’s performance expectations. But in the cloud scenario, what happens if the provider doesn’t meet those expectations?
With the outsourcing model, your strongest bargaining chip is that you can change providers. That means you might not want to have a long-term contract that makes it difficult to switch later. The most obvious recourse for a cloud customer likewise is to move to another cloud provider with better performance. But — can you obtain statistics on a cloud provider before making that decision? Furthermore, while being able to switch between providers easily (and regularly) can help you achieve better pricing and sometimes service, it creates a level of instability that your customers might not tolerate. If your provider knows this, you may find your services held hostage — and service metrics hard to come by.
Measuring the Cloud Before and After You Buy
Regardless of the type of cloud computing you plan to use (infrastructure, platform, software as a service, or any combination), the questions of how to measure and what to measure may be secondary to the essential question of when to measure: measure before and after you buy.
Measure Before You Buy
Most providers of cloud computing will have a cache of metrics to show how much savings in person-hours, infrastructure, hardware, and funds you will get by moving to their cloud services. Although most vendors prominently display cost savings, there is more to learn. For example, Google claims “less spam, a 99.9% uptime SLA, and enhanced email security” for its Gmail for Business. Does this mean that Gmail for Business lets through less spam than other services? Less than non-business Gmail? Or less than the system and settings you currently use? (Obviously it can’t be the latter — how would they know?) A 99.9 percent uptime SLA doesn’t mean that the vendor has a record of 99.9 percent uptime, but only that the SLA promises it. And, if you’ve been working with availability measures, you know that 99.9 is a lot different from 99.999 when it comes to e-mail service.
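To put the 99.9-versus-99.999 comparison in concrete terms, the downtime a given uptime percentage permits over a year is simple arithmetic; this quick sketch works it out:

```python
def allowed_downtime_hours(uptime_pct: float, hours_per_year: float = 24 * 365) -> float:
    """Hours of downtime per year permitted by a given uptime percentage."""
    return hours_per_year * (1 - uptime_pct / 100)

# 99.9% ("three nines") vs. 99.999% ("five nines")
print(f"99.9%:   {allowed_downtime_hours(99.9):.2f} hours/year")
print(f"99.999%: {allowed_downtime_hours(99.999) * 60:.2f} minutes/year")
```

A 99.9 percent SLA allows roughly 8.76 hours of downtime per year, while 99.999 percent allows only about 5.3 minutes. A single four-hour outage, by itself, consumes nearly half of the 99.9 percent annual allowance.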
Granted, this is only the hook. I’m sure most vendors of cloud services can and will provide measures to prove the effectiveness of their services. The bottom-line savings is the biggest selling point and the one you can have the most confidence in. You should be able to determine the current cost of providing any of the services you are considering moving to the cloud (though many CIOs will admit difficulty in doing so accurately). You can then easily compare that with the cost of the same services via the cloud. If this is your only criterion, you can make a good, data-driven decision. If you want historical data from the cloud service provider on other factors you care about so that you can make the best choice, you may have to deal with the vagueness caused by a lack of IT performance measurement standards.
The good news is that you can focus on the more important metrics defining the effectiveness of the service. If you outsource a service to the cloud, you won’t care how many hours the service provider puts in, how efficient they are with their resources, or how much rework they create for themselves. The only metrics you should care about are the same ones that your customers have always cared about:
- How well is the service being delivered?
- Is it there when I want it? (Availability)
- Does it do what it is supposed to do? (Accuracy)
- Are issues resolved as fast as I expect? (Speed)
- What support model comes with the service? (Customer Service)
- How much is the service being used? (Cloud computing might be ideal if your contract only charges for what you use.)
- Do customers like the service? (Customer Satisfaction)
- Is the service providing the necessary level of data security? (Security)
These are the same metrics you should be collecting on services you currently provide. While economics may be the driving factor in your consideration of moving to the cloud, the evaluation of how well the service is being delivered is based on its effectiveness, not its cost.
Measure After You Buy
So you’ve decided to go to the cloud and want to demonstrate the value of that investment. Upper-level management will want to see evidence that the move was the right choice. As with the hook for moving in the first place, leadership may be satisfied with seeing a cost/benefit analysis. The bottom line is probably the measure you will use most often. You can show that the institution is saving N dollars monthly, annually, and “wow, look at how much we’ll save over 10 years!” But if cost were the only consideration, you could have moved to outsourcing those services a long time ago.
As time goes on, your customers will begin to weigh in on the value of the investment. That is when you will need effectiveness measures. It is convenient (and logical) that you should measure the same things as you did “Before You Buy” because you want to show the actual value compared to the expected benefits. But there is more.
Other reasons have kept many institutions from moving control of their IT environments off campus. Along with measuring the effectiveness of the cloud services and the cost savings, you will want to measure some other key factors, which might be as nebulous as “confidence,” “trust,” and “friendliness.” Having the service provider on campus has substantial benefits, including: personal recognition, loyalty to the organization, and accountability (especially when it is someone you write a performance review for). These factors might be tougher to measure, but not impossible. While most of the cost savings and effectiveness measures can be obtained from the cloud service provider, these “comfort measures” would not be available, and they require effort on your part — an effort that might be seen as unnecessary. Most leaders will probably be happy with just the bottom-line measures. Some will want to see how well the cloud service provider meets the SLAs. Others will want to ensure their customers receive good service, and they will ask for (and collect) effectiveness measures. This level of analysis should be applauded. It will be rare to find the organization that, after making the decision, wants to go the extra mile to also determine the comfort level of their customers. But, there are great gains possible if we do.
As the overseer/broker/relationship manager of cloud services, IT departments will need data on the effectiveness of those services and assurances that the promises made by cloud providers will be realized. Uptime, customer service, time to resolve issues, and customer satisfaction are just a few of the measures that will determine how well the services are being delivered. Management can focus on how well the customers’ needs are being met and not on how many person-hours it takes to upgrade an application server.
And that’s the point.
From the purely metrics point of view, focused on gathering and analysis, moving to the cloud might be the biggest blessing for an IT department that has come along in years. Rather than chasing the elusive efficiency measure, organizations that ship their services out to the cloud will focus on what have always been the most important measures — effectiveness.
The bottom line for any service has always been effectiveness — how well is the service delivered to the customer? How happy is the customer with the service? While price (cost) matters, it is not the most telling measure.
In fear of being outsourced, many in-house service providers have worried over the cost of providing the service. And while the leadership of the institution may require more information than ever before on the cost, the real bottom line has always been the level of customer satisfaction.
1. See my earlier articles, “Metrics for Trying Times,” EDUCAUSE Quarterly, vol. 32, no. 2 (April-June 2009), “Applying a Metrics Report Card,” EDUCAUSE Quarterly, vol. 31, no. 2 (April-June 2008), and “Do-It-Yourself Metrics,” EDUCAUSE Quarterly, vol. 29, no. 3 (July-September 2006).
© 2009 Martin Klubeck. The text of this article is licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 license.