A: Controlling the cost of storage growth is at the top of nearly every CIO's to-do list because, quite frankly, many organizations do a rather poor job of forecasting and planning for storage growth. Resolving that concern is, in and of itself, a major task, but other factors also contribute to the cost of storage growth, both directly and indirectly, such as the absence of any concrete storage strategy and the inability of many organizations to consolidate and standardize their storage infrastructure.
The first step in getting a handle on controlling storage costs is to establish an understanding of what is driving the need for storage. Oftentimes those that provide storage resources within an organization are not the primary consumers of it. There are customers, clients, and internal partners (consumers) that depend upon the storage infrastructure to get their work done. How well you understand what their work entails and how it impacts storage is the first key to getting a handle on storage growth. Work to ensure that a clear understanding is developed that covers:
Understanding how and why storage is used for a particular consumer will help you to better plan and prepare for future needs. Assuming a customer-centric approach is vital in protecting the overall storage investment and managing growth because through it you can:
In addition to the benefits of developing a relationship with a storage consumer, planning for storage also allows for a measure of relief in being able to link storage growth to business growth. If business grows faster than expected and storage demands increase proportionately, you can clearly demonstrate why storage must be expanded—it is clearly driven by the business need. This also allows storage engineers, architects, and managers to make more informed decisions about how much more storage to put into play and escape from the routine of "it's at capacity" guesswork.
In the 19th century, Helmuth Karl Bernhard von Moltke, who later became Chief of the Prussian General Staff, was credited with saying "Plans are nothing, but planning is everything!" He was also known for having said that "No battle plan survives contact with the enemy." So why, then, do we endeavor to create a storage (or any other IT) strategy? In a word: awareness.
Developing a storage strategy creates situational awareness and provides a common point of focus and orientation for a team. Question 3.4 covered some of the components of a successful information management strategy. These components apply here as well. The Kalpič, Pandza, Bernus model provides a solid foundation to develop a storage strategy (and really any IT strategy) because it takes into consideration the following four key areas.
Strategic identity creation includes the development of a mission, vision, and strategic intent as well as identification of core capabilities, resources, and competencies. Who are you (as a storage provider)? What is your mission? What are your own strategic intentions, or, in other words, from a direct storage and technology point of view, what strategy do you desire to follow? This could include a desire to standardize SAN solutions to a single provider or to outsource desktop backups within the next 3 to 5 years. During this step, you will also identify all of those items that are "core" to your group and, perhaps, evaluate whether they really need to be.
As touched on in the previous topic, strategic analysis focuses on industry foresight development, product and market competencies identification, and new competencies identification. This would be on a broad scale. In the realm of storage strategy, this may specifically include analysis of:
Once the analysis is complete, the next step is to articulate the strategy, evaluate the strategy, elaborate on the strategy, and execute it. This equates to documenting the overall strategic plan, measuring and evaluating it against the need, and elaborating on any ambiguous area prior to executing the plan. For example, if your strategic development has led your organization to standardize on a particular SAN architecture, this is the point in time where all the communication takes place to ensure that all parties involved in storage procurement, application development, infrastructure, and business understand that direction, why it's taking place, and what process exists for an exception (if any are permitted). The strategic analysis portion of developing the overall strategic plan should have clearly demonstrated that this direction is for the common good of the organization. Understanding that the common good of the entire organization is rarely the common good for every interest in the organization is the kind of tact and concern that must go into articulating and elaborating on the strategy in order to persuade individual interests to the best interest of the organization.
Herein lies the difference between developing a plan and planning. Strategic reformulation is the process whereby the overall strategy is re-evaluated and re-designed on a regular basis. Developing a strategic plan is a continuous process that will need to be revisited and recycled as often as the needs of the business demand. Technology, on average, can be forecast out 3 to 5 years. This number, however, may be driven more by the length of an average equipment lease than by the technology itself. Assuming 3 to 5 years as a planning cycle, it would be advisable to revisit the overall strategy at least every 6 months and to begin the strategic planning process anew at least every year.
In the whole of IT, it is important to understand that standardization is your friend and variation is your enemy—but complexity should not be feared. Once that statement is accepted your organization has taken its first step towards realizing the benefits of consolidation and standardization.
Question 2.6 discussed the steps that can be undertaken to consolidate storage management and identified the four stages of consolidation. The top three stages can directly apply to storage consolidation. To recap they are:
The most cost-effective step to implement is usually logical consolidation. For some organizations, this might simply mean the purchase and implementation of software capable of simplifying the management of all storage assets under one system. Centralized consolidation involves the co-location of physical storage devices and is usually more costly to implement than unified management, depending, of course, on the size and complexity of the storage infrastructure. If your organization, for example, currently has data stored sporadically across the infrastructure, consolidating the data to one, or a few, data centers will simplify the management of the data within that physical infrastructure. Physical consolidation, which oftentimes is realized hand-in-hand with centralized consolidation, is an easy entry point for standardization efforts, as it involves the simplification and standardization of an infrastructure, often to a single platform.
Standardization, to a large degree, is required for civilization. If we couldn't all agree on a common language when speaking to one another, or a common monetary system when buying and selling, we would live in chaos. These two examples, however, were chosen with purpose, because although we may standardize on a local scale, there are broader options to be considered as well. That which works best for us may not always work best for others, so there is, to a large degree, a balance that must exist between standardization and diversity, whether we are considering languages and money or storage solutions.
There are many options in the storage arena on which to standardize, and you may find that there is no one-size-fits-all solution. We need a compromise that fulfills the needs of the business in the most efficient and direct way possible. We then need to ensure that everyone uses the standard system. Some of the benefits of standardization include:
A: Best practices in storage procurement should equate to more than just getting the best price for your storage solution at the point of sale. Best practices should also equate to a manageable solution that aligns with business, technology, and storage strategies and results in a lower total cost of ownership (TCO) throughout the life of the solution.
To accomplish this goal, the first stage in procurement must be to define the required benefit to be derived from the procurement. What is it that you are trying to accomplish? The answer can be quite complex but it must be clearly defined in order to move forward and should be considered a critical defining point in the procurement process. For example, a new application system may require 80GB of online storage. Once you understand the required benefit, you can begin to compare products, services, and solutions offerings to the presented need and your own storage strategy. The next step is to match the benefit to actual use so as to be certain not to overspend for unneeded resources. Using the storage example, it may be foolish to purchase an extra TB of SAN storage to meet an 80GB application need that can be met by a local Direct Access Storage Device (DASD).
Good procurement is conducted in a fair and relatively transparent manner. This will allow potential vendors to bid in an open and unassuming environment. Treating vendors as fair and equal partners in the procurement process will result in lasting, healthy relationships with your key vendors and respect for your organization within the market. Organizations that refuse to play fair within the market can lead vendors to take anti-competitive steps in their solutions offerings, which often yield little or no benefit to either the organization or the vendor.
The use of brand or product names inhibits competition and cost savings by locking procurement choices into a limited scope. Unless there is a specific reason why you must choose a single brand, and only that brand, try to keep your specifications as vendor neutral as possible. This will encourage vendors to offer innovative products at competitive prices.
A procurement strategy is essential in ensuring the success of your procurement program, and it should be developed in such a way as to be complementary to the business and IT strategies of your organization. The goal of a procurement strategy is quite simply to develop a plan that will drive out maximum procurement benefits. This strategy should be developed through close partnerships with internal stakeholders (those making the purchase requests) to understand their needs and to predefine categories of products and services required by each stakeholder. Each category should be assigned both a market champion, who will act as a subject matter expert (SME) on procurement within that category, and a risk profile that clearly outlines the risk associated with the category. For example, if a category has been created for "Storage Devices," a member of the storage engineering team may be assigned as the SME for that category and its associated risk profile may state that storage devices must comply with internal standards (by listing internal reference documents) and perhaps outside regulations.
The concepts of transparency and separation of duties are both central concerns of procurement operations, and both are all too often overridden by projects that have spun out of control. Indeed, one of the most difficult jobs in procurement is to prevent the chaos generated by a project from becoming the chaos of procurement; these two concepts can help prevent that.
Ensuring transparency essentially means that any information related to any procurement should be readily available for consultation, and the individuals responsible for the procurement should be able to provide any additional information as required. Transparency is a key concept that should govern any procurement action. To ensure transparency, many organizations have implemented procurement committees to address procurement actions and award contracts to the most responsive bidder. Members of a procurement committee need not be permanently seated; a committee can be convened as needed depending on the nature of the goods or services to be procured, the frequency of the procurement, and the technical competence of the staff involved.
The principle of separation of duties also reinforces transparency of the procurement process. Separation of duties in the procurement process is important to maintaining financial control. It simply states that no one individual in the procurement process should be granted the ability to request goods and services, write the specifications for those goods and services, solicit bids or proposals for them, and also award contracts and payment for the goods and services. Separation of duties is a challenge for both small and large organizations, especially when the procurement process is being forced to meet project deadlines. To prevent separation of duties from becoming a concern, you might choose to implement some sort of physical or logical control in the procurement process.
A: The major difference between a Request for Information (RFI) and a Request for Proposal (RFP) is in the scope. RFIs are broad documents meant to get a feel for the capability of a business or its products and services to meet the needs of your organization. An RFP, however, is a request for a bid on a specific product or service offering to be provided by the vendor. Both have their specific uses and play an important part in the procurement process; both can positively impact cost management by increasing your awareness of the capabilities of various vendors (how far they can stretch to meet your needs) and by driving competition.
An RFI is a business process whose purpose is to collect information about the capabilities of various suppliers for comparative purposes. To be fair to the vendor, a good introduction of your organization should be included with the RFI; the introduction should cover the basics of who you are and what you're looking for. An RFI can be used to gather information such as business background and capabilities.
It is important to capture the background of your vendors. Who runs the business and who the key decision makers are can be equally as important as what kinds of products and services are being delivered. Although it doesn't happen very often, it would not be without precedent for a vendor executive to close one failing business and reopen it under a different name.
Key items to request in the business background information section include:
Ask the vendor specific questions regarding their capabilities. Feel free to ask them how many customers they have and how many fall into your specific demographic. If they are a SAN storage provider, for example, you might want to ask how much storage they have brought online in the past 6, 12, or 24 months. The important thing is to pose questions that will either qualify or disqualify the vendor. Key items to request in the capabilities information section include:
An RFI affords you the opportunity to ask probing questions, so be creative. Asking a potential vendor about their competition, for example, can reveal a great deal of information about how they envision their role in the market place. Even though a particular vendor may have a product or service you need, it may not be their niche.
An RFI is, in many ways, like an interview. Use this as a tool to interrogate prospective vendors for their potential to serve the needs of your organization. Unlike an RFP, an RFI is not an invitation to bid and is not binding for either the buyer or sellers.
An RFP is an invitation for suppliers, through a defined bidding process, to bid to provide a specific product or service. It is important to remember that there is often a direct correlation between the amount of detail provided to outside vendors during the RFP process and the resulting cost accuracy. The lesson here is that by being as detailed and specific as possible when requesting proposals, you can help to ensure accurate cost forecasting.
RFPs often include very specific and highly detailed information such as the following:
Through the use of RFI and RFP documents, you can first evaluate outside vendors as potential suitors for your IT investments, then encourage competition through specific proposal requests. The first serves to protect your investment by helping to ensure the party with which you're doing business is not only reputable but also capable and the second takes the most capable vendors and gives them an opportunity to compete for your business.
A: The Greek letter sigma (σ) is used in mathematics to represent standard deviation. Six Sigma (often represented as 6σ) is a business improvement methodology that was originally developed by Motorola in the 1980s to improve processes by eliminating defects. The concept of a "defect" is central to this methodology and is defined as any unit that is not a member of the intended population. If, for example, a file needs to be restored from a backup copy and the restoration process works flawlessly 99 out of 100 times, that 1 time would be the defect. Defects in relation to the population are usually represented as Defects per Million Opportunities (DPMO); Six Sigma, in the mathematical sense, represents, essentially, a rate of 3.4 DPMO or 99.9997 percent. The Six Sigma methodology, however, is about much more than just measuring defects and is used in today's business world to refer to various strategies, processes, and tools employed to deliver results to an organization's bottom line by defining what needs to be measured, measuring what needs to be analyzed, and analyzing that which needs to be improved.
Six Sigma has two methodologies: DMAIC, which is used to improve existing business processes, and DMADV, which is used to create new product or process designs. DMAIC as a process consists of five phases:
DMADV as a process consists of five phases:
Both processes can assist in reducing storage costs by establishing a process for design and improvement that is based upon a clear understanding, measurement, and analysis of the need. Defining that need is often gathered through an exercise known as Voice of the Customer (VOC).
Every organization has customers, either internal or external to the organization itself. For example, if your department provides computer and networking services to the other major divisions within your organization, such as Sales, Marketing, and Human Resources, those divisions are, in many ways, your "customers." To provide the best possible service, your organization will need to gather some form of feedback, or "voice," from these customers as a starting point for improvement. This customer-centric approach is aimed not only at gathering the need but also at creating the best possible customer experience, which will in turn drive customer loyalty. If, for example, your customer informs you that response times on an important business application are slower than desired, this "voice" can then be used as a starting point for an improvement effort. As this effort progresses and your division begins to interact with the customer and fully understand their need, that need will often be recognized as a legitimate concern for the well-being of the customer. It's this kind of communication that drives customer loyalty and will pay dividends in the long run.
A good Six Sigma project will never have a predetermined outcome; after all, if you already know the solution, you really should just go and fix the problem. As an example, consider an improvement effort around reducing server downtime using the DMAIC approach.
During the define phase, the project team will be formed and clear direction will be given on the scope of the effort. In this example scenario, the project team may consist of the line of business partner (or customer) that uses the server, the infrastructure partner that provides for network connectivity to the server, the team that supports the server, and any application support team that may need to be involved.
At a minimum, the following deliverables and checkpoints should be met before proceeding to the measure phase:
Once all of these points have been met, you should have a team of qualified individuals who have a clearly defined customer impact to address and are prepared to move forward together.
During the measure phase, the key measurements are identified and some form of data collection method is established. It is also in this phase that you capture the first sigma level for the process you want to improve. In the case of server downtime, there are a number of ways to measure overall downtime. If your organization is a 24/7 operation, the methodology used to gather downtime will differ from that of an organization that operates only 8 hours a day or only during certain days of the week. For example, if your organization operates 24/7, then every 30 days the server has 43,200 minutes (or opportunities) to be down, assuming the measurement you choose for downtime is minutes. If your organization operates only 8 hours per day, 5 days a week, the server has only 9,600 opportunities in 30 days to be down. The saving grace of the Six Sigma process is that defects are always measured per million opportunities (not per x number of days or any other measure of time). Thus, although your measurements may be gathered per hour, day, or month, you'll need to normalize that measurement to DPMO to get an accurate representation:
DPMO = (Total Defects / Total Opportunities) * 1,000,000
If the server operating 24/7 had 10 minutes of unexpected downtime (10 defects out of 43,200 opportunities), this would equate to 231 DPMO, or a yield of 99.98 percent uptime. Sounds pretty good, right? Well, not quite. As good as that sounds, a DPMO of 231 is equal to about a sigma level of 5.00, which isn't quite Six Sigma quality. To reach Six Sigma, you would want to reach a DPMO of 3.4, which, for this example, would mean less than 9 seconds of downtime per 30 days. If your servers aren't quite at the Six Sigma level just yet, don't panic; it can be done with the right team and the proper focus on decreasing downtime. If, however, you're of the opinion that Six Sigma cannot be reached for server downtime, by all means do panic, because your competitors are very likely working towards this goal or something close to it, depending upon the anticipated return on investment (ROI).
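The downtime arithmetic above can be sketched in a few lines of Python. The function names are my own for illustration; the DPMO-to-sigma conversion uses the conventional 1.5-sigma shift between long-term defect rates and short-term sigma levels.

```python
from statistics import NormalDist

def dpmo(defects: float, opportunities: float) -> float:
    """Defects per Million Opportunities, per the formula above."""
    return defects / opportunities * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """Approximate short-term sigma level for a long-term DPMO,
    applying the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# The 24/7 server example: 10 minutes down out of 43,200 minutes in 30 days.
d = dpmo(10, 43_200)
print(round(d))                  # 231 DPMO
print(round(sigma_level(d), 2))  # about 5.0, short of Six Sigma (3.4 DPMO)
```

Running the same conversion on 3.4 DPMO returns a sigma level of 6.0, which is a quick sanity check that the shift convention is applied consistently.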
At a minimum, the following deliverables and checkpoints should be met before proceeding to the analyze phase:
It is important to note that some projects end at this point. There may, for example, be a perception offered by a VOC that leads an organization to believe a problem exists that is larger than it actually is. Once the problem is measured, however, a clearly documented analysis may show that the problem is not as bad as it first appeared. For example, in the example case of server downtime, if your organization has a tolerance for a sigma level of 5.00 or 5.5, it may not be worth the time spent to improve upon the problem. Even if this is the case, however, it may be prudent to at least complete the analysis phase simply to ensure that the problem is clearly understood.
During the analyze phase, the data collected is, of course, analyzed, and an active search is underway for the root cause of the issue. Using the downtime example, during the analyze phase an in-depth look into server and network activity during the time of the outages may be undertaken to identify the underlying impact factor.
At a minimum, the following deliverables and checkpoints should be met before proceeding to the improve phase:
Once you understand what is causing the problem and what you have to gain by fixing the problem, you can move forward together to improve the situation.
Those who really enjoy problem solving live for the chance to be a part of a team during an improve effort because it is here, during improve, that problem solutions are developed, tested, selected, and put into place.
At a minimum, the following deliverables and checkpoints should be met before proceeding to the control phase:
Reaching the end of the improve phase is a monumental step in the Six Sigma process. So far, your team has defined a problem, measured the problem, analyzed its nature, and implemented a solution that will hopefully maintain a higher state of quality within your environment—but that is never guaranteed. To ensure that all this work isn't forgotten in 6 months or a year, the final phase of the DMAIC process, control, will build in some safeguards to protect the investment of time, effort, and money to maintain the improved state.
The purpose of the control phase is to ensure that control is exhibited over the improvement. At a minimum, the following deliverables and checkpoints should be met during the control phase:
It's important to remember that Six Sigma is a tool and any tool is only as good as the hand that wields it into action. With the proper team and direction, Six Sigma can yield great dividends and has for companies such as Motorola, 3M, AMD, Caterpillar, and Bank of America.
A: Increasing the return on investment (ROI) of storage costs and maximizing technology investments is a bit of a numbers game, the crux of which, like many things, is the organization's understanding of the numbers, how they're calculated, and, more importantly, where the perceived value is derived.
ROI is the ratio of money gained or lost on an investment relative to the amount of money invested. There are arguably as many different ways to calculate ROI as there are people using the term, so it is important to grasp the basics. There are three basic calculations that have been used:
ROI can also be represented over multiple years by expanding the formula. The equation for a 3-year ROI, for example, might be:
(Benefits in year 1 / (1 + discount rate) + Benefits in year 2 / (1 + discount rate)^2 + Benefits in year 3 / (1 + discount rate)^3) / Costs
Thus, if the initial cost for your new storage infrastructure was $10,000, your annual benefits minus annual costs are constant at $5000 for the next 3 years, and the discount rate is 10%, your 3-year ROI would be:
($5,000 / (1 + .1) + $5,000 / (1 + .1)^2 + $5,000 / (1 + .1)^3)/$10,000 = 124%
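The multi-year calculation is easy to generalize to any number of years. A minimal Python sketch (the function name is illustrative, not from the text):

```python
def discounted_roi(costs: float, annual_benefits: list[float], rate: float) -> float:
    """Multi-year ROI: each year's net benefit is discounted back to
    present value, and the total is divided by the up-front cost."""
    present_value = sum(
        benefit / (1 + rate) ** year
        for year, benefit in enumerate(annual_benefits, start=1)
    )
    return present_value / costs

# The example above: $10,000 infrastructure, $5,000 net benefit per year
# for 3 years, 10% discount rate.
roi = discounted_roi(10_000, [5_000, 5_000, 5_000], 0.10)
print(f"{roi:.0%}")  # 124%
```

Passing a fourth or fifth year of benefits into the list extends the same formula without changing the function.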
Increasing the return on storage and technology investments begins with a solid understanding of what is being invested and what value is being returned on that investment. The value of a storage infrastructure is not simply in its capacity to store data or information; rather, it should be represented as a value in relation to an underlying business process. Although a storage engineer, architect, administrator, or manager might look at any given storage media and calculate its ROI based upon cost per megabyte, gigabyte, or terabyte, the real value lies in what the storage technology returns to the storage consumer.
For example, if a consumer of storage resources within your infrastructure has an application that is responsible for placing orders on a stock market, and that application requires very fast (low latency) response times from the storage infrastructure in order to complete transactions and log them appropriately, then the impact of storage latency on storage value is a direct contributing factor to ROI. After all, what good is $0.15/GB of storage if it's so slow that your stock traders are losing money waiting on storage? Storage response time saved nearly always equates to personnel time saved and personnel time saved nearly always outweighs the actual cost of the storage infrastructure.
Once you have established the basis for ROI and a focus on where the value should be determined (within the business), the next step is to maximize the ROI of storage investments. This is, of course, easier said than done, but there are a few things you can do to aim in the proper direction:
A: No matter what return on investment (ROI) formula you choose, by reducing your total cost of ownership (TCO) you are reducing your costs and thereby increasing your ROI. Reducing the overall TCO will require focus and a solid understanding of what exactly a TCO analysis is in the storage space and how this analysis can then, in turn, be used to reduce TCO.
TCO is a financial estimate designed to assess the direct and indirect costs related to the purchase of any capital investment, such as storage devices. A TCO assessment is a process whereby all financial aspects of ownership are evaluated, and it usually takes into consideration, at a minimum, the costs related to:
Taking each of these aspects of ownership into consideration is the first step in reducing their overall impact. Oftentimes an organization will find that they failed to measure a potential impact to ownership that ties into the TCO of storage, such as the cost of decommissioning or the cost to ensure adequate security for the lifetime of the environment.
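As a rough sketch of how these aspects of ownership roll up into a single TCO figure, consider the summary below. The cost categories and dollar amounts are illustrative assumptions, not figures from the text; the point is that acquisition is only one slice of the total, and easily-forgotten items such as security and decommissioning still move the number.

```python
# Illustrative TCO roll-up for a storage purchase over its lifetime.
# All category names and amounts are assumptions for the sketch.
tco_items = {
    "acquisition":     100_000,  # purchase price of the storage devices
    "operation":        45_000,  # power, cooling, floor space
    "administration":   60_000,  # staff time to manage the environment
    "maintenance":      30_000,  # support contracts and repairs
    "security":         15_000,  # securing the environment for its lifetime
    "decommissioning":  10_000,  # the often-forgotten end-of-life cost
}

total_tco = sum(tco_items.values())
print(f"Total cost of ownership: ${total_tco:,}")
for item, cost in tco_items.items():
    print(f"  {item:<16} {cost / total_tco:6.1%}")
```

Under these assumed figures, the purchase price is well under half of the total, which is exactly why a TCO assessment that measures only acquisition understates the real cost of ownership.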
To reduce TCO:
A: Utility storage is a business model whereby storage resources are provided on an on-demand and pay-per-use basis. This is the same concept and model on which utility computing is based and differs from the conventional model in that storage consumers do not have to invest in owning the entire storage infrastructure in order to take advantage of it during their time of peak need.
It is important to note that utility computing is really just beginning to gain solid traction in the industry because it aligns so neatly with the concept of Service Oriented Architecture (SOA). SOA began as a software architecture that defined the use of loosely coupled software services to support the requirements of business processes and software users. Today, however, the term is also used in the realm of service delivery management to define business services and operating models that provide a structure for IT to deliver against actual business requirements and adapt in step with the business. For example, a complete document management system that can be accessed by multiple line-of-business applications for any number of purposes (from a business perspective) is an example of SOA in infrastructure. Utility storage represents just this kind of extension of SOA and may yield benefits in reducing capital and operating expenses.
As an example, let's focus on an organization that has three internal storage consumers. These storage consumers receive storage as a service that they pay for by the actual capacity used, and each one needs an average of 10TB of data on a daily basis. Additionally, these storage consumers may need to flex their storage by an additional 2TB each during peak usage, which may occur at different times for each consumer. Rather than purchasing 36TB of storage to meet the average plus peak utilization for each consumer and charging them the full price that your organization needs to recover, you could instead partner with a Storage Service Provider (SSP). An SSP is a company that provides storage as if it were a utility. The concept, though not original, is appealing in that it allows organizations to pay for only what they need, often at a rate less than what they could achieve on their own. After all, few among us can generate our own electricity cheaper than our local utility can provide it. Utility storage offers an alternative to purchasing and administering your own storage infrastructure at a rate that is typically much less than an organization can achieve independently. The only large drawbacks to date are that there are very few SSPs competing in the marketplace and that the future of storage as a commodity is ambiguous at best.
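The arithmetic behind the three-consumer example can be sketched as follows. The per-TB monthly rates are hypothetical assumptions chosen only to illustrate the comparison; real SSP pricing varies widely.

```python
# Owning peak capacity vs. paying an SSP per TB actually used.
# The per-TB monthly rates below are hypothetical assumptions.
CONSUMERS = 3
AVG_TB, PEAK_TB = 10, 2

OWN_RATE = 1_000  # assumed monthly cost per TB to own and operate in-house
SSP_RATE = 900    # assumed SSP pay-per-use rate per TB

# Owning means provisioning average + peak for every consumer up front,
# whether or not the peak capacity is ever used.
owned_tb = CONSUMERS * (AVG_TB + PEAK_TB)  # 36 TB purchased
own_cost = owned_tb * OWN_RATE

# Utility storage bills only the capacity actually consumed; in a typical
# month that is the combined average usage.
used_tb = CONSUMERS * AVG_TB               # 30 TB in a typical month
ssp_cost = used_tb * SSP_RATE

print(f"Own 36 TB:       ${own_cost:,}/month")
print(f"SSP, pay-per-use: ${ssp_cost:,}/month")
```

Even in a peak month, when one consumer flexes up by 2TB, the pay-per-use bill under these assumed rates stays below the cost of owning the full 36TB, which is the core economic appeal of the utility model.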