Virtualization and Service Automation

Virtualization and Service Management

Without the right integration into your enterprise's business processes, virtualization is little more than technology hype.

It is likely you've read the news stories and seen the promise that virtualization brings to the enterprise data center. In just a few short years, the idea of virtualization and the business benefits it brings have spread to virtually every facet of IT. Whether consolidating servers, individual desktops, the network itself, your critical storage, or the applications that drive your data processing, virtualization is the hot topic throughout Information Technology (IT) today. Virtualization's role within the enterprise organization promises a host of easily recognizable benefits:

  • Reduced cost for power and cooling. More workloads per hardware chassis means fewer servers to power and keep cool.
  • Right-sizing of resources for workloads. Yesterday's data center best practices recommended similar hardware configurations wherever possible for cost savings, yet that practice left hardware resources wastefully underutilized by light workloads. Virtualization allows resources to be matched to the workload itself.
  • Elimination of legacy hardware. Data centers are collections of hardware that was procured for various projects over time, yet maintaining that legacy hardware over extended periods grows to become a liability. Virtualization provides a mechanism to retain the service atop the hardware platform while eliminating the hardware itself.
  • Reduced total cost of ownership. Improvements in the ability to spin up new services and manage those that already exist reduce IT's management cost of doing business. These savings arrive through significant improvements in the speed with which common IT tasks can be accomplished.
  • Enhanced agility. Deploying new services and scaling those that already exist both become faster once virtualized, thanks to virtualization's intrinsic ability to rapidly replicate configurations across devices and environments.

Although all of these purported benefits are valid, the smart enterprise recognizes that virtualization's promise is only truly achieved when virtualization augments the processing of business. Alone, and without the right indicators in place, virtualization becomes yet another in a long string of technologies that improve the lives of individual IT administrators yet don't demonstrably impact the bottom line.

The intent of this guide is to assist the smart enterprise with understanding virtualization's fit into the rest of the IT environment. A major part of that fit is in aligning the promise of virtualization technology with the automation benefits associated with virtualization management. What you'll find in reading this guide is that notwithstanding the technologies and technological improvements virtualization brings to the table, there is also a set of management enhancements that arrives with it. Those enhancements are a function of the levels of automation that naturally bundle with the move to virtualization.

In this guide, we'll explore those elements of automation from the perspective of IT automation frameworks, specifically focusing on those framed by the IT Infrastructure Library (ITIL) version 3. This guide isn't intended to teach you the fundamentals of ITIL, nor is it necessarily intended to fit virtualization technologies into this process framework. It is, however, intended to use that existing framework as a guidepost for explaining how virtualization and service automation join to improve the fulfillment of needs for IT's customers.

What Is Virtualization?

Any discussion of virtualization's benefits needs to start with a definition of virtualization. Setting aside for now any discussion of virtualization's technologies—a discussion left to the next section—virtualization at its core involves adding layers of abstraction to the IT environment. These layers of abstraction decouple one layer of the IT system from another. This is perhaps best explained through a series of examples:

  • Hardware virtualization adds a layer of abstraction between individual operating systems (OSs) and the physical hardware to which they're normally installed. Figure 1.1 shows a graphical representation of this separation. With hardware virtualization, the layer of abstraction allows more than one OS to simultaneously interact with one set of hardware.
  • OS virtualization, also called container-based virtualization, adds its layer of abstraction on top of an existing OS. By positioning this layer of abstraction at a slightly higher level, individual OS instances that are based on, yet separated from, the core instance can be created from what was formerly a single instance. The result is that multiple OS instances that leverage the same source consume less resource overhead.
  • Application virtualization moves the layer of abstraction even higher, separating the individual applications from the host where they would typically be installed. This process enhances the installation, management, update, and removal of applications by eliminating their direct tie into OSs.
  • Network virtualization, similar to hardware virtualization, adds its layer of abstraction atop the physical network infrastructure. With network virtualization, one physical network can be broken into multiple virtual networks. Alternatively, software can be used within virtualization environments to create virtual networks that connect virtual machines without the use of traditional network devices.
  • Storage virtualization adds its layer of abstraction atop traditional storage hardware. The layer of abstraction here enables advanced management functionality such as combining the storage capacity of multiple devices to present a single, logical storage device to hardware. It can also enhance the provisioning, backup, and disaster recovery of storage through replication and snapshotting capabilities.

Figure 1.1: With hardware virtualization, the layer of abstraction exists between physical hardware and virtual machines.

The commonality among all these types of virtualization is that in their implementation, the aforementioned layer of abstraction is created between two disparate elements of the IT system. In implementing technologies that bring about this layer of abstraction, the IT environment immediately gains a number of workflow benefits:

  • Flexibility. Leveraging a layer of abstraction means that elements on top of that layer are dependent on the interfaces of the abstraction layer as opposed to those structures below it. In the case of physical hardware, this means that an OS instance no longer needs special configuration based on the hardware to which it is installed. In the case of network or storage, the layer of abstraction enables complex management activities to occur without requiring costly and time-consuming physical reconfigurations.
  • Transience. Virtualization removes the reliance on the infrastructure below the layer of abstraction, making it much easier to replicate configurations or even entire instances between devices. As an example with network virtualization, the use of virtual networks enables administrators to easily extend and expand the network on the fly when necessary as its entire configuration resides in software as opposed to being based on its physical composition.
  • Commonality. Virtualization of all types reduces or eliminates the need for many of the low-level customizations traditionally required with physical systems. Individual instances of IT elements such as applications, OSs, and network or storage configurations instead start with a common "template" configuration, which can be quickly replicated to wherever the demands of the business require. It further reduces complexity across the IT environment: when IT elements start with a common basis, that commonality aids incident and problem management activities as well as configuration management activities, reducing the variance in configurations across device classes (a minimal sketch of this templating idea follows this list).
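
To make the commonality tenet concrete, the following minimal Python sketch shows how a single "golden" template definition might be cloned into per-service instances, with only a small set of fields varying per instance. The template structure and field names are illustrative assumptions, not tied to any particular virtualization product.

  import copy

  # A hypothetical "golden" template: the common basis every instance starts from.
  GOLDEN_TEMPLATE = {
      "os": "Windows Server 2008",
      "vcpus": 2,
      "memory_mb": 4096,
      "disk_gb": 40,
      "patch_baseline": "2008-06",
      "monitoring_agent": True,
  }

  def clone_from_template(hostname, ip_address, overrides=None):
      """Create an instance definition by copying the template and
      applying only the per-instance differences."""
      instance = copy.deepcopy(GOLDEN_TEMPLATE)
      instance.update({"hostname": hostname, "ip_address": ip_address})
      instance.update(overrides or {})
      return instance

  # Two instances differ only where the business requires it; everything else
  # stays identical, which simplifies incident, problem, and configuration management.
  web01 = clone_from_template("web01", "10.0.1.11")
  db01 = clone_from_template("db01", "10.0.1.21", {"memory_mb": 8192})

Because every instance can be traced back to the same template, checking for configuration drift reduces to comparing each instance against the template rather than against every other instance.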

The Technologies Behind Virtualization

The intent of this guide is not to prescribe one vendor or product over another. In fact, its goal is to show that the best way to align virtualization and its automation benefits with the rest of the IT environment is to use tools that treat virtualization like the rest of that environment. Although the press and vendors extol one platform over another on the basis of features, the smart IT organization must look at how each platform can be managed. As you'll learn later in this chapter, leveraging unified toolsets that treat physical and virtual assets as two halves of the same whole is critical.

That being said, there are a number of technologies that typify each of the virtualization architectures discussed previously. These technologies are presented in the following list to assist with framing each virtualization architecture in the context of available products:

  • Hardware virtualization:
    • VMware VirtualCenter and ESX
    • Microsoft Hyper-V and Virtual Server
    • Citrix XenSource
    • Open Source Xen
    • HP nPar/vPar/Integrity
    • Sun xVM
    • IBM zSeries
    • IBM POWER
    • Linux KVM
    • Oracle VM
    • VSE
    • Virtual Iron
  • OS virtualization:
    • Sun Solaris Containers
    • Parallels Virtuozzo
    • HP Secure Resource Partitions
    • Open Source Open VZ
    • FreeBSD Jails
  • Application virtualization:
    • Citrix XenApp (Presentation Server)
    • VMware Thinstall
    • Microsoft Softgrid
    • Symantec (Altiris) SVS
    • AppStream
    • Endeavors
  • Network virtualization:
    • Cisco
    • 3Com
    • Juniper
  • Storage virtualization:
    • F5 Acopia
    • NetApp V-Series
    • LeftHand SAN/iQ
    • HP StorageWorks

This list is not intended to be comprehensive.

Why Virtualization Is Now Being Adopted in Dramatic Numbers

It is easy to see from the previous list how the vendor ecosystem has translated virtualization's layer-of-abstraction concepts into product reality. Virtualization today is being adopted in dramatic numbers largely due to the synergies of available product sets across all elements of the technology spectrum. IT environments also understand the operational benefits gained through virtualization's promise. Simply put, unlike many technology-based solutions for IT problems, virtualization provides a very real and very tangible return that can be easily quantified into dollars and cents.

This widespread perception of value comes with its own set of risks. With the mere word "virtualization" being thrown around by vendors across the IT landscape today, there is the risk of getting caught up in the technology itself rather than its proper fit into the environment. To validate this statement, a recent study found that 44% of virtualization deployments ultimately fail for just these reasons (Source: http://www.virtualization.info/2007/03/44-of-companies-unable-to-declare-their.html). The major reason cited for declaring project failure was an inability to quantify the return on the virtualization dollar. With the vast majority of a technology project's cost relating to its management over its life cycle, it is critical that enterprises look first to virtualization as a mechanism for improving business processes. Mature organizations that make use of frameworks such as ITIL tend to have an easier time accomplishing this goal—and ultimately ensuring the success of their virtualization implementation.

What Is Service Management?

Consider those things outside the world of IT that you think of as services: The person who answers the phone when you call your credit card company performs a service. The power and water utilities embody services that are critical to your daily life. With all services, people, products, and processes come together in highly efficient ways to fulfill a need. By focusing on the fulfillment of specified and identified needs, organizations that focus on services have the ability to center their attention on accomplishing that goal with high levels of quality.

Among the service organizations operating today, IT remains a relatively new player in the world of service fulfillment. Although others, such as the traditional power and water utilities and the phone companies, have been performing their duties for generations, the institution that is IT has a far shorter history. Being the relative newcomer, IT started with relatively chaotic levels of process control. In the early years, one IT organization likely performed necessary activities in very different ways than another. With little similarity between organizations, each found itself starting from ground zero in determining the best ways to accomplish its needed tasks.

IT Service Management is a formalization of those process-oriented tasks that are required of virtually every IT organization. Leveraging best practices developed among multiple organizations over an extended period of time, IT Service Management is effectively a collection of ways to accomplish the needed tasks of an IT organization. Because these necessary tasks are relatively common among all IT organizations, IT Service Management has developed a set of frameworks that assist with understanding the tasks to be accomplished and the best processes in which to accomplish them. One such framework, mentioned earlier, is ITIL, now in version 3.

ITIL v3 breaks down the necessary tasks required of an IT organization into five discrete phases, with specific activities assigned to each phase:

  • Service Strategy. Before any services can be offered by an IT organization, there must be high-level planning involved with determining the organization's overall strategy. In this phase, the services that will be provided by IT are identified and scoped. It is here that IT services are linked to business goals.
  • Service Design. Once identified and aligned from a high level, the structure and processing of individual services must be designed. In the second phase, the structure of services is defined along with their inputs and outputs. Here, IT translates the services and business goals as identified in the first phase into actionable plans for implementation.
  • Service Transition. Actually implementing those plans involves a state of transition between where the service does not exist and where it does. In this third phase, IT implements the service as designed and brings that service into full operation. This phase additionally deals with the handoff of services from the teams whose job it is to build the service and others who will ultimately run them in production.
  • Service Operation. Once fully implemented, services are then operated for the fulfillment of customer needs. The Service Operation phase embodies the day-to-day management of the implemented service as it accomplishes its goals.
  • Continual Service Improvement. It can be argued that the implementation of an IT service is never perfect. Wrapping around each of the other phases is a continuous process of evaluation and improvement. Here, gaps in service are found and improvements are made. This process happens during each of the other phases, as services are scoped, designed, implemented, and operated to always find better ways to service customers.

Figure 1.2: A graphical representation of the five phases of the ITIL life cycle. In each phase, there are a series of activities to be accomplished.

The Business Impact of Service Management

The move from a reactive mode of operations to one that focuses on process-oriented solutions doesn't happen overnight, nor does it occur without cost to the organization. But the value of bringing process maturity to an IT organization lies in the ability to rapidly design and implement new services as the organization needs them. IT organizations that leverage process frameworks are also able to adapt existing services to a changing business landscape more quickly.

At the same time, not all businesses or IT environments are alike. Not all organizations require the same level of process formalization to truly move towards maturity. Thus, process frameworks are designed in such a way that they can be tailored to the needs of the organization. The end result is a type of automation of process development itself, enabling IT organizations to better align their technology with the goals of business.

Virtualization's Impact on Service Management

Smart organizations should quickly see how the concepts of virtualization, and the technologies that enable those concepts, unlock levels of automation not previously possible within IT:

  • Virtualization's tenet of flexibility eliminates the limitations of IT's old and difficult-to-change technology underpinnings, allowing IT to consistently develop the service solutions required by the business at a minimum of cost.
  • Virtualization's tenet of transience means that IT organizations can rapidly and easily reconfigure the IT environment as the needs of business change over time.
  • Virtualization's tenet of commonality enables the rapid development of new services as well as the assured reliability of existing ones. By leveraging standardized configurations across technologies, IT reduces its potential for error while speeding the rate at which technology underpinnings can be brought online.

Each of these in concert enables levels of agility never before seen with traditional IT and process structures. Notwithstanding the technology or platform used to bring virtualization into the environment, the enhancements to IT's capabilities in meeting business needs go far in aligning IT with the business.

This section hasn't discussed the traditional impacts associated with virtualization implementations— energy cost savings, speed in deploying new equipment, automation of configuration settings, enhanced problem and incident management, and optimized performance management. All these are also valid benefits that virtualization brings to the IT environment and will be discussed later in this guide.

Virtualization: The Service Management Approach

Like all IT services, the implementation and use of virtualization within the environment can easily be made part of an existing ITIL workflow. At the same time, if not done properly, virtualization can involve a large amount of expense with little tangible benefit. Integrating any virtualization project into an existing environment requires the proper levels of strategic as well as tactical planning prior to the purchase of any hardware or software. Enterprises that attempt to introduce virtualization and its workflow-changing concepts into the environment without first analyzing their effects are not likely to see the desired results.

A major problem with virtualization today lies with its nascence. Technology implementers have a tendency to focus on the virtues of individual feature sets or the benefits of specific technologies without looking at the larger picture. Smart organizations discover through the planning process that virtualization's value proposition over the long haul actually comes through its enhancements to operational processes over and above any technology features. Whether through the improved data distribution and resiliency enabled by storage virtualization, virtual machine rapid deployment and configuration management improvements gained through hardware virtualization, or the logical network extension provided for by network virtualization, each aspect brings a set of strategic benefits to the organization.

Let's take a look at a few key activities identified by the ITIL Service Management framework, and explore how the addition of virtualization stands to enhance their functionality.

Strategy and Design Activities

The strategic phase of any service management engagement involves the identification of gaps and the scoping of services to be provided to the customers of IT. Virtualization's impact here lies specifically in improving visibility into two sets of data. With the first set, historical analyses, organizations look across existing services to identify areas in which customer needs are not being properly met.

One benefit enterprises gain with the move to virtualization is an added set of tools that store historical information about the activities of the environment. These tools may not necessarily be part of the technologies used to virtualize but are often add-on technologies designed with the management of virtualization in mind. Virtualization's ability to homogenize configurations within many areas of the IT environment allows design teams to paint with a broad brush across technologies in locating these areas of gap.

The second set of data commonly associated with virtualization management toolsets identifies areas of demand. Because virtualization's layer of abstraction becomes a point of commonality among all virtualized elements, that layer can be monitored and measured to identify where performance is insufficient to meet demand. It also becomes a central point of data gathering for identifying the need for new services or for service augmentation. As with the first set, this data gathering is often made possible by virtualization management toolsets.
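
As a rough illustration of how the abstraction layer can serve as a single measurement point, the sketch below polls per-host utilization figures and flags hosts where demand appears to be outpacing capacity. The get_host_metrics function, its sample data, and the thresholds are hypothetical stand-ins for whatever a given virtualization management toolset actually exposes.

  # Hypothetical demand-monitoring sketch; get_host_metrics() stands in for a
  # real management API that reports utilization at the virtualization layer.
  CPU_THRESHOLD = 0.85   # flag hosts averaging above 85% CPU
  MEM_THRESHOLD = 0.90   # flag hosts above 90% memory consumption

  def get_host_metrics():
      """Placeholder: in practice this would query the virtualization
      management toolset. Returns a list of per-host utilization samples."""
      return [
          {"host": "esx01", "cpu": 0.92, "memory": 0.81, "vm_count": 14},
          {"host": "esx02", "cpu": 0.44, "memory": 0.52, "vm_count": 6},
      ]

  def find_constrained_hosts(samples):
      """Identify hosts where performance may be insufficient to meet demand,
      signaling a need for new capacity or service augmentation."""
      return [
          s for s in samples
          if s["cpu"] > CPU_THRESHOLD or s["memory"] > MEM_THRESHOLD
      ]

  for host in find_constrained_hosts(get_host_metrics()):
      print(f"Demand exceeds capacity on {host['host']} "
            f"({host['vm_count']} workloads); consider augmentation.")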

Once needs are identified, part of the design phase involves building prototypes of what will eventually become production services. Virtualization brings particular benefit to both of these activities through its intrinsic ability to roll back changes and rapidly rebuild environments as necessary. Design prototyping rarely produces the desired result the very first time; design modifications and configuration tweaks are a necessary part of taking a design from "on paper" to "ready for production." Virtualization, and especially its management tools, is fundamentally useful in speeding this conversion.

Lastly and ultimately, the decision to sponsor service improvement projects rests on the anticipated return from that project. Environments that have already made the move to virtualization tend to see a positive cost impact during this decision-making process. This is due to the virtualized environment's ability to rapidly and cost-effectively implement new services once designed.

Once virtualized, an environment tends to incur lower marginal costs with the addition of each new service than an equivalent physical environment would. This is because each component of the new service no longer requires separate hardware that must go through the specification, purchase, delivery, and provisioning processes commonly associated with physical additions to the environment.
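
The marginal-cost argument can be made concrete with a back-of-the-envelope calculation. All of the figures below are invented purely for illustration; real numbers depend entirely on the organization's hardware, licensing, and labor costs.

  # Hypothetical, illustrative figures only.
  physical_server_cost = 6000          # hardware purchase per new service component
  provisioning_labor_physical = 2500   # spec, purchase, delivery, rack, build
  provisioning_labor_virtual = 400     # clone from template, configure, hand off
  host_capacity_share = 500            # share of existing virtualization host consumed

  marginal_cost_physical = physical_server_cost + provisioning_labor_physical
  marginal_cost_virtual = host_capacity_share + provisioning_labor_virtual

  print(f"Marginal cost per new service component (physical): ${marginal_cost_physical}")
  print(f"Marginal cost per new service component (virtual):  ${marginal_cost_virtual}")

Note that the comparison only holds while spare capacity exists on the virtualization hosts; once the environment is oversubscribed, the next increment of capacity again carries a physical cost.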

Transitioning Activities

Before any service can be turned over to operations, its configuration and modes of operation must be cataloged. Its assets must be identified and entered into the configuration management database (CMDB), and the documentation of its activities must be prepared and made available to operations teams. Within reactive organizations, these processes can be chaotic—which is the reason that process frameworks such as ITIL exist.

But proactive organizations sometimes have the opposite problem. Their processes can become so solidified that transitioning new services from development to reality is a substantial cost all to itself. This occurs when transition activities are required as components of the process framework but the activities themselves are not optimized. Even when transition activities are optimized, virtualization provides an assist with this process. The levels of commonality associated with individual IT elements allow organizations to scope logged assets at a higher "level."

It is easiest to explain this concept with an example. With virtualization, the configuration and asset tracking of an individual server no longer needs to be logged at the level of the individual setting. A virtual server exists as a copy of a predetermined and pre-controlled server template, allowing the template to become the configuration item—and the tracked asset—rather than the individual settings within that server. An indirect benefit of virtualization's impact on service asset and configuration management is therefore a reduction in its overhead. Change management activities gain a similar benefit over the long term.
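
A minimal way to picture this shift is to compare what gets recorded as the configuration item. In the sketch below, the record for a virtual server points at a version-controlled template rather than enumerating every individual setting; the record layout and field names are assumptions for illustration, not a reference to any specific CMDB product.

  # Illustrative CMDB-style records; field names are hypothetical.

  # The template itself is the controlled configuration item.
  template_ci = {
      "ci_id": "TPL-0042",
      "type": "server_template",
      "version": "1.3",
      "approved_by": "change_advisory_board",
  }

  # Each virtual server is tracked as a lightweight reference to the template,
  # plus only the attributes that genuinely vary per instance.
  server_ci = {
      "ci_id": "SRV-0317",
      "type": "virtual_server",
      "built_from": "TPL-0042",
      "template_version": "1.3",
      "hostname": "app03",
      "owner": "order-processing-service",
  }

  def is_drifted(server, template):
      """A server is considered drifted if it no longer matches the
      template version it claims to be built from."""
      return server["template_version"] != template["version"]

  print(is_drifted(server_ci, template_ci))  # False until the template is revised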

Validation activities are positively impacted as well, in much the same way as the service prototyping and testing activities in the previous section. Validation activities are the last check on a new business service prior to bringing it into production. They tend to run the same sorts of tests against the service that were run during the prototyping phase, in an attempt to prove the service's worthiness to be moved into full operations. Thus, they see many of the same benefits associated with rapid environment rebuild and rollback functionality that were seen in the initial testing phases.

Activities in Operation

Once a service is in operation, the goal of IT is to keep it up and running for its customers. Thus, services in operation tend to have a vastly different set of needs than those still under development. Keeping services in operation reliable means responding to incidents and problems as well as doing whatever is possible to ensure that those events never occur in the first place.

Virtualization's primary enhancement to service management comes in its native ability to keep services alive during non-nominal conditions. Virtualized storage tends to be distributed across multiple physical devices, allowing the loss of one or more devices to have no impact on overall storage availability. Virtualized servers can usually be relocated from a failed host to one that remains operational, a capability that reduces or eliminates the impact of a host failure. Virtualized networks can reconfigure as necessary to route around problem locations. All of these situations are possible due to the transient nature of virtualized resources: in effect, virtualized workloads are able to process their assigned mission no matter which physical device actually processes the bits and bytes.
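
The relocation behavior described above is typically provided by the virtualization platform itself, but a highly simplified sketch of the underlying idea looks something like the following. The hosts data and the restart_on function are hypothetical; real platforms implement this with heartbeats, shared storage, and admission control that this sketch ignores.

  # Highly simplified failover sketch; not how any specific platform does it.
  hosts = {
      "host-a": {"healthy": False, "vms": ["vm-web01", "vm-app01"]},
      "host-b": {"healthy": True,  "vms": ["vm-db01"]},
  }

  def restart_on(vm, target_host):
      """Placeholder for the platform call that powers a VM back on
      from shared storage onto a surviving host."""
      hosts[target_host]["vms"].append(vm)
      print(f"Restarted {vm} on {target_host}")

  def handle_host_failure(failed_host):
      survivors = [h for h, info in hosts.items() if info["healthy"]]
      if not survivors:
          raise RuntimeError("No surviving hosts; escalate to disaster recovery")
      for vm in hosts[failed_host]["vms"]:
          restart_on(vm, survivors[0])   # naive placement: first healthy host
      hosts[failed_host]["vms"] = []

  handle_host_failure("host-a")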

Yet service resiliency isn't the only area in which virtualization brings efficiency to the environment. The technologies that enable the types of resiliency discussed earlier do little when entire IT environments become unavailable due to disaster or large-scale data destruction. In either case, restoration or full disaster recovery is necessary to bring an environment back to functionality. Both are processes that have traditionally been painful and laced with error. Virtualization reduces the complexity of both activities by elevating the layer in which each item is backed up and ultimately later restored or recovered.

Consider the situation in which a disaster strikes a data center, requiring a move to recovery operations. Should this occur in a non-virtualized environment, bringing back business services means restoring individual servers—including each and every file that makes up those servers—as well as the network and storage equipment that interconnect them. In a virtualized environment, each server can instead be restored as a single entity, and disaster recovery sites gain further benefits from the ability to replicate services to alternative locations in near real time. Lacking virtualization, disaster recovery requires duplicate physical hardware in both the production and backup sites, the configuration of which must be exactly matched for a successful failover to occur. Virtualization, and the management tools that wrap around it, provides ways to automatically snapshot the service and transfer changes as they happen.
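
A rough sketch of the snapshot-and-ship idea follows. Both take_snapshot and ship_to_recovery_site are placeholders for whatever the chosen storage or virtualization management tool actually provides; the point is only that the unit of protection becomes the whole service image rather than thousands of individual files.

  import time

  REPLICATION_INTERVAL_SECONDS = 900  # hypothetical 15-minute recovery point objective

  def take_snapshot(vm_name):
      """Placeholder for a crash-consistent snapshot of the entire VM."""
      return {"vm": vm_name, "taken_at": time.time()}

  def ship_to_recovery_site(snapshot, site="dr-site-1"):
      """Placeholder for transferring only the blocks changed since the
      last snapshot to the recovery location."""
      print(f"Replicated {snapshot['vm']} snapshot to {site}")

  def replicate_once(protected_vms):
      for vm in protected_vms:
          ship_to_recovery_site(take_snapshot(vm))

  # In practice this would be driven by the management platform's own
  # scheduler rather than a standalone script.
  replicate_once(["vm-web01", "vm-db01"])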

Ensuring Continuous Improvement

Throughout all these activities wraps an overarching goal of finding and eliminating areas of gap and waste. This process of continual process improvement ensures that a recurring analysis activity is engaged throughout the service life cycle, looking for mistakes or omissions in the delivery of the service and identifying areas in which the service can be made better for its customers.

Virtualization has a number of impacts on this process as well. Its tenet of commonality means that individual instances are likely to be similar in configuration to one another—consider templating, a process commonly used across all forms of virtualization. This level of similarity provides a basis to look not only at individual services but across all services for areas of improvement. A reconfiguration that results in improvement for one service can often be easily integrated into another because their core configurations are equivalent. Individual services themselves can be more easily analyzed for gaps in functionality or performance because other services and their underlying technologies can serve as a guidepost for measurement.

The added monitoring and management data that arrives with the move to virtualization was discussed already as a benefit to the strategy and design phases. This data can be leveraged during process improvement as well. With the right virtualization management tools in place, monitoring data across all virtualized devices can be aggregated and analyzed to look for historical trends in performance, use, and level of identified problems. Best-in-class tools provide mechanisms for leveraging this data alongside that from non-virtualized elements as well.

Consider the situation in which an improvement activity is looking at performance within a business service. Management tools that have the ability to compare performance characteristics between virtualized and non-virtualized assets illuminate actionable suggestions for service workload placement. Enterprises that use these tools gain the ability to identify where workloads should be placed for optimal performance: virtual versus non-virtual, network placement and configuration, storage consolidation versus segregation.

Measuring and Ensuring Service Levels

Another activity whose processing is critical during service operations is the identification and measurement of service levels. This process looks at predetermined metrics of service performance to determine whether the service is operating to desired specifications. When services are not operating to the level desired by the organization, IT is charged with identifying and resolving the source of the failure.

However, there is a problem intrinsic to this process. For the non-automated organization, this process of identifying and measuring service levels over time is not a trivial task. Depending on the level of detail required by established Service Level Agreements (SLAs), service measurement can be a highly time-consuming task if accomplished through manual means. The steps required to measure the service's behavior and performance for the purposes of reporting can consume large levels of IT resources.

Along these lines, when manual steps are used in service measurement, the resulting data is often not at a level of granularity useful for providing actionable information. For example, when a team of individuals is charged with manually gathering data for a monthly report on service quality levels, the data gathered is but a one-time snapshot. That snapshot can only be compared against the one-time snapshots gathered in previous months. This coarse level of granularity makes it very difficult to accomplish quantitative demand management or augmentation planning.

Virtualization, and the automation toolsets that wrap around it, can expose the metrics required to do this activity correctly. With the right toolsets—those that watch both virtualized and non-virtualized elements simultaneously—those metrics can be gathered through automated mechanisms. Automating the gathering of metrics also enables gathering them in real time, which increases the resolution of the data and enhances the organization's ability to make good decisions.
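
As a simple illustration of why resolution matters, the sketch below computes availability from a stream of automated samples rather than from a monthly manual snapshot. The sample data and the 99.9% target are hypothetical.

  # Hypothetical availability samples collected automatically, one per minute:
  # True = service responded within its agreed threshold, False = it did not.
  samples = [True] * 43100 + [False] * 100  # roughly one month of minutes

  SLA_TARGET = 0.999  # illustrative 99.9% availability target

  availability = sum(samples) / len(samples)
  print(f"Measured availability: {availability:.4%}")
  print("SLA met" if availability >= SLA_TARGET else "SLA breached")

Because every sample is timestamped as it is gathered, a breach can also be correlated with the environment changes that preceded it, which is the point made in the next paragraph.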

Increased data resolution for performance and quality-of-service metrics has another benefit as well. With the increase in metrics resolution comes a greater ability to directly see how changes in the environment impact service levels. When service quality is measured on a real-time basis, it grows much easier to recognize the direct impact of an environment change.

Challenges to Successful Deployment of Virtualization

All of these are valid observations, but only when virtualization is done properly. Organizations that don't enter into virtualization projects for the right reasons are unlikely to achieve their desired goals over the long haul. With projects of this sort, early attempts at introducing virtualization technology are often point solutions. Implementing a specific virtualization platform to solve the needs of a particular service or data center tends to be a non-disruptive activity when the project is small in scope. But virtualization's promise often leads enterprises down the road of escalating commitment once they've seen the benefits of a first successful implementation.

However, this point-solution mentality cannot scale. Enterprises that don't engage in up-front strategic planning for virtualization may find themselves awash in multiple technologies that simply don't integrate. They might find implementation-based fiefdoms built around the operation of individual projects. They may find platforms incorporated that cannot correctly integrate with existing management or monitoring toolsets. As an organization's adoption of virtualization scales, these technologies grow disruptive in nature.

Virtualization's aggregation of physical resources can quickly move an environment from a position of undersubscription to one of oversubscription if not monitored.

Smart organizations must eventually see virtualization as an enterprise player across all services and strategize its incorporation appropriately and at a high level. There is a risk to the environment in not doing so, one already seen by early adopters who did not consider the enterprise-level impacts of this technology. The enterprise that does not take a big-picture perspective of virtualization early in the process will face a set of challenges to its successful deployment:

Technology Focus

The earlier section that discussed the technologies behind virtualization listed a dozen products for hardware virtualization alone. Adding in the other architectures easily doubled that list. With this many disparate technologies in play, an enterprise organization that doesn't pare down the list of those it finds acceptable can find itself forced to support all of them. Some of the technologies discussed work well with each other and integrate well into enterprise automation frameworks. Others do not.

Lacking a focus, service design and maintenance teams may find themselves implementing technologies in a vacuum and without the consultation of others. The end result may be an added cost to the organization for integrating each individual purchase decision with the others.

Point Solutions

Enterprise decision-making and budgeting is rarely centrally controlled. Down-level teams often find themselves with today's problem that requires today's solution. When these situations occur, those teams look for the solution that fits their deployment timeframe and budget. When it comes to virtualization and its available platforms, solving today's problem with today's technology drives up tomorrow's overall maintenance cost.

Moreover, there are multiple virtualization solutions. Some have the advanced functionality that ties into enterprise management and monitoring toolsets. Others do not. Still others are completely free products that appear worthwhile at first blush but may suffer from supportability concerns down the road. Beware the situation in which design and maintenance teams look too closely at initial implementation cost while ignoring total cost of ownership issues that include needed management capabilities.

Siloing

The data gathering capabilities that arrive with virtualization and its automation toolsets were discussed at length earlier. However, those toolsets can gather data only when they're specifically designed to do so. When individual teams make point-solution decisions about the inclusion of virtualization in service operations, the outcome is often a technology silo that is incapable of integrating with the rest of the environment.

These technology silos prevent individual virtualization implementations from being compared with others using automated toolsets. They prevent the effective data gathering necessary to determine customer demand and augment services as necessary. They also inhibit the ability for the analysis of use patterns and performance across multiple services. The result is an organization with lots of services, none of which can be analyzed against the other.

High-end virtualization platforms have the ability to scale to the level of the enterprise. Due to the intrinsically high cost of these types of virtualization implementations, enterprises that want virtualization should also want this scalability. This is due to the increasing economies of scale that become possible as the virtualization environment grows in size.

Cloud services—also known as "virtualization as a service"—become a very real possibility for enterprise organizations with very large and relatively homogeneous implementations. The cost savings potential per service at this level of commonality is high enough to be a compelling end goal, even with the high cost to implement.

Environment Complexity

Many aspects of virtualization appear at first blush to be "easy" installations. It takes only a few minutes to drop a DVD into a computer's optical drive and begin building a virtual host. Yet architecting an environment that goes beyond a small number of devices to implement the levels of high availability needed by business services quickly grows complex. Moreover, virtualization technologies tend to add complexity to the environment in some ways even as they reduce it in others:

  • Backups get easier and backups get harder. Virtualization allows entire devices to be backed up as a single entity with a high assurance of restorability. Yet those same device-level backups often do not have the granularity required for individual configuration or file restoration. This means that enterprises may require multiple types of backups to meet their Recovery Time Objectives. As another example, storage virtualization provides snapshotting capabilities that reduce backup windows but do so at the cost of additional disk space.
  • Networking gets easier and networking gets more complex. With hardware virtualization's aggregation of more server instances onto fewer physical devices, the built-in virtual networks make it very easy to interconnect virtual machines. At the same time, this aggregation can very quickly begin to oversubscribe the underlying network infrastructure. Making the problem worse, the built-in networking capabilities of virtualization platforms tend to lack the advanced features of traditional network devices; effectively, a virtual switch is in many ways a step backwards in terms of capabilities.
  • Storage gets easier and storage gets bigger. This element relates not only to the virtualization of storage itself, which makes its management much easier, but also to the consolidated storage of virtual devices. In the case of virtualized storage, its added management functions often require added storage to support their functionality. For virtual machines, more machines in fewer places require greater amounts of high-end and high-dollar storage.

Sprawl

Lastly, in much the same way that electricity takes the path of least resistance, when something is easy to do, we tend to do it more. All of these improvements to the speed of rolling out new services and augmenting existing ones with more capacity come at a cost: simply put, more is more. Enterprises that make the move to virtualization must be cautious about how virtualization impacts the replication of resources. It is not unheard of for an organization—one that before was rate-limited by the specification, purchasing, and provisioning processes required of physical servers—to see geometric growth in virtual resources once virtualized. This sprawl must be managed to prevent it from growing out of control.

Resolving Challenges

Getting through these challenges requires successfully strategizing the incorporation of virtualization into business services. Consider the following suggestions as potential solutions for resolving the challenges noted earlier.

Incorporate Virtualization into Existing Operational Frameworks

When virtualization projects are incorporated as one-off implementations to solve the problems of the day, service management teams will not be cross-coordinated. By embracing virtualization at the enterprise level, organizations can control its placement and configuration, which reduces operational cost across all projects. It also aligns with the templating idea built into virtualization technologies: templatizing the request and release processes associated with virtualized resources enables greater control over how and why those resources are brought into production.
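
One way to picture a templatized request-and-release process is sketched below: every request must name an approved template and an expiry date, which gives the organization a control point against sprawl. The field names and approval rules are assumptions for illustration only.

  from datetime import date

  APPROVED_TEMPLATES = {"web-standard-1.3", "db-standard-2.0"}

  def validate_request(request):
      """A request for a new virtual resource is only accepted when it is
      built from an approved template and carries a planned release date."""
      problems = []
      if request.get("template") not in APPROVED_TEMPLATES:
          problems.append("template is not on the approved list")
      if not request.get("business_owner"):
          problems.append("no business owner identified")
      if request.get("expires_on", date.max) == date.max:
          problems.append("no release/expiry date; resource would sprawl")
      return problems

  request = {
      "template": "web-standard-1.3",
      "business_owner": "order-processing",
      "expires_on": date(2009, 6, 30),
  }
  print(validate_request(request) or "Request approved for provisioning")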

Consider Virtualization a Process Enabler Rather than a Technology Enabler

Virtualization's hype has a tendency to excite technology implementers due to the capabilities it brings to the table. Yet implementers often lack the strategic vision to incorporate its technology for the right reasons. Smart organizations remove the responsibility for control from the technology implementer and hand it to the process implementer. Doing so ensures that technology rollouts are being done with business goals in mind, and that business services align with established business processes.

Standardize, Yet Do Not Standardize

There are economies of scale associated with a wholesale migration to virtualization, and those economies work most efficiently when the underlying technology underpinnings are equivalent across the enterprise. Yet there is also a risk in standardizing too far onto one platform or technology. Leaving aside the obvious concerns of vendor stability and support, the virtualization technology that brings about the layer of abstraction discussed at the beginning of this chapter is quickly becoming a commodity. Whether your layer of abstraction comes from one vendor or another is less of a concern today than ever before.

What is important are the tools used to manage virtualization environments. Toolsets that work across multiple platforms and vendors are arguably the best in this case, as they have the ability to perform the necessary functions of each from a single and unified interface. At the same time, they retain the data gathering capabilities across all platforms.

Considering that some virtualization platforms are significantly different in terms of cost and features, a good practice for organizations that have such management tools is to target high-priority workloads atop high-dollar virtualization platforms. Those workloads that aren't critical to the operations of business can be operated on alternative platforms that have fewer availability features yet are lower in cost.
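
A minimal sketch of that tiering decision follows; the platform tiers, relative costs, and criticality labels are placeholders for whatever platforms the organization has actually standardized on.

  # Hypothetical platform tiers: high-availability features versus cost.
  PLATFORM_TIERS = {
      "premium":  {"ha_features": True,  "relative_cost": 3},
      "standard": {"ha_features": False, "relative_cost": 1},
  }

  def place_workload(workload):
      """Route business-critical workloads to the high-dollar platform and
      everything else to the lower-cost alternative."""
      tier = "premium" if workload["business_critical"] else "standard"
      return tier, PLATFORM_TIERS[tier]

  for wl in [{"name": "order-entry", "business_critical": True},
             {"name": "internal-wiki", "business_critical": False}]:
      tier, details = place_workload(wl)
      print(f"{wl['name']} -> {tier} (relative cost {details['relative_cost']})")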

Wrap Automation Around Virtualization

Underlying all of these suggestions is the fact that virtualization is all about automation. But that automation only happens when the organization makes use of it. Thus, any enterprise virtualization implementation must consider automation a primary goal. When architecting the processes that govern the implementation and use of virtualization technologies, smart organizations look to their management platform as the locus of process automation. Finding the right management platform—one that provides automation of all forms across multiple platforms—is the best practice, and becomes especially critical as the environment grows.

Integrate Physical and Virtual Management

Lastly, not all elements of the IT system will be virtualized. For some, performance needs prevent them from being good virtualization candidates. Others may run legacy technology or have legacy needs that do not function when virtualized. Still others may not be virtualized for political or business reasons. It is paramount that the technology used to manage the virtualized environment be the same technology that can manage its physical elements.

Siloing the virtual from the physical brings about the same types of problems as those that appear when individual projects are siloed from each other. The intelligent enterprise will look hard for technologies that allow it to manage and monitor the environment irrespective of physicality.

Service Management Defines the Structure of Virtual Activities

This guide intends to illustrate to the reader that virtualization and service automation are designed to go hand in hand. You shouldn't have virtualization without automation, while at the same time automation is made substantially easier when it is applied to a virtualized environment. This chapter has attempted to illustrate how this works from a process standpoint.

The rest of this guide digs deeper into the various aspects of the technology itself. Chapter 2 begins with a continuation of the discussion on management toolsets and why those that hook into every aspect of the IT environment—physical or virtual—should be desirable to the enterprise IT organization. Chapters 3 and 4 then dovetail this knowledge into specific discussions of how virtualization enhances the automation of the change management and problem resolution components of IT administration.