Roadmap for the Automation Journey

Orchestrating the IT infrastructure to meet the needs of the business is central to delivering on the promise of IT Process Automation (ITPA). Doing so requires a keen understanding of the business, a flexible IT management mentality, and the ability to identify and evaluate the maturity of critical IT processes. Unlike the systems our teams support, however, organizations and IT managers don't always adhere to a standard, systematic architecture when aligning people and processes to meet the needs of the business. Many organizations have a few IT processes with a relatively high degree of maturity, but rarely does an organization have a high degree of maturity across all of its IT processes. Some processes may be well documented and even tried, tested, and proven over time, but many more that could contribute to IT efficiency are left untouched. These untouched processes typically exist because they were developed ad hoc, on the fly, in reaction to an emerging need. There is certainly a line between what an IT associate should be expected to execute based upon their own core competencies and what needs to be a well-documented IT process. Knowing the location of that line is central to this chapter, as we evaluate ITPA, examine practices that will help your organization improve its IT processes, and multiply your returns by turning efficient processes into automated solutions.

Evaluating Process Maturity

Evaluating process maturity begins with defining the structure upon which your organization will gauge maturity. Not to muddy the issue, but the task of gaining process maturity is itself a process. We're going to discuss the importance of having a method for bringing maturity to your IT processes and everything that method must consider. First is the nature of your influence within the organization. Many IT processes have dependencies upon other processes that fall outside of IT, so it's important to step into this "process" knowing that your findings may have repercussions outside your span of control. The key to navigating those repercussions is making sure that your partnerships with your internal and external business partners are well founded. So let's begin.

One of the first steps is to agree upon a framework for discussing process maturity so that all parties involved can benefit from a common taxonomy and structure. An easy way to approach this is to leverage a common body of knowledge such as the Capability Maturity Model (CMM). CMM is a structured, staged approach to measuring capabilities, and because CMM is widely used in software development, there is a high likelihood that many of your existing IT staff and outside vendors are familiar with its concepts. In addition to being a good place to start discussions, the stages of CMM can be used as a template to design a more granular process maturity structure for your organization.

There are five stages of CMM:

  • Maturity Stage 1: Initial—During the initial stage, processes are non-repeatable, not clearly defined, unmanaged, and un-optimized. During this stage, the effectiveness and efficiency of an organization is entirely dependent upon the core competencies of the individual associates responding to daily demands.
  • Maturity Stage 2: Repeatable—In stage 2, a process is repeatable, but this repeatability is often not the result of a defined process but rather of business process or system design.
  • Maturity Stage 3: Defined—The difference between stage 2 and stage 3 can be identified in a single word: documentation. Stage 3 processes are repeatable and well defined. This stage will include tailoring the process to meet the organization's own needs.
  • Maturity Stage 4: Managed—In stage 4, processes are managed. That is, metrics for measuring process efficiency have been defined and goals have been set by which the organization can quantitatively measure the effectiveness and efficiency of a process.
  • Maturity Stage 5: Optimizing—Stage 5 processes have reached the pinnacle of process maturity and often represent the culmination of the work of many parties. Stage 5 processes are repeatable, defined, managed, and measured, and the results of those measurements are used to further refine and improve the process.
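
To make the staged model concrete, the five stages can be sketched as a simple ordered classification. This is only an illustration: the stage names follow CMM, but the four yes/no criteria used to place a process are our own simplification, not part of the model itself.

```python
from enum import IntEnum

class CMMStage(IntEnum):
    """The five CMM maturity stages, ordered from least to most mature."""
    INITIAL = 1
    REPEATABLE = 2
    DEFINED = 3
    MANAGED = 4
    OPTIMIZING = 5

def assess_stage(repeatable, documented, measured, feedback_driven):
    """Map yes/no answers about a process onto a CMM stage.
    The four questions are illustrative assumptions, not part of CMM itself."""
    if not repeatable:
        return CMMStage.INITIAL
    if not documented:
        return CMMStage.REPEATABLE
    if not measured:
        return CMMStage.DEFINED
    if not feedback_driven:
        return CMMStage.MANAGED
    return CMMStage.OPTIMIZING

# A repeatable, documented process with no metrics sits at stage 3
print(assess_stage(repeatable=True, documented=True,
                   measured=False, feedback_driven=False).name)  # DEFINED
```

Because the stages are strictly ordered, each question only matters once every earlier question has been answered yes, which mirrors the way CMM treats each stage as a prerequisite for the next.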

Each stage of CMM represents not only the maturity of the process but also the effectiveness of an IT organization. Organizations whose processes fall at stage 3 or below may function, and they may even function well, but the ability to measure process contributions to business processes in a quantifiable way is virtually non-existent. As processes become managed and begin to move towards optimization, IT managers begin to develop a greater level of control over the consistency of IT service delivery, the effectiveness of their people and processes to deliver upon the organization's needs, and the efficiency of their delivery. To begin working towards developing mature processes in your organization, we need to look at the four main process management alignments common to most IT organizations today. Keep in mind that we're examining these for the focus they bring to evaluating processes and the organizational models that can effectively execute each management alignment.

Manual Process/Element Management

The most basic form of process is the manual process. These are tasks carried out each day, often considered part of the core competency of an individual or team. There are two big reasons we begin our discussion of increasing IT process maturity here. The first is that evaluating manual processes gives the examiner a close-up look at a specific task. The second is that these processes are critical to the effectiveness and efficiency of your organization.

Evaluating process maturity at the manual process/element management level is essentially analyzing processes in their most basic form. In IT management, focusing on an elemental management process may mean looking specifically at how a Help desk ticket is created or how a change management request is input without necessarily taking into account the processes that exist up and downstream of the particular process being examined. The focus is to target and isolate a particular problem area for evaluation and ultimately institute corrective actions.

Examining process maturity at the manual process or elemental task level is useful in resolving problem areas in a single process. Often tasks at this level are taken for granted as core competencies but end up presenting a risk when not properly executed. Something as simple as taking the trash out, for example, can cause big problems when there are two sets of trash bins, one of which is marked "Proprietary Information" and the other merely marked "Trash." Executing a manual process of trash removal on the wrong bin may result in lost customer information, Social Security numbers, or trade secrets. Improving process maturity in your organization is going to require a closer look at manual and elemental tasks, identifying the most critical, and using CMM to evaluate and improve the processes. Reaching CMM stage 3 is important because of the role these processes play in cross-silo process maturity and ultimately in mapping IT processes to business processes.

Silo-Task Based

If beginning with manual processes seemed a bit odd, it may be because many organizations focus on process maturity at the silo, or task-based, level. Managers tend to begin improving things within their span of control, and the people most focused on process improvement often have management over an entire silo of processes. As organizations align their teams and processes to deliver on a specific goal, silos are formed along a vertical alignment of people, processes, and technology. The silo or task-based focus on process maturity is a broader approach than the manual or element-based focus and occurs within these vertical alignments, or "silos." A silo is an organization of similar processes that fulfill a goal; think operations management. For example, an organization may have many processes that fall within the business continuity silo or the problem management silo, and these processes interrelate with one another. Unlike evaluating tasks at the manual process or elemental-task level, examining processes at the silo level is a more proactive evaluation. Because processes interrelate, the problems associated with one process may be traced to a source in other processes upstream; conversely, they may be identified as having impacts on other processes downstream within the silo. Beyond uncovering root causes and their relationship to process maturity, the ability to look at maturity more holistically within a silo enables an organization to refine best practices at the operations management level.

Cross-Silo Approach

One step further up the organizational alignment of processes is the cross-silo approach to evaluating process maturity. This approach is where silo managers begin to work together to bring out process improvements across their respective spans of control. During this evaluation, processes are examined across multiple silos to achieve a common goal. For example, the server efficiency of a database server may depend upon processes that support the database itself as well as processes that support the operating system (OS). These two process areas may be controlled by different groups. The regular shipping of the server's database is a function that may be performed by a database administration team while day-to-day updates and security updates applied to the OS may be performed by a server engineering team. Despite the organizational alignment of the teams, the processes they support interact. By examining processes for maturity cross-silo, organizations take one step towards not only process maturity but also organizational maturity. The ability for the teams within an organization to work closely to isolate and drive out process efficiencies and improve IT processes is one of the hallmarks of effective IT management.

Map to Business

The highest level of process maturity evaluation occurs when mapping processes to the needs of the business. For instance, if an organization is in the business of architectural design and services three types of customers—residential, small commercial, and large commercial—an understanding of the business can enable the IT organization to drive out greater efficiencies. The server storage team, for example, may be able to weigh each customer account type for the amount of data (on average) their designs will generate in the storage infrastructure; then, based upon sales estimates alone, the IT storage team can forecast their storage needs for the next year to service that line of business. This is a simplified example, but the end goal is illustrated in that the business processes are now clearly linked to IT processes and in doing so the organization can deliver greater process and organizational efficiency.
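
The storage forecast described above is simple arithmetic once the per-account averages are known. A minimal sketch (all figures are invented for illustration):

```python
# Average storage generated per account type (GB) -- illustrative figures
avg_storage_gb = {
    "residential": 5,
    "small_commercial": 40,
    "large_commercial": 250,
}

# Sales forecast: expected new accounts next year, by type -- also illustrative
forecast_accounts = {
    "residential": 120,
    "small_commercial": 30,
    "large_commercial": 8,
}

# Weight each account type by its average storage demand and sum
projected_gb = sum(avg_storage_gb[t] * forecast_accounts[t]
                   for t in forecast_accounts)

print(f"Projected new storage demand: {projected_gb} GB")
# → Projected new storage demand: 3800 GB
```

The value of the exercise isn't the arithmetic; it's that the inputs come from the business (sales estimates) rather than from IT, which is what links the storage process to the business process.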

Gaining the focus needed for this stage of evaluation is challenging. IT processes have not been traditionally evaluated against the needs of the business and doing so requires not only high technology acumen but high business acumen as well. Reaching an organizational state that is conducive to evaluating IT processes against business needs is a huge accomplishment and often represents years of work maturing processes at the cross-silo, silo, and manual process level.

This investment in time and effort is one you will want to protect, and we'll talk more about how to protect it later in the chapter. For now, it's important to understand that the reason this maturation takes so long is that IT processes are often linked, within a silo and across silos, by manual processes. If an associate in your organization needs to order a server, he or she may need to place that request via a phone call to a support team. The support team may not actually be the team that purchases and procures the server; rather, they execute other processes to order the equipment from a vendor and dispatch the appropriate teams for setup and configuration. Even if all the teams involved, from supply chain to server and network engineering to outside vendors, had their own request systems, manual steps would still exist between these disparate processes. This inability to integrate fully has restricted many organizations to cross-silo process evaluation.

Recipe for Success—IT processes are like recipes refined through years of meticulous taste testing that your customers eventually sit down to eat. But even with the most thoroughly tested and well-written recipe, who would you trust to prepare it? Professional chefs cost a lot of money, as do IT solutions providers. But what if you could implement a solution that would ensure your recipe was executed flawlessly each and every time? This is the reality of ITPA today. The same drivers of quality and consistency that production organizations have sought for years are finally available to IT process designers. No knowledge of "code" or complicated interfaces is required; the recipe for success is simply to point, click, design, and deliver.

End-to-End Process Automation

In many organizations today, the effectiveness and efficiency of processes are measured on reporting dashboards. These dashboards range in complexity from simple spreadsheets to full-blown business intelligence systems and exist to provide senior managers with a view into how well processes are being executed within the organization. Some are called automated dashboards, though few actually are. Dashboards, particularly those driven by a business intelligence engine, can offer a consolidated perspective of how well an organization is executing processes and afford managers the ability to see process impacts and perform a variety of "what if" scenarios, making these tools useful for identifying gaps between processes. The challenge comes in determining which processes need to be monitored and captured in the dashboard, and then defining or developing an automated process to get the data into it. Rarely can all the information requested by senior managers be automatically generated and placed into a dashboard report; often some manual intervention is required.

The range of views available from a dashboard varies greatly by industry and line of business, but they often reflect a view at either the silo or operations-management level, enabling operations managers to take a more proactive role in managing their production environments. That is, at least, their most effective level at this point in time. Ideally, these reports would generate information that would enable the highest decision makers in an organization to react and adapt to changing conditions and demands upon their business; however, this is more often than not a distant reality. The problem is that the underlying processes are not mature enough to provide the dashboard or business intelligence tool with accurate and up-to-date metrics.

Organizations with low process maturity have a difficult time identifying the processes that need to be tracked as contributors to the organization's efficiency; they also have a hard time gathering metrics from their processes for reporting. The more mature a process, the easier it is to monitor and report on. Today, the source data produced by immature processes often lacks the rigor necessary to be of genuine use to a dashboard. Such data requires a good deal of manual 'scrubbing' to normalize the report, a task that is time consuming and introduces even more process. Thus, we end up with processes to support the reporting of processes, and before long the entire system becomes unmanageable.

The ideal state of process reporting would be for processes to be reported on automatically.

People, and further manual processes, shouldn't need to be interjected into the reporting process. The very act of scrubbing and normalizing data should indicate a problem with data quality, and that problem stems from immature processes. Many off-the-shelf and custom vendor solutions exist today to perform dashboard reporting, but, as the old programming axiom states, "garbage in, garbage out." Getting sound reporting on process efficiency requires a good deal more than a tool that aggregates process metrics; it requires mature processes that are capable of delivering sound metrics. So how does an organization reach the state of reporting necessary to react and survive in today's business environment? It begins by improving the processes that will be supplying the metrics.
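
One way to keep the garbage out is to validate metric records at the point they are produced rather than scrubbing them after the fact. A minimal sketch, assuming each process emits records with a process identifier, a timestamp, and a numeric value (the field names are our own illustration, not any particular tool's schema):

```python
def validate_metric(record):
    """Accept a metric record only if it could feed a dashboard
    without manual 'scrubbing'. Field names are illustrative."""
    required = ("process_id", "timestamp", "value")
    # Reject records with missing or empty required fields
    if any(record.get(field) in (None, "") for field in required):
        return False
    # The measured value must already be numeric, not free text like "n/a"
    return isinstance(record["value"], (int, float))

records = [
    {"process_id": "incident-mttr", "timestamp": "2024-01-05T10:00", "value": 42.5},
    {"process_id": "incident-mttr", "timestamp": "", "value": "n/a"},  # garbage in
]

clean = [r for r in records if validate_metric(r)]
print(len(clean))  # 1 -- only sound metrics reach the dashboard
```

Rejected records point back at the immature process that produced them, which is exactly the feedback loop a stage 4 or 5 process needs.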

Processes for Improving Processes

Depending upon your industry and background, you might be familiar with many methods of improving upon process efficiency. Some organizations have taken the measured approach and instituted practices such as Six Sigma. Others derive approaches based upon industry best practices and measure internal processes against those best practices. Many use a combination to refine best practices to a model more custom tailored to their own organization. Regardless of how your organization decides to approach process improvement, you should be aware of the many options out there to assist you in IT process improvement efforts. Each of these frameworks, methodologies, and practices can be leveraged at some point by any IT organization and may assist you not only in improving your own internal processes but also building business cases for process improvement that can be supported by your business partners.

Six Sigma

Six Sigma is a widely used set of process improvement practices originally developed by Motorola. Six Sigma delivers process improvement by focusing on what it refers to as defects, defined as the nonconformity of a product or service to a predefined specification. The name Six Sigma refers to a process whose output varies so little that defects occur at a rate of no more than 3.4 per one million opportunities. The concept of defects per million opportunities (DPMO) is used throughout the Six Sigma methodology, and the ultimate goal is to continually reduce defects through ongoing efforts to reduce variation in products or services.
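
The DPMO calculation itself is straightforward. A short sketch (the change management figures below are invented for illustration):

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities:
    DPMO = defects / (units * opportunities per unit) * 1,000,000"""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Illustrative IT example: 7 failed changes out of 500 changes executed,
# each change having 4 distinct opportunities to fail
print(dpmo(7, 500, 4))  # 3500.0 -- well above the Six Sigma target of 3.4
```

Expressing IT process defects this way puts Help desk tickets, failed changes, and missed backups on the same quality scale a production line would use, which is what makes Six Sigma portable into IT.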

Although Six Sigma, as a whole, is split into two key methodologies, the one of greatest interest to process improvement efforts is the DMAIC methodology. DMAIC stands for Define, Measure, Analyze, Improve, and Control and sets five basic steps for improving upon a product, service, or, in our case, process to reduce the number of defects or nonconforming output from the process:

  • Define—During the define phase, the process improvement focus is on defining improvement goals that are consistent with customer demands and enterprise strategy or direction. Different organizations will begin with different scopes of what they consider an adequate definition for this stage; regardless, the definition should be relatively short (for example, "Increase server utilization"). As the project progresses, the various aspects that might contribute to "server utilization" will be identified, measured, analyzed, and ultimately improved upon.
  • Measure—The measure phase of the DMAIC process exists to collect the metrics relevant to the current process (as it exists today) and use the data for future comparison.
  • Analyze—Throughout this phase, the focus is on verifying relationships and causality. That is to say, you must determine which relationships drive the metrics and take them into account. If your goal is to improve a business continuity process and that process depends upon files provided by a process keyed off an outside vendor, that upstream process, and the vendor relationship, need to be identified at this stage because they might impact your metrics.
  • Improve—Based upon the goal established in the define phase, the data identified in the measure stage, and the relationships and causality identified in the analysis phase, the improve phase exists to implement the actions necessary to improve or optimize the process.
  • Control—The control phase exists to ensure that any variances in the process are identified before the process is put into production. This often involves a pilot exercise to determine whether the final process delivers as expected before being depended upon for production work. For IT processes, this may include User Acceptance Testing (UAT), load testing, stress testing, or, in the case of a business continuity process, conducting an exercise scenario specifically designed to flex the process.

The Six Sigma methodology improves processes by taking a non-assuming approach to process improvement. The process is defined by its needs, measured by the expectations placed upon it, analyzed for its capabilities, improved based upon scientific findings, and tested through a control phase prior to implementation. It works because it doesn't begin with an answer; rather it begins with a requirement and works toward an answer. This singular point of focus is one of the biggest driving points behind the success of Six Sigma. One simply cannot execute a Six Sigma process with a predetermined answer and expect the improvement effort to deliver the anticipated result. You might have an idea about the outcome but often the metrics will tell a story of their own; taking the time to really look into a process with this approach will yield measurable results.

Of course, once a process is improved, it is unlikely that it will stay that way. The relationships and causalities identified in the analysis phase may change and the focus of the process effort and business strategy might also alter. These variables may result in a process that, while once efficient, no longer conforms to the needs of the organization.

The Six Sigma methodology has been executed by production organizations for years, and its focus on quality aligns directly with ITPA tools. Just as assembly plants added robots to perform manual, labor-intensive tasks to improve quality and consistency, IT organizations can add process automation tools to rapidly develop, implement, deliver, and refine processes hand in hand with the Six Sigma method.

Double Diamond Design Process

Much like the world of design, information technology (IT) management is often a creative effort and improving IT processes is, at its core, a creative exercise. As organizations bring together internal and external subject matter experts (SMEs), define a goal, and work to design the best solution, a certain amount of flexibility and creativity must emerge. Innovation is often merely the art of using existing things in new ways. Identifying ways to use existing items in those new ways, however, is the hard part.

The Double Diamond design process model is a simplified graphical way of describing the design process in four distinct phases: Discover, Define, Develop, and Deliver. The model maps the divergent and convergent stages of the design process to show the different modes of thinking that designers employ. It was developed by the Design Council, the British government-supported strategic body for design for all of the United Kingdom.

The Double Diamond is segregated into quarters with convergent and divergent areas. As Figure 2.1 illustrates, the first quarter is the Discover quarter, which marks the beginning of the project or process improvement. Translating the Double Diamond to IT process improvement isn't much of a stretch. During the discover phase, an initial idea is evaluated. Much like the undocumented idea generation & planning (IG&P) phase of a Six Sigma effort, the Discover phase includes initial idea generation, research, management of information, and perhaps the establishment of workgroups to "discover" potential new ways to approach a process management concern.

Figure 2.1: An illustration of the Double Diamond approach.

The second quarter of the Double Diamond model is the Define stage. During this quarter, the design will, aptly enough, be defined through the identification of key business objectives and the alignment of key activities for project development, project management, and ultimately project sign-off. Compared with the Six Sigma methodology, the Define quarter of the Double Diamond is akin in many ways to the measure and analyze phases of the Six Sigma DMAIC.

Developing a solution to the problems presented as part of the process improvement effort falls into the Develop quarter of the Double Diamond. During this quarter, design-led solutions are developed and tested. The activities and objectives during the Develop stage may include commissioning multi-disciplinary teams to lend broad subject matter expertise, the use of visual management techniques, the development of prototype processes, and the testing of potential solutions.

The final stage of the Double Diamond design approach is the Deliver stage, where the finalized process is taken through a period of final testing, review, approval, and launch. This stage also includes the evaluation of the success of the initiative against predefined targets and feedback from key stakeholders.

End to end, the Double Diamond approach may appear on the surface to be very much like Six Sigma, but beneath, the Double Diamond is much more focused on creativity than on hard metrics, especially in the Define and Develop phases.

Leveraging ITPA tools, the teams executing the Double Diamond approach will have on hand a means to rapidly create and deploy solutions in new and interesting ways. Experimenting with process designs in a lab environment has never been quicker, as solutions that would have traditionally required human intervention or complex coding—such as integrating with your ticketing system, your asset management system, and your monitoring tools—can all now be executed in a fully automated fashion, cutting the time needed to develop and implement new solutions. That saves money.

Another approach an enterprise can take to improving process maturity is to leverage industry best practices. For IT organizations, best practices abound; for sound IT management governance, two frameworks clearly stand out as world class—the IT Infrastructure Library (ITIL) and the Control Objectives for Information and Related Technology (COBIT).

ITIL

ITIL is a series of texts commissioned by the United Kingdom's Office of Government Commerce (OGC) that set forth an integrated framework of best practices for IT organizations. These best practices can offer a starting point to organizations focused on improving the effectiveness and efficiency of their IT operations. Unlike Six Sigma and Double Diamond, however, ITIL is a framework of best practices and is less focused on performance. Rather, it focuses on sound IT management practices and organization. Among these are Service Support and Service Delivery. The former defines best practices for Service Desk operations, incident management, problem management, configuration management, change management, and release management—all of which are key areas of focus for IT managers working to align IT delivery to business needs. Service Delivery focuses on how the support functions of IT are delivered to the business by defining areas of best practice in service level management, capacity management, IT service continuity management, availability management, and financial management of IT services.

ITIL can be used to improve existing IT processes by first aligning your organization's processes with industry standards. In much the same way that "backup software" is now referred to as "business continuity software," solutions for what was once thought of as "Help desk" software are now aligned to service desks as well as incident, problem, change, and release management. The focus has shifted from a manual-process orientation to a silo and cross-silo approach. As organizations realize that incident management, for example, is far more complex than merely opening a ticket and dispatching a vendor, the management of incidents has evolved into an entire hierarchy of needs. ITPA tools enable ITIL and ITSM by bridging the gap between disparate systems and allowing processes to be structured around sound governance rather than around whatever your current system is capable of doing on its own.

COBIT

COBIT is a framework of best practices published by the Information Systems Audit and Control Association (ISACA) and the IT Governance Institute (ITGI). Its purpose and focus is to provide IT managers (and auditors) with a set of best practices for measuring IT. This includes the categorization of IT into four domains: Plan and Organize (PO), Acquire and Implement (AI), Delivery and Support (DS), and Monitor and Evaluate (ME). Each domain is divided into multiple high-level control objectives. Each of COBIT's four domains contains specific guidance and maturity models for IT management and control that can be extremely useful at the silo and cross-silo organizational levels. What makes COBIT particularly valuable is an appendix entry that maps IT to business goals. Before we dive into a discussion of this process, let's first examine each of the four major COBIT domains and their high-level control objectives:

Plan and Organize (PO)

This domain covers the use of IT within an organization to achieve the organization's objectives, from the highest level of defining a strategic IT plan and direction down to the lowest level of managing projects. Of particular interest to process improvement efforts is COBIT PO4, which provides guidance and a maturity model for defining IT processes; it is achieved by defining an IT process framework, establishing the organizational structure necessary to deliver the IT processes, and defining roles and responsibilities. Each COBIT high-level objective clearly illustrates the control point, the reason the control point exists (to satisfy a business requirement), the control point's focus, how the control point will be achieved, and, most important to process improvement, how it will be measured.

Acquire and Implement (AI)

The high-level control objectives within the AI domain cover the identification of requirements related to the acquisition and implementation of technology. These best practices and their associated maturity models can be particularly helpful in improving processes within the change and configuration management space. They cover a broad scope of acquisition and implementation, from the identification of solutions to their installation and accreditation. Each of the areas within the AI domain offers valuable insight into industry best practices that can be used for refining any process associated with the acquisition and implementation of hardware and software, from procurement through production.

Delivery and Support (DS)

The bulk of IT operations fall within the DS domain. From the definition and management of service levels and third-party contracts through the day-to-day incident and problem management and education of users, the DS domain covers all the high-level objectives focused on the delivery of IT.

On the surface, one might look at the list of high-level objectives within the DS domain and think that many of these objectives cross over with ITIL—they would be correct. ITIL and COBIT have similarities in structure but offer different scopes of guidance. ITIL is more of a framework. It is heavy on organization and light on the actual implementation. COBIT is quite a bit more detailed in the execution of processes. There really is no choice to be made between ITIL and COBIT; both frameworks can work side-by-side to help you strengthen the processes within your organization. You might, for instance, desire to align your organization and processes to the ITIL framework and then use guidance from the COBIT framework to identify areas for improvement.

Monitor and Evaluate (ME)

The ME domain focuses on the continuous monitoring and evaluation of IT and business objectives that require control. This includes evaluation and control over IT performance and regulatory compliance. The ME domain is the most often-overlooked domain in IT management, but it is an area that can't afford to be neglected. When you consider the time and effort put into any process improvement, it is important that the investment be protected.

COBIT is a control framework, and maintaining a control framework can be especially challenging as organizations and business processes grow. ITPA tools provide a means to keep pace with business process growth by offering the tools necessary to rapidly author and deploy control sub-processes. ITPA tools don't just make IT process development and deployment faster; they also introduce new ways to automate the gathering of control metrics. For example, measuring the number of management-level escalations in traditional incident management might require skimming through tickets, reviewing Help desk logs, and digging deeply into post-problem review reports. An automated IT process, in contrast, could simply include a logging sub-process that reports and tracks each escalation as it occurs, so that escalation metrics are captured in real time. Escalations gain instant visibility to all parties involved, and a running record of escalations is maintained for future review and management activities.
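The escalation-logging sub-process described above can be sketched in a few lines. This is a minimal illustration, not a real ITPA product's API; the function names, escalation levels, and the in-memory log are all hypothetical stand-ins for whatever your tool provides.

```python
import time
from collections import Counter

# Hypothetical in-memory log; a real ITPA tool would write to its own
# data store. Each automated escalation step appends a structured record
# instead of leaving evidence scattered across tickets and Help desk logs.
ESCALATION_LOG = []

def log_escalation(ticket_id, level, reason):
    """Record a management-level escalation as it happens."""
    record = {
        "ticket": ticket_id,
        "level": level,  # e.g., 1 = team lead, 2 = manager, 3 = director
        "reason": reason,
        "timestamp": time.time(),
    }
    ESCALATION_LOG.append(record)
    return record

def escalation_metrics():
    """Summarize escalations per level in real time; no log skimming required."""
    return dict(Counter(record["level"] for record in ESCALATION_LOG))

# Example: escalations logged during an outage become metrics instantly.
log_escalation("INC-1001", 2, "SLA breach imminent")
log_escalation("INC-1002", 2, "vendor unresponsive")
log_escalation("INC-1001", 3, "major outage declared")
print(escalation_metrics())
```

Because every escalation passes through the same sub-process, the metric is a by-product of the workflow itself rather than an after-the-fact reconstruction.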

Linking Process Improvement to ITPA

Throughout this chapter, we've examined methods to improve processes from a systematic approach (Six Sigma and Double Diamond) as well as a best practices approach (ITIL and COBIT). Each of these tools can be used to improve processes within an organization, and they will certainly each bring their own specific set of tools to bear on the problems facing your IT organization. This begins the process of continuous improvement that will drive your organization to world-class performance.

Automating Returns

To many, the business of process improvement, followed by organizational change, followed again by process improvement, may seem like status quo drudgery, and it is; however, it is drudgery only in the same way that rows of bank tellers in large financial institutions existed to service customer needs in the 1950s. Today there are automated teller machines (ATMs), and if you walk into your local branch, you're likely to see only a small handful of tellers. An organization could have spent years working with all manner of SMEs and never reached the level of efficiency accomplished through this single device. What changed wasn't the process; it was the entire paradigm. When ATMs hit the market, no matter how efficient teller operations became, they simply couldn't compete with the automated solution, and organizations that failed to jump to the next curve of innovation were left behind and eventually crushed under the weight of their own processes.

ITPA is the ATM of IT management, and it's poised to be the single most substantial contribution to IT management in recent history. By implementing ITPA tools within an IT infrastructure, investment in process maturity is returned and continuously reported. Dashboards feeding senior management decisions can reach a state of near-real-time reporting as processes are measured by other automated processes and inefficiencies in process can be reported and escalated in a timely way.

Consider an incident management process and how it might function today with full-time employees. In a non-automated environment, a problem such as a down server may go unnoticed and unreported until a human takes note and calls for help. Vendor time to arrive on site, identify the root cause, and take corrective action is largely an unknown because technicians are dispatched "in the blind," without any up-front diagnostics to indicate where to look for the problem. Sometimes an apparent server problem is actually a network problem, and server technicians respond only to find that they need to dispatch network technicians, further delaying resolution and further impacting production.

With an ITPA tool in play, however, the server down would likely have been noticed by automated monitoring in near real time. The tool could open the ticket, perform basic diagnostic processes (such as checking network connectivity), and then based upon those results dispatch the appropriate teams. An ITPA tool could even update senior leaders on key issues as they occur in near real time, such as vendors failing to respond to a page or massive outages impacting the business.

This is a simplified example, but it illustrates the ability of an ITPA tool to interface with both monitoring and ticketing software and execute predefined IT processes, saving the time and cost associated with routine IT process recovery. Similar parallels can be drawn anywhere routine IT processes are executed.
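The alert-to-dispatch flow above can be expressed as a short runbook sketch. Everything here is illustrative: `handle_server_down_alert`, the team names, and the injected `check_connectivity` callable (standing in for a real ping or port probe) are hypothetical, not any particular ITPA product's interface.

```python
import datetime

def handle_server_down_alert(host, check_connectivity):
    """Sketch of an automated runbook triggered by a monitoring alert.

    `check_connectivity` is injected so the diagnostic step (in reality a
    ping, SNMP poll, or port probe) can be simulated without a network.
    """
    # Step 1: the monitoring alert opens a ticket automatically.
    ticket = {
        "host": host,
        "opened": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "steps": ["alert received", "ticket opened"],
    }
    # Step 2: run a basic diagnostic before anyone is dispatched.
    if check_connectivity(host):
        # Network path is healthy, so the fault is likely on the host.
        ticket["dispatch"] = "server team"
        ticket["steps"].append("connectivity ok; dispatched server team")
    else:
        # Host unreachable: rule out the network before sending server techs.
        ticket["dispatch"] = "network team"
        ticket["steps"].append("connectivity failed; dispatched network team")
    return ticket

# Usage: simulate a down server whose network path is healthy.
ticket = handle_server_down_alert("db01", check_connectivity=lambda h: True)
print(ticket["dispatch"])  # server team
```

The design point is the branch itself: a single up-front diagnostic decides which team is dispatched, which is exactly the "in the blind" dispatch problem the manual process suffers from.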


Just like the banking centers of yesteryear, IT management is a process-driven environment that has become entirely too dependent upon manual interaction. We often reward and recognize our associates in IT management for the hard work and heroics that keep the business functioning, but how often do we neglect to investigate why heroics were required to get the job done in the first place? IT managers shouldn't be in a position to have the word "hero" attached to their title; rather, they should be recognized for the tremendous value they bring to the organization. As businesses become more IT enabled, IT managers play an ever-increasing role in the financial viability of operations. Leveraging their knowledge to understand processes before they are automated is vitally important to any organization seeking growth in a highly process-driven economy.