When you think of data availability, recovering lost data is often the first thing that comes to mind. Data availability is more than ensuring you can recover lost data, however; it also means ensuring that data is where it is needed, when it is needed, and in the form it is needed. It further includes the need to migrate data from one system or platform to another. Data migration comes in many forms, from moving applications from one OS to another to replicating data from transactional systems to analytic applications. Fortunately, the same tools and sound practices used to ensure data can be restored after a loss are also useful for data migration.
This discussion of data migration addresses the:
As with the discussion of so many IT management issues, this one starts with the dynamic nature of IT infrastructure.
Data migration is sometimes required for one‐time moves and in other cases for ongoing operations. These use cases are driven by:
Organizations have a number of OSs to choose from. Microsoft Windows is the de facto standard for business desktops, but even when using a single vendor's platform, there can be data migration requirements. If the average lifetime of a desktop or laptop is three years, IT departments can expect to replace one‐third of their end user computers every year. Servers similarly need to be upgraded and replaced, creating additional data migration work.
With the increasing use of virtualization, which can improve the efficiency of server utilization, there is also a need to migrate data from physical servers to virtual machines.
Ideally, you want to minimize the time your applications are down during the migration. Backup solutions help here by capturing complete and accurate copies of your data from the current server, which can then be restored to the new virtual server. Backup and restore operations can similarly be used when migrating virtual machines from one hypervisor to another.
Extracting data from production systems for use in business intelligence and analytics applications can put unwanted load on the source systems. Backups can instead be used to replicate the data to a staging area, where it is extracted and transformed for analysis.
Given the way business and technical requirements change, it is not surprising that organizations need to migrate data. However, there are unique challenges to data migration practices.
Data migration is often driven by ad hoc requirements. Unlike backups, which are performed repeatedly and on a regular schedule, data migrations are often one‐time operations. (Migrating data for data analysis is an exception.) Several physical servers may be consolidated into one physical server running multiple virtual servers, or systems administrators might decide to migrate several servers from the Hyper‐V hypervisor to VMware. These kinds of operations are often dealt with as they arise, without formalized methods for managing them.
Because data migrations are so often done once and not repeated, there is less motivation to create well‐defined policies and procedures. This shortcoming can lead to many inefficiencies in the data migration process:
In addition to these kinds of operational inefficiencies, systems administrators face the potential problem of not replicating data properly. In an effort to get the migration done quickly, systems administrators might be tempted to write scripts using simple copy operations.
Simple copy operations work well for static content, such as documents or directories that are not being updated. Problems arise when files are changed during a copy operation. For example, if a directory is being copied and a file is deleted partway through, the source and target versions of the directory will be inconsistent at the end of the operation. Changes can also occur within a file during the copy. Unless all the files are put into read‐only mode, a file could be updated by another program or user mid‐copy, again leaving the two versions of the directory inconsistent.
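This race is easy to reproduce. The following Python sketch (file names are hypothetical, and the "concurrent" delete is simulated inline so the result is deterministic) copies a directory one file at a time while a file is deleted at the source mid‐copy; the target ends up holding a file the source no longer has:

```python
import shutil
import tempfile
from pathlib import Path

# Simulate a naive directory copy racing with a concurrent delete.
src = Path(tempfile.mkdtemp(prefix="src_"))
dst = Path(tempfile.mkdtemp(prefix="dst_"))

(src / "a.txt").write_text("alpha")
(src / "b.txt").write_text("beta")

for i, f in enumerate(sorted(src.iterdir())):
    shutil.copy2(f, dst / f.name)
    if i == 0:
        # "Another process" deletes a.txt right after it was copied.
        (src / "a.txt").unlink()

# The copy now disagrees with the source: a.txt exists only in dst.
src_names = {p.name for p in src.iterdir()}
dst_names = {p.name for p in dst.iterdir()}
print(sorted(dst_names - src_names))  # -> ['a.txt']
```

A real concurrent writer would make the divergence nondeterministic, which is exactly why naive copy scripts cannot guarantee a consistent result.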
A better option than simple copy operations is to take snapshots of the data to be copied. Doing so minimizes the time files are in read‐only mode and ensures that, at the end of the data migration process, the source and target copies of the data are in consistent states.
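Whatever mechanism produces the snapshot, the consistency claim should be verified rather than assumed. A minimal sketch of that verification step, assuming the snapshot is accessible at a stable path (the directory layout and file contents below are illustrative): build a SHA-256 manifest of the source and the migrated target, and compare them.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def manifest(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

# Illustrative snapshot contents.
src = Path(tempfile.mkdtemp(prefix="snap_"))
dst = Path(tempfile.mkdtemp(prefix="tgt_")) / "copy"

(src / "db").mkdir()
(src / "db" / "data.bin").write_bytes(b"\x00" * 1024)
(src / "config.ini").write_text("[app]\nmode=prod\n")

shutil.copytree(src, dst)  # stand-in for the actual migration step

# Identical manifests mean source and target are in consistent states.
ok = manifest(src) == manifest(dst)
print("consistent" if ok else "MISMATCH")  # -> consistent
```

Because the manifest is computed from the snapshot, the live files need to be quiesced only long enough to take the snapshot, not for the entire copy.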
A few practices can help to improve the efficiency and quality of data migration operations:
Data protection solutions used for backup have the functions needed to improve data migration practices. Most organizations will have backup solutions in place, so there is no additional cost to acquire or maintain a data migration–specific tool. In addition, backup solutions will have features that allow you to verify data migration operations, schedule operations, and report on the restoration/migration process.
Using the right tools for data migration will improve the operation and efficiency of the process, but further improvements can be realized by implementing policies and procedures for data migration. The purpose is to capture what is learned by performing migrations and to standardize the process. Systems administrators will not have to design and implement new procedures for each data migration, and re‐using tested and debugged procedures helps reduce the chance of errors.
Organizations need to migrate data for a number of reasons, from improving server efficiency with virtualization to enabling new business services such as business analytics. Data migration can be treated as an ad hoc process that does not lend itself to standardization. However, a better approach is to standardize migration procedures using data protection systems that have the features needed to meet a wide range of data migration requirements.