
Migrating Your Data Centre? Here are Six Key Relocation Considerations

By Francis Miers  /  June 28, 2017


Companies need to migrate applications from one Data Centre to another, or from a Data Centre into the Cloud, for various reasons: regulatory requirements, mergers and acquisitions, limited space in the old Data Centre, or simply cost savings. Data Centre migrations are quite common, but they are nonetheless complex, involving the relocation of application software, infrastructure software, sometimes hardware, and all their interdependencies. Throughout the process, two imperatives must be obeyed: ensure that no data is lost, and avoid any unplanned downtime.

Moving an application to new infrastructure, or moving the infrastructure itself, involves taking account of the dependencies between the components of the application, dependencies between the application and other applications, and dependencies between the application and elements of the infrastructure. For example, an application may share its database server with one or more others; applications may communicate with each other over particular network connections which cannot easily be replicated over a longer distance to a new location; and security or regulatory concerns may prohibit some applications from being moved to the new location. Also, information on existing IT assets is often incomplete, which makes it difficult to be sure of the effect of decommissioning certain elements of hardware or infrastructure.

The benefits of a successful move can be substantial, however. They may include improved performance and more efficient administration and operation of the migrated systems.

The following six rules of thumb can help lead to a successful Data Centre migration.

1. Thoroughly Assess the Current Setup

Most organisations’ knowledge of their current IT assets is imperfect, and the older the assets or the Data Centre in which they are hosted, the less complete that knowledge is likely to be. To maximise the chances of a smooth migration, it is important to fill these gaps as far as possible by reviewing everything in your current Data Centre and discovering as much as you can about the unknowns.

For each application, there are likely to be multiple relationships and dependencies with other applications and infrastructure elements. Some may be decades old and involve half-forgotten legacy technologies. Often internal reports, process documents or training manuals will not have been kept up-to-date – and staff with knowledge of the older systems will have left the organisation. This is the nature of the gaps in knowledge which must be filled.

Network tracing tools can help the process of (re)discovery by detecting which components are communicating with which others, and building a picture of which applications and systems relate to each other and how. It is best to put these tools in place well in advance of your planned migration. Some systems may be in regular contact, but others may only communicate every few months or even once a year. Ideally, you want to build the fullest possible picture of how your Data Centre works over the longest possible period before undertaking a migration.
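As a simple illustration of the (re)discovery step, the connection records produced by a network tracing tool can be aggregated into a dependency map. This is only a sketch; the host names and flow data are hypothetical, and real tracing tools produce far richer output:

```python
from collections import defaultdict

def build_dependency_map(observed_flows):
    """Aggregate (source_host, dest_host) connection observations
    into a map of each host's distinct dependencies."""
    deps = defaultdict(set)
    for src, dst in observed_flows:
        if src != dst:
            deps[src].add(dst)
    return {host: sorted(targets) for host, targets in deps.items()}

# Hypothetical flows captured over several months of observation.
flows = [
    ("crm-app", "db-server-1"),
    ("billing-app", "db-server-1"),   # a shared database dependency
    ("billing-app", "ledger-app"),
    ("crm-app", "db-server-1"),       # repeat observations are de-duplicated
]

print(build_dependency_map(flows))
# {'crm-app': ['db-server-1'], 'billing-app': ['db-server-1', 'ledger-app']}
```

The longer the observation window, the more of the rare, periodic connections (monthly or annual jobs) such a map will capture.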

2. Choose the Right Migration Method for Each Application

Several different methods of migration usually exist for each application. The options normally include physically moving the hardware in a ‘lift and shift’ operation, copying virtual machines to the new Data Centre, reinstalling the application and migrating the data, and restoring machine images. The best choice depends on which technology is currently used by the application. Applications that use the latest technology typically run on virtual machines which can easily be copied across to the new Data Centre. ‘Lift and shift’ is best-suited for applications which use old technology like old minicomputers, and which cannot easily be emulated in modern hardware.

Once the migration method has been determined, you need to decide how much rigour and testing should be applied to the process, for example whether to perform a trial migration. If an application is particularly important (“mission-critical”), then a trial migration followed by a rigorous testing and approval process is likely to be worthwhile. If not, the time and money required for the trial migration and rigorous testing may be better spent elsewhere. For example, if the application tracks team members’ birthdays, the migration process can be less stringent than if it manages money, such as a banking application, or, still more, controls a process in which lives are at stake, such as air traffic control.

In summary, the migration method is governed primarily by the technology the application uses, while the rigour to be applied in the migration process is governed by the business importance of the system.
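The two decisions above can be sketched as a pair of simple lookups. The technology and criticality categories here are illustrative assumptions, not an exhaustive taxonomy:

```python
def choose_migration_method(technology):
    """Pick a migration method based on the application's technology
    (categories are illustrative, not exhaustive)."""
    if technology == "virtual_machine":
        return "copy VM images to the new Data Centre"
    if technology == "legacy_hardware":   # e.g. an old minicomputer
        return "lift and shift the physical hardware"
    return "reinstall the application and migrate its data"

def choose_rigour(criticality):
    """Decide how much testing to apply, driven by business importance."""
    if criticality in ("mission_critical", "safety_critical"):
        return "trial migration plus full test and approval cycle"
    return "standard migration with basic post-move checks"

print(choose_migration_method("virtual_machine"))
print(choose_rigour("mission_critical"))
```

In practice each application would be assessed individually, but separating the two questions (method versus rigour) keeps the planning tractable.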

3. Perform Trial Runs for the Important Applications

As mentioned above, the more important applications are candidates for a trial migration. Relocating priority applications generally involves migrating sensitive information while keeping downtime to a minimum. To avoid loss of data, the migration process should include backups at each major stage and a detailed roll-back plan that provides a clear route back to the pre-migration status in case of serious failures in the migration process.

Performing a trial migration will test the migration method and flush out any problems before they can occur in the live migration. It allows for detailed testing of the migrated application without affecting the live application, which is still running in the old environment. Only after the testing is successfully completed and any problems have been solved, will the migration of the live application be attempted.
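The backup-and-rollback discipline described above can be sketched as follows. All stage names and helper callables are hypothetical placeholders, not part of any real migration toolkit:

```python
def migrate_with_rollback(stages, run_stage, backup, restore):
    """Run migration stages in order, taking a backup before each one.
    If a stage fails, restore the most recent backup and stop."""
    completed = []
    for stage in stages:
        snapshot = backup(stage)      # backup at each major stage
        try:
            run_stage(stage)
            completed.append(stage)
        except Exception:
            restore(snapshot)         # clear route back to pre-migration status
            return {"status": "rolled_back", "failed_stage": stage,
                    "completed": completed}
    return {"status": "migrated", "completed": completed}

# Hypothetical usage with placeholder stages and no-op helpers.
result = migrate_with_rollback(
    stages=["copy_data", "switch_dns"],
    run_stage=lambda stage: None,
    backup=lambda stage: f"backup-before-{stage}",
    restore=lambda snapshot: None,
)
print(result["status"])  # migrated
```

A trial run exercises exactly this path against a copy of the application, so any rollback is rehearsed before the live migration is attempted.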

4. Pay Attention to Your Legacy Systems

Many old IT systems still provide companies with good service decades after they were installed. However, transferring a legacy system to a new Data Centre – or even the cloud – may not be straightforward.

Let’s say you have an application that has run on a VAX minicomputer since the 1990s. If the application is being transferred to another traditional Data Centre, the ‘lift and shift’ migration method may be the best solution. If the application is moving to the cloud, it would need to be ported to a VAX emulator before migration. Should an emulator not be available, the application may need to be rewritten, replaced or removed completely.

5. Allocate Predetermined Space for Applications in the New Network

Each application needs to have a predetermined space in the new network. In other words, the new infrastructure must be sized and put in place (or be able to be put in place predictably) taking account of the requirements of all the applications to be migrated, before migration begins in earnest.

The process involves quite a lot of work. You must consider all aspects of infrastructure, including local network design, external connectivity, servers, virtualization, monitoring, operating systems and databases amongst others. You’ll also need to make a plan for older applications which aren’t compatible with the new Data Centre’s security and other characteristics.
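A hedged sketch of the sizing exercise: sum each application's resource requirements and add headroom before committing to the new infrastructure. The resource categories and 25% headroom figure are assumptions for illustration only:

```python
def size_new_infrastructure(app_requirements, headroom=0.25):
    """Sum per-application resource needs and add headroom so every
    application has predetermined space before migration begins."""
    totals = {"cpu_cores": 0, "ram_gb": 0, "storage_tb": 0.0}
    for app in app_requirements:
        for key in totals:
            totals[key] += app[key]
    return {key: round(value * (1 + headroom), 2) for key, value in totals.items()}

# Hypothetical application inventory.
apps = [
    {"name": "crm",     "cpu_cores": 8,  "ram_gb": 32, "storage_tb": 1.0},
    {"name": "billing", "cpu_cores": 16, "ram_gb": 64, "storage_tb": 2.5},
]
print(size_new_infrastructure(apps))
# {'cpu_cores': 30.0, 'ram_gb': 120.0, 'storage_tb': 4.38}
```

A real sizing exercise would also cover network bandwidth, licensing, and growth projections, but the principle is the same: the totals must be known before migration begins in earnest.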

In some cases, it may be necessary to rewrite aspects of some applications to fit the new Data Centre, or to relax your network security measures on a case-by-case basis. For some applications, migration may prove too difficult to be worthwhile; the best solution may be to retire them and replace them with more modern applications.

6. Anticipate Latency

Migrating to a new Data Centre or to the cloud may well introduce a significant degree of network latency, and this may affect application performance. Older systems are more vulnerable to latency problems than modern ones, which tend to be web-based.

The effect of latency on performance can be modelled, and this is, in most cases, the most cost-effective way to address the problem. If the modelling shows that latency will be a problem for a particular application, measures can be taken in advance to address the problem.
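A very rough first-order model, assuming each client-server round trip in a transaction pays the full added latency (the figures below are hypothetical):

```python
def added_response_time_ms(round_trips, added_latency_ms):
    """Rough model: every round trip in a transaction
    pays the full added network latency."""
    return round_trips * added_latency_ms

# Hypothetical figures: a chatty legacy app makes 200 round trips per
# screen load; the new Data Centre adds 15 ms of round-trip latency.
print(added_response_time_ms(200, 15))  # 3000 -> 3 seconds extra per screen
# A modern web app making 5 round trips would add only 75 ms.
```

This is why older, chatty protocols suffer disproportionately: the added latency is multiplied by the number of round trips, so the same network change that is invisible to a web application can cripple a legacy one.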

Network latency may also have a bearing on application performance in the transitional phase, as some servers are migrated while others remain, for the time being, in their original location. Systems that used to communicate via a local area network may for a period be many miles apart.

Temporary latency can be addressed by scheduling the migration of interdependent applications as closely together as possible. Where latency is predicted to affect the performance of an application in its planned permanent end state, other solutions are required. They include using an application delivery system such as Citrix XenApp to deliver the application to users; reconfiguring the application; rewriting parts of the application; or abandoning or replacing the application.
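Scheduling interdependent applications together amounts to grouping them by their dependency links, for example via connected components over the dependency map. A minimal sketch with hypothetical application names:

```python
def migration_groups(dependencies):
    """Group applications so interdependent ones migrate in the same wave,
    using connected components over the (undirected) dependency links."""
    # Build an undirected adjacency list.
    adjacency = {}
    for a, b in dependencies:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    seen, groups = set(), []
    for start in adjacency:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:                      # depth-first traversal
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(adjacency[node] - group)
        seen |= group
        groups.append(sorted(group))
    return groups

# Hypothetical dependency links discovered during the assessment phase.
links = [("crm", "db1"), ("billing", "db1"), ("hr", "payroll")]
print(migration_groups(links))
# [['billing', 'crm', 'db1'], ['hr', 'payroll']]
```

Each resulting group is a candidate migration wave: moving its members in the same window keeps chatty neighbours on the same side of the wide-area link for as long as possible.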

About the author

Francis Miers, Director, Automation Consultants. After establishing a career in IT and the City, Francis Miers joined Automation Consultants in 2002 as a Director. He began his career in a French software and services company and has been engaged in corporate finance in the technology sector. He has a BSc in Physics from Imperial College, London and an MBA from Erasmus Business School in Rotterdam. Automation Consultants is a leader in automating the application life cycle, providing automation services across the entire life cycle as the key to quicker and more cost-effective IT project delivery. Follow Francis and Automation Consultants on Twitter, LinkedIn and Facebook.
