Physical Infrastructure Still A Power Play In Virtualised Worlds

Virtualisation and cloud computing continue to grow, but Panduit's David Palmer-Stevens stresses the importance of the underpinning physical infrastructure

While virtualisation and cloud are seen as forging ahead, with innovation emerging across the market, physical data centre design is perceived as lagging behind. That perception, however, does not reflect the effort and investment required for a successful data centre development project.

Physical data centre infrastructure must deliver the required level of computing power while keeping power consumption and cooling demands as low as possible. It’s a balancing act, but the result still needs to be reliable – a data centre that only meets its requirements at certain peaks, rather than consistently, is not a successful design.

A measure of improvement

The Power Usage Effectiveness (PUE) ratio, developed by the Green Grid consortium, is one of the prime ways that data centre managers can judge the success of their own installations.

While PUE offers a measure of efficiency, it is not intended as a verdict that your data centre is bad and other data centres are good – which appears to be how it is perceived in the market. Instead, it should be treated as a measure of progress towards the efficient use of resources.
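As a simple illustration, PUE is the ratio of the energy drawn by the whole facility to the energy that actually reaches the IT equipment, with 1.0 as the theoretical ideal. The short sketch below shows the calculation using hypothetical metered values; the figures and function name are illustrative only, not taken from any real installation.

```python
# Minimal sketch of the PUE calculation, using hypothetical metered values.
# PUE = total facility energy / IT equipment energy; 1.0 is the ideal.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return Power Usage Effectiveness for a given metering period."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly readings: 1,400 MWh drawn by the whole facility,
# of which 1,000 MWh reached the IT equipment.
print(f"PUE: {pue(1_400_000, 1_000_000):.2f}")  # PUE: 1.40
```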

To start with, virtualisation makes workloads more mobile within the data centre. Whereas previously a data centre would cater for mission-critical workloads on large physical servers with dedicated additional power and cooling resources, these days virtual machines can move all around the data centre independently.

From a design standpoint, this makes traditional approaches less efficient. The fixed data centre ‘hot spot’ that could be corrected by adding extra cooling does not exist in the virtual world. Hot spots become mobile, shifting with load and with whichever resource a virtual machine is moved to.

Instead, there are a couple of routes that can be taken to keep power and cooling efficiencies in place while ensuring that the benefits of virtualisation are delivered. The first is the move to hot aisle/cold aisle separation throughout the data centre. This involves containing airflow of a particular kind – either hot or cold – within a defined location so that any cooling applied provides as much benefit as possible.

The second approach uses containment to provide passive thermal management. This concerns the design of the racks, cabling and venting in which the servers, switches and storage are housed. It may seem like a small detail, but overlooking this physical design stage can add massively to the ongoing overhead of the data centre.

Passive thermal management designs airflow so that heat is taken away from the devices more efficiently. It uses the physics of the hot air itself – its natural tendency to rise – to channel it away from the IT assets faster, rather than pumping more cold air at the devices, which requires additional power.

Up and down: planning for the future

Employing this approach to energy-efficient deployment, including passive thermal management techniques, has delivered average results of a 15 percent reduction in power consumption and a 38 percent reduction in cooling costs. Over the life of a data centre, this can be a massive saving, based on getting the basics right.
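As a rough illustration of how those percentages can compound over a facility’s life, the sketch below applies the quoted 15 percent power and 38 percent cooling reductions to assumed baseline costs; the baseline figures and ten-year lifetime are purely hypothetical, not measurements from any specific facility.

```python
# Rough illustration only: applies the quoted 15% power and 38% cooling
# reductions to hypothetical baseline costs. All figures are assumptions,
# not measurements from any specific facility.

baseline_power_cost = 1_000_000   # assumed annual IT power spend (GBP)
baseline_cooling_cost = 400_000   # assumed annual cooling spend (GBP)
lifetime_years = 10               # assumed operational life of the facility

power_saving = baseline_power_cost * 0.15
cooling_saving = baseline_cooling_cost * 0.38
annual_saving = power_saving + cooling_saving

print(f"Annual saving:   £{annual_saving:,.0f}")                   # £302,000
print(f"Lifetime saving: £{annual_saving * lifetime_years:,.0f}")  # £3,020,000
```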

This understanding of physical infrastructure and passive air cooling is essential when building new data centres, alongside an awareness of the demands that virtualisation and cloud place on the facility – and those demands differ between the two approaches.

While it is much easier to deploy new servers as virtual machines, virtualisation deployments tend to remain relatively static in the number of VMs in place. The biggest consideration around these implementations is therefore the mobility of virtual machines.

Similarly, areas of heat generation may move around the data centre as bigger virtual machines are relocated to give them the right amount of resources. On the physical IT side, this means designing the data centre in a uniform way, with power, cooling and heat management considered across the whole design.