Data Centre Cooling – It’s Not Rocket Science

There’s no rocket science in cooling data centres, but very efficient ones are still a rarity. So when a record-breaking centre was set up in Surrey, we were keen to speak to the man behind it.

The Petroleum Geo-Services (PGS) data centre in Weybridge, Surrey, appears to be the first in Europe to have an annual efficiency score (PUE) of 1.2.

PUE is the amount of energy put into the system divided by the amount that reaches the servers – and most of today’s data centres have a score greater than two, which means less than half the power input reaches the IT kit. By contrast, 1.2 means only one fifth of a watt goes on overheads for every watt at the servers – and Google drew gasps of surprise when it announced last year that some of its centres had achieved that figure.
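As a quick illustration of that arithmetic, here is a minimal sketch; the figures are hypothetical, not measurements from PGS or Google.

```python
# Minimal sketch of the PUE arithmetic described above.
# All numbers below are illustrative, not real facility measurements.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total power in, divided by power reaching the servers."""
    return total_facility_kw / it_load_kw

print(pue(total_facility_kw=2100, it_load_kw=1000))  # 2.1: less than half the input reaches the IT kit
print(pue(total_facility_kw=1200, it_load_kw=1000))  # 1.2: 0.2 W of overhead for every watt at the servers
```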

That’s a feather in the cap for Keysource, the company that built the PGS centre, but it’s important not to oversimplify the discussion, said Mike West, Keysource’s managing director. A centre’s PUE depends a lot on the outside temperature, and should be quoted as an annual figure, based on a full year of temperature fluctuations.

“The important factor is the annualised PUE in kWh,” said West. “Air conditioning is where the biggest gains can be made. Losses from UPS inefficiencies, and standby power, are all linear and predictable, but cooling is the area of biggest opportunity.” Because cooling depends on outside temperatures and other factors, it is the area where extra work can deliver the biggest gains. “Nuances around the specialist mechanical and electrical plant can have a dramatic effect on the outcome of the facility from an efficiency and performance point of view,” he said.
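To make West’s point concrete, here is a hedged sketch of how an annualised PUE might be worked out from a year of energy readings rather than a single power snapshot; the monthly kWh figures are invented for illustration.

```python
# Hedged sketch: annualised PUE from a year of energy (kWh) totals, so that
# seasonal swings in the cooling load are captured. Figures are invented.
monthly_facility_kwh = [310_000, 280_000, 290_000, 300_000, 320_000, 350_000,
                        370_000, 365_000, 330_000, 305_000, 295_000, 315_000]
monthly_it_kwh = [260_000] * 12  # IT load assumed roughly constant month to month

annualised_pue = sum(monthly_facility_kwh) / sum(monthly_it_kwh)
print(f"Annualised PUE: {annualised_pue:.2f}")  # about 1.23 with these invented numbers
```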

No secret sauce

The big surprise is that, despite having a fancy name – Ecofris – Keysource’s data centre design has no “secret sauce”. Rival data centre builder Imtech ascribed the success of its Common Rail Cooling design to multi-storey architecture, but West says Keysource’s Ecofris involves no major break with earlier technologies – it just pushes them further than they have normally been pushed before.

“The biggest issue is the high density hardware,” he said. Blade servers can pack more processing power into a smaller space, but that raises the amount of heat that needs to be dissipated. The PGS data centre has around 16kW per rack position.

The only way to get a low PUE is to cut down the amount of active cooling that needs to be done, and use “free cooling” instead of turning on mechanical chillers that burn power and push the PUE up. PGS only needs to turn on chillers when the ambient temperature is above 24°C. Most free cooling systems so far have needed an ambient temperature of five degrees or below – so they can only be used for maybe 1,000 hours a year.
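To show why that threshold matters, here is a rough sketch that counts free-cooling hours under each threshold; the hourly temperatures are randomly generated stand-ins, not real Weybridge weather data.

```python
# Rough sketch of the free-cooling argument: raising the threshold from 5C
# to 24C turns most of the year into hours when the chillers can stay off.
# The temperature series below is a crude random stand-in, not real data.
import random

random.seed(0)
hourly_temps = [random.gauss(11, 6) for _ in range(8760)]  # one year of hourly ambient temps, deg C

def free_cooling_hours(temps, threshold_c):
    """Hours in the year when the ambient temperature is below the free-cooling threshold."""
    return sum(1 for t in temps if t < threshold_c)

print(free_cooling_hours(hourly_temps, 5))    # roughly 1,400 hours: the old-style limit
print(free_cooling_hours(hourly_temps, 24))   # nearly the whole year at the PGS threshold
```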


Peter Judge

Peter Judge has been involved with tech B2B publishing in the UK for many years, working at Ziff-Davis, ZDNet, IDG and Reed. His main interests are networking, security, mobility and cloud.
