Hot aisle containment is an efficient way to cool your data centre. Peter Judge is intrigued by a Google patent for it
If you feel chilly walking round a data centre, it is probably wasting energy. The data centre has to provide cold air to keep the servers cool, but if it is also chilling the space where people walk to an uncomfortably low temperature, it is wasting energy. Now, it seems Google has a patent on a “hot aisle” containment system which should cut that waste.
Servers use a lot of electricity and convert it all into heat, which must be removed or they overheat. Traditionally, this was done by blowing cold air up through a raised floor in the data centre, with chiller units positioned round the edges of the room drawing in the hot air to cool it – but this is now seen as very wasteful, because hot and cold air mix in the room, and a lot of the cooling energy is wasted on cooling empty space.
Keep it in a hot aisle
Data centre designers have concentrated on preventing the hot and cold air mixing, and started out with “cold aisle” containment systems, where rows of servers are installed front to front, cold air is blown up into the contained aisle between the fronts of those racks, and the air is then drawn through the server racks by negative pressure.
A lot of data centres have done very good work, applying plastic sheeting to make curtains which separate parts of the data centre, and blocking up holes in floor tiles so the cold air only goes up between the servers, thus turning old-style data centres into cold-aisle containment systems.
The trouble is that the cooling units are a long way from the servers, so a lot of the cooling effect is lost on the way, as the cold air travels through the space under the raised floor, past plenty of cables and obstructions.
Hot aisle systems have become more fashionable recently. The servers are once again in rows, arranged back to back, but now the air flows from the front of each server to the back, where it emerges into a narrow hot space between the server rows: the “hot aisle”.
These systems are supposed to be more efficient because they don’t require really cold air to be provided – it’s enough to take the hot air away from the back of the servers, and cool it down to normal room temperature, which is quite cool enough to go back into the front of the servers. There’s a lot less waste cooling.
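That saving can be seen in a back-of-envelope calculation: the bigger the temperature rise the servers are allowed across a contained hot aisle, the less air has to be moved to carry the same heat away. The sketch below uses assumed figures (a hypothetical 10 kW rack and standard air properties, none of which come from the article) to show the effect.

```python
# Rough sketch: airflow needed to remove server heat, for assumed figures.
CP_AIR = 1005.0    # J/(kg*K), approximate specific heat of air
RHO_AIR = 1.2      # kg/m^3, approximate density of air at room temperature

def airflow_m3_per_s(heat_watts, delta_t_kelvin):
    """Volumetric airflow needed to carry away heat_watts of server heat,
    given the temperature rise of the air across the servers."""
    mass_flow = heat_watts / (CP_AIR * delta_t_kelvin)  # kg/s
    return mass_flow / RHO_AIR                          # m^3/s

# A hypothetical 10 kW rack: doubling the allowed temperature rise
# halves the air that must be moved (and cooled).
print(airflow_m3_per_s(10_000, 10))  # ~0.83 m^3/s with a 10 K rise
print(airflow_m3_per_s(10_000, 20))  # ~0.41 m^3/s with a 20 K rise
```

This is only the sensible-heat relationship, not a model of any particular containment product, but it illustrates why letting the hot aisle run hot reduces waste cooling.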
Google has yet to fully explain its patent 8,320,125, which has just been published, but it appears to be a way to increase that efficiency still further, by including heat exchanger units within the hot aisle, so the air is cooled as soon as it leaves the servers, and returned (through the top of the unit) to circulate in the room.
It’s a short air circuit, and I guess that should make it more efficient. The heat exchangers in the system are connected to a cooling system for the whole data centre, outside the room. This could be a conventional chiller unit but, for most of the world, it should need nothing more than an evaporative cooling unit.
There are still big questions around this patent. It’s not clear whether this is a system widely used inside Google’s data centres, though I guess one could pore over Google’s recent photo story about its data centres to try and find out.
It’s also not clear why Google is patenting it through its Exaflop company. Does the search giant intend to capitalise on this, and make everyone who uses Google’s particular hot-aisle system pay for the privilege? Or is it getting the idea out there to make sure it is widely used?
Time will tell, but it’s good to see a greater circulation of data centre knowledge, and a smaller circulation of hot air.