Greening The Data Centre – What You Need To Know

From HVAC to rack density to hot/cold aisles, eWEEK looks at the computing models and energy-saving practices to focus on for the biggest rewards

Rack density is a central consideration in modern data centre design. Server consolidation and virtualisation are leading us toward denser, and fewer, racks. Blades and 1U to 3U servers are the norm. The denser the data centre, the more efficient it can be, especially in terms of construction cost per square foot: with the average data centre costing $200 to $400 per square foot to construct, cutting the size of your data centre by 75 percent could save significant construction costs, potentially running into the millions of dollars.
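
As a rough back-of-the-envelope check on that claim, the sketch below multiplies the $200-to-$400-per-square-foot range by the floor space eliminated; the 20,000-square-foot starting footprint is a hypothetical figure for illustration.

```python
# Back-of-the-envelope construction savings from shrinking the data centre.
# The 20,000 sq ft starting footprint is a hypothetical figure for illustration;
# the $200-$400/sq ft construction cost range comes from the article.
original_sq_ft = 20_000          # assumed starting footprint
reduction = 0.75                 # consolidate/virtualise down to a quarter of the space
cost_per_sq_ft = (200, 400)      # construction cost range, USD per square foot

saved_sq_ft = original_sq_ft * reduction
low, high = (saved_sq_ft * c for c in cost_per_sq_ft)
print(f"Avoided construction cost: ${low:,.0f} to ${high:,.0f}")
# -> Avoided construction cost: $3,000,000 to $6,000,000
```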

However, denser racks draw more power and generate more heat.

In the past, a rack might consume 5 kW, whereas today’s denser designs consume 20 kW or more. Conventional HVAC solutions could cool a 5-kW rack, but a 20-kW (or even 30- or 40-kW) rack requires a high-density cooling solution.
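
To see why a 20-kW rack overwhelms conventional room cooling, it helps to express rack power draw as the heat load the cooling plant must remove. The sketch below uses the standard conversions of roughly 3,412 BTU/hr per kW and 12,000 BTU/hr per ton of refrigeration; the rack wattages are the figures above.

```python
# Nearly every watt a rack draws ends up as heat the cooling plant must remove.
# 1 kW of IT load ~= 3,412 BTU/hr; one ton of refrigeration = 12,000 BTU/hr.
BTU_PER_KW = 3412
BTU_PER_TON = 12_000

for rack_kw in (5, 20, 30, 40):
    btu_per_hr = rack_kw * BTU_PER_KW
    tons = btu_per_hr / BTU_PER_TON
    print(f"{rack_kw:>2} kW rack -> {btu_per_hr:,} BTU/hr (~{tons:.1f} tons of cooling)")
```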

Look to implement rack-level cooling technologies using either water or forced air. The IBM/Syracuse project converts exhaust heat into chilled water that is then circulated through cooling doors on each rack. A high-density cooling solution such as this removes heat much more efficiently than a conventional system. A 2009 study by Emerson calculated that such a solution eliminates roughly 35 percent of the cost of cooling the data centre.
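
The dollar impact of that 35 percent figure depends entirely on what you currently spend on cooling. The sketch below applies it to a hypothetical $500,000 annual cooling bill, which is an assumed figure for illustration only.

```python
# Rough annual savings from rack-level (close-coupled) cooling, using the ~35%
# reduction the article cites from Emerson's 2009 study. The $500,000 annual
# cooling spend is a hypothetical figure for illustration.
annual_cooling_cost = 500_000     # assumed current cooling spend, USD/year
emerson_reduction = 0.35          # portion of cooling cost eliminated (article figure)

savings = annual_cooling_cost * emerson_reduction
print(f"Estimated annual cooling savings: ${savings:,.0f}")
# -> Estimated annual cooling savings: $175,000
```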

No More Raised Floor

Believe it or not, 2010 will sound the death knell for the raised floor. As hot air rises, cool air ends up below the raised floor, where it isn’t doing much good. In addition, raised floors simply can’t support the weight demands placed on them by high-density racks. A 42U rack populated with 14 3U servers can weigh up to 1,000 pounds.
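
A minimal sketch of that arithmetic, using assumed weights for the servers and the enclosure (only the 1,000-pound total comes from the text), also shows the concentrated floor loading a single rack can produce; the rack footprint is likewise an assumption.

```python
# Rough rack weight and floor-loading check. Per-server and empty-rack weights
# and the rack footprint are assumptions for illustration; the article's figure
# is a fully loaded 42U rack at up to 1,000 pounds.
servers = 14                 # 14 x 3U fills a 42U rack
lbs_per_server = 60          # assumed weight of a typical 3U server
empty_rack_lbs = 150         # assumed weight of the enclosure itself

total_lbs = servers * lbs_per_server + empty_rack_lbs
footprint_sq_ft = 2.0 * 3.5  # assumed ~24in x 42in rack footprint
print(f"Loaded rack: ~{total_lbs} lbs over {footprint_sq_ft:.1f} sq ft "
      f"(~{total_lbs / footprint_sq_ft:.0f} lbs/sq ft)")
# -> Loaded rack: ~990 lbs over 7.0 sq ft (~141 lbs/sq ft)
```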

Raised floors are also not efficient operationally. Many years ago I built a 10,000-square-foot data centre in a large city. Several months after it was built, we began to have intermittent network outages. It took many man-hours to locate the problem: rats were chewing through the insulation on cables run below the raised floor. Rats aside, additions, reconfigurations and troubleshooting of the cable plant are much easier on your staff when cables are in plain sight.

Many organisations have found that raising the server room temperature to 68 or even 72 degrees Fahrenheit can yield immediate and meaningful cost savings. As much as I like working in a 62-degree room, newer equipment is rated for higher operating temperatures. Check the manufacturer’s specifications on existing equipment before raising the temperature, and monitor performance and availability afterward.
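
One practical way to keep an eye on conditions after raising the setpoint is to poll each server’s baseboard management controller. The sketch below shells out to ipmitool, which must be installed and run with sufficient privileges; the "inlet" sensor-name match and the 27-degree-Celsius alert threshold are assumptions that will vary by vendor and by your equipment’s rated range.

```python
# Minimal sketch: poll local BMC temperature sensors after raising the room
# setpoint. Assumes ipmitool is installed and run with sufficient privileges;
# the sensor-name match ("inlet") and the 27 C alert threshold are assumptions
# that depend on the vendor and on your equipment's rated operating range.
import subprocess

ALERT_THRESHOLD_C = 27.0

def inlet_temps_c():
    """Return (sensor_name, degrees_C) pairs for inlet-style temperature sensors."""
    out = subprocess.run(
        ["ipmitool", "sdr", "type", "Temperature"],
        capture_output=True, text=True, check=True,
    ).stdout
    readings = []
    for line in out.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) < 5 or "inlet" not in fields[0].lower():
            continue
        parts = fields[4].split()             # e.g. "23 degrees C" -> ["23", "degrees", "C"]
        if parts and parts[0].replace(".", "", 1).isdigit():
            readings.append((fields[0], float(parts[0])))
    return readings

if __name__ == "__main__":
    for sensor, temp in inlet_temps_c():
        status = "ALERT" if temp > ALERT_THRESHOLD_C else "ok"
        print(f"{sensor}: {temp:.1f} C [{status}]")
```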

Finally, consider switching power from AC to DC, and from 110V to 220V. Power typically starts at the utility pad at 16,000 VAC (volts alternating current) and is converted multiple times on its way down to the 110 VAC that powers equipment. Inside the server it is converted again, to 5 VDC (volts direct current) and 12 VDC. Each conversion step wastes energy and generates excess heat; together, these losses can consume up to 50 percent of the electricity drawn.
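
The sketch below illustrates how those losses compound across a typical conversion chain. The per-stage efficiencies (and the intermediate 480 VAC step) are assumptions chosen for illustration, not measured values; only the 16,000 VAC feed, the 110 VAC distribution and the 5/12 VDC rails come from the text.

```python
# Illustrative chain of conversions between the utility feed and the server's
# internal rails. The per-stage efficiencies are pessimistic assumptions chosen
# to show how repeated conversion can waste a large share of the power drawn.
stages = {
    "step-down transformer (16,000 VAC -> 480 VAC)": 0.98,
    "UPS double conversion (AC -> DC -> AC)":        0.80,
    "PDU transformer (480 VAC -> 110 VAC)":          0.95,
    "server power supply (110 VAC -> 12/5 VDC)":     0.70,
}

efficiency = 1.0
for stage, eff in stages.items():
    efficiency *= eff
    print(f"after {stage}: {efficiency:.0%} of utility power remains")

print(f"overall loss: {1 - efficiency:.0%}")
# -> overall loss: 48%
```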

As DC power gains traction in data centres, many server manufacturers – including HP, IBM, Dell and Sun – are making DC power supplies available on some or all of their server lines, allowing the machines to run on 48 VDC. Look for server chassis that use modular power supplies to make the switch from AC to DC easier.

Matthew D. Sarrel is executive director of Sarrel Group, an IT test lab, editorial services and consulting firm in New York.