Servers Can Be Both Eco and Logical, Claims Rackable

Is there room in the market for a specialist builder of data-centre servers? Rod Evans, the boss of Rackable Systems’ new European operation, clearly thinks so.

Evans adds that Rackable can run its data-centres at up to 40 degrees Centigrade, thanks to its decision to use DC power, and the way it manages the airflow through its server trays and then up into a kind of chimney at the back or in the middle of the rack.

“It all means you can use ambient air,” he says. “We’re not actually recommending people to run at 40 degrees, but you could take your data-centre up by 10 degrees, say. I see the set-point moving from 25 or 28 degrees to maybe 35.”

Connectors and mountings

Better server management can also mean simple things such as putting the connectors on the front of the rack for ease of access, because you can’t get to the back, he adds. Each tray can also be powered via an auxiliary connector on the front, so it can be slid out for servicing one of its servers without taking the others offline.

Rackable mounts its densest servers in 1U trays, using compact PC-format motherboards to fit up to six single-socket servers – and, with dual-core chips, up to 12 processor cores – in each tray. Evans adds that it is also making extensive use of solid-state boot disks now, which can mean a server with no moving parts.

The company has even joined the movement, started by Sun, to build complete self-contained data-centres in 20- and 40-foot shipping containers. “We can get 22,000 cores into 320 square feet. It means that instead of building a data-centre, I can now put containers into a warehouse,” explains Evans.
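
As a rough back-of-the-envelope check on those density figures (a sketch only, assuming dual-core chips in each of the six sockets per 1U tray mentioned above, and taking 320 square feet as roughly the footprint of a 40-foot by 8-foot container), the arithmetic works out as follows:

```python
# Back-of-envelope check on the density figures quoted in the article.
# The per-tray and footprint assumptions are illustrative, not Rackable specifications.

cores_per_tray = 6 * 2        # six single-socket servers per 1U tray, assuming dual-core chips
total_cores = 22_000          # Evans's figure for a containerised deployment
floor_area_sq_ft = 320        # roughly a 40ft x 8ft shipping-container footprint

trays_needed = total_cores / cores_per_tray
print(f"{cores_per_tray} cores per 1U tray")
print(f"~{trays_needed:.0f} trays to reach {total_cores:,} cores")
print(f"~{total_cores / floor_area_sq_ft:.0f} cores per square foot of floor space")
```

On those assumptions the quoted figure comes out at roughly 1,800 trays and around 69 cores per square foot of floor space, which is the kind of density Evans is contrasting with a conventionally built data-centre.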

So what can other organisations learn from Rackable’s experience? One thing is the value of designing and building exactly what you need, says Evans.

Building to order

“Years ago we built the first Google servers; now they build their own to our design,” he adds. “Everything is built to order; there’s nothing off the shelf. It’s not like Dell’s website: you can choose different motherboards, power supplies and so on.

“The pro is that we can build exactly what the customer wants and guarantee all the systems will be identical, and we can design for lower voltage and power consumption. Larger vendors build for maximum system capacity; we can engineer for less.

“For example, we had a European customer that mandated Intel processors and a certain amount of memory. Knowing that, we could use a lower-voltage motherboard and memory, and save some power. The ability to choose from more components allows us to be more creative.

“The biggest challenge is keeping track of all the options, plus we have no standard products, so you don’t have the ease of certification. It means more inventory management too, but if we know the customer is buying for a few years, we will buy parts ahead.”

He concludes: “Companies need to look at more than just technology – they need to ask ‘Will this save me money and power?’”