Boffins Create Algorithm To Run Data Centres

New formula shows how to keep users happy and keep the energy bills low by increasing server utilisation

A three-man research team has come up with a four-step algorithm that may help resolve the cost-versus-revenue dilemma of managing data centre energy consumption.

The boffins considered how servers handle the jobs given to them, and what impact their loading has on efficiency, creating a formula which determines the best times to switch servers on and off, saving power while making sure users are not kept waiting for service.

A tricky balancing act

According to the research, expectations of performance and responsiveness have increased significantly over the years, with 75 percent of people stating that they would not go back to a web site that took more than 4 seconds to load. Google reports that an extra 0.5 seconds in search page generation degrades user satisfaction, with a consequent 20 percent traffic drop, while trimming the page size of Google Maps by 30 percent resulted in a 30 percent traffic increase.

Data centre owners face a balancing act between maintaining acceptable levels of performance and managing costs, including power consumption. However, despite the trend towards energy-efficient data centres, and efforts to design servers whose power consumption is proportional to their utilisation, an idle server still draws about 60 percent of its peak power.

What does it do?

The group produced an algorithm that determines the best number of servers to have turned on at a given time, in a dynamic situation where jobs lasting about 0.1 seconds are submitted at a variable rate. It assumed a server farm of 250 servers (250Gb dual-core Xeons) with a power usage effectiveness (PUE) of 1.7. The servers were running WordPress on the open source LAMP stack, and drew between 140W and 200W of power, with electricity priced at $0.10 per kWh. Each job took on average 0.1 seconds, and each was assumed to generate a tiny amount of money – six millionths of a dollar – and to cost a smaller amount (fortunately), around two ten-millionths of a dollar.
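A quick back-of-the-envelope calculation, using the figures quoted above (treated here as approximate), shows why the number of switched-on servers matters: a busy server comfortably pays for its electricity, but an idle one earns nothing while still running up the bill. The snippet below is just that arithmetic, not the researchers' model.

```python
# Rough arithmetic on the figures quoted in the article (all approximate).
PUE, PRICE = 1.7, 0.10                 # power usage effectiveness, $ per kWh

idle_cost = 140 / 1000 * PUE * PRICE   # $/hour for a switched-on but idle server
busy_cost = 200 / 1000 * PUE * PRICE   # $/hour for a fully loaded server

jobs_per_hour = 3600 / 0.1             # one core kept busy with 0.1-second jobs
revenue = jobs_per_hour * 6e-6         # $/hour earned by that core

print(f"idle server: ${idle_cost:.4f}/h, busy server: ${busy_cost:.4f}/h")
print(f"a busy core earns ${revenue:.3f}/h; an idle farm of 250 wastes "
      f"${250 * idle_cost * 24 * 365:,.0f} a year")
```

On those numbers an idle 250-server farm burns roughly $52,000 a year in electricity for no revenue at all, which is the waste the researchers are trying to squeeze out.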

Given these parameters (which aren’t that different from a live site such as Wikipedia, the authors claim), their algorithm determined when to power servers up and down, in order to keep customers satisfied without running more servers in the background than were needed.

The difficulty, of course, is that users get fed up and go away after a certain time, but keeping all the servers running all the time in case they show up would waste power. The group’s algorithm provides enough servers to satisfy the users while outperforming other similar algorithms, they claim.
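To make the trade-off concrete, here is a minimal sketch of one way such a policy could pick a server count: treat the farm as an M/M/n queue, estimate how many impatient users would give up waiting, and choose the number of servers that maximises hourly revenue minus the energy bill. This illustrates the general idea only and is not the published algorithm; the four-second patience threshold, arrival rates and per-server service rate are assumptions layered on the article's figures.

```python
import math

# Figures from the article, plus assumed queueing parameters.
SERVICE_TIME    = 0.1      # seconds per job
REVENUE_PER_JOB = 6e-6     # dollars earned per completed job
IDLE_W, BUSY_W  = 140.0, 200.0
PUE             = 1.7
PRICE_PER_KWH   = 0.10
MAX_SERVERS     = 250
PATIENCE        = 4.0      # seconds a user will wait before giving up (assumed)

def erlang_c(n, offered_load):
    """P(an arriving job has to queue) in an M/M/n system, computed stably."""
    if offered_load >= n:
        return 1.0
    b = 1.0                                   # Erlang-B recursion
    for k in range(1, n + 1):
        b = offered_load * b / (k + offered_load * b)
    rho = offered_load / n
    return b / (1 - rho * (1 - b))            # convert Erlang B to Erlang C

def hourly_profit(n, arrival_rate):
    """Net dollars per hour with n servers on and arrival_rate jobs/second."""
    mu = 1.0 / SERVICE_TIME                   # service rate of one server
    load = arrival_rate / mu                  # offered load, in 'servers'
    if load >= n:
        return float("-inf")                  # unstable: queues grow without bound
    wait_prob = erlang_c(n, load)
    # Users whose wait would exceed PATIENCE are assumed to abandon (lost revenue).
    abandon = wait_prob * math.exp(-(n * mu - arrival_rate) * PATIENCE)
    revenue = arrival_rate * 3600 * (1 - abandon) * REVENUE_PER_JOB
    # Power rises roughly linearly from idle to busy with utilisation.
    util = load / n
    kilowatts = n * (IDLE_W + (BUSY_W - IDLE_W) * util) * PUE / 1000.0
    return revenue - kilowatts * PRICE_PER_KWH

def best_server_count(arrival_rate):
    """Exhaustive search for the n that maximises net hourly revenue."""
    return max(range(1, MAX_SERVERS + 1),
               key=lambda n: hourly_profit(n, arrival_rate))

if __name__ == "__main__":
    for rate in (500, 1000, 2000):            # jobs per second
        n = best_server_count(rate)
        print(f"{rate:>4} jobs/s -> {n:>3} servers, "
              f"~${hourly_profit(n, rate):.2f}/hour")
```

Run against these assumed figures, the search settles on a server count just above the offered load, since an extra idle-ish server costs only a couple of cents an hour but lost users cost more; the published work tackles the same trade-off with a more sophisticated model of variable demand.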

Improving utilisation is the goal

“A typical server farm may contain thousands of servers, which require large amounts of power to operate and to keep cool,” writes team member Marios Dikaiakos, chairman of computer science and director of the Laboratory for Internet Computing at the University of Cyprus, “and [the] only way to significantly reduce power consumption is to improve the server farm’s utilisation.”

There are other strategies, such as dynamically scaling CPU frequencies or switching off excess servers, but these are either too performance-oriented or too energy-efficiency-oriented, said team member Dmytro Dyachuk, a doctoral student at the University of Saskatchewan in Canada.

“In our research,” adds team member Michele Mazzucco, a software engineering research fellow at the University of Tartu, Estonia, “several experiments were carried out, with the aim of evaluating the effects of [a] proposed scheme on the maximum achievable revenues, and thus we derived a numerical algorithm for computing the optimal number of servers required for handling a certain user demand.”

Dikaiakos concludes: “We introduced and evaluated an easily implementable policy for dynamically adaptable Internet services. The algorithm we propose can find the best trade-off between consumed power and delivered service quality. The number of running servers can have a significant effect on the revenue earned by the provider and our experiments show that our approach works well under a range of traffic conditions.”

Peter Judge contributed to this report