Intel Ponders Atom-Based Computing Clusters

Intel has confirmed that while it is not looking to position Atom in the mainstream server market, it is considering compute clusters of smaller, Atom-based devices.

An Intel executive has reportedly confirmed that the chip giant is not interested in pushing its energy-efficient Atom processor for the server space.

Kirk Skaugen, vice president and general manager of Intel’s Data Centre Group, said in an interview with IDG News that while there are some vendors that are using Atom chips in server designs, and chip designer ARM is looking to push its processor designs into the data centre, most businesses are looking for systems with the power and energy efficiency of the latest Xeon chips.

At its developer forum last month, Intel showed off its upcoming “Sandy Bridge” microarchitecture, which promises to ramp up both the performance and energy efficiency of Intel’s processor offerings.

Atom Compute Clusters

However, while Intel may not be looking to position Atom in the mainstream server market, researchers at Intel Labs are working on creating compute clusters of smaller, Atom-based devices that can run some workloads while driving down power consumption.

The project, dubbed FAWN – or Fast Array of Wimpy Nodes – was on display at an open house on 28 September at the Intel Labs facility. With power consumption becoming an increasingly important concern in data centres, Intel Labs and Carnegie Mellon University are investigating whether certain workloads can be taken from a small number of more powerful servers and put onto a larger cluster of smaller, lower-power nodes that together aggregate large amounts of compute power, memory and I/O.
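The trade-off behind that idea can be sketched with a back-of-the-envelope calculation: many slow, low-power nodes can deliver more work per unit of energy than a few fast, power-hungry servers. All of the figures below are illustrative assumptions, not measurements from the Intel/CMU project.

```python
# Hypothetical "brawny vs. wimpy" comparison: aggregate throughput
# divided by aggregate power draw. All numbers are made up for
# illustration; they are not FAWN benchmark results.

def queries_per_joule(nodes, queries_per_sec_each, watts_each):
    """Cluster-wide queries per second per watt, i.e. queries per joule."""
    total_qps = nodes * queries_per_sec_each
    total_watts = nodes * watts_each
    return total_qps / total_watts

# A handful of conventional Xeon-class servers (assumed figures).
brawny = queries_per_joule(nodes=4, queries_per_sec_each=50_000, watts_each=400)

# A larger array of Atom-plus-SSD "wimpy" nodes (assumed figures).
wimpy = queries_per_joule(nodes=40, queries_per_sec_each=8_000, watts_each=25)

print(f"brawny cluster: {brawny:.0f} queries/joule")  # 125
print(f"wimpy cluster:  {wimpy:.0f} queries/joule")   # 320
```

With these assumed numbers the wimpy array does roughly 2.5 times the work per joule, in line with the two- to three-fold reduction the researchers cite below.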

“Power is becoming a significant burden,” Intel researcher Michael Kaminsky said in an interview with eWEEK during the open house. Through Project FAWN, Intel is trying to “reduce energy consumption two or three times for data-intensive workloads.”

FAWN holds the promise of clusters that could run particular Web 2.0-style workloads far more energy-efficiently.

During the event, Kaminsky showed a line of system boards that could be networked together to create a compute cluster. Each board included an Atom chip and an Intel SSD (solid-state drive) for local storage, components that he noted anyone can buy and assemble.

Ongoing Research

The key is getting the cluster to work in the most efficient way and developing the techniques that will enable software to work well in such a highly parallel environment, Kaminsky said.

There are several areas of exploration within the FAWN project. One is load balancing, a key to ensuring the ability to scale performance within the cluster. The FAWN-KV (key-value) storage system uses one or more fast front-end nodes that essentially route requests to other back-end nodes, according to Intel Labs. Research results indicate that a fairly small cache can ensure proper load balancing and performance scalability.
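The front-end-plus-cache structure described above can be sketched in a few lines. This is a minimal illustration, not the actual FAWN-KV implementation: keys are routed to back-end nodes by hashing, and a small cache at the front end absorbs requests for hot keys so no single back end becomes a hotspot. All names and sizes are invented for the example.

```python
# Sketch of a FAWN-KV-style front end: hash each key to a back-end
# node, and keep a small LRU cache of hot keys at the front end.
import hashlib
from collections import OrderedDict

class FrontEnd:
    def __init__(self, backends, cache_size=4):
        self.backends = backends      # list of dicts, one per back-end node
        self.cache = OrderedDict()    # small LRU cache of hot key/value pairs
        self.cache_size = cache_size

    def _node_for(self, key):
        # Hash the key to pick a back-end node deterministically.
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return self.backends[h % len(self.backends)]

    def put(self, key, value):
        self._node_for(key)[key] = value
        self.cache.pop(key, None)     # invalidate any stale cached copy

    def get(self, key):
        if key in self.cache:         # hot key served by the front end itself
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self._node_for(key).get(key)
        self.cache[key] = value       # cache it; evict the oldest entry if full
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)
        return value

fe = FrontEnd(backends=[{} for _ in range(8)])
fe.put("user:42", "alice")
print(fe.get("user:42"))   # first read is routed to a back end, then cached
print(fe.get("user:42"))   # second read is served from the front-end cache
```

The small-cache finding mentioned above is the interesting part: even a cache far smaller than the data set can smooth out skewed request distributions, because only the hottest keys need to be absorbed at the front end.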

Another area of research, dubbed WideKV, is looking at how to replicate data between multiple data centres more efficiently and consistently, according to Intel. In addition, Intel Labs is looking at algorithms that would improve the performance on FAWN nodes of the MapReduce parallel-programming paradigm common in cloud computing environments, a move that takes advantage of the strong random-read performance of SSDs and would further increase energy efficiency.
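For readers unfamiliar with the paradigm, the MapReduce model mentioned above boils down to three steps: a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase combines each group. The word-count sketch below shows only the programming model, not any of the FAWN-specific scheduling or SSD-aware work.

```python
# Minimal MapReduce-style word count in plain Python, to illustrate
# the map / shuffle / reduce pattern the FAWN research targets.
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each group of values into a single result.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["wimpy nodes", "wimpy nodes scale", "nodes"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["nodes"])   # 3
```

On a FAWN-style cluster the appeal is that the shuffle and reduce steps touch data in small, scattered reads, which is exactly the access pattern where SSDs outperform spinning disks.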

The FAWN project also is looking at ways to reduce the memory footprint in the cluster.

Much of the effort now is around software, Kaminsky said. Most current applications are not designed to run in such resource-constrained, energy- and memory-efficient, and highly parallel environments.

Intel Labs and Carnegie Mellon researchers are looking at techniques for building applications that can take advantage of clusters such as those in the FAWN project, including reducing the memory footprint of the software and operating systems, Kaminsky said.