IBM: Why Xeon Servers Aren’t A Me-Too Game

Intel Xeon servers promised to replace specialised RISC processors with bog-standard systems. So why is IBM consultant Tikiri Wanduragala telling us IBM’s Xeons are different?

The one thing everyone knows about x86 servers is that they have turned enterprise servers into a commodity, replacing hand-crafted mainframes with identikit systems stamped out of a cookie-cutter. But now, server vendors are starting to say that's not true.

A few years ago, competition in servers was all about how good your proprietary RISC processor was, with IBM pitching its Power chip against Sun's SPARC, and Intel getting into the game with Itanium. Against that sort of market, the pitch for Intel servers was that they were all the same: an (albeit proprietary) standard that was bound to bring economies of scale.

Intel-based servers would be cheaper thanks to the volume of the x86 market, and they'd be easier to service, with better-supported software and more readily available skills to keep them running.

But now, with Intel-based machines crowding the aisles and racks of the data centre, the equipment makers have started to change that story, spinning tales about how well-designed their blades are, how cool their power supplies are and how modular their memory is. And IBM is pushing that message pretty hard.

Part of the reason for that is, ironically, that the Xeon boxes are now kicking their way into the high-performance spaces previously reserved for RISC processors, which could leave some vendors with a message problem – unless they can plausibly back both.

Even while pushing the specialised strength of its Power architecture for all it was worth, IBM has been in the Xeon game since 2001, making blades and rack systems with its own supporting chipsets.

We met senior consultant Tikiri Wanduragala at an Intel event promoting Intel’s new Xeon 7500 processor, and asked him what is so different about IBM’s take on the bog-standard Xeon server.

IBM has backed Xeon since 2001

To start with, it’s the long development history, Wanduragala said: “We’re on version 5 of our silicon, while others are on version 1. We’ve invested around $800 million in the chipset.”

But what does that investment buy us? "We started with the memory controller chipset in the first generation. Then we optimised that and reduced the chip count. In the last generation of Dunningtons we got a performance boost."

Some of that is water under the bridge now – with the memory controller embedded in the processor itself, IBM no longer makes its own. Instead it concentrates on making the best use of Intel's QuickPath Interconnect (QPI) links, plugging them into its support chipset: "From that we can give customers either more memory, or an interconnection into another memory pool and another machine."

The new machines allow a massive reduction in physical footprint, said Wanduragala, and bring new features well suited to virtualisation. "The virtualisation market has grown and spread to larger customers," he said. "It now needs blade, 2U and 4U shapes, all scalable from one socket up."

The new IBM servers are impressive as they build on already good systems, said Wanduragala: “We aren’t replacing something old and tired, we are replacing a top-of-the-line machine with something that is even faster. These are amazing times!”

Wanduragala got most excited by the memory scaling allowed by IBM’s non-uniform memory architecture (NUMA), present since the beginning of its Xeon range. The new range can have a “memory drawer” shared between multiple servers.

The IBM systems act as building blocks: processors can be housed in blades, 2U two-socket machines, or 4U four-socket systems, while memory can be configured from a range that gives each server up to 1.5 TB, with the option of multiplying that by connecting multiple memory units to multiple servers.
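
To give a rough sense of what such a layout looks like from the software side, here is a minimal sketch – generic Linux tooling, not anything specific to IBM's chipset or memory drawers – that lists how much memory sits behind each NUMA node on a running server, using the standard sysfs interface:

    import glob
    import re

    # Minimal sketch: report the memory behind each NUMA node on a Linux host,
    # read from the standard sysfs files. This is generic Linux, used here only
    # to illustrate how a NUMA machine exposes its separate memory pools.
    for meminfo_path in sorted(glob.glob("/sys/devices/system/node/node*/meminfo")):
        node = re.search(r"node(\d+)", meminfo_path).group(1)
        with open(meminfo_path) as f:
            for line in f:
                if "MemTotal" in line:
                    # Lines look like: "Node 0 MemTotal:   16316004 kB"
                    total_kb = int(line.split()[-2])
                    print(f"NUMA node {node}: {total_kb / 1024 / 1024:.1f} GB")

On a multi-socket or memory-expanded machine, each node shows up with its own pool, which is exactly the kind of topology an operating system or hypervisor has to take into account when placing virtual machines.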