Open Compute’s latest initiatives might just redefine how the data centre industry works
Facebook and Open Compute Project partners have sent shockwaves through the server community, announcing a host of initiatives designed to break up the “monolithic” hardware of yore.
The Open Compute Project was set up in April 2011 and has made great strides in setting new standards for data centres, as it looks to bring the efficiency of Facebook’s facilities to the general market. It has a host of powerful backers too, including Dell, Intel and HP.
Open Compute gets going
Yesterday, during the Open Compute Summit, it promised to disrupt the server market and do away with some of its most restrictive practices. In particular, Open Compute hopes to end the days when IT teams have to replace whole servers at a time, and to buy servers that include faceplates and casing features not needed in the data centre.
Open Compute wants to bring about a massively modular approach, where motherboards and other components can be ripped and replaced with ease. This disaggregation, where compute, storage, networking and power distribution are separated into modules, should save companies money and time.
“We need to break up some of these monolithic designs – to disaggregate some of the components of these technologies from each other so we can build systems that truly fit the workloads they run and whose components can be replaced or updated independently of each other,” wrote Frank Frankovsky, Facebook hardware chief, in a blog post.
Two announcements stood out. The first was the introduction of a “Group Hug” board – a common slot architecture specification for motherboards, designed to eradicate vendor-defined compatibility issues.
AMD, Calxeda, Applied Micro and Intel have all announced support for the board, whilst the latter two chip makers have already built demos on the specification.
The second major announcement also involved Intel. It has worked with Facebook on networking within the rack, devising a photonic rack architecture based on fibre optics. It promises 100Gbps interconnects as well as fewer cables and “extreme power efficiency”, said Justin Rattner, Intel’s chief technology officer, during his keynote address at the Open Compute Summit in Santa Clara.
The approach uses light to move data around over thin optical fibre, giving high-speed connectivity without the resistive and capacitive losses of electrical connectors, though it does require conversion between electrical and optical signalling. This should provide “enough bandwidth to serve multiple processor generations”, said Frankovsky.
Intel said the prototype it is donating to the Open Compute Project would support its Xeon processors and the 22 nanometer system-on-chip (SoC) Intel Atom processor, codenamed Avoton, which will be made available later this year.
In another tech-led announcement at the summit, flash-based storage specialist Fusion-io revealed it is working with Facebook to make the social network’s data centres speedier, hinting that Facebook is heading towards entirely flash-driven data centres.
Fusion-io yesterday made its ioScale hyperscale flash card product generally available, and open sourced its design as part of its Open Compute commitments.