Facebook’s Open Compute Project isn’t reliable enough for critical storage, says EMC’s Pat Gelsinger. Or is it just too open?
EMC has hinted it won’t be putting its name to Facebook’s Open Compute Project in an official capacity anytime soon, as the storage giant told TechWeekEurope businesses were not using the project’s storage designs for mission critical workloads.
The Open Compute Project, launched by Facebook in April 2011, could shift the balance of power in data centre hardware as users, led by Facebook and cloud providers, impose more open and more efficient designs on the vendors. Facebook kicked it off by effectively open-sourcing the custom designs for low-energy server and storage equipment used in its Oregon data centre, and some big name vendors have signed up.
For storage, Open Compute specifies the Open Vault design that “offers high disk densities, holding 30 drives in a 2U chassis, and can operate with almost any host server”, according to the project’s organisers.
HP and Dell have pledged to create server and storage boxes to fit the new, more spacious, greener Open Rack design. By contrast, EMC is steering clear of Open Compute’s storage specs; the company’s COO Pat Gelsinger explained to TechWeekEurope this is because the specifications aren’t ready for running mission critical storage.
Carping on about criticality
Speaking at EMC World in Las Vegas, Gelsinger said that when it came to mission critical workloads, customers weren’t really after the kinds of products being toyed with by Open Compute contributors. EMC is partially involved, but only via its VMware subsidiary, which was recently welcomed on board the initiative.
“No one is running mission critical infrastructure on Open Compute today. They are taking ideas and bringing them into their products for their mission critical infrastructure,” Pat Gelsinger, president and chief operating officer of EMC, told TechWeekEurope. “And even some of those who initiated those projects have backed away from them.
“When you look at open storage as an element, it is the most mission critical of all of them,” said Gelsinger. “If you lose storage, that is a huge loss. The requirements of mission criticality typically require a multi-layered degree of resilience.”
Most EMC customers are looking for that resilience in the storage layer as opposed to just putting it all in the application layer, Gelsinger said.
Yet he said EMC and VMware would continue to look at the innovation coming out of such open projects, picking what they like as the industry and customers’ needs evolve. “There are some good ideas there and I can see us participating… there are a lot of ideas going around and we are going to participate in them very effectively,” Gelsinger added.
“You can be certain that we, as well as VMware, will be participating in open computing, open networking as well as open storage to determine which ones we need to participate in, how industry adopts it and how we can take advantage of that innovation.”
EMC has continued to push some of its big proprietary boxes at its premier conference this week. It launched a new family of VMAX arrays, including the turbo-powered VMAX 40K.
At the same time, however, EMC has realised it has to offer greater interoperability. Its Federated Tiered Storage (FTS) capability on VMAX boxes allows data mobility between in-house and external arrays, whether from EMC or from third parties such as rivals IBM and Hitachi Data Systems.
EMC’s response to Open Compute will be viewed with interest by CIOs and storage managers who want flexibility to mix and match equipment in their data centres. EMC is still not viewed as the most open player in the storage market, but it is opening its doors a little wider.