One in the eye for Cisco! Facebook will make its own SDN switches using its own Open Compute server hardware
Facebook has designed an open network switch for its own data centres that uses standard hardware from the Open Compute Project and will be available for others to use.
The “Wedge” top-of-rack switch can be managed by the same software as Facebook’s servers, simplifying the social media giant’s data centre operations. It emerges a year after the Facebook-led Open Compute Project (OCP) announced plans for open data centre switches. OCP has received reference designs from vendors including Broadcom and Intel, but Facebook appears to have gone it substantially alone, crediting no particular vendor for work on the Wedge design or the FBOSS software it will run, both announced in a Facebook blog post.
Driving a Wedge
“Last year, we kicked off a new networking project within OCP, with a goal of developing designs for OS-agnostic top-of-rack (TOR) switches,” says the blog by Facebook’s Yuval Bachar and Adam Simpkins. “This was the first step toward disaggregating the network – separating hardware from software, so we can spur the development of more choices for each – and our progress so far has exceeded our expectations.”
The Wedge switch (illustrated) uses the modular “Group Hug” architecture specified by OCP, and plugs in the same microservers that Facebook uses elsewhere in its architecture.
“For our own deployment, we’ve started with a microserver that we’re using elsewhere in our infrastructure,” the blog says. “But the open form factor will allow us to use a range of processors, including products from Intel, AMD, or ARM.”
Using the same servers means Facebook can manage the switches with the same “fleet management” software it uses for its other servers. The switch will run Facebook’s own variant of Linux, called FBOSS, tuned for network tasks. Facebook’s blog says: “With FBOSS, all our infrastructure software engineers instantly become network engineers.”
As a software-defined networking (SDN) switch, Wedge separates the control plane from the data plane, so the control logic can be centralised or distributed to the switches, as appropriate.
The hardware consists of the Group Hug microserver module, along with a commercially available 40Gbps switching ASIC driving 16 40Gbps Ethernet ports, backed by standard cooling and power supplies.
Under the OCP rules, anyone is free to take the design and replace any components, or rejig it for other switch duties outside of data centre racks.