Facebook To Build Its Own ‘Wedge’ Datacentre SDN Switch

Facebook has designed an open network switch for its own data centres, which uses standard hardware from the Open Compute Project and will be available for others to use.

The “Wedge” top-of-rack switch can be managed by the same software as Facebook’s servers, simplifying the social media giant’s data centre operations. It emerges a year after the Facebook-led Open Compute Project announced plans for open data centre switches. OCP has had reference designs submitted by vendors including Broadcom and Intel, but Facebook appears to have gone substantially alone, crediting no particular vendors for work on the Wedge design and the FBOSS software it will run, both of which have been announced in a Facebook blog.

Driving a Wedge

“Last year, we kicked off a new networking project within OCP, with a goal of developing designs for OS-agnostic top-of-rack (TOR) switches,” says the blog by Facebook’s Yuval Bachar and Adam Simpkins. “This was the first step toward disaggregating the network – separating hardware from software, so we can spur the development of more choices for each – and our progress so far has exceeded our expectations.”

The Wedge switch (illustrated) uses the modular “Group Hug” architecture specified by OCP, and plugs in the same microservers that Facebook uses elsewhere in its architecture.

“For our own deployment, we’ve started with a microserver that we’re using elsewhere in our infrastructure,” the blog says. “But the open form factor will allow us to use a range of processors, including products from Intel, AMD, or ARM.”

Using the same servers means Facebook can manage the switches with the same “fleet management” software it uses for its other servers. The switch will run Facebook’s own variant of Linux, called FBOSS, tuned for network tasks. Facebook’s blog says: “With ‘FBOSS,’ all our infrastructure software engineers instantly become network engineers.”

As a software-defined network (SDN) switch, Wedge separates the control logic from the data-switching hardware, so control can be centralised or distributed to the switches, as appropriate.
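The control/data split described above can be sketched as follows. This is a purely illustrative model, not Facebook’s FBOSS code: all class and method names here are invented for the example.

```python
# Illustrative sketch of the SDN split Wedge embodies: forwarding state lives
# in a data plane (the switching ASIC), while a separate control plane decides
# the rules and can run centrally or on each switch's own microserver.
# All names are hypothetical; this is NOT Facebook's FBOSS API.

class DataPlane:
    """Stands in for a switch ASIC's forwarding table."""
    def __init__(self):
        self.table = {}  # destination prefix -> egress port

    def install_rule(self, prefix, port):
        self.table[prefix] = port

    def forward(self, prefix):
        # A real ASIC would punt unknown destinations to the control plane.
        return self.table.get(prefix, "punt-to-controller")


class ControlPlane:
    """Computes routes; may run centrally or on the switch itself."""
    def __init__(self, data_planes):
        self.data_planes = data_planes

    def push_route(self, prefix, port):
        for dp in self.data_planes:
            dp.install_rule(prefix, port)


# One controller can program many switches (centralised control), or each
# switch could run its own ControlPlane instance (distributed control).
asic = DataPlane()
controller = ControlPlane([asic])
controller.push_route("10.0.0.0/24", 7)
print(asic.forward("10.0.0.0/24"))   # -> 7
print(asic.forward("192.168.1.0/24"))  # -> punt-to-controller
```

The point of the separation is exactly what the blog describes: the same control software can be moved off the switch and centralised, or left running locally, without changing the forwarding hardware underneath.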

The hardware consists of the Group Hug microserver module, along with a commercially available 40Gbps switching ASIC driving 16 40Gbps Ethernet ports, backed by standard cooling and power supplies.

Under the OCP rules, anyone is free to take the design and replace any components, or rejig it for other switch duties outside of data centre racks.

Peter Judge

Peter Judge has been involved with tech B2B publishing in the UK for many years, working at Ziff-Davis, ZDNet, IDG and Reed. His main interests are networking, security, mobility and cloud.
