Google Reveals Secrets Behind Its Data Centre Networks


First ever look inside Google’s data centre networks as the firm opens up to developers at the Open Networking Summit

Google has revealed for the first time exactly what kind of technology powers its data centre networking infrastructure.

Up until now, the search giant has been relatively tight-lipped about its networking infrastructure. But a blog post from Google coinciding with the 2015 Open Networking Summit has lifted the lid on some of Google’s best-kept secrets.

Five generations

“Today we are revealing for the first time the details of five generations of our in-house network technology,” wrote Amin Vahdat, Google Fellow and Technical Lead for Networking, in the blog post.

“From Firehose, our first in-house datacenter network, ten years ago to our latest-generation Jupiter network, we’ve increased the capacity of a single datacenter network more than 100x. Our current generation — Jupiter fabrics — can deliver more than 1 Petabit/sec of total bisection bandwidth. To put this in perspective, such capacity would be enough for 100,000 servers to exchange information at 10Gb/s each, enough to read the entire scanned contents of the Library of Congress in less than 1/10th of a second.”
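The arithmetic behind that comparison is simple to check. A minimal sketch (Python, using only the figures quoted above):

    # Back-of-the-envelope check of the bisection-bandwidth figure quoted above.
    servers = 100_000       # servers exchanging data simultaneously
    rate_gbps = 10          # per-server rate, in gigabits per second

    total_gbps = servers * rate_gbps     # 1,000,000 Gb/s
    total_pbps = total_gbps / 1_000_000  # gigabits -> petabits

    print(f"{total_pbps:.0f} Pb/s of bisection bandwidth")  # prints: 1 Pb/s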

Vahdat said that when Google first began building data centres, no vendor made network equipment that could meet its distributed computing needs.

(Image: Google’s Jupiter superblock)

“So, for the past decade, we have been building our own network hardware and software to connect all of the servers in our datacenters together, powering our distributed computing and storage systems,” wrote Vahdat.

Vahdat said that Google relies on custom networking protocols tailored for use in its data centres, rather than on standardised Internet protocols.

“We used three key principles in designing our datacenter networks:

  • We arrange our network around a Clos topology, a network configuration where a collection of smaller (cheaper) switches are arranged to provide the properties of a much larger logical switch.
  • We use a centralized software control stack to manage thousands of switches within the data center, making them effectively act as one large fabric.
  • We build our own software and hardware using silicon from vendors, relying less on standard Internet protocols and more on custom protocols tailored to the data center.”
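
To make the first of those principles concrete, below is a minimal sketch of a two-stage folded-Clos (leaf-spine) fabric in Python. The switch radix and stage counts are illustrative assumptions, not Google’s actual hardware; the point is only that many small switches can present the port count and bisection bandwidth of one much larger logical switch.

    # Illustrative leaf-spine (folded Clos) fabric built from identical small
    # switches. The sizes below are hypothetical, not Google's actual design.

    RADIX = 32              # ports per small switch
    SPINES = RADIX // 2     # half of each leaf's ports face the spines
    LEAVES = RADIX          # each spine port connects one leaf

    # Each leaf splits its ports: half down to servers, half up to spines.
    down_ports_per_leaf = RADIX - SPINES
    server_ports = LEAVES * down_ports_per_leaf

    # Bisection: every leaf has SPINES uplinks, all usable in parallel.
    uplinks = LEAVES * SPINES

    print(f"{LEAVES} leaves + {SPINES} spines of {RADIX} ports each")
    print(f"act like one logical switch with {server_ports} server ports")
    print(f"and {uplinks} core links between any two halves of the fabric")

With 32-port switches this toy fabric exposes 512 server ports and matching core capacity, i.e. it is non-blocking, even though no single switch in it is anywhere near that size.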


“Taken together, our network control stack has more in common with Google’s distributed computing architectures than traditional router-centric Internet protocols. Some might even say that we’ve been deploying and enjoying the benefits of Software Defined Networking (SDN) at Google for a decade,” wrote Vahdat.
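As a toy illustration of that SDN-style approach (all names here are hypothetical; this is not Google’s actual control stack), a centralized controller that programs every switch in the fabric at once might look like this:

    # Toy sketch of centralized fabric control: one controller computes
    # forwarding state and pushes it to every switch. Hypothetical names;
    # not Google's actual software.

    class Switch:
        def __init__(self, name):
            self.name = name
            self.routes = {}            # prefix -> next hop

        def install(self, prefix, next_hop):
            self.routes[prefix] = next_hop

    class FabricController:
        """Knows the whole topology and programs every switch centrally."""
        def __init__(self, switches):
            self.switches = switches

        def push_route(self, prefix, next_hop):
            # In a router-centric design each box would run its own protocol;
            # here one controller updates the whole fabric at once.
            for sw in self.switches:
                sw.install(prefix, next_hop)

    fabric = [Switch(f"sw{i}") for i in range(4)]
    controller = FabricController(fabric)
    controller.push_route("10.0.0.0/24", "spine1")
    print(fabric[0].routes)   # {'10.0.0.0/24': 'spine1'}

The design choice this mirrors is the one Vahdat describes: forwarding state is computed in one place and pushed out, rather than each switch discovering it independently through a distributed routing protocol.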

Vahdat said that all of this gives Google networks of unprecedented speed, built for modularity and constant upgrading.

“Most importantly, our datacenter networks are shared infrastructure,” wrote Vahdat. “This means that the same networks that power all of Google’s internal infrastructure and services also power Google Cloud Platform. We are most excited about opening this capability up to developers across the world so that the next great Internet service or platform can leverage world-class network infrastructure without having to invent it.”
