A California-based startup called Cerebras has revealed what it is calling the largest computer chip in the world, to aid machine learning and artificial intelligence (AI).
The mammoth chip is slightly bigger than an Apple iPad, and is called the Cerebras Wafer-Scale Engine (WSE).
The startup said that AI is currently constrained by available computing technology. It points out that a typical powerful desktop CPU contains about 30 processor cores, while GPUs (which are typically used for machine learning and AI workloads) can have up to around 5,000 cores per chip, though each of those cores is usually less powerful.
But in a blog post, Cerebras has argued that current processors are not well suited to machine learning and AI.
“Deep learning also has unique, massive, and growing computational requirements,” wrote Andy Hock of Cerebras. “And it is not well-matched by legacy machines like graphics processing units, which were fundamentally designed for other work. As a result, AI today is constrained not by applications or ideas, but by the availability of compute.”
He wrote that the reason Cerebras was founded was to develop a new type of computer optimised exclusively for deep learning, “starting from a clean sheet of paper.”
“To meet the enormous computational demands of deep learning, we have designed and manufactured the largest chip ever built,” blogged Hock. “The Cerebras Wafer Scale Engine (WSE) is 46,225 square millimeters and contains more than 1.2 trillion transistors, and is entirely optimized for deep learning computation.”
According to Hock, the WSE is more than 56X larger than the largest graphics processing unit, containing 3,000X more on-chip memory and capable of achieving more than 10,000X the memory bandwidth.
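The “more than 56X larger” figure can be sanity-checked with back-of-the-envelope arithmetic, assuming the largest contemporary GPU die measured roughly 815 square millimetres (a commonly cited figure for Nvidia’s GV100; this reference value is an assumption, not stated in the article):

```python
# Sanity check of the "more than 56X larger" claim.
# wse_area_mm2 comes from the article; gpu_area_mm2 (~815 mm^2,
# roughly the Nvidia GV100 die) is an assumed reference value.
wse_area_mm2 = 46_225
gpu_area_mm2 = 815

ratio = wse_area_mm2 / gpu_area_mm2
print(f"WSE is about {ratio:.1f}x the area of the assumed GPU die")
```

The result, roughly 56.7x, is consistent with the “more than 56X” claim under that assumption.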
Modern processors have been getting smaller over the years, but this plate-sized processor breaks from that trend.
The larger size allows the WSE to contain 400,000 AI-optimised cores, connected by a ‘specialised memory architecture’ to ensure each core operates at maximum efficiency.
The WSE comes complete with 18GB of memory “distributed among the cores in a single-level memory hierarchy, one clock cycle away from each core.”
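Dividing the quoted figures gives a rough sense of each core’s share of that memory (a back-of-the-envelope calculation using binary gigabytes; the article does not specify which convention applies):

```python
# Rough per-core share of the WSE's on-chip memory,
# using the figures quoted in the article (18GB, 400,000 cores).
total_memory_bytes = 18 * 1024**3  # assuming binary gigabytes
num_cores = 400_000

kb_per_core = total_memory_bytes / num_cores / 1024
print(f"~{kb_per_core:.0f} KB of on-chip memory per core")
```

That works out to somewhere in the region of 45 to 47 KB per core, depending on the gigabyte convention used.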
According to Hock, the “WSE takes the fundamental properties of cores, memory, and interconnect to their logical extremes. A vast array of programmable cores provides cluster-scale compute on a single chip. High-speed memory close to each core ensures that cores are always occupied doing calculations. And by connecting everything on-die, communication is many thousands of times faster than what is possible with off-chip technologies like InfiniBand.”
This means the WSE has been designed not to be a bottleneck when carrying out AI and machine learning tasks.
Cerebras has reportedly already started shipping the hardware to a small number of customers.
But there is no word on what type of specially designed servers will be needed to house these mammoth processors, nor cooling and power requirements of the new chips.
There is also no word on the price of the chip at the time of writing.
It should be noted that Cerebras is not alone in designing customised chips for AI and machine learning purposes.
Google for example in 2017 touted its own in-house custom accelerators for machine learning applications.
Google revealed that its Tensor Processing Units (TPUs) were 15x to 30x faster than contemporary GPUs and CPUs.
And these devices are also much more energy efficient.
In January this year Intel revealed at the Consumer Electronics Show in Las Vegas that it was working with Facebook on an artificial intelligence (AI) chip.