Press release

Cerebras Systems Receives Honors in 2019 HPCwire Readers’ and Editors’ Choice Awards

Sponsored by Businesswire

Cerebras Systems, a company dedicated to accelerating Artificial Intelligence (AI) compute, has been recognized in the annual HPCwire Readers’ and Editors’ Choice Awards, presented at the 2019 International Conference for High Performance Computing, Networking, Storage and Analysis (SC19), in Denver, Colorado. The list of winners was revealed at the HPCwire booth at the event, and on the HPCwire website. Cerebras Systems was recognized in the “Editors’ Choice: Top 5 New Products or Technologies to Watch” category for its Wafer Scale Engine (WSE), which was unveiled in August 2019.

Cerebras Systems Receives Honors for Its Wafer Scale Engine in 2019 HPCwire Readers’ and Editors’ Choice Awards (Photo: Business Wire)

The Cerebras WSE is the largest chip ever made and the industry’s only trillion-transistor processor. It contains more cores, more local memory, and more fabric bandwidth than any chip in history, enabling fast, flexible computation at lower latency and with less energy. At 46,225 square millimeters, the WSE is 56 times larger than the largest GPU. With 400,000 cores, 18 gigabytes of on-chip SRAM, 9.6 petabytes per second of memory bandwidth, and 100 petabits per second of interconnect bandwidth, the WSE contains 78 times more compute cores, 3,000 times more high-speed on-chip memory, 10,000 times more memory bandwidth, and 33,000 times more fabric bandwidth than its GPU competitors.

The coveted annual HPCwire Readers’ and Editors’ Choice Awards are determined through a nomination and voting process with the global HPCwire community, along with selections from the HPCwire editors. An annual feature of the publication, the awards constitute prestigious recognition from the HPC community and are revealed each year to kick off the annual supercomputing conference, which showcases high performance computing, networking, storage, and data analysis.

“Every year it is our pleasure to connect with and honor the HPC community through our Readers’ and Editors’ Choice Awards, and 2019 marked an exceptional showing of industry innovation,” said Tom Tabor, CEO of Tabor Communications, publisher of HPCwire. “Between our worldwide readership of HPC experts and an unparalleled panel of editors, the Readers’ and Editors’ Choice Awards represent resounding recognition throughout the industry. Our congratulations go out to all of the winners.”

More information on these awards can be found at the HPCwire website or on Twitter through the hashtag #HPCwireAwards.

About HPCwire

HPCwire is the #1 news and information resource covering the fastest computers in the world and the people who run them. With a legacy dating back to 1986, HPCwire has delivered world-class editorial and journalism, making it the news source of choice for science, technology and business professionals interested in high performance and data-intensive computing. Visit HPCwire at

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to build a new class of computer to accelerate artificial intelligence work by three orders of magnitude beyond the current state of the art. The CS-1 is the fastest AI computer in existence. It contains a collection of industry firsts, including the Cerebras Wafer Scale Engine (WSE). The WSE is the largest chip ever built. It contains 1.2 trillion transistors and covers more than 46,225 square millimeters of silicon. The largest graphics processor on the market has 21.1 billion transistors and covers 815 square millimeters. In artificial intelligence work, large chips process information more quickly, producing answers in less time. As a result, neural networks that in the past took months to train can now train in minutes on the Cerebras WSE.