Stanford Researchers Use GPUs To Create The World’s Largest ‘Virtual Brain’

The artificial neural network will be used to improve machine learning algorithms

Nvidia and researchers from Stanford University have created the world’s largest artificial neural network – a network of simulated neurons, running on a cluster of processors, designed to replicate the inner workings of the human brain.

Based on just 16 servers stacked with Nvidia’s high-performance GPUs, the Stanford project can handle about 6.5 times more parameters than the previous record-setting network – a 1,000 server, 16,000 core machine developed by Google in 2012.

The announcement was made at the International Supercomputing Conference (ISC) in Leipzig, Germany. At the same event, Nvidia revealed CUDA 5.5, an update to its parallel programming and computing model that, for the first time ever, features native support for ARM CPUs.

Intelligent machines

A ‘neural network’ digitally models the connections between billions of neurons in the brain. In most cases, it is an adaptive system that changes its structure as it ‘learns’. Such networks are used to study the processes responsible for recognising objects, characters, voices and sounds.
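To illustrate what ‘adaptive’ means here, the sketch below trains a single artificial neuron – the smallest possible neural network – whose connection weights change each time it makes a mistake. This is a hypothetical toy example (learning the logical AND function), not the Stanford/Nvidia system described in the article:

```python
# Minimal sketch: one artificial neuron that 'learns' by adjusting
# its connection weights whenever its output is wrong.
import random

def step(x):
    """Fire (1) if the weighted input crosses the threshold, else 0."""
    return 1 if x >= 0 else 0

# Toy training data: the logical AND function, as (inputs, expected output).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
rate = 0.1  # learning rate: how strongly each error adjusts the weights

# The network adapts: each wrong answer nudges the weights toward
# values that would have produced the right one.
for epoch in range(20):
    for (x1, x2), target in data:
        out = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - out
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

# After training, the neuron reproduces the AND function.
for (x1, x2), target in data:
    assert step(weights[0] * x1 + weights[1] * x2 + bias) == target
```

Large networks like the one in this story work on the same principle, but with billions of weights (the ‘parameters’ counted below) adjusted in parallel across many GPUs.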

They can help improve machine learning algorithms and allow computers to act without being explicitly programmed, moving us closer to the creation of true artificial intelligence.

Using methods radically different from those employed by Google engineers, the team led by Professor Andrew Ng of the Stanford Artificial Intelligence Lab based its neural network on just 16 servers packed with GPUs. This network was then capable of taking into account 11.2 billion parameters, as opposed to Google’s 1.7 billion.

According to Nvidia, the bigger and more powerful the neural network, the more accurate it is likely to be in tasks such as object recognition, enabling computers to model “more human-like behaviour”.

This might sound like science fiction, but the technology has clear business uses. For example, Nuance, the developer of popular speech recognition solutions such as Dragon NaturallySpeaking, has been using GPU-accelerated artificial neural networks to “train” its software products to understand users’ speech by processing terabytes of audio data.

“Delivering significantly higher levels of computational performance than CPUs, GPU accelerators bring large-scale neural network modelling to the masses,” said Sumit Gupta, general manager of the Tesla Accelerated Computing Business Unit at Nvidia. “Any researcher or company can now use machine learning to solve all kinds of real-life problems with just a few GPU-accelerated servers.”