Stanford Researchers Use GPUs To Create The World’s Largest ‘Virtual Brain’

Nvidia and researchers from Stanford University have created the world’s largest artificial neural network – a software system, running on GPU-packed servers, modelled on the inner workings of the human brain.

Based on just 16 servers stacked with Nvidia’s high-performance GPUs, the Stanford project can handle about 6.5 times more parameters than the previous record-setting network – a 1,000-server, 16,000-core machine developed by Google in 2012.

The announcement was made at the International Supercomputing Conference (ISC) in Leipzig, Germany. At the same event, Nvidia revealed CUDA 5.5, an update to its parallel computing platform and programming model that, for the first time, features native support for ARM CPUs.

Intelligent machines

A ‘neural network’ virtually represents connections between billions of neurons in the brain. In most cases, it is an adaptive system that changes its structure as it ‘learns’. Such networks are used to study the processes responsible for recognition of objects, characters, voices and sounds.

They can help improve machine learning algorithms and get computers to act without the need for a specific program, moving us closer to the creation of true artificial intelligence.
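The “adaptive system” idea described above can be sketched in a few lines of code. The toy example below is purely illustrative and is not the Stanford/Nvidia system: it trains a single artificial neuron with one tunable parameter (a connection weight), whereas the networks discussed in this article adjust billions of such parameters in parallel on GPUs.

```python
# A minimal sketch of how a neural network 'learns': a single neuron
# whose one parameter (a connection weight) is nudged towards values
# that reduce its prediction error on example data.

def train_neuron(examples, lr=0.1, epochs=100):
    """Fit a one-weight neuron y = w * x by gradient descent."""
    w = 0.0  # start with no knowledge of the task
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x          # the neuron's current guess
            error = pred - y      # how far off it was
            w -= lr * error * x   # adjust the connection strength
    return w

# Teach the neuron that outputs should be double the inputs.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_neuron(examples)
print(round(w, 3))  # converges towards 2.0
```

The same principle – repeatedly adjusting weights to shrink errors – scales up to the billions of parameters quoted below, which is why raw computational throughput matters so much.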

Using methods radically different from those employed by Google engineers, the team led by Professor Andrew Ng of the Stanford Artificial Intelligence Lab based its neural network on just 16 servers packed with GPUs. This network was then capable of taking into account 11.2 billion parameters, as opposed to Google’s 1.7 billion.

According to Nvidia, the bigger and more powerful the neural network, the more accurate it is likely to be in tasks such as object recognition, enabling computers to model “more human-like behaviour”.

This might sound like science fiction, but the technology has clear business uses. For example, Nuance, the developer of popular speech recognition solutions such as Dragon NaturallySpeaking, has been using GPU-accelerated artificial neural networks to “train” its software products to understand users’ speech by processing terabytes of audio data.

“Delivering significantly higher levels of computational performance than CPUs, GPU accelerators bring large-scale neural network modelling to the masses,” said Sumit Gupta, general manager of the Tesla Accelerated Computing Business Unit at Nvidia.  “Any researcher or company can now use machine learning to solve all kinds of real-life problems with just a few GPU-accelerated servers.”


Max Smolaks

Max 'Beast from the East' Smolaks covers open source, public sector, startups and technology of the future at TechWeekEurope. If you find him looking lost on the streets of London, feed him coffee and sugar.
