
AMD Instinct GPUs Target Machine Learning In Data Centres And HPC Servers

AMD is releasing its Instinct range of graphics processing units (GPUs) designed for accelerating machine learning workloads in servers.

Rather than pushing pixels to displays or rendering video workloads, the Instinct GPUs have been created specifically to power deep learning algorithms, which use artificial neural networks to dissect and find patterns in data in a similar fashion to the human brain.

It is the parallel processing nature of GPUs, as opposed to the more serial processing of central processing units (CPUs), that makes them better equipped to push the large volumes of data through deep learning neural networks needed to train smart algorithms.
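As a rough illustration of that parallelism, the short sketch below runs the same neural network style matrix multiplication on a CPU and then on a GPU. It assumes the open source PyTorch library is installed; on AMD hardware the ROCm build of PyTorch exposes the accelerator through the same "cuda" device name, so this is a generic sketch rather than anything specific to the Instinct cards.

```python
# Minimal sketch: the same dense-layer forward pass timed on CPU and GPU.
# Assumes PyTorch; AMD's ROCm builds of PyTorch expose the GPU as "cuda".
import time
import torch

def forward_pass(device: str) -> float:
    """Time a single dense-layer forward pass (matrix multiply + activation)."""
    x = torch.randn(4096, 4096, device=device)  # batch of input activations
    w = torch.randn(4096, 4096, device=device)  # layer weight matrix
    start = time.time()
    y = torch.relu(x @ w)                       # many independent multiply-adds
    if device != "cpu":
        torch.cuda.synchronize()                # wait for the GPU kernel to finish
    return time.time() - start

print(f"CPU: {forward_pass('cpu'):.4f}s")
if torch.cuda.is_available():
    print(f"GPU: {forward_pass('cuda'):.4f}s")
```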

AMD aims at accelerating AI

The Instinct GPUs are being offered in three guises, built around three iterations of AMD’s graphics processing architecture, which makes use of a 14 nanometre FinFET fabrication process.

The Radeon Instinct MI25 accelerator, derived from AMD’s Vega GPU architecture, has been designed for large scale artificial intelligence (AI) and deep learning applications, offering 24.6 teraflops of 16-bit floating point performance through 64 compute units and 16GB of second-generation high bandwidth memory (HBM2).

With a memory bandwidth of 484GB/s, the MI25 is targeted at handling applications with large data sets, as well as high performance computing workloads.

The Radeon Instinct MI8 uses AMD’s Fiji architecture, offering 8.2 teraflops of 16-bit floating point performance and 4GB of high bandwidth memory. It is aimed more at machine learning inference: essentially putting trained algorithms to use.
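To make the distinction between training and inference concrete, the sketch below loads an already trained network and runs it with gradients disabled, in 16-bit floating point where a GPU is available. It again assumes PyTorch, and the small model and checkpoint named here are hypothetical stand-ins rather than a real trained network.

```python
# Minimal inference sketch: putting an already-trained model to work.
# Assumes PyTorch; the model and (commented-out) checkpoint are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(                  # stand-in for a network trained elsewhere
    nn.Linear(1024, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
# model.load_state_dict(torch.load("trained_weights.pt"))  # hypothetical checkpoint

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.half if device == "cuda" else torch.float32  # FP16 on the accelerator
model = model.to(device=device, dtype=dtype).eval()

with torch.no_grad():                   # no gradients: inference only, not training
    batch = torch.randn(32, 1024, device=device, dtype=dtype)
    predictions = model(batch).argmax(dim=1)
    print(predictions.tolist())
```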

The Radeon Instinct MI6 accelerator is the third in the line up, and is based on AMD’s Polaris architecture, commonly found in AMD’s consumer grade graphics cards.

Offering 5.7 teraflops of 16-bit floating point performance and sporting 16GB of fast GDDR5 memory, the card is aimed at both machine learning inference and parallel processing on smaller devices at the edge of IT networks, rather than being confined to large server deployments.

AMD appears to be making a major play for the server and data centre arena, having launched its Epyc line of server processors to challenge Intel’s dominance in the market. The Instinct GPUs, meanwhile, look to shake up Nvidia’s strong position in graphics accelerators for servers and machines used to train smart algorithms and systems.


Roland Moore-Colyer
