ARM Debuts DynamIQ Processor Architecture With An Eye On AI & Machine Learning

The Cambridge tech firm is expanding the scope of its big.LITTLE design

ARM is gearing up for artificial intelligence (AI) with its latest mobile processor design, dubbed DynamIQ, expanding the scope of its big.LITTLE architecture.

The architecture takes its name from its design, which combines a pair or quartet of lower-power processing cores with the same number of higher-power cores.

This produces a processor that can fire up its powerful cores for demanding tasks, then shut them down and hand lighter workloads to their less powerful siblings in order to reduce energy consumption.
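
For software, the split simply appears as a set of faster and slower CPU cores, with the operating system deciding where work runs. As a minimal sketch, the C snippet below pins a demanding thread to an assumed "big" cluster on Linux; the core numbering (cores 4-7 as the big cluster) is purely illustrative and varies between SoCs, and on real devices energy-aware scheduling in the kernel normally makes this decision automatically.

```c
/* Illustrative sketch only: steering a demanding thread onto the "big"
 * cores of a Linux system with a big.LITTLE layout. The assumption that
 * cores 4-7 form the high-performance cluster is hypothetical and
 * differs between SoCs. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t big_cores;
    CPU_ZERO(&big_cores);

    /* Assumed layout: cores 4-7 are the high-performance cluster. */
    for (int cpu = 4; cpu <= 7; cpu++)
        CPU_SET(cpu, &big_cores);

    /* Ask the kernel to keep this (demanding) thread on the big cluster;
     * lighter background work could instead be confined to cores 0-3. */
    if (sched_setaffinity(0, sizeof(big_cores), &big_cores) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("Heavy workload confined to the assumed big cluster (cores 4-7)\n");
    /* ... run the demanding task here ... */
    return 0;
}
```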

ARM’s big.LITTLE design has been incorporated into Qualcomm’s latest processors and the A-series chip found in the iPhone 7.

Being DynamIQ

With DynamIQ, which is essentially an extension of ARM’s Cortex-A processor design and will underpin the next family of Cortex-A processors due to emerge later this year, the Cambridge-based tech firm is enabling an expanded combination of cores on a single chip. Each core can support an individual power profile, allowing chipmakers to implement custom frequencies on a per-core basis.

Previously, chips based on the big.LITTLE design were limited to specific power consumption and frequency profiles for each set of cores.

In more straightforward terms, DynamIQ allows chipmakers to tweak the big.LITTLE architecture to suit very specific use cases, such as running and accelerating machine learning algorithms within the silicon of ARM-based processors.

“DynamIQ big.LITTLE carries on the ‘right processor for the right task’ approach and enables configurations of big and LITTLE processors on a single compute cluster which were previously not possible. For example, 1+3 or 1+7 DynamIQ big.LITTLE configurations with substantially more granular and optimal control are now possible,” said Nandan Nayampally, vice president of marketing for the CPU Group at ARM.
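
One practical way to see this asymmetry from software is to read the per-core frequency limits the kernel exposes. The C sketch below walks the standard Linux cpufreq sysfs entries for an assumed eight-core part; these paths are ordinary Linux interfaces rather than anything ARM announced here, and which cores show up as "big" depends entirely on the SoC and kernel in question.

```c
/* Illustrative sketch: inspecting each core's maximum frequency via the
 * Linux cpufreq sysfs files, which makes the asymmetric clusters of a
 * big.LITTLE or DynamIQ design visible. The eight-core count is an
 * assumption; paths may be absent on some kernels or SoCs. */
#include <stdio.h>

int main(void)
{
    char path[128];
    for (int cpu = 0; cpu < 8; cpu++) {            /* assumed 8-core part */
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/cpufreq/cpuinfo_max_freq", cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;                               /* core offline or no cpufreq */
        long khz = 0;
        if (fscanf(f, "%ld", &khz) == 1)
            printf("cpu%d: max %ld kHz\n", cpu, khz);
        fclose(f);
    }
    return 0;
}
```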

With the rise of smart technology and AI-based systems in everything from autonomous cars through to virtual assistants in smartphones, having a chip that can be configured for running the inference side of machine learning, essentially putting trained smart algorithms into action, is likely to be in more demand in the coming years.

Continuing to push its ambitions to help power the next wave of AI systems, ARM will also reveal new CPU instruction sets and software libraries for mobile processors based on its designs, with a focus on accelerating AI software through its processors and core architecture.

The new instruction sets will also help chipmakers develop their own custom cores, based on ARM’s instruction set, that have been configured for running machine learning code within a processor’s silicon.
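
ARM has not yet detailed the new instructions themselves, but the kind of work they would accelerate looks like the kernel below: a dot product, the basic building block of neural-network inference, written here with existing NEON intrinsics as a stand-in for whatever ARM ultimately ships. It assumes an AArch64 target and input lengths that are a multiple of four.

```c
/* Illustrative sketch: a small dot-product kernel, the core operation of
 * neural-network inference, using existing ARM NEON intrinsics. This is a
 * stand-in example, not the new DynamIQ instructions, which ARM has not
 * published. Assumes an AArch64 target and n divisible by 4. */
#include <arm_neon.h>
#include <stdio.h>

static float dot_product(const float *a, const float *b, int n)
{
    float32x4_t acc = vdupq_n_f32(0.0f);           /* four running sums */
    for (int i = 0; i < n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);         /* load 4 floats from a */
        float32x4_t vb = vld1q_f32(b + i);         /* load 4 floats from b */
        acc = vmlaq_f32(acc, va, vb);              /* acc += va * vb, 4 lanes at once */
    }
    return vaddvq_f32(acc);                        /* horizontal add of the 4 lanes */
}

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    printf("dot = %f\n", dot_product(a, b, 8));    /* expected: 120.0 */
    return 0;
}
```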

With the slow rise of smart cities and the growth of the Internet of Things, there is likely to be an increasing appetite for processors and chipsets that can handle the smart side of this building wave of technology.
