Intel has a new strategy for pursuing a slice of the increasingly crowded artificial intelligence (AI) market: it will launch new chips and partner with Google.
The chip maker acquired AI specialist Nervana Systems for around $350 million in August, and is now looking to use that AI technology to create a platform on which intelligent systems and applications can be built.
The process involves feeding large amounts of data into a deep learning system and leaving it to identify patterns and features, so it can find answers to questions without being explicitly programmed to do so.
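That idea of learning from examples rather than explicit rules can be seen in miniature below. This is an illustrative sketch, not Intel or Nervana code: a single artificial neuron is shown training examples of the logical AND pattern and adjusts its weights until it reproduces the pattern, even though no rule for AND is ever written into the program.

```python
# A minimal single-neuron (perceptron) learner. The names and values here
# are illustrative assumptions, not anything from Intel's platform.
def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights start with no knowledge of the pattern
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Nudge the weights toward the pattern seen in the data
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Training data: the AND pattern, presented purely as examples
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(samples)
```

After training, `predict(w, b, 1, 1)` returns 1 and the other inputs return 0: the neuron has inferred the pattern from data alone. Real deep learning systems stack millions of such units, which is why the training phase is so computationally demanding.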
This requires large amounts of simple data processing, something the parallel architecture of graphics processing units (GPUs) excels at, whereas Intel's central processing units (CPUs) are better suited to more complex processing tasks.
Intel plans to buck this trend by working Nervana's technology into new slices of silicon and into its Xeon and Xeon Phi processor range.
Code-named ‘Lake Crest’, the first chip out of the gate will make its debut in early 2017 and will feature the Nervana Engine, an application-specific integrated circuit designed to run deep learning algorithms, notably the computationally demanding training phase of a neural network rather than the data ingestion side.
From there, Intel plans to add Nervana capabilities to its next generation of Xeon processors, code-named ‘Knights Crest’, and to its Xeon Phi line with a new chip dubbed ‘Knights Mill’.
These chips will be targeted at servers and data centres supporting machine learning and AI systems, likely delivered through the cloud. Intel is confident it will have CPUs that drive deep learning themselves, rather than merely support the GPUs that usually do the heavy processing lifting in such applications.
“We expect the Intel Nervana platform to produce breakthrough performance and dramatic reductions in the time to train complex neural networks,” said Diane Bryant, executive vice president and general manager of the Data Center Group at Intel.
“Before the end of the decade, Intel will deliver a 100-fold increase in performance that will turbocharge the pace of innovation in the emerging deep learning space.”
On the partnership side, Intel and Google plan to work together on creating an open, secure, multi-cloud infrastructure that supports the use of machine learning and AI systems.
The work will focus on optimising Google's open source TensorFlow machine learning library for performance on Intel architecture, as well as on Kubernetes, Google's open source container management platform, which can be used to run enterprise applications, including AI workloads, in the cloud.
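The article does not say what such optimisation will involve, but as a flavour of what CPU tuning for a library like TensorFlow typically looks like, the snippet below shows common OpenMP environment variables used with Intel-accelerated (MKL-backed) builds. The specific values are illustrative assumptions, not anything announced by Intel or Google.

```shell
# Illustrative sketch only -- common OpenMP/MKL tuning knobs for running
# an MKL-accelerated TensorFlow build on an Intel Xeon CPU.
export OMP_NUM_THREADS=16   # assumed: one thread per physical core on a 16-core socket
export KMP_BLOCKTIME=0      # let worker threads sleep immediately when idle
export KMP_AFFINITY=granularity=fine,compact,1,0   # pin threads to cores
```

Settings like these control how math-kernel threads are spread across a Xeon's cores, which is exactly the kind of low-level work implied by "optimising for Intel architecture".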
With Google already making solid advances in AI technology, having Intel as a partner to provide the processing grunt could speed up the evolution of AI and machine learning systems.
How much do you know about the world’s technology leaders? Take our quiz!