IBM Chip Prototypes Mirror The Human Brain

IBM researchers have created prototype processors that mirror the workings of the human brain, enabling them not only to collect and analyse information, but essentially to learn from their mistakes, understand the data they are seeing and react accordingly.

The “cognitive computing” chips are able to recognise patterns and make predictions based on data, learn through experiences, find correlations among the information and remember outcomes, according to IBM officials.

Radical Departure From Traditional Chips

The chips represent a significant departure from how computers are traditionally programmed and operated, and open opportunities in a wide range of fields, they said.

“Future applications of computing will increasingly demand functionality that is not efficiently delivered by the traditional architecture,” Dharmendra Modha, project leader for IBM Research, said in a statement. “These chips are another significant step in the evolution of computers from calculators to learning systems, signalling the beginning of a new generation of computers and their applications in business, science and government.”

IBM has been pushing efforts to drive more intelligence into an increasingly wide range of devices, and to create ways to collect, analyse, process and respond to data more quickly and intelligently. Those efforts were on public display in February when IBM’s “Watson” supercomputer beat human contestants on the game show “Jeopardy”.

Watson, like many projects at IBM Research Labs, is focused on analytics – the ability to process and analyse data to arrive at an optimal decision. Watson was a revelation because of its ability to reason in a humanlike fashion and answer questions posed in natural language – complete with puns, riddles and nuance – by quickly running through its vast database of information, making the necessary connections and returning not a list of possible answers, but the single correct answer itself.

The cognitive computing chips echo those efforts. IBM officials are calling the prototypes the company’s first neurosynaptic computing chips, which they said work in a fashion similar to the brain’s neurons and synapses, through a combination of advanced algorithms and silicon circuitry.

Neurosynaptic Artificial Intelligence

It is through this mimicking of the brain’s functionality that the chips are expected to understand, learn, predict and find correlations, according to IBM. Digital silicon circuits create what IBM is calling the chips’ neurosynaptic cores, which include integrated memory (replicating synapses), computation (replicating neurons) and communication (replicating axons).

With those capabilities, computing can move away from the current if-then programming scenario and toward one where computers dynamically react, learn and problem-solve on the go.
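To make that contrast concrete, here is a minimal, hypothetical sketch – not IBM’s code, and far simpler than a neurosynaptic chip – comparing a hard-coded if-then rule with a simple perceptron that arrives at the same decision rule by learning from its mistakes on example data:

```python
# Illustrative sketch (not IBM's design): a fixed if-then rule versus
# a unit that learns an equivalent decision rule from examples.

def fixed_rule(x):
    # Traditional programming: the behaviour is spelled out in advance.
    return 1 if (x[0] == 1 or x[1] == 1) else 0

def train_perceptron(samples, epochs=10, lr=0.5):
    # Learning: weights start at zero and are nudged after each mistake.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out          # mistake signal
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Labelled examples of the logical-OR behaviour the fixed rule encodes.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
learned = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The learned rule now agrees with the hand-written one on every input.
assert all(learned(x) == fixed_rule(x) for x, _ in samples)
```

The point of the sketch is that nothing in `train_perceptron` mentions the rule itself – the behaviour emerges from the training data, which is the shift IBM describes from programmed calculators to learning systems.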

The two working prototypes, fabricated in a 45-nanometer SOI-CMOS process, each contain a core of 256 neurons. One core contains 262,144 programmable synapses, while the other holds 65,536 learning synapses. The chips are undergoing testing and have already run simple applications such as navigation, machine vision, pattern recognition, associative memory and classification.
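As a rough illustration of the general idea – not IBM’s actual design, and with the sizes scaled far down – a neurosynaptic core can be pictured as a crossbar of synaptic weights (memory) feeding integrate-and-fire neurons (computation), whose output spikes travel onward like axons (communication):

```python
# Hypothetical toy model of a neurosynaptic core. All names and sizes
# are illustrative assumptions, not IBM's implementation.
import random

N_AXONS, N_NEURONS = 4, 3
THRESHOLD, LEAK = 1.0, 0.1

random.seed(0)
# Synapse crossbar: weights[a][n] connects input axon a to neuron n
# (this plays the role of the core's integrated memory).
weights = [[random.uniform(0.0, 0.6) for _ in range(N_NEURONS)]
           for _ in range(N_AXONS)]
potential = [0.0] * N_NEURONS  # membrane potentials (computation)

def step(spikes):
    """Advance one tick: integrate incoming spikes, leak, then fire."""
    fired = []
    for n in range(N_NEURONS):
        for a in range(N_AXONS):
            if spikes[a]:
                potential[n] += weights[a][n]
        potential[n] = max(0.0, potential[n] - LEAK)
        if potential[n] >= THRESHOLD:
            fired.append(n)        # spike out along the "axon"
            potential[n] = 0.0     # reset after firing
    return fired

for t in range(5):
    out = step([1, 0, 1, 1])  # same input spike pattern each tick
    print(f"tick {t}: neurons fired {out}")
```

Unlike a conventional program, the model’s behaviour lives in the weight matrix rather than in explicit instructions – adjusting those weights (as the learning synapses on one prototype do in hardware) changes what the core computes.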

The effort is getting $21 million in new funding through DARPA (the US Defense Advanced Research Projects Agency) for phase 2 of what IBM calls the SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) project. The project’s goal is a computing system that not only collects and analyses complex information gathered simultaneously from multiple sensors, but can also dynamically rewire itself as it goes – all in a compact, energy-efficient form factor.

Smarter Planet Applications

IBM officials see countless applications for cognitive computing systems.

In one scenario, a system monitoring the world’s water supply – collecting and analysing data such as temperature, pressure, wave height, acoustics and ocean tides – could determine the threat of a tsunami and decide to issue a warning based on its findings.

Another cognitive system could monitor sights, smells, texture and temperature to warn grocers of bad or contaminated produce.

“Imagine traffic lights that can integrate sights, sounds and smells and flag unsafe intersections before disaster happens or imagine cognitive coprocessors that turn servers, laptops, tablets and phones into machines that can interact better with their environments,” IBM’s Modha said.

Jeffrey Burt

Jeffrey Burt is a senior editor for eWEEK and contributor to TechWeekEurope
