The US defence agency DARPA is proposing an initiative to tackle the Achilles' heel of today's computing technology.
DARPA is the agency within the United States Department of Defense that is responsible for the development of new technology for use by the military.
It has realised that computing power is a finite resource: the computational capability at our disposal is increasingly limited by power requirements and by constraints on the ability to dissipate heat.
DARPA says that its sensors for intelligence, surveillance and reconnaissance systems collect more information than can be realistically processed in real time, even with the aid of supercomputers. This means that potentially valuable real-time intelligence data is not provided in a timely manner.
“To continue to increase processing speed, new methods for controlling power constraints are required,” said DARPA.
It pointed to the fact that in the past computers could rely on increased performance with each processor generation. As laid down in Moore's Law, each generation brought with it double the number of transistors and, according to Dennard scaling, clock speed could increase 40 percent each generation without increasing power density. “This allowed increased performance without the penalty of increased power,” said DARPA.
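The arithmetic behind those two trends can be sketched in a few lines. Under idealised Dennard scaling, linear dimensions, capacitance and voltage all shrink by a factor of about 0.7 each generation, so dynamic power per transistor halves along with transistor area, and power density stays flat even as the clock rises roughly 40 percent. The figures below are the textbook idealisation, not DARPA's own numbers:

```python
# Idealised Dennard scaling for one process generation, assuming the
# classic linear scale factor k = 1/sqrt(2) (about a 0.7x shrink).
import math

k = 1 / math.sqrt(2)   # linear dimension scale factor per generation

transistors = 2        # Moore's Law: transistor count doubles
clock = 1 / k          # frequency scales as 1/k (~1.41x, i.e. ~+40%)
cap = k                # per-device capacitance scales with size
voltage = k            # supply voltage scales with size

# Dynamic power per transistor: P = C * V^2 * f
power_per_transistor = cap * voltage**2 * clock   # k^2 = 0.5
area_per_transistor = k**2                        # 0.5

power_density = power_per_transistor / area_per_transistor
total_power = transistors * power_per_transistor

print(f"clock speedup:    {clock:.2f}x")          # ~1.41x
print(f"power density:    {power_density:.2f}x")  # unchanged
print(f"total chip power: {total_power:.2f}x")    # unchanged
```

With voltage no longer scaling, the V² term stops shrinking and the same calculation yields rising power density, which is exactly the wall DARPA describes.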
“That expected increase in processing performance is at an end,” said DARPA Director Regina E. Dugan. “Clock speeds are being limited by power constraints. Power efficiency has become the Achilles' heel of increased computational capability.”
To this end DARPA has announced a new initiative, dubbed the Power Efficiency Revolution for Embedded Computing Technologies (PERFECT). The initiative will begin with a workshop in Arlington, Virginia on 15 February.
PERFECT essentially aims to improve power efficiency for embedded computer systems, so that more computing can be delivered per watt of electrical power.
This is because transistor operating voltages are approaching the logic threshold voltage, and as this threshold nears, the device's operating characteristics change dramatically, decreasing both reliability and maximum operating frequency.
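One standard way to see why frequency collapses near the threshold is the alpha-power law model of transistor delay, in which maximum clock frequency scales roughly as (V − Vth)^α / V. The threshold voltage and alpha value below are illustrative assumptions, not DARPA's figures:

```python
# Illustrative sketch using the alpha-power law delay model:
# maximum frequency is proportional to (V - Vth)^alpha / V.
# Vth and alpha here are assumed, typical-looking values.
def max_frequency(v, vth=0.3, alpha=1.3):
    """Relative maximum clock frequency at supply voltage v (volts)."""
    if v <= vth:
        return 0.0          # below threshold the device cannot switch
    return (v - vth) ** alpha / v

nominal = max_frequency(1.0)  # take a 1.0 V supply as the baseline
for v in (1.0, 0.7, 0.5, 0.35):
    ratio = max_frequency(v) / nominal
    print(f"V = {v:.2f} V -> {ratio:.0%} of nominal clock speed")
```

Lowering the supply voltage saves power quadratically, but as the model shows, the achievable clock speed falls off a cliff as V approaches Vth, which is the trade-off the article describes.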
But the demand for faster and more powerful computing resources continues unabated, so commercial companies have little wriggle room left: they cannot reduce operating voltage further without accepting these decreases in clock frequency.
“PERFECT seeks revolutionary approaches to processing-power efficiency to overcome these limitations,” said DARPA. “This approach includes near threshold voltage operation and massive heterogeneous processing concurrency, combined with techniques to effectively use the resulting concurrency and tolerate the resulting increased rate of soft errors.”
Essentially, DARPA is seeking a way to ramp up computing output per watt of system power.
“Current embedded processing systems have power efficiencies of around 1 giga floating point operation per second per watt (GFLOPS/w),” said DARPA, but it believes the military will need at least 75 GFLOPS/w in the future. The goal of the PERFECT program therefore is to provide this power efficiency by taking so-called “novel approaches.”
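The scale of that jump is easy to put in concrete terms. The efficiency figures below are DARPA's; the 150 GFLOPS workload is our own hypothetical example:

```python
# Back-of-the-envelope comparison of today's ~1 GFLOPS/W embedded
# efficiency against the PERFECT goal of 75 GFLOPS/W.
def watts_needed(workload_gflops, efficiency_gflops_per_watt):
    """Power draw in watts to sustain a given throughput."""
    return workload_gflops / efficiency_gflops_per_watt

workload = 150.0  # hypothetical sustained embedded workload, GFLOPS

print(watts_needed(workload, 1.0))   # today: 150 W
print(watts_needed(workload, 75.0))  # PERFECT goal: 2 W
```

For a battery-powered or airborne sensor platform, the difference between a 150 W and a 2 W processing budget is the difference between impractical and routine.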
The PERFECT program will apparently take advantage of industry advances in fabrication geometries down to 7nm.
To be considered a success, organisations do not have to build operational hardware; they need only simulate the hardware and demonstrate progress. DARPA of course continues to work on other projects: last May it said it would build a cloud-based network that can continue supporting military missions even while under cyber-attack.