Venray Touts Combined CPU/DRAM Chip For Heavy-Duty Applications

A US company emerging from stealth mode says combining CPU cores and DRAM on a single piece of silicon brings benefits

American company Venray Technology is reportedly touting its ability to combine CPU cores and DRAM on the same piece of silicon.

The Dallas-based company has been around for seven years, but is reportedly now seeking publicity about its technology to aid in its search for a potential buyer, according to the HPCWire news website.

The technology the company is keen to promote is its TOMI (Thread Optimised Multiprocessor) technology.

DRAM/CPU Combination

The company believes TOMI can punch through the so-called ‘walls’ that stand in the way of seriously powerful computing.

This comes amid concerns that Moore’s Law, which has driven the microprocessor industry for the past 40 years, has only about five more years of relevance before the cost of manufacturing ever-smaller chips becomes too great.

Venray is looking to crack the so-called ‘power wall’ (faster processors generate more heat than can practically be cooled) and the ‘memory wall’ (the growing gap between processor speed and the rate at which data can be delivered from memory, constrained in part by how many pins a CPU package can support). It aims to do this with an approach that puts CPU cores and DRAM on the same die.

The company is developing a way to place a run-of-the-mill processor inside generic DRAM. The thinking is that the physical proximity of CPU and memory, combined with extra-wide buses, will help minimise or even remove the memory wall. Such parts would be useful in high performance computing (HPC), big data applications, and workloads that handle large amounts of unstructured data.
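To see why an extra-wide on-die bus matters, a rough back-of-envelope comparison helps. The Python sketch below uses purely illustrative figures (a conventional 64-bit off-chip memory channel versus a hypothetical 4,096-bit on-die connection); none of the numbers are Venray’s own specifications.

```python
# Illustrative back-of-envelope comparison; all figures are assumptions,
# not Venray (TOMI) specifications.

def bandwidth_gb_s(bus_width_bits, transfers_per_sec):
    """Peak bandwidth in GB/s for a bus of a given width and transfer rate."""
    return bus_width_bits / 8 * transfers_per_sec / 1e9

# A conventional off-chip DDR3-style channel: 64 bits wide at ~1.6 GT/s.
off_chip = bandwidth_gb_s(64, 1.6e9)

# A hypothetical on-die connection to a DRAM row: thousands of bits wide,
# clocked more slowly, yet still far ahead on raw throughput.
on_die = bandwidth_gb_s(4096, 0.4e9)

print(f"off-chip channel : {off_chip:6.1f} GB/s")   # ~12.8 GB/s
print(f"on-die wide bus  : {on_die:6.1f} GB/s")     # ~204.8 GB/s
```

The point of the sketch is simply that width, not clock speed, does most of the work once memory sits on the same die as the processor.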

The Core Problem

One of the fundamental problems facing chip designers is that adding more and more cores to a processor does not necessarily increase its performance. Indeed, processor expert Russell Fish, the CTO of Venray, explained the problem in an EDN article late last year.

“In small amounts multi-core can have an effect,” Fish wrote. “Two or even four cores can improve performance. However, doubling the number of cores does not double the performance. But the news is particularly alarming for popular data intensive cloud computing applications such as managing unstructured data.”

Fish went on to cite Sandia National Laboratories, which performed an analysis of multi-core microprocessors running just such data-intensive applications back in 2008.

Sandia apparently discovered that, as the number of cores increased, performance improved at a substantially sub-linear rate and then began to fall away sharply.

Bandwidth Constraints

“The problem is the lack of memory bandwidth as well as contention between processors over the memory bus available to each processor,” said Sandia Labs.
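To make that pattern concrete, the sketch below models a hypothetical bus-limited system: speedup rises sub-linearly while the shared bus can still feed the cores, then declines once arbitration overhead dominates. The model and every figure in it are illustrative assumptions, not the Sandia analysis itself.

```python
# A simple, hypothetical model of memory-bound cores sharing one bus
# (illustrative only; not the Sandia Labs study). Each core wants `demand`
# GB/s, the bus supplies `bus_bw` GB/s, and arbitration adds a small
# per-core contention penalty.

def effective_speedup(cores, demand=4.0, bus_bw=12.8, contention=0.02):
    """Relative throughput of `cores` memory-bound cores on a shared bus."""
    ideal = cores                              # perfect linear scaling
    bandwidth_cap = bus_bw / demand            # cores the bus can actually feed
    penalty = 1.0 - contention * (cores - 1)   # overhead grows with core count
    return max(0.0, min(ideal, bandwidth_cap) * penalty)

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:2d} cores -> speedup {effective_speedup(n):5.2f}x")
```

Running it shows speedup climbing towards the bandwidth cap and then shrinking as more cores fight over the same bus, which is the qualitative behaviour Fish describes.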

According to the HPCWire article, Venray’s Fish believes the memory wall could be substantially eliminated if CPUs were merged with DRAM. The problem is that CPUs are far more complex devices than DRAM: a CPU requires ten or more layers of material to be laid down on the die, compared with just three for DRAM.

Venray’s approach is apparently to design much simpler processors, reducing the number and complexity of logic-gate connections so the layout can be flattened onto just three layers.