Oracle Moves Prime Focus From Big Data To Big Memory

Chris Preimesberger

Oracle hopes Big Memory Machine will give it a shot in the arm, says Chris Preimesberger

Oracle has moved its prime focus from “big data” to “big memory,” carving out yet another IT buzz phrase in the process.

Mere hours after his Oracle America’s Cup team won two races against New Zealand on nearby San Francisco Bay, closing the score in the international sailing competition to 8-5, an ebullient Oracle CEO Larry Ellison was shuttled downtown to Moscone Center to introduce his company’s new in-memory machine for the 12c database to a full-house audience of Oracle OpenWorld 2013 attendees.

‘Big Memory Machine’ Unveiled

Ellison then took the wraps off what he called a “Big Memory Machine”—officially, the Oracle M6-32—an in-memory database server block that he suggested packs more dynamic memory than some entire vertical markets deploy. Each M6-32 block, when fully loaded, can house a whopping 32TB of dynamic RAM (DRAM).

To add a bit of perspective, many small-to-midrange enterprises don’t have 32TB of disk storage in their entire IT systems.

Oracle’s M6-32, which Ellison described as the optimal machine for running the Oracle 12c database, is powered by 32 of the company’s new SPARC M6 processors. Each M6 chip houses 12 cores running at 3.6GHz, giving the machine 384 cores in total.

So this is Oracle’s latest venture into the future, a future the company has been describing for more than a generation. Most of the time, Oracle gets pretty close to predicting that future accurately; that’s a big reason the company has been in business since the Carter administration.

‘Engineered Together’

While everybody knows that data centres are a conglomeration of different types and brands of systems aimed at widely varying workloads, Oracle has long espoused that commodity servers, storage and networking are just fine for handling most of a company’s less-time-critical workloads. Most of the industry believes the same.

That design, by its very nature, leaves an opening—depending on line-of-business requirements—for million-dollar-plus special systems like the M6-32, which will be needed to do the heavy lifting in 24/7 processing environments.

“What we think the data centre of the future looks like is really a core of commodity machines and a collection of these purpose-built machines [like the M6-32] that give you better database performance, lower database costs, more reliable backups and faster analytics,” Ellison said in closing his opening keynote.

Designed for Special Workloads

So the M6-32 server block, then, isn’t designed to be the only machine in a data centre, even though on paper it certainly looks as if it could handle the job.

“If you look at where data centres are going, everyone seems to want to buy Intel-based servers running virtualized Linux and core Ethernet. The conventional wisdom says that it’s cheap and good for anything. Yes, it is cheap, but it is not good for everything,” Ellison said.

Ellison claimed that Oracle’s new whiz-bang machine enables queries to run 100 times faster than traditional disk-based approaches. He also said that database transactions run an “order of magnitude faster” than anything else available today.

This is because the 12c database now stores data in both row and column formats. The columnar copy is held in DRAM, which removes the need for the on-disk database to maintain certain classes of index files, Ellison said.

Having data in DRAM enables an Oracle database to process data queries at a rate of several billion rows per second, Ellison said.

“This all means that the results [of queries] are instantaneous, at the speed of thought, with answers coming back faster than you can ask the questions,” Ellison said. “We can process data at ungodly speeds.”
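
To make the row-versus-column distinction concrete, here is a minimal Python sketch, using invented data and names rather than anything from Oracle: the same small table is stored both ways, and an analytic aggregate over one column touches only that column’s values in the columnar layout instead of every field of every row.

```python
# Toy illustration of row format vs. column format (hypothetical data, not Oracle code).

# Row format: each record kept whole, the layout transactional workloads prefer.
rows = [
    {"order_id": 1, "region": "EMEA", "amount": 120.0},
    {"order_id": 2, "region": "APAC", "amount": 75.5},
    {"order_id": 3, "region": "EMEA", "amount": 310.0},
]

# Column format: the same table pivoted so each column is contiguous,
# the layout analytic scans prefer and the kind of copy Ellison described
# keeping in DRAM.
columns = {
    "order_id": [1, 2, 3],
    "region": ["EMEA", "APAC", "EMEA"],
    "amount": [120.0, 75.5, 310.0],
}

# Analytic query: total order amount. The row scan walks every field of
# every record; the column scan reads only the single list it needs.
total_from_rows = sum(r["amount"] for r in rows)
total_from_columns = sum(columns["amount"])

assert total_from_rows == total_from_columns
print(f"Total order amount: {total_from_columns}")
```

At 32TB-of-DRAM scale the principle is the same, just with vastly larger column vectors and hardware-assisted scanning in place of Python lists.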

Speed Is the Operative Topic

In case you haven’t gotten the message yet, this whole exercise is all about speed—not unlike the America’s Cup races Ellison had just observed.

Ellison also said that the M6-32 can discover existing data stores quickly, with little or no modification to other applications required.

“All you do is flip a switch and all your existing applications run much faster,” Ellison said. “Everything runs with no changes to SQL or your applications. Everything that works today works with the in-memory option turned on.”

Thanks to the M6-32’s data intelligence, the entire database doesn’t have to sit in memory to take advantage of the in-memory option. Selected data can be kept on disk and migrated into memory later as needed.
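
As a rough sketch of that selective placement, consider the following hypothetical Python example; the TinyStore class, its JSON-files-on-disk layout and its method names are invented for illustration and are not Oracle’s API. The point is that a query goes through the same call whether or not a table has been promoted into memory, echoing the claim that applications need no changes when the in-memory option is switched on.

```python
# Hypothetical sketch of selective in-memory placement (not Oracle's API):
# tables live on disk as JSON files; "promoting" a table caches it in RAM,
# and queries use the same call either way.

import json
from pathlib import Path


class TinyStore:
    def __init__(self, data_dir: str) -> None:
        self.data_dir = Path(data_dir)
        self.in_memory = {}  # table name -> list of row dicts held in RAM

    def promote(self, table: str) -> None:
        """Flip the switch: load one table from disk into memory."""
        path = self.data_dir / f"{table}.json"
        self.in_memory[table] = json.loads(path.read_text())

    def evict(self, table: str) -> None:
        """Drop a table from memory again; it remains on disk."""
        self.in_memory.pop(table, None)

    def scan(self, table: str):
        """Callers always issue the same scan; only the access path differs."""
        if table in self.in_memory:
            return self.in_memory[table]          # fast path: DRAM
        path = self.data_dir / f"{table}.json"
        return json.loads(path.read_text())       # slow path: disk
```

An application calling scan("orders") needs no change when "orders" is later promoted; it simply gets its answers back faster, which is essentially the pitch Ellison made for flipping on the in-memory option.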

The M6-32—orders for which are being taken now—will also be available as a clustering unit, so that it can be paired with an Oracle Exadata storage array. In such a configuration, the Big Memory Machine can hold an entire 12c database in memory while the Exadata database machine provides the storage subsystem.

“We think by designing the hardware and software together you get extreme performance and, therefore, you need fewer of them, and you spend less buying them. You use less floor space and less electricity running it. And you use less labour maintaining them,” Ellison said.

Oracle needs a shot in the arm in its hardware business; it has been struggling on that side of the company since it acquired Sun Microsystems’ servers, storage and networking in January 2010. Since then, Oracle’s commodity server sales have slowed, and its more expensive engineered-together systems haven’t sold enough units to compensate.

For now, however, Oracle plunges ahead with bigger, faster, better and more expensive data centre machines, and it remains to be seen whether this strategy will work over the long haul.

Originally published on eWeek.