Amazon Aims At Big Data Crunchers With High Memory Cluster Offer


Want to crunch large data sets, but only on an on-demand basis? AWS may have the answer

Amazon Web Services is hoping to attract the attention of organisations planning to crunch large data sets with the launch of High Memory Cluster instances for its compute cloud.

Each instance of the new Amazon Elastic Compute Cloud (EC2) offering is built around two Intel Xeon E5-2670 processors, two 120 GB solid-state drives, high-bandwidth networking and 244 GiB of RAM.
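For anyone planning to drive the new instances from code, a minimal sketch of launching one in the US East region is shown below. The boto3 call is a standard part of the AWS Python SDK, but the "cr1.8xlarge" instance type name and the placeholder AMI ID are assumptions for illustration; Amazon's announcement does not spell out the API identifier.

import boto3

# Minimal sketch: launch a High Memory Cluster instance on EC2.
# Assumptions not taken from the announcement: the boto3 SDK, the
# "cr1.8xlarge" instance type identifier and the placeholder AMI ID.
ec2 = boto3.resource("ec2", region_name="us-east-1")  # US East (Northern Virginia)

instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",      # placeholder: substitute an AMI available in us-east-1
    InstanceType="cr1.8xlarge",  # assumed name of the High Memory Cluster instance type
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", instances[0].id)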

High memory = Big Data fun

As Big Data becomes more mainstream, Amazon is trying to ensure it remains the cloud compute platform of choice amongst research communities, hence the push for highly scalable, high-memory capacity.

“Memory-intensive workloads such as real-time applications used by healthcare providers, social networking companies and advertising technology providers require large amounts of memory to maintain high performance,” said Peter DeSantis, vice president of compute services at AWS.

“We designed the High Memory Cluster instances specifically for these workloads, and this is the third instance family (along with High Storage instances and High I/O instances) launched in the past six months designed specifically for high performance applications. We’ll continue to listen and learn from customers and provide additional instance families and capabilities to meet the requirements of their applications.”

SAP is hoping to attract more customers by running its HANA in-memory database technology on the AWS cloud, backed by the new High Memory Cluster instances. “We look forward to enabling these instances to help our customers run real-time applications in the cloud,” said Sapan Panigrahi, vice president of HANA Cloud at SAP.

The service isn’t open to UK customers yet: it is currently only available in Amazon’s US East region in Northern Virginia. AWS would only say it would be expanding to other regions “soon”.
