Database Virtualisation: The Next Big Thing?


IT managers are well aware of the cost savings that can be achieved by virtualising servers, but now experts say they should focus on virtualising in-house databases.

Over the past few years IT managers have been busy virtualising their servers in order to improve hardware utilisation rates, as well as reduce their server admin costs and energy consumption. But now experts are advising that it is time to virtualise your databases.

Database virtualisation is a way to improve flexibility, maximise efficiency, lower costs and ease administrative overhead. There’s no hardware involved. Instead, the savings come from the licence fees paid to database vendors and the salaries paid to a larger administrative staff.

“The most common virtualisation is server virtualisation, which allows [a server] to run anywhere – there are no boundaries,” said Brian Babineau, senior consulting analyst at the Enterprise Strategy Group. “In database virtualisation, it’s very similar. We take an instance of rows and columns and allow it to be fluid. We can move it anywhere. We can shrink the size of it, we can write to it anywhere, we can allow the table to be split up multiple times.”

Babineau said that by removing data from a proprietary database, you can accomplish a number of important things. First, he said, you can make better use of the databases you already have without having to buy more licences than you actually need. Second, you can support communication between applications that normally would use different databases.

But for many early users of database virtualisation, the reasons for implementing the technology are similar to the reasons for implementing server virtualisation: easier management, higher availability and better performance.

“The primary goal of database virtualisation is to enable a standard database to run on a shared-nothing cluster of commodity servers, thereby improving scalability and high availability at a lower cost compared to purpose-built shared-disk cluster databases,” said Matthew Aslett, an analyst at The 451 Group.

A shared-nothing database, according to Aslett, is based on independent servers with no single point of contention.

“The main usage scenario is high availability, although unlike traditional simple database clustering systems (where one database installation is active and the other is passive), with database virtualisation deployments, all database servers are active at all times,” he said.


The advantage of having all database servers active at once is that you get both real-time updating and better performance, since multiple servers share a load that would otherwise fall on one. The downside to running multiple instances of a database is that you have to pay for multiple licences and you take on an increased administrative load. Fortunately, by using database virtualisation to decouple the database from the data and from any specific server, you can overcome this.
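As a rough illustration of the all-active, load-sharing model Aslett describes, the Python sketch below (all names hypothetical, not from any real product) presents several active nodes as a single logical database: queries are routed round-robin across the nodes, and a failed node is simply skipped, which is the high-availability behaviour the article attributes to these deployments.

```python
import itertools

class NodeDown(Exception):
    """Raised by a node that cannot serve the query."""
    pass

class VirtualDatabase:
    """Toy model: one logical database backed by several active
    shared-nothing nodes, queried round-robin with failover."""
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._cycle = itertools.cycle(self.nodes)

    def query(self, sql):
        # Try each node in turn; skip any node that is down.
        for _ in range(len(self.nodes)):
            node = next(self._cycle)
            try:
                return node(sql)
            except NodeDown:
                continue
        raise RuntimeError("no active nodes available")

# Two stand-in "nodes": plain functions answering the same query.
def node_a(sql):
    return ("node_a", sql.upper())

def node_b(sql):
    raise NodeDown()  # simulate a failed node

db = VirtualDatabase([node_a, node_b])
print(db.query("select 1"))  # served by whichever node is up
```

Real products implement this routing inside the database tier rather than in application code, but the sketch captures the idea: the application sees one database, while any subset of active nodes can serve the work.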

“Our focus has been [on making] many databases accessible and manageable as if they were a single database,” said Noel Yuhanna, principal analyst at Forrester Research. “One of the biggest problems we’ve seen over the last five or 10 years is the data explosion. Some organisations are running 15,000 databases. Managing them and provisioning them has become a big problem.”

Yuhanna said database virtualisation can help here, as well. “Virtualisation provides a common framework for better availability, scalability, manageability and security,” he said. “The biggest challenge is around managing these environments. It’s becoming difficult to maintain the SLAs [service-level agreements] for these applications.”

Author: Wayne Rash