Cyber-Attacks Show Need For Strong Data Security


Protecting the data where it lives is better than relying on perimeter defences alone

When cyber-attackers breach an organisation’s network, the database is usually their target. However, many organisations are so focused on protecting the perimeter that they do not think about protecting the database itself, according to several security experts.

Many organisations still think that protecting the perimeter is enough to keep their data safe, but as recent data breaches at Epsilon and Sony have shown, traditional perimeter security cannot be relied on to protect the data, Josh Shaul, CTO of Application Security, told eWEEK. It is a “losing battle” to try to protect every single endpoint within the organisation, Shaul said.

Multiple Gatekeepers Shielding The Data

That’s not to suggest organisations should not be investing in firewalls and other security products. Shaul recommended a layered model, in which attackers have to get past multiple gatekeepers before they even reach the database. Organisations should be asking, “When the perimeter fails, what’s next?” and combining all the layers to pinpoint when something is wrong, according to Shaul.

It is ironic that “the closer we get to the data, we see fewer preventive controls and more detection measures,” Shaul said. IT departments are more likely to have deployed products that send out alerts that a breach has occurred than ones that actively block the threat from getting into the database. Most blocking technologies are still deployed on the perimeter, according to Shaul.

Organisations still assume that all activity hitting the database is trusted, Shaul said. Instead, they should treat every request as untrusted and monitor it to determine whether the activity is normal or malicious.

Continuous, real-time monitoring is crucial to detect suspicious or unauthorised activity within the database, Phil Neray, vice president of data security strategy and information management at IBM, told eWEEK. Database activity monitoring allows security managers to catch anyone who is trying to get access to information they should not be able to obtain.

“Outsiders typically look like insiders once they can log in to the network,” Neray said.

Suspicious activity could take the form of a single user account, such as that of a customer service representative, downloading hundreds of sensitive data records in a single day. Organisations should also be monitoring “privileged users”, or users with special authority or permissions over multiple applications or systems, to ensure those accounts have not been hijacked.
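
As a rough illustration of the kind of check Neray describes, the sketch below flags any account that reads an unusually large number of records in a single day. The audit table name (db_audit_log), its columns and the threshold are made-up assumptions for the example, not features of any particular monitoring product.

```python
# Minimal sketch of a database-activity-monitoring check: flag any account
# that reads an unusually large number of sensitive records in one day.
# Table name, columns and threshold are illustrative assumptions.
import sqlite3

THRESHOLD = 100  # assumed daily limit for sensitive-record reads per account

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE db_audit_log (
        user_account  TEXT,
        table_name    TEXT,
        rows_returned INTEGER,
        event_date    TEXT
    )
""")

# Sample audit entries: a normal lookup and a suspicious bulk download.
conn.executemany(
    "INSERT INTO db_audit_log VALUES (?, ?, ?, ?)",
    [
        ("cs_rep_01", "customers", 3,   "2011-06-01"),
        ("cs_rep_02", "customers", 450, "2011-06-01"),
    ],
)

# Aggregate reads per account per day and flag anything above the threshold.
suspicious = conn.execute("""
    SELECT user_account, event_date, SUM(rows_returned) AS total_rows
    FROM db_audit_log
    WHERE table_name = 'customers'
    GROUP BY user_account, event_date
    HAVING SUM(rows_returned) > ?
""", (THRESHOLD,)).fetchall()

for account, day, total in suspicious:
    print(f"ALERT: {account} read {total} customer records on {day}")
```

Run as-is, the script reports only cs_rep_02, the account whose daily volume exceeds the assumed threshold.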

Once In, Increment Permission Levels

Attackers often go after “softer, easier targets” such as support systems and use them to gain a foothold in the network, Shaul said. Once in, they can expand to more critical and valuable systems by looking for other user accounts that have access. The idea that requests from some user accounts are safe should be “thrown out the window”, Shaul said.

Attackers often gain control of privileged accounts via SQL injection, according to Neray. Database activity monitoring can detect third-party intrusions as well as “behavioural” issues, such as user accounts being shared, he said.
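
The shared-account problem Neray mentions can be sketched with a simple heuristic: a single database login appearing from many distinct client hosts in a short period is worth a closer look. The event format, account names and threshold below are assumptions chosen for illustration.

```python
# Hedged sketch of a "shared account" check: one login seen from many client
# hosts may indicate a shared or hijacked credential. Event data is made up.
from collections import defaultdict

MAX_HOSTS_PER_ACCOUNT = 2  # assumed policy: an account should map to few hosts

# (account, client_host) pairs as they might arrive from a connection audit feed
events = [
    ("app_service", "10.0.1.15"),
    ("app_service", "10.0.1.15"),
    ("dba_admin",   "10.0.2.7"),
    ("dba_admin",   "192.168.5.20"),
    ("dba_admin",   "172.16.0.9"),
]

hosts_by_account = defaultdict(set)
for account, host in events:
    hosts_by_account[account].add(host)

for account, hosts in hosts_by_account.items():
    if len(hosts) > MAX_HOSTS_PER_ACCOUNT:
        print(f"ALERT: '{account}' used from {len(hosts)} hosts: {sorted(hosts)}")
```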

SQL injection attacks, in which attackers embed database queries in a Website form and submit them to trick the database into returning results, remain a popular attack vector because they lead an attacker directly to the database, Shaul said.
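
The pattern Shaul describes can be shown with a small, self-contained example. The table, column names and form input below are invented for the illustration; the point is the difference between concatenating user input into a query and passing it as a parameter.

```python
# Illustrative sketch of SQL injection against a throwaway SQLite database.
# Table, columns and form input are made up for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

# Input as it might arrive from a web form field.
form_input = "nobody' OR '1'='1"

# Vulnerable: the form value is concatenated straight into the query, so the
# injected OR clause makes the database return every row.
leaked = conn.execute(
    "SELECT username, ssn FROM users WHERE username = '" + form_input + "'"
).fetchall()
print("vulnerable query returned:", leaked)

# Safer: a parameterised query treats the whole input as a literal value,
# so the injected SQL never reaches the parser as code.
safe = conn.execute(
    "SELECT username, ssn FROM users WHERE username = ?", (form_input,)
).fetchall()
print("parameterised query returned:", safe)
```

The concatenated query leaks every record, while the parameterised version returns nothing, since no user is literally named "nobody' OR '1'='1".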

While database activity monitoring is not new, it has only been “within the last couple years” that the technology has really taken off, according to Adrian Lane, CTO and analyst at Securosis.

Hardware, Software Or Virtual?

Many customers are not sure whether they should be investing in hardware, virtual systems or software products to protect the database, Lane said. Even after they figure out what product to use, they are still unsure about basic setup and administration.

In a white paper, Software vs Appliance: Database Activity Monitoring Deployment Tradeoffs, Lane noted that while there was no “single ‘best’ deployment model”, the functional differences between software-based, hardware-based and virtualised database activity monitoring products were “shrinking”. Organisations should weigh the products in the context of their own environment: for example, not buying an appliance if the corporate goal is to virtualise all servers.