IBM DeepLocker Turns AI Into Hacking Weapon


Artificial intelligence can power a new generation of malware capable of bypassing top-tier cyber defences

IBM researchers are to demonstrate a worrying development in cyber-attacks, in which artificial intelligence (AI) could be used to weaponise hacking tools.

The IBM researchers created what they are calling DeepLocker, a novel class of highly targeted and evasive attacks powered by AI.

The IBM presentation of DeepLocker at the Black Hat USA 2018 conference on Wednesday comes amid concern that cybercriminals will turn to AI to help them bypass the very best cyber defences.

AI hacking

The IBM researchers developed DeepLocker as a proof of concept to understand how “several AI and malware techniques already being seen in the wild could be combined to create a highly evasive new breed of malware, which conceals its malicious intent until it reaches a specific victim.”

IBM provided an in-depth explanation of DeepLocker in a blog post.

“IBM Research developed DeepLocker to better understand how several existing AI models can be combined with current malware techniques to create a particularly challenging new breed of malware,” it said. “This class of AI-powered evasive malware conceals its intent until it reaches a specific victim. It unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition.”
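IBM's description is that the payload ships encrypted and the decryption key is never stored in the malware itself; instead, it is derived from the attributes the AI model observes, so the payload can only be unlocked in the presence of the intended target. The sketch below is a minimal conceptual illustration of that idea in Python, not IBM's actual design: the function names, the stand-in "embedding" bytes and the toy XOR cipher are all hypothetical, chosen only to keep the example short and runnable.

```python
import hashlib

def derive_key(model_output: bytes) -> bytes:
    # The concealed trigger: the decryption key is derived from what the
    # AI model observes (e.g. a face-recognition embedding), so it never
    # appears anywhere in the shipped code.
    return hashlib.sha256(model_output).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher for illustration only; a real design would use
    # a proper authenticated cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# -- Demo with made-up stand-in values --
target_attributes = b"embedding-of-intended-target"   # stand-in for a DNN output
ciphertext = xor_cipher(b"payload", derive_key(target_attributes))

# On a non-target, the model's output derives the wrong key, so the
# payload never decrypts and the malware stays dormant.
observed = b"embedding-of-someone-else"
recovered = xor_cipher(ciphertext, derive_key(observed))
print("unlocked" if recovered == b"payload" else "stays dormant")
```

The design choice this illustrates is that a scanner inspecting the carrier application sees only an opaque encrypted blob: without the target's attributes, there is nothing to decrypt and nothing malicious to flag.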

At the moment, state-of-the-art cyber defences tend to rely on examining what the attack software is doing, rather than the more common technique of analysing software code for signs of malicious intent.

But the concern is that the new genre of AI-driven programs can be instructed to remain dormant until they reach a very specific target.

This would make these new types of attacks very hard to stop.
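To see why such dormancy defeats code analysis, it helps to compare a conventional trigger with a concealed one. In the short Python sketch below (all names and values are made up), the first check exposes the target in plain text to any reverse engineer, while the second ships only a one-way hash of the target, so the trigger condition cannot be read out of the code. IBM describes DeepLocker as going a step further and burying the condition inside a deep neural network, which is even harder to brute-force than a hash of a short string.

```python
import hashlib
import socket

# Conventional trigger: the condition is legible to any reverse engineer,
# so static analysis immediately reveals who the malware is after.
if socket.gethostname() == "ceo-laptop":          # hypothetical target name
    print("conventional trigger fires")

# Concealed trigger: only a one-way digest ships with the code. An analyst
# can see the digest but cannot recover the target from it; the condition
# resolves only when the real target's attributes are observed at runtime.
# (The digest would be precomputed offline; it is shown inline here only
# to keep the sketch self-contained and runnable.)
TARGET_DIGEST = hashlib.sha256(b"ceo-laptop").hexdigest()
if hashlib.sha256(socket.gethostname().encode()).hexdigest() == TARGET_DIGEST:
    print("concealed trigger fires")
```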

IBM Research says this new approach is more akin to a sniper attack, in contrast to the “spray and pray” approach of traditional malware. DeepLocker apparently hides its malicious payload in benign carrier applications, such as video-conferencing software, to avoid detection by most antivirus and malware scanners.

“I absolutely do believe we’re going there,” Jon DiMaggio, a senior threat analyst at cyber security firm Symantec was quoted by Reuters as saying. “It’s going to make it a lot harder to detect.”

Malicious use

In February this year, a report from the Future of Humanity Institute warned that while artificial intelligence promises many positive developments, the technology could be exploited for malicious purposes.

Among the risks the report highlighted was that AI could be misused by rogue states, criminals and lone-wolf attackers.

It also warned that the malicious use of AI posed imminent threats to digital, physical and political security by enabling larger-scale and far more efficient attacks within the next five years.
