DolphinAttack Exposes Speech Recognition Software Vulnerability

Siri, Alexa and Google Now can be hijacked to control smart devices, including connected cars.

Speech recognition software, including Siri, Alexa and Google Now, can be hacked with inaudible (ultrasonic) voice commands, researchers have found.

The attack, dubbed DolphinAttack by the researchers at Zhejiang University in China, hacks into smart devices by taking advantage of their microphones, which can pick up ultrasonic frequencies above 20,000Hz, beyond the upper limit of human hearing.

Taking control of devices

The researchers tested the hack on 16 voice controllable system (VCS) models, including the Apple iPhone, Amazon Echo, Google Nexus and connected cars. They managed to control the navigation system of an Audi, play music on an Echo and launch FaceTime on iPhones.

The attack involves shifting audible voice commands onto frequencies above 20,000Hz. The devices can still pick up the commands clearly, but humans cannot hear them. The ‘secret’ voice commands can also be used to steer a targeted device to malicious websites.

The researchers’ paper detailing the vulnerability read: “The fundamental idea of DolphinAttack is (a) to modulate the low-frequency voice signal (i.e., baseband) on an ultrasonic carrier before transmitting it over the air, and (b) to demodulate the modulated voice signals with the voice capture hardware at the receiver.”
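
In signal-processing terms, that first step is classic amplitude modulation. The Python sketch below illustrates the idea; the 25,000Hz carrier and 96,000Hz sample rate are illustrative assumptions rather than the paper’s exact parameters, and a real attack also depends on the transmitter and microphone hardware.

```python
import numpy as np

FS = 96_000          # sample rate high enough to represent an ultrasonic carrier
CARRIER_HZ = 25_000  # illustrative carrier above the ~20,000Hz limit of human hearing

def modulate(baseband: np.ndarray) -> np.ndarray:
    """Amplitude-modulate a low-frequency voice signal onto an ultrasonic carrier."""
    t = np.arange(len(baseband)) / FS
    carrier = np.cos(2 * np.pi * CARRIER_HZ * t)
    # Classic AM: shift the voice spectrum up around the carrier frequency.
    # The DC offset keeps the envelope non-negative, so the voice can be
    # recovered by the nonlinear demodulation in the receiving microphone.
    return (1.0 + baseband) * carrier

# Example: a 1kHz tone standing in for a voice command.
t = np.arange(FS) / FS
voice = 0.5 * np.sin(2 * np.pi * 1_000 * t)
ultrasonic = modulate(voice)  # inaudible to humans, yet picked up by the microphone
```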

There are some restrictions on the attack, though: several unlikely conditions must all be met for a successful hack.

The DolphinAttack can only be triggered if the target device is within five to six feet of the attacker’s transmitter. The device must also be unlocked, with its voice assistant activated.

On top of this, alarm bells should ring for victims of the attack, as voice assistants audibly respond to the commands during the hack.

To protect devices from a DolphinAttack, the researchers have urged smart device manufacturers to prevent devices from reacting to commands in ultrasound.
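
In software, one way to act on that advice is to low-pass filter captured audio before it reaches the recogniser. The sketch below is a minimal illustration, assuming a hypothetical capture pipeline that samples fast enough (96,000Hz here) for ultrasonic energy to still be present at the point of filtering, i.e. before the microphone’s nonlinearity demodulates it; the cutoff and filter order are illustrative choices.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 96_000         # illustrative high-rate capture
CUTOFF_HZ = 20_000  # pass the audible band, reject ultrasound

def reject_ultrasound(audio: np.ndarray) -> np.ndarray:
    """Attenuate frequencies above the audible range before speech recognition."""
    sos = butter(10, CUTOFF_HZ, btype="low", fs=FS, output="sos")
    return sosfiltfilt(sos, audio)
```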

They explained: “We propose hardware and software defence solutions. We validate that it is feasible to detect DolphinAttack by classifying the audios using support vector machine (SVM), and suggest to re-design voice controllable systems to be resilient to inaudible voice command attacks.”
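
In practice, that detection step amounts to training a binary classifier on recordings of genuine and recovered (demodulated) commands. The sketch below uses scikit-learn’s SVC with toy spectral features and synthetic stand-in clips; the researchers’ actual feature set and training data are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

FS = 48_000  # illustrative sample rate

def spectral_features(audio: np.ndarray) -> np.ndarray:
    """Toy features: share of spectral energy below and above 5kHz."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1 / FS)
    low = spectrum[freqs < 5_000].sum()
    high = spectrum[freqs >= 5_000].sum()
    total = low + high + 1e-12
    return np.array([low / total, high / total])

rng = np.random.default_rng(0)
t = np.arange(FS // 10) / FS  # 100ms clips

def toy_clip(attack: bool) -> np.ndarray:
    """Synthetic stand-ins: genuine speech sits low, attack clips leak high-band energy."""
    clip = np.sin(2 * np.pi * 300 * t) + 0.05 * rng.normal(size=t.size)
    if attack:
        clip += 0.5 * np.sin(2 * np.pi * 12_000 * t)  # high-frequency artefact
    return clip

X = np.array([spectral_features(toy_clip(attack=i % 2 == 1)) for i in range(100)])
y = np.array([i % 2 for i in range(100)])  # 1 = attack, 0 = genuine

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([spectral_features(toy_clip(attack=True))]))  # expect the attack clip to be flagged
```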
