Smartphones At Risk From ‘Dalek’ Voice Attacks

Concealed voice commands can be used to take control of smartwatches and smartphones, according to researchers, who are keen to highlight the security risks inherent in increasingly popular voice-only devices such as Amazon Echo, the Apple Watch and Android Wear.

A group of researchers from the University of California, Berkeley and Georgetown University demonstrated that malicious commands can be disguised so as to go unnoticed by humans, but can still be recognised by voice-activated devices, which could carry out their instructions.

Voice attacks

The researchers demonstrated the use of two types of commands – one is distorted but still understandable by humans, sounding like the voice of a Dalek from Doctor Who, while the other is more heavily garbled and only works with devices whose voice systems are known in detail to the attacker.

The first type of command was verified to work with most currently available voice-activated devices, such as those from Apple or Google, the researchers said.

It could be emitted within earshot of the device in a public area or concealed in a popular online video in such a way as to go unnoticed by the user, they said.

“Depending upon the device, attacks could lead to information leakage (e.g., posting the user’s location on Twitter), cause denial of service (e.g., activating airplane mode), or serve as a stepping stone for further attacks (e.g., opening a web page hosting drive-by malware),” they wrote in a paper.

Broad range of targets

Previous research has shown that hidden voice commands can be made in such a way as to be recognised by devices while remaining unnoticed by humans, but the new paper is the first to demonstrate that such commands can be constructed even with very little knowledge about the target speech recognition system, the researchers said.

“Our attacks demonstrate that these attacks are possible against currently-deployed systems, and that when knowledge of the speech recognition model is assumed more sophisticated attacks are possible which become much more difficult for humans to understand,” they wrote.

They said the attacks work well against Google Now’s speech recognition system, while Apple’s Siri seemed to be more selective about recognising distorted speech.

The researchers demonstrated a screening system that they said recognised nearly 70 percent of the hidden voice commands as being malicious.

Users can protect themselves by setting up devices to require a fingerprint or password before accepting voice commands, but such protections make voice activation significantly more difficult to use, they said.

“Active defenses, such as audio CAPTCHAs, have the advantage that they require users to affirm voice commands before they become effected,” they wrote. “Unfortunately, active defenses also incur large usability costs, and the current generation of audio-based reverse Turing tests seem easily defeatable.”

They said filters that slightly degrade audio recognition quality were more promising, and “can be tuned to permit normal audio while effectively eliminating hidden voice commands”.

The research is to be presented at the Usenix Security Symposium in August.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications
