Spies Urged To Adopt AI To Counter Augmented Threats

The UK’s foes are likely to use artificial intelligence to augment future threats, a study has warned, arguing that Britain’s intelligence services must adopt the technology to keep pace.

The study, commissioned by GCHQ and conducted by the Royal United Services Institute, found that AI is likely to be used to bolster threats including cyber-attacks on national infrastructure and convincing “deepfakes” used to spread disinformation.

For their part, the UK’s spies can use the technology to improve cyber defence and to analyse data that can help detect militant activity, the study argues.

But it is more circumspect about AI’s ability to predict militant attacks before they happen.

Cyber-attacks

The independent study is based on broad access to the UK’s intelligence community.

RUSI argues in the report that both nation states and criminals “will undoubtedly seek to use AI to attack the UK”.

Alexander Babuta, a RUSI fellow and an author of the report, argued that infrastructure must be put in place for national security agencies to innovate and adapt if they are to keep up with changing technology.

AI could be used to create convincing faked media to manipulate public opinion and elections, and to alter malware to make it more difficult to detect.

In both cases, AI-based defence measures will be necessary to counter AI-driven attacks, Babuta argued.

But AI is of only “limited value” in making predictions in fields such as counter-terrorism, the report says.

Human element

Militant attacks occur too rarely and are too different from one another for a machine learning system to be able to detect a pattern.

In such cases, however, the technology could augment humans’ ability to sift through data.

This secondary role means humans would remain accountable for critical decision-making, RUSI said.

The think tank said increased use of AI could raise human rights concerns, with profiling techniques creating the potential for discrimination.

New guidance may need to be put into place to govern the way such technologies are used in the future, RUSI said.

Such issues have become more visible since 2013, when Edward Snowden revealed the extent of data collection on US citizens via a series of leaks of classified information.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
