Spies Urged To Adopt AI To Counter Augmented Threats

UK’s intelligence agencies must use artificial intelligence to repel increasingly sophisticated cyber-attacks and disinformation campaigns, finds study

The UK’s foes are likely to use artificial intelligence to augment future threats, a study has warned, arguing that Britain’s intelligence forces must adopt the technology to keep pace.

The study, commissioned by GCHQ and conducted by the Royal United Services Institute, found that AI is likely to be used to bolster threats including cyber-attacks on national infrastructure and convincing “deepfakes” used to spread disinformation.

For their part, the UK’s spies can use the technology to improve cyber defence and to analyse data that can help detect militant activity, the study argues.

But it is more circumspect about AI’s ability to predict militant attacks before they happen.

The independent study is based on broad access to the UK’s intelligence community.

RUSI argues in the report that both nation states and criminals “will undoubtedly seek to use AI to attack the UK”.

Alexander Babuta, a RUSI fellow and an author of the report, argued that the necessary infrastructure must be put in place for national security agencies to innovate and adapt if they are to keep pace with changing technology.

AI could be used to create convincing faked media to manipulate public opinion and elections, and to alter malware to make it more difficult to detect.

In both cases, it is necessary to use AI-based defensive measures to counter AI, Babuta argued.

But AI is of only “limited value” in making predictions in fields such as counter-terrorism, the report says.

Human element

Militant attacks occur too rarely and are too different from one another for a machine learning system to be able to detect a pattern.

In such cases, however, the technology could augment the ability of human analysts to sift through data.

This secondary role means humans would remain accountable for critical decision-making, RUSI said.

The think tank said increased use of AI could raise human rights concerns, with profiling techniques creating the potential for discrimination.

New guidance may need to be put in place to govern the way such technologies are used in the future, RUSI said.

Such issues have become more visible since 2013, when Edward Snowden revealed the extent of data collection on US citizens via a series of leaks of classified information.