Scientists believe they can use artificial intelligence to learn more about the military strategy of Islamic State.
A study by researchers at Arizona State University concluded that using AI algorithms to study patterns and behaviour of IS extremists could be of “significant” help for both the US military and policymakers in the future.
The US and its allies already employ forms of pattern-of-life analysis to determine the threat levels of potential targets for their hunter-killer drones.
One threat analysed in the study, titled ‘Mining for Causal Relationships: A Data-Driven Study of the Islamic State’, was the improvised explosive device (IED). The algorithms would look for spikes in IED incidents and then predict the location and frequency of future incidents.
“IED incidents have been a common tactic used by Iraqi insurgents throughout the U.S.-led Operation Iraqi Freedom. Though the IED comprises a smaller weapons system normally employed by local insurgent cells, spikes in such activity could be meaningful,” said the research paper.
“For instance, it may indicate action ordered by a strategic-level command that is being carried out on the city level, or it may indicate improved logistic support to provide local cells the necessary munitions to carry out such operations in larger numbers.”
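The kind of spike detection described above can be loosely sketched in a few lines: flag any day whose incident count far exceeds the trailing average. The daily counts, window size, and threshold below are invented for illustration; the paper's actual method (APT-logic rule learning over 2,200 recorded incidents) is considerably more sophisticated.

```python
# Hedged sketch: flag days whose incident count exceeds `factor`
# times the mean of the preceding `window` days. All numbers here
# are hypothetical, not data from the study.

def detect_spikes(counts, window=7, factor=2.0):
    """Return indices of days whose count spikes above the trailing mean."""
    spikes = []
    for i in range(window, len(counts)):
        baseline = sum(counts[i - window:i]) / window
        if baseline > 0 and counts[i] > factor * baseline:
            spikes.append(i)
    return spikes

# A quiet baseline of 1-3 incidents per day, then a burst on day 7.
daily_counts = [2, 3, 2, 1, 2, 3, 2, 9, 2, 1]
print(detect_spikes(daily_counts))  # → [7]
```

A flagged spike would then be a candidate for the kind of interpretation the researchers describe: a strategic-level order being executed locally, or improved logistics enabling more attacks.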
In the analysis, researchers gave the artificial intelligence algorithm 2,200 incidents of Islamic State activity from 2014.
“We conducted a data-driven study on the insurgent group ISIS using a combination of APT-logic, rule learning, and causal reasoning to identify cause-and-effect behavior rules concerning the group’s actions,” said the research paper.
“In the future, we look to extend this work in several ways. We also look to leverage additional variables about the environment, including data about weather, information (including social media operations), and the political situation to find more interesting relationships.”
Whilst we are not quite there yet, last month saw a plethora of leading scientists and thinkers condemn governments and militaries that plan to hand control of weapons systems over to artificial intelligence.
A ban on offensive autonomous weapons beyond meaningful human control, such as AI drones, should be implemented to save humanity from a military AI arms race, suggested an open letter written by the Future of Life Institute.
The letter was signed by Stephen Hawking, PayPal co-founder and Tesla chief executive Elon Musk and Apple co-founder Steve Wozniak, as well as hundreds of other top AI and robotics scientists.
The letter calls for governments to consider the detrimental effects of artificial intelligence-controlled drones and robots.