The move follows Google’s decision to step back from controversial drone AI research scheme Project Maven
Darpa, the US military’s research agency, has said it plans to spend $2 billion (£1.54bn) over the next five years on artificial intelligence.
While the sum is small by Pentagon standards, it is substantial in the world of AI, and indicates the strategic importance attached to the technology by the US military.
The investment is similar in size to the $2.1bn China said in January it would sink into a major AI research park in western Beijing as part of that country’s plans to be the world leader in AI by 2030.
Darpa said it plans to focus on a number of areas, including AI-assisted security clearance vetting, AI reliability and what it calls “explainable AI”, considered critical for gaining acceptance for artificial intelligence by commanders.
Darpa made the announcement at the end of a Washington, D.C. conference marking the sixtieth anniversary of its founding.
The initiative follows Google’s highly publicised decision to stop working with the US military on “Project Maven”, the largest single military AI project, which aims to improve machines’ ability to distinguish objects in images for military use, for instance in the analysis of drone imagery.
One of Darpa’s key aims is to make AI decisions more “explainable”, enabling such systems to explain in real time, under combat conditions, why they made the decisions they did.
According to the agency, when asked to justify a particular selection, current AI systems can typically offer only a percentage confidence rating that, for instance, a singled-out target is the one the operator was looking for.
Darpa director Steven Walker said the agency’s aim is for such systems to be able to explain to humans how they arrived at a particular answer, the Centre for Public Integrity reported.
Officials said being able to do this is “critically important” for giving commanders confidence that they can rely on AI.
Darpa didn’t mention the controversial idea of autonomous weapons — systems that would be able to select targets and take lethal action without human intervention.
But a Pentagon strategy document released in August mentioned the idea, according to the Centre for Public Integrity.
The report, signed by Pentagon acquisition and research officials Kevin Fahey and Mary Miller, said that new technologies could “make it possible to develop and deploy autonomous systems that could independently select and attack targets with lethal force”.
The US government has backed the integration of more AI into the country’s weapons systems as a way of better competing with the militaries of Russia and China.