US Military Backs Research Into AI ‘Common Sense’

The US military is backing artificial intelligence research into areas including systems with “common sense” and networks that can spot images manufactured by other AIs.

Darpa, a US Defence Department research agency best known for backing early work on the internet and self-driving vehicles, is working with outside researchers on AI systems that could adapt to unexpected circumstances, programme manager David Gunning said.

The research could result in machines with “common sense”, he said, leading to greater flexibility and adaptability, and the ability to communicate with people more naturally.

He said current AI systems are “brittle” and can’t take on problems outside the narrow range they were built for, according to the Financial Times.

AI ‘common sense’

AI has become a popular way to automate complex or repetitive tasks, such as spotting patterns in large amounts of data, but there has been a recent revival in research focused on giving machines an intuitive awareness of the world.

Microsoft co-founder Paul Allen doubled investment into his own AI research institute earlier this year in order to focus on this idea.

Darpa typically backs a range of commercial and academic groups to carry out research on its behalf, an arrangement that recently caused an internal backlash at Google over the company’s military work on image recognition.

Gunning said the agency called together third-party researchers earlier this year for a “brainstorming” session on AI “common sense” and is now putting together a formal proposal for the project.

Spotting ‘deepfakes’

In a separate project, Darpa is bringing together forensic experts this summer to test technologies aimed at spotting artificially generated images.

Such images, called “deepfakes” because they rely on “deep learning” techniques, typically involve projecting one individual’s face onto the body of another, and can be surprisingly realistic.

Of particular concern is a newer technique that makes use of generative adversarial networks, or GANs, in which a generator network is trained against a detector network, making the resulting forgeries especially difficult to spot automatically.
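The adversarial idea behind GANs can be illustrated with a deliberately tiny sketch: a one-parameter generator tries to mimic samples from a target distribution while a logistic discriminator tries to tell real samples from generated ones, each updated against the other. This is a minimal numpy toy for intuition only, with made-up hyperparameters; real deepfake GANs use deep convolutional networks, not one-weight models, and this does not describe Darpa's or any production system.

```python
# Toy GAN: generator x = w*z + b tries to mimic N(3, 1);
# discriminator D(x) = sigmoid(a*x + c) estimates P(x is real).
# Both are trained adversarially with hand-derived gradients.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, b = 0.1, 0.0   # generator parameters (illustrative starting values)
a, c = 0.1, 0.0   # discriminator parameters

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.standard_normal(batch)
    fake = w * z + b                      # generated samples
    real = 3.0 + rng.standard_normal(batch)  # "genuine" data

    # Discriminator ascent on  mean log D(real) + mean log(1 - D(fake))
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on  mean log D(fake): try to fool the discriminator
    d_fake = sigmoid(a * fake + c)
    w += lr * np.mean((1 - d_fake) * a * z)
    b += lr * np.mean((1 - d_fake) * a)

samples = w * rng.standard_normal(1000) + b
print(f"generated sample mean: {samples.mean():.2f}")
```

Because the generator is optimised specifically to defeat a detector, its outputs tend to evade that detector, which is why detection methods must keep improving in step.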

Concerned that GANs could be used for misinformation, Darpa wants to develop more advanced detection methods, Gunning told the MIT Technology Review.

At the event this summer, the agency wants experts to compete at producing the most convincing AI-generated fake audio, imagery and video, as well as automatically spotting the counterfeits.

Google is one of the companies at the forefront of developing natural-seeming AIs, having produced a system earlier this year that carries out automated telephone conversations.


Matthew Broersma

Matt Broersma is a long-standing technology freelance journalist who has worked for Ziff-Davis, ZDNet and other leading publications.
