Austria Conference Calls For Controls On ‘Killer Robots’

Image credit: Pexels

International conference in Vienna calls for controls on AI-powered autonomous weapons to ensure humans remain in control

Attendees of an international conference in Vienna this week have called for renewed efforts to regulate the development of artificial intelligence-powered autonomous weapons systems, or “killer robots”, amid surging development in the AI field and largely stagnant efforts to regulate its military applications.

“Humanity is at a crossroads and must come together to address the fundamental challenge of regulating these weapons,” said organisers of the conference.

The conference, hosted by the Austrian Federal Ministry for European and International Affairs, is called “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation” and runs from Monday to Tuesday of this week at the Hofburg Palace in Vienna.

It follows a separate civil society forum on Sunday at Palais Wertheim organised by the International Campaign to Stop Killer Robots.

Image credit: Tara Winstead/Pexels

Human control

“We cannot let this moment pass without taking action. Now is the time to agree on international rules and norms to ensure human control,” Austrian Foreign Minister Alexander Schallenberg told the main conference, attended by non-governmental and international organisations as well as envoys from 143 countries, according to a Reuters report.

“At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines,” he said in the conference’s opening speech.

Efforts at the United Nations to regulate autonomous weapons have stalled in recent years, even as public interest in AI has reached new highs due to the popularity of services such as OpenAI’s ChatGPT or AI-powered image-generation tools.

Panels at the conference are due to cover areas including the current direction of technological development, the implications of “autonomy” for international security and society in general, and positive obligations that could ensure adequate human judgement and control of weapons systems.

‘Moral failures’

Panellists are also to examine areas including the ethical and human rights implications of processing people as data through sensors and algorithms to make decisions about subjecting them to physical force, and the risk of an “autonomy” arms race, which it is feared could lower the threshold for military confrontation and encourage proliferation to non-state armed groups.

The president of the International Committee of the Red Cross, Mirjana Spoljaric, told a panel discussion that the current context shows evidence of “moral failures in the face of the international community” and that such failures risk acceleration if the responsibility for violence is given “over to machines and algorithms”.

Programmer and tech investor Jaan Tallinn said in a keynote speech that AI is already making errors in areas as varied as football and self-driving cars.

“We must be extremely cautious about relying on the accuracy of these systems, whether in the military or civilian sectors,” he said.

The US highway safety regulator has in recent days opened investigations into the AI-powered driver-assistance features offered by Ford and Tesla over potential safety issues that may have contributed to multiple fatal accidents.