AI ‘Judge’ Can Predict Outcome Of Human Rights Court Cases


A British study suggests artificial intelligence could quickly spot fact patterns likely to lead to particular decisions

British computer scientists have devised an artificial intelligence capable of reaching the same decisions as human judges in nearly 80 percent of the cases studied.

Researchers at University College London (UCL) said their study is the first of its kind and that the technology could be used to improve efficiency at top courts.

Texts analysed

Previous studies have predicted outcomes based on the nature of a crime or on the policy positions of individual judges, but the UCL study is the first to be based on analysis of the case texts themselves.

“We don’t see AI replacing judges or lawyers, but we think they’d find it useful for rapidly identifying patterns in cases that lead to certain outcomes,” stated Dr Nikolaos Aletras of UCL’s department of computer science, the paper’s lead researcher.

“It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights.”

The study used an algorithm to analyse English-language data for 584 cases presented to the European Court of Human Rights involving torture, degrading treatment and privacy, including equal numbers of cases found to be violations and non-violations.

In 79 percent of the cases assessed, the software reached the same verdict as that delivered by the court.
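In outline, this is a balanced binary text-classification experiment. The sketch below is illustrative rather than the authors' actual code: it assumes scikit-learn, uses generic n-gram features with a linear support-vector classifier, and relies on a hypothetical load_echr_cases helper standing in for the real dataset of 584 judgments.

```python
# A minimal sketch (not the study's code) of classifying case texts as
# violation / non-violation and measuring cross-validated accuracy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical loader: returns English-language judgment texts and
# labels (1 = violation found, 0 = no violation), in equal numbers.
case_texts, labels = load_echr_cases()  # assumed helper, not a real API

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 4), sublinear_tf=True),  # word n-grams
    LinearSVC(),  # linear support-vector classifier
)

# Ten-fold cross-validated accuracy; the study reports about 79 percent.
scores = cross_val_score(model, case_texts, labels, cv=10, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.2f}")
```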

The study found that the facts of each case, rather than purely legal arguments, were the most important factor in determining the final decision. The topics involved also had a strong correlation to the judgement delivered, according to the study.
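One way to arrive at a finding like this is to train the same classifier on different sections of each judgment and compare accuracy. The sketch below is a hedged illustration of that comparison, with a hypothetical extract_sections helper in place of real section parsing:

```python
# Hedged sketch: compare how predictive the "facts" and "law" sections
# of each judgment are, by training the same pipeline on each in turn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical helper: returns a dict of section name -> list of texts,
# plus the violation / no-violation labels for each case.
sections, labels = extract_sections()  # assumed helper, not a real API

for name in ("facts", "law"):
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 4)), LinearSVC())
    acc = cross_val_score(model, sections[name], labels, cv=10).mean()
    print(f"trained on {name} sections: accuracy {acc:.2f}")
```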

Privacy issues

The findings suggest artificial intelligence could be used in the legal profession to quickly find fact patterns that would be likely to lead to a particular outcome, the researchers argued.

But they said training the AI on richer data, such as the texts of individual applications and briefs held by other sources such as national authorities and law firms, could be difficult because of the privacy issues involved.

“Data access issues pose a significant barrier for scientists to work on such kinds of legal data,” they wrote.

Similar concerns have beset programmes designed to assist doctors, which require giving private companies such as Google or IBM access to sensitive medical data.

The research was published in the journal PeerJ Computer Science.
