DeepMind Diagnostic AI ‘More Accurate’ Than Human Eye Specialists

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.


DeepMind’s algorithm outperforms human doctors in initial tests, but questions hang over the use of AI in clinical practice

Google’s artificial intelligence company, DeepMind, is planning clinical trials of a “ground-breaking” diagnostic tool for eye diseases after tests found it was more accurate than human specialists.

The plans come amid growing concern over the role of AI in healthcare, with clinical data from the NHS being used to train systems that the company will solely own.

A review panel set up to scrutinise London-based DeepMind’s partnerships with the NHS found in June that its present business model could allow the company to use its access to data to make the NHS dependent upon its technology in order to “drive monopolistic profits”.

DeepMind was acquired by Google in 2014 and is now owned by Google parent Alphabet.

Accuracy

Initial findings published in the journal Nature Medicine on Monday found that DeepMind’s AI algorithm had a lower error rate than eight retinal specialists at Moorfields Eye Hospital, based on their analyses of 997 3D scans.

DeepMind developed the algorithm after striking a deal with Moorfields to gain access to patient data, using 14,884 anonymised scans from 7,500 patients to train the system. Moorfields is the largest eye hospital in Europe and North America.

DeepMind showed an error rate of 5.5 percent compared with 6.7 to 24.1 percent for the eight human specialists, said Pearse Keane, a consultant ophthalmologist at Moorfields, who co-wrote the paper.

When the specialists were provided with the contextual information they would usually have access to, for a more realistic comparison, their error rates dropped to between 5.5 percent and 13.1 percent, meaning the AI still performed at least as well as the best of them.

DeepMind’s system would also be able to deliver an immediate analysis, rather than the several-day wait ordinarily required for a specialist to carry out a review.

Dr. Keane called the findings “jaw-dropping” and said the system could be in use by the NHS within three years for “direct patient benefit”.

“If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight,” Dr. Keane said in a statement. “With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”

Headed for clinical practice?

DeepMind’s system is based on two neural networks, one that identifies diseased areas in scans and another that makes referrals to clinicians with suggested diagnoses.

The system is envisaged as a way of helping doctors prioritise scans, rather than supplanting specialists.

Dominic King, clinical lead for DeepMind Health, said the company was now considering the “thoughtful and safe way” to acquire the further patient data that would allow the technology to enter clinical practice.

He said similar partnerships with University College London Hospitals and Imperial College London, which focus on radiotherapy scans and mammograms, were also showing “promising signs”.

DeepMind Health head Mustafa Suleyman said the project had the potential to “transform the diagnosis, treatment, and management of patients with sight-threatening eye conditions… around the world.”

In June a review panel said DeepMind needed to clarify its business model and its relationship to Alphabet as it develops technology for use by the NHS.

Under its deal with DeepMind, Moorfields retains control over the database of retinal scans and receives free access to the AI algorithm for at least five years, but the technology itself is solely owned by DeepMind, which retains the right to sell it to other hospitals around the world.

A previous deal between DeepMind and the NHS was struck down as illegal by the Information Commissioner’s Office (ICO) last year because the regulator found patients had not been properly notified that their data was being used.
