Researchers from Google publish a paper describing a new deep learning algorithm for detecting signs of diabetic retinopathy.
Google is hoping to apply its machine learning expertise to help doctors identify patients at risk of diabetic retinopathy (DR) early enough in the disease cycle to be able to treat them effectively.
Researchers from the company this week published a paper in the Journal of the American Medical Association (JAMA) describing a deep learning algorithm for interpreting early signs of DR from retinal photographs.
The paper, titled “Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs,” is based on data that Google researchers developed with help from doctors and researchers at various hospitals and universities in the U.S. and India.
The goal is to help doctors screen and identify patients in need of DR treatment, especially in areas where the specialized ophthalmological skills needed for such diagnosis are in short supply.
“Diabetic retinopathy (DR) is the fastest growing cause of blindness, with nearly 415 million diabetic patients at risk worldwide,” Google researchers Lily Peng and Varun Gulshan said in a blog post this week. It is only by catching the disease in its early stage that doctors have a chance of staving off the irreversible blindness that it causes, they said.
The National Eye Institute describes diabetic retinopathy as a disease affecting the blood vessels in the retina. It is the most common cause of blindness among patients with diabetes, according to the NEI.
Typically, ophthalmologists examine photographs of the back of the eye to determine whether a patient has DR. Lesions caused by microaneurysms, cholesterol deposits and hemorrhages serve as indicators of the severity of the disease in an individual.
Interpreting such images often requires highly specialized skills that are simply not available in sufficient numbers to screen all those at risk, Peng and Gulshan wrote.
Creating an algorithm
To develop the algorithm, Google researchers, working with counterparts in India and the U.S., created a dataset of some 128,000 images of the back of the eye. A team of between three and seven ophthalmologists evaluated each image to ensure it was relevant.
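With several ophthalmologists grading each image, the individual grades have to be combined into a single training label. The paper does not spell out its aggregation rule here, but a minimal majority-vote sketch (with a hypothetical severity-biased tie-break, an assumption rather than the study's exact method) might look like this:

```python
from collections import Counter

def consensus_grade(grades):
    """Return the most common severity grade among ophthalmologist labels.

    `grades` is a list of integer grades (e.g. 0-4 on a DR severity
    scale) assigned to one image by different graders. Ties are broken
    by taking the more severe grade -- a conservative choice for
    screening, and an assumption here, not the paper's documented rule.
    """
    counts = Counter(grades)
    best = max(counts.values())
    # Among grades tied for the highest count, prefer the most severe.
    return max(g for g, c in counts.items() if c == best)

# Example: five graders disagree on one fundus photograph.
print(consensus_grade([2, 2, 3, 2, 1]))  # → 2
```

The consensus label, rather than any single grader's opinion, then serves as the ground truth the network is trained against.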
The dataset was then used to train a Google deep neural network to reliably recognize early signs of diabetic retinopathy. Google uses the same neural network technology in many of its core applications and services, including image classification and speech recognition.
The researchers ran the algorithm against two separate datasets of around 12,000 images each to see how well it would detect signs of DR. The results were then compared with those of a panel of U.S. board-certified ophthalmologists who reviewed the same images.
The results showed that the algorithm performed nearly on par with the ophthalmologists, the two researchers said in the blog.
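Screening performance of this kind is conventionally reported as sensitivity (the fraction of diseased eyes correctly flagged) and specificity (the fraction of healthy eyes correctly cleared), which is how both the algorithm and the panel can be compared on the same footing. A minimal sketch of that computation, using made-up predictions rather than the study's data:

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) for binary labels, where 1 means referable
    diabetic retinopathy and 0 means non-referable."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical predictions on ten images (illustration only).
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(truth, preds)
print(round(sens, 2), round(spec, 2))  # → 0.75 0.83
```

In practice the algorithm outputs a probability, so its operating point can be tuned along this sensitivity/specificity trade-off depending on whether a screening program prioritizes catching every case or minimizing unnecessary referrals.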
While the initial results are encouraging, more work needs to be done to refine the algorithm, they added. To that end, the researchers are currently working with more retinal specialists to define reliable reference standards for the algorithm. They also noted that the dataset used to develop the algorithm involved only 2D photographs, whereas doctors also use 3D imaging technology, such as optical coherence tomography (OCT), to diagnose diabetic eye disease.
Originally published on eWeek