
Code-Free Models Effective at Classifying Retinal Pathologies


Code-free models classified retinal pathologies as accurately as expert-designed bespoke models.

A study published in the International Journal of Retina and Vitreous1 found that models created by clinicians without coding experience classified retinal pathologies at a rate similar to bespoke models created by experts. This could indicate potential for more widespread use of artificial intelligence (AI) in ophthalmology.

AI has previously been developed for ophthalmology using imaging data and is capable of diagnosing certain eye conditions, such as diabetic retinopathy.2 Deep learning (DL) can establish patterns in datasets and has been used to aid classification in ophthalmology. However, DL has been hard for clinicians to develop because of their lack of coding knowledge, which makes code-free deep learning (CFDL) an appealing tool, as it allows machine learning models to be built without extensive coding expertise.

This study aimed to compare the performance of multiclass CFDL models with that of bespoke models in diagnosing retinal pathologies from optical coherence tomography (OCT) images.

Eye scan | Image credit: oz - stock.adobe.com

The comparative analysis used an OCT video dataset to test the diagnostic capabilities of DL models created by 2 developers. The first model was created by an ophthalmology trainee without coding experience, and the second by an expert in AI. Based on an OCT image, both models were tasked with determining whether an eye had a normal/healthy retina or 1 of 4 pathologies: macular hole (MH), epiretinal membrane (ERM), wet age-related macular degeneration (AMD), or diabetic macular edema (DME). The models were trained on videos collected from the Maisonneuve-Rosemont Hospital in Montreal, all taken by an experienced OCT technician.

The CFDL models were developed in Google Cloud AutoML Video Intelligence and Google Cloud AutoML Vision, with all videos and images uploaded into a Google Cloud bucket for evaluation. A postdoctoral researcher with experience in DL developed the bespoke model.

The overall dataset used to train and validate the 2 models contained 1173 videos. Overall, the CFDL video model had an area under the precision-recall curve (AUPRC) of 0.984, with 94.1% precision, 94.1% recall, and 94.1% calculated accuracy at the 0.5 confidence threshold cutoff. The AUPRC was lowest for ERM at 0.954 and highest for MH and AMD at 1.0. Normal retina videos were classified correctly 96% of the time, with the lowest accuracy coming from classifying ERM at 90%.
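For readers less familiar with these metrics, the sketch below shows how precision, recall, and AUPRC (average precision) are commonly computed for one class at a fixed confidence cutoff such as the study's 0.5 threshold. The labels and scores are hypothetical, not from the study.

```python
# Illustrative sketch, not the study's code: precision and recall at a
# confidence cutoff, and AUPRC as step-wise average precision.

def precision_recall_at(y_true, scores, threshold=0.5):
    """Precision and recall after binarizing scores at a threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for t, p in zip(y_true, preds) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, preds) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, preds) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def average_precision(y_true, scores):
    """Area under the precision-recall curve (step-wise average precision)."""
    pairs = sorted(zip(scores, y_true), reverse=True)
    total_pos = sum(y_true)
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, y in pairs:  # sweep the threshold from highest score down
        if y == 1:
            tp += 1
        else:
            fp += 1
        recall = tp / total_pos
        ap += (recall - prev_recall) * (tp / (tp + fp))
        prev_recall = recall
    return ap

# Hypothetical labels (1 = pathology present) and model confidence scores
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.92, 0.40, 0.75, 0.61, 0.55, 0.10, 0.83, 0.30]

p, r = precision_recall_at(y_true, scores)
ap = average_precision(y_true, scores)
```

Because AUPRC sweeps every threshold rather than fixing one, a model can have a high AUPRC yet different precision and recall at any single cutoff, which is why the study reports both.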

When compared with the best model developed by the expert, the CFDL video model had a higher AUPRC (0.984 vs 0.9097). The bespoke model was better in precision and sensitivity, but only by 0.0009 in each category; its accuracy advantage was larger (0.9667 vs 0.941).

The CFDL image model performed better than the video model, with an AUPRC of 0.990, sensitivity and precision of 96.6%, and overall accuracy of 95.8%. Performance was the same across all metrics between the CFDL image model and the best bespoke algorithm.

This study had some limitations. The models were trained on a single dataset, which does not account for datasets of different sizes or complexities, and external validation was not included. In addition, the OCT images were centered on the fovea rather than on the main pathological findings, which could have affected model performance.

The researchers concluded that AI models developed by those who were not experts in AI tools were just as effective as those developed by experts. CFDL's effectiveness in diagnosis could make AI more accessible to ophthalmologists and to other fields of medicine.

References

  1. Touma S, Hammou BA, Antaki F, Boucher MC, Duval R. Comparing code-free deep learning models to expert-designed models for detecting retinal diseases from optical coherence tomography. Int J Retina Vitreous. 2024;10:37. doi:10.1186/s40942-024-00555-3
  2. Bonavitacola J. AI effective in screening for referable diabetic retinopathy. AJMC. September 27, 2023. Accessed April 29, 2024. https://www.ajmc.com/view/ai-effective-in-screening-for-referable-diabetic-retinopathy
© 2024 MJH Life Sciences
AJMC®
All rights reserved.