An active deep learning method for diabetic retinopathy detection in segmented fundus images using artificial bee colony algorithm

Bibliographic Details
Published in: The Artificial Intelligence Review, Vol. 56, no. 4, pp. 3291-3318
Main Author: Özbay, Erdal
Format: Journal Article
Language: English
Published: Dordrecht: Springer Netherlands (Springer; Springer Nature B.V.), 01.04.2023
ISSN: 0269-2821; 1573-7462
DOI: 10.1007/s10462-022-10231-3

Summary: Retinal fundus image analysis (RFIA) is frequently used in diabetic retinopathy (DR) screening to determine the risk of blindness in diabetic patients. Ophthalmologists rely on various RFIA programs to support the detection of visual impairments. This article presents active deep learning (ADL) with a new multi-layer architecture for automatic recognition of DR stages. To facilitate the detection of retinal lesions, the preprocessing step of the ADL system segments the image using the artificial bee colony (ABC) algorithm, with a threshold value determined from the image histogram. In addition, a label-efficient convolutional neural network (CNN) architecture, called ADL-CNN, has been developed to automatically extract features from the segmented retina. The model operates in two stages: first, images are selected to learn simple or complex retinal features using ground-truth labels in the training examples; second, useful masks are produced that capture key lesion features and segment regions of interest within the retinal image. The proposed ADL-CNN model is evaluated against state-of-the-art methods on the same dataset, using statistical metrics such as classification accuracy (ACC), sensitivity (SE), specificity (SP), and F-measure. Applied to the EyePacs dataset of 35,122 retinal images, the ADL-CNN model achieved 99.66% ACC, 93.76% SE, 96.71% SP, and 94.58% F-measure. In this respect, the proposed method shows high performance in detecting DR lesions from various fundus images and determining their severity level.
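The metrics reported in the summary (ACC, SE, SP, F-measure) are standard functions of a binary confusion matrix. The following is a minimal sketch of how such figures are computed; the function name `dr_metrics` and the example counts are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: compute the statistical metrics named in the summary
# from binary confusion-matrix counts. Counts below are made up for
# illustration and do not come from the ADL-CNN experiments.

def dr_metrics(tp, fp, tn, fn):
    """Return (accuracy, sensitivity, specificity, F-measure)."""
    acc = (tp + tn) / (tp + fp + tn + fn)   # classification accuracy (ACC)
    se = tp / (tp + fn)                     # sensitivity / recall (SE)
    sp = tn / (tn + fp)                     # specificity (SP)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * se / (precision + se)  # harmonic mean
    return acc, se, sp, f_measure

acc, se, sp, f1 = dr_metrics(tp=90, fp=5, tn=95, fn=10)
print(f"ACC={acc:.3f} SE={se:.3f} SP={sp:.3f} F={f1:.3f}")
```

Note that ACC can exceed SE and SP simultaneously only under particular class balances, which is why papers in this area typically report all four metrics together.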