Detection of anemic sheep using ocular conjunctiva images and deep learning algorithms

Bibliographic Details
Published in: Livestock Science, Vol. 294, p. 105669
Main Authors: Freitas, Luara A.; Ferreira, Rafael E.P.; Alves, Anderson A.C.; Dórea, João R.R.; Paz, Claudia C.P.; Rosa, Guilherme J.M.
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.04.2025
ISSN: 1871-1413
DOI: 10.1016/j.livsci.2025.105669

Summary:

Highlights:
• A model was developed to predict anemia in sheep from ocular conjunctiva images using deep learning.
• Three deep neural network (DNN) architectures were compared for regression and classification.
• For regression, the Xception model performed best, with an R² of 0.24.
• VGG19 outperformed the other models in classifying sheep as anemic or non-anemic.
• Integrating ocular conjunctiva images with DNNs can aid farm management decisions.

In sheep, severe anemia often results from gastrointestinal nematode infections, commonly caused by Haemonchus contortus, a blood-sucking nematode. The objective of this study was to develop a model to predict packed cell volume (PCV) in sheep from ocular conjunctiva images as a real-time anemia diagnosis approach. The dataset consisted of 3,441 ocular conjunctiva images collected with a smartphone camera from 392 sheep on three different farms. To identify the region of interest in the images (the ocular conjunctiva), we annotated 480 images using the Segment Anything Model (SAM). We then trained an image segmentation algorithm based on U-net on the original images and the annotations obtained from SAM, and cropped the segmented images to retain only the ocular conjunctiva region. These cropped, segmented images were used as input data, with PCV as the target variable, in both regression and classification models. We assessed the performance of three DNN architectures: VGG19, Inception v3, and Xception. For the classification tasks, a threshold of 27 % (anemic < 27 %, non-anemic ≥ 27 %) was used to convert PCV into a binary variable. The dataset was split into training, validation, and testing sets by random sampling at the sheep level. Segmentation was evaluated using intersection over union (IoU). To compare predictive quality in the testing set, we computed the R², concordance correlation coefficient (CCC), and root mean square error of prediction (RMSEP) for the regression models, and the accuracy, precision, recall, and F1 score for the classification tasks. The U-net segmentation model demonstrated reliable segmentation ability, with average IoUs of 0.93, 0.84, and 0.68 in the training, validation, and testing sets, respectively. For regression, the Xception architecture provided the best performance, with an R² of 0.24. For classification, VGG19 outperformed the other models in distinguishing anemic from non-anemic individuals, achieving an F1 score of 0.62, indicating moderate ability to separate the two classes. This approach not only expands the possibilities for integrated high-throughput phenotyping through computer vision but also aids in identifying anemic animals. The results suggest that integrating ocular conjunctiva images with DNN algorithms can support farm-level management decisions and potentially reduce economic losses from parasitic infections such as H. contortus. Furthermore, the approach facilitates real-time anemia diagnosis and optimizes the use of blood tests, potentially reducing associated costs.
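
This record does not include the authors' code; the sketches below are illustrative reconstructions of the pipeline steps described in the abstract, not the published implementation. The SAM-based annotation step might look like the following with Meta's segment-anything package, assuming a downloaded ViT-H checkpoint and a manually chosen point prompt inside the conjunctiva (file names and prompt coordinates are hypothetical):

    import cv2
    import numpy as np
    from segment_anything import SamPredictor, sam_model_registry

    # Load a pretrained SAM model (checkpoint path is hypothetical).
    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
    predictor = SamPredictor(sam)

    # Read one smartphone image and set it for prediction.
    image = cv2.cvtColor(cv2.imread("sheep_eye.jpg"), cv2.COLOR_BGR2RGB)
    predictor.set_image(image)

    # A single foreground point clicked inside the conjunctiva (assumed prompt).
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[450, 300]]),
        point_labels=np.array([1]),  # 1 = foreground point
        multimask_output=False,
    )
    np.save("sheep_eye_mask.npy", masks[0])  # binary annotation for U-net training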
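Segmentation quality is reported as IoU, which is simple to compute from binary masks. A minimal NumPy sketch (the function name and example masks are illustrative):

    import numpy as np

    def iou(pred_mask, true_mask):
        # Intersection over Union between two binary segmentation masks.
        pred = pred_mask.astype(bool)
        true = true_mask.astype(bool)
        union = np.logical_or(pred, true).sum()
        if union == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return np.logical_and(pred, true).sum() / union

    # Hypothetical 4x4 example: prediction covers two columns, truth covers one.
    pred = np.array([[0, 1, 1, 0]] * 4)
    true = np.array([[0, 1, 0, 0]] * 4)
    print(iou(pred, true))  # 0.5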
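For the regression models, the abstract names VGG19, Inception v3, and Xception with PCV as a continuous target. A minimal Keras transfer-learning sketch with an Xception backbone follows; the input resolution, head size, optimizer, and freezing strategy are assumptions, not details from the paper:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # ImageNet-pretrained Xception backbone without its classification head.
    base = tf.keras.applications.Xception(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3))
    base.trainable = False  # freeze initially; fine-tuning is optional

    # Single-output regression head predicting PCV (%).
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),  # head width is an assumption
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    # model.fit(train_images, train_pcv, validation_data=(val_images, val_pcv))

Swapping the backbone for VGG19 or Inception v3 changes only the tf.keras.applications call.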
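The reported evaluation metrics can be reproduced with NumPy and scikit-learn; CCC is not in scikit-learn, so a small implementation of Lin's coefficient is included. All sample values below are made up:

    import numpy as np
    from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                                 r2_score, recall_score)

    y_true = np.array([24.0, 30.5, 26.0, 33.0, 29.0])  # observed PCV (%), hypothetical
    y_pred = np.array([25.5, 29.0, 27.5, 31.0, 28.0])  # predicted PCV (%), hypothetical

    def ccc(y_true, y_pred):
        # Lin's concordance correlation coefficient.
        mu_t, mu_p = y_true.mean(), y_pred.mean()
        var_t, var_p = y_true.var(), y_pred.var()
        cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
        return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

    # Regression metrics: R2, CCC, RMSEP.
    rmsep = np.sqrt(np.mean((y_true - y_pred) ** 2))
    print(r2_score(y_true, y_pred), ccc(y_true, y_pred), rmsep)

    # Classification via the 27 % PCV threshold: 1 = anemic, 0 = non-anemic.
    labels = (y_true < 27).astype(int)
    preds = (y_pred < 27).astype(int)
    print(accuracy_score(labels, preds), precision_score(labels, preds),
          recall_score(labels, preds), f1_score(labels, preds))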