Identification of herbarium specimen sheet components from high‐resolution images using deep learning

Bibliographic Details
Published in: Ecology and Evolution, Vol. 13, No. 8, Article e10395
Main Authors: Thompson, Karen M.; Turnbull, Robert; Fitzgerald, Emily; Birch, Joanne L.
Format: Journal Article
Language: English
Published: England: John Wiley & Sons, Inc., 01.08.2023
ISSN: 2045-7758
DOI: 10.1002/ece3.10395

More Information
Summary: Advanced computer vision techniques hold the potential to mobilise vast quantities of biodiversity data by facilitating the rapid extraction of text‐ and trait‐based data from herbarium specimen digital images, and to increase the efficiency and accuracy of downstream data capture during digitisation. This investigation developed an object detection model using YOLOv5 and digitised collection images from the University of Melbourne Herbarium (MELU). The MELU‐trained ‘sheet‐component’ model (trained on 3371 annotated images, validated on 1000 annotated images, using the ‘large’ model type at 640 pixels for 200 epochs) successfully identified most of the 11 component types of the digital specimen images, with an overall model precision of 0.983, recall of 0.969 and mean average precision (mAP0.5–0.95) of 0.847. Specifically, ‘institutional’ and ‘annotation’ labels were predicted with mAP0.5–0.95 of 0.970 and 0.878, respectively. It was found that annotating at least 2000 images was required to train an adequate model, likely due to the heterogeneity of specimen sheets. The full model was then applied to selected specimens from nine global herbaria (Biodiversity Data Journal, 7, 2019) to quantify its generalisability: for example, the ‘institutional label’ was identified with mAP0.5–0.95 of between 0.68 and 0.89 across the various herbaria. Further detailed study demonstrated that starting with the MELU‐model weights and retraining for as few as 50 epochs on 30 additional annotated images was sufficient to enable the prediction of a previously unseen component type. As many herbaria are resource‐constrained, the MELU‐trained ‘sheet‐component’ model weights are made available and their application is encouraged. An effective object detection model has been built to enable the automated segmentation of specimen images into their component parts and the isolation of text‐bearing labels from the herbarium specimen. This baseline model can increase the efficiency and accuracy of downstream data capture during digitisation of, and data extraction from, herbarium specimen images. Comprehensive testing of these models on specimens from global herbaria indicates the potential of these methods and models for rapid extraction of biodiversity data from high‐resolution specimen images.
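
Since the MELU‐trained weights are released for reuse, a minimal sketch of applying them with standard YOLOv5 tooling via PyTorch Hub is given below. The weights filename and image path are hypothetical placeholders, not names taken from the article.

    # Minimal sketch: run the MELU 'sheet-component' detector on a specimen image.
    # 'melu_sheet_component.pt' and 'specimen_sheet.jpg' are hypothetical filenames.
    import torch

    # Load custom weights through the ultralytics/yolov5 PyTorch Hub entry point.
    model = torch.hub.load('ultralytics/yolov5', 'custom',
                           path='melu_sheet_component.pt')

    # Detect sheet components at 640 px, the resolution reported for training.
    results = model('specimen_sheet.jpg', size=640)

    # Bounding boxes, class names (e.g. institutional or annotation labels)
    # and confidence scores for each detected component.
    print(results.pandas().xyxy[0])

Retraining from these weights on a small set of newly annotated images (the abstract reports as few as 50 epochs on 30 images sufficing for a previously unseen component type) would follow the same yolov5 train.py workflow, passing the released weights as the starting checkpoint.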