Fruit quality and defect image classification with conditional GAN data augmentation
| Published in | Scientia horticulturae Vol. 293; p. 110684 |
|---|---|
| Main Authors | , , , , |
| Format | Journal Article |
| Language | English |
| Published | Elsevier B.V., 05.02.2022 |
| ISSN | 0304-4238, 1879-1018 |
| DOI | 10.1016/j.scienta.2021.110684 |
Summary:
•CGAN data augmentation improves fruit quality classification.
•Grad-CAM shows that the CGAN generates features useful for classification.
•Pruning reduces the model to 50% of its original size while retaining high accuracy.
•Code and models are made publicly available for future work.

Contemporary Artificial Intelligence technologies allow Computer Vision to be employed to discern good crops from bad, providing a step in the pipeline of separating healthy fruit from undesirable fruit, such as those that are mouldy or damaged. State-of-the-art works in the field report high accuracy on small datasets (<1000 images), which are not representative of the population encountered in real-world usage. The goals of this study are to further enable real-world usage by improving generalisation with data augmentation, and to reduce overfitting and energy usage through model pruning. We propose a machine learning pipeline that combines fine-tuning, transfer learning, and generative-model-based training data augmentation to improve fruit quality image classification. A linear network topology search is performed to tune a VGG16 lemon quality classification model on a publicly available dataset of 2690 images. We find that appending a 4096-neuron fully connected layer to the convolutional layers yields an image classification accuracy of 83.77%. We then train a Conditional Generative Adversarial Network on the training data for 2000 epochs, and it learns to generate relatively realistic images. Grad-CAM analysis of the model trained on real photographs shows that the synthetic images can exhibit classifiable characteristics such as shape, mould, and gangrene. A higher image classification accuracy of 88.75% is then attained by augmenting the training data with synthetic images, suggesting that Conditional Generative Adversarial Networks can produce new data to alleviate issues of data scarcity. Finally, model pruning is performed via polynomial decay, and we find that the Conditional GAN-augmented classification network retains 81.16% classification accuracy when compressed to 50% of its original size.
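
The abstract does not fix an implementation framework, so the following Keras sketch is only an illustration of the transfer-learning setup it describes: frozen VGG16 convolutional layers with an appended 4096-neuron fully connected layer. The 224×224 input size, binary good/bad label, optimiser, and loss are assumptions rather than details from the paper, and this is not the authors' released code.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

# Pretrained convolutional stack, frozen for transfer learning.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Appended classifier head: the 4096-neuron fully connected layer found by the
# topology search, followed by a softmax over the (assumed) two quality classes.
x = layers.Flatten()(base.output)
x = layers.Dense(4096, activation="relu")(x)
outputs = layers.Dense(2, activation="softmax")(x)

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Fine-tuning, as named in the abstract, would then amount to unfreezing some or all of the convolutional layers and continuing training at a lower learning rate.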
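The Conditional GAN is likewise only summarised in the abstract. Its defining idea is that the generator receives a class label alongside the noise vector, so that synthetic images can be produced per quality class and added to the labelled training set. A minimal generator sketch is shown below; the latent dimension, embedding size, network depth, and 64×64 output resolution are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, Model

def build_generator(latent_dim=100, num_classes=2, channels=3):
    """Conditional generator: a noise vector concatenated with an embedded class label."""
    noise = layers.Input(shape=(latent_dim,))
    label = layers.Input(shape=(1,), dtype="int32")
    label_vec = layers.Flatten()(layers.Embedding(num_classes, 50)(label))
    x = layers.Concatenate()([noise, label_vec])
    x = layers.Dense(8 * 8 * 128, activation="relu")(x)
    x = layers.Reshape((8, 8, 128))(x)
    x = layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu")(x)   # 16x16
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)    # 32x32
    img = layers.Conv2DTranspose(channels, 4, strides=2, padding="same", activation="tanh")(x)  # 64x64
    return Model([noise, label], img)

# After adversarial training, labelled synthetic images are sampled and
# concatenated with the real training set to augment it.
generator = build_generator()
z = np.random.normal(size=(1000, 100)).astype("float32")
fake_labels = np.random.randint(0, 2, size=(1000, 1))
fake_images = generator.predict([z, fake_labels])
```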
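Grad-CAM, which the authors use to check that the synthetic images carry classifiable features such as shape, mould, and gangrene, is a standard technique: the gradients of a class score with respect to the last convolutional feature maps are global-average-pooled into channel weights, and the weighted, rectified sum of those maps gives a localisation heatmap. A compact sketch against the functional model above follows; "block5_conv3" is VGG16's final convolutional layer.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, layer_name="block5_conv3", class_index=0):
    """Grad-CAM heatmap for one preprocessed image of shape (H, W, 3)."""
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_maps)        # d(class score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # channel weights via global average pooling
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_maps, axis=-1)[0]
    cam = tf.nn.relu(cam)                          # keep only positive evidence for the class
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalised to [0, 1], shape (h, w)
```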
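For the compression step, the abstract names pruning with a polynomial decay schedule and a 50% target. One common realisation is magnitude pruning under the TensorFlow Model Optimization toolkit's PolynomialDecay sparsity schedule; whether the authors used this particular library is an assumption, and end_step below is a placeholder that should match the real number of fine-tuning steps.

```python
import tensorflow_model_optimization as tfmot

# Sparsity ramps from 0% to 50% over the fine-tuning run, matching the abstract's
# 50% compression target; end_step is a placeholder, not a value from the paper.
schedule = tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.0,
                                                final_sparsity=0.5,
                                                begin_step=0,
                                                end_step=2000)

pruned_model = tfmot.sparsity.keras.prune_low_magnitude(model, pruning_schedule=schedule)
pruned_model.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])

# Fine-tune with the pruning callback, then strip the wrappers before export:
# pruned_model.fit(train_ds, epochs=..., callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
```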