Source data
Type Paper
Year 2024
Location London, ON, Canada
Cite as Aghamohammadesmaeilketabforoosh, K.; Nikan, S.; Antonini, G.; Pearce, J.M. Optimizing Strawberry Disease and Quality Detection with Vision Transformers and Attention-Based Convolutional Neural Networks. Foods 2024, 13, 1869. https://doi.org/10.3390/foods13121869 (open access)

Machine learning and computer vision have proven to be valuable tools for farmers, streamlining resource utilization for more sustainable and efficient agricultural production. These techniques have been applied to strawberry cultivation in the past with limited success. To build on this work, in this study two separate sets of strawberry images, along with their associated diseases, were collected, resized, and augmented. A combined dataset of nine classes was then used to fine-tune three pretrained models: a vision transformer (ViT), MobileNetV2, and ResNet18. To address the imbalanced class distribution, each class was assigned a weight so that all classes had nearly equal impact during training. To further improve the results, additional images were generated by removing backgrounds, reducing noise, and flipping the originals. All three algorithms were customized for the task, and their performances were compared. None of the layers were frozen, so all layers remained active during training. Attention heads were incorporated into the first five and last five layers of MobileNetV2 and ResNet18, while the architecture of the ViT was modified. The resulting accuracies were 98.4%, 98.1%, and 97.9% for ViT, MobileNetV2, and ResNet18, respectively. Despite the imbalanced data, the precision (the proportion of correctly identified positive instances among all predicted positive instances) approached 99% with the ViT; MobileNetV2 and ResNet18 achieved similar results. Overall, the vision transformer exhibited superior performance in strawberry ripeness and disease classification.
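The class weighting described above can be sketched with inverse-frequency weights, where each class's weight is inversely proportional to how often it appears, so rare disease classes contribute as much to the loss as common ones. This is a minimal illustrative sketch in plain Python, not the authors' implementation (their source code is linked below); the toy label counts are hypothetical, and the resulting weight list is the kind of tensor one would pass to a weighted loss function such as `torch.nn.CrossEntropyLoss(weight=...)`.

```python
from collections import Counter

def class_weights(labels, num_classes):
    """Inverse-frequency weights: weight[c] = total / (num_classes * count[c]),
    so the weighted contribution of every class sums to the same amount."""
    counts = Counter(labels)
    total = len(labels)
    return [total / (num_classes * counts[c]) for c in range(num_classes)]

# Hypothetical imbalanced label list for a 3-class toy example:
labels = [0] * 80 + [1] * 15 + [2] * 5
w = class_weights(labels, 3)
# Rarer classes receive larger weights, balancing their impact during training.
```

With these weights, the per-class count times its weight is identical for every class, which is what gives each class "nearly equal impact" during training.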
The inclusion of attention heads in the early layers of ResNet18 and MobileNetV2, along with the inherent attention mechanism of the ViT, improved the accuracy of image identification. These findings offer farmers the potential to enhance strawberry cultivation through passive camera monitoring alone, promoting the health and well-being of the population.
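The attention heads referred to above are built on standard scaled dot-product attention, softmax(QK^T / sqrt(d)) V, in which each output row is a weighted average of the value rows. The sketch below illustrates that mechanism in plain Python for clarity; it is not the authors' implementation (the query, key, and value matrices here are toy examples), and real models compute this with batched tensor operations.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V,
    with Q, K, V given as lists of row vectors."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(wj * vj[i] for wj, vj in zip(weights, V))
                    for i in range(len(V[0]))])
    return out

# Toy example: two identical keys give uniform weights, so the output
# is the mean of the value rows.
result = attention([[1.0, 0.0]], [[1.0, 0.0], [1.0, 0.0]], [[0.0, 2.0], [4.0, 0.0]])
# → [[2.0, 1.0]]
```

Because the attention weights are a softmax, each output is a convex combination of the value rows; this lets a layer focus on the image regions most relevant to ripeness or disease cues.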

Source code: https://osf.io/ej5qv/. Part of WIRED and the Agrivoltaic agrotunnel.

Keywords

computer vision; monitoring; strawberries; yield monitoring; image classification; machine learning; vision transformers; MobileNetV2; ResNet18

See also

Page data
Part of FAST Completed
Keywords computer vision, monitoring, strawberries, yield monitoring, image classification, machine learning, vision transformers, mobilenetv2, resnet18
SDG SDG02 Zero hunger, SDG09 Industry innovation and infrastructure
Authors Aghamohammadesmaeilketabforoosh, K.; Nikan, S.; Antonini, G.; Pearce, J.M.
License CC-BY-SA-4.0
Organizations FAST, Western University
Language English (en)
Related 0 subpages, 20 pages link here
Created June 14, 2024 by Joshua M. Pearce
Modified June 14, 2024 by Joshua M. Pearce