Please use this identifier to cite or link to this item: http://localhost:8080/xmlui/handle/123456789/3538
Title: A modern deep learning framework in robot vision for automated bean leaves diseases detection
Authors: Abed, Sudad H
Keywords: Deep learning
Robot vision
Issue Date: Apr-2021
Publisher: International Journal of Intelligent Robotics and Applications
Abstract: Bean leaves can be affected by several diseases, such as angular leaf spot and bean rust, which can cause significant damage to bean crops and decrease their productivity. Treating these diseases in their early stages can therefore improve the quality and quantity of the product. Recently, several robotic frameworks based on image processing and artificial intelligence have been used to treat these diseases in an automated way. However, an incorrect diagnosis can lead to chemical treatments being applied to a healthy leaf, so the issue remains unsolved and the process may be costly and harmful. To overcome these issues, a modern deep learning framework in robot vision for the early detection of bean leaf diseases is proposed. The proposed framework is composed of two primary stages: detecting the bean leaves in the input images, and diagnosing the diseases within the detected leaves. A U-Net architecture based on a pre-trained ResNet34 encoder is employed to detect the bean leaves in input images captured under uncontrolled environmental conditions. In the classification stage, the performance of five diverse deep learning models (DenseNet121, ResNet34, ResNet50, VGG-16, and VGG-19) is assessed to identify the healthiness of the bean leaves. The performance of the proposed framework is evaluated on a challenging and extensive dataset of 1295 images across three classes (Healthy, Angular Leaf Spot, and Bean Rust). In the binary classification task, the best performance is achieved by the DenseNet121 model, with a CAR of 98.31%, sensitivity of 99.03%, specificity of 96.82%, precision of 98.45%, F1-score of 98.74%, and AUC of 100%. The same model achieves the highest CAR of 91.01% in the multi-classification task, taking less than 2 s per image to produce the final decision.
URI: http://localhost:8080/xmlui/handle/123456789/3538
Pages: 235–251
Appears in Collections: Electronic Computing Center

Files in This Item:
File: A modern deep learning framework in robot vision for automated.pdf (3.49 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.