Abstract: The timely identification of plant diseases prevents negative impacts on crops. Convolutional neural networks, a widely used class of deep learning models, dominate machine vision and pattern recognition tasks. Researchers have proposed various deep learning models for identifying plant diseases. However, these models require a large number of parameters, so training is slow and deployment on small devices is difficult. In this paper, we have proposed a novel deep lea…
“…IoT plays a crucial role in this disease as an IoT-based system is able to manage and balance the impact of SARS-CoV-2 in smart cities by cluster identification which identifies people who are not wearing face masks (Herath et al. 2021). Other examples include monitoring vaccine temperature using IoT (Almars et al.…”
Internet of Things (IoT) images are attracting growing attention because of their wide range of applications, which require visual analysis to drive automation. However, IoT images are predominantly captured outdoors and are therefore inherently affected by camera and environmental parameters, which can adversely affect the corresponding applications. Deep Learning (DL) has been widely adopted in image processing and computer vision and can reduce the impact of these parameters on IoT images. Although many DL-based techniques for analyzing and reducing environmental and camera impacts on IoT images are available in the current literature, to the best of our knowledge no survey presents the state-of-the-art DL-based approaches for this purpose. Motivated by this, we present the first Systematic Literature Review (SLR) of existing DL techniques for analyzing and reducing environmental and camera lens impacts on IoT images. As part of this SLR, we first highlight the significance of IoT images in their respective applications. Second, we describe the DL techniques employed to assess the impact of environmental conditions and camera lens distortion on IoT images. Third, we illustrate how DL can reduce these impacts. Finally, along with a critical reflection on the advantages and limitations of the techniques, we present ways to address the open research challenges and identify further research directions to advance the relevant research areas.
“…Sharma et al (2020) obtained 98.6% accuracy on PlantVillage by manually segmenting a subset of the images. Hassan and Maji (2022) obtain significant results on three datasets: 99.39% on PlantVillage, 99.66% on Rice, and 76.59% on the imbalanced Cassava dataset. Syed-Ab-Rahman et al (2022) obtained 94.37% accuracy in detection and an average precision of 95.8% on the Citrus leaves dataset, distinguishing between three different citrus diseases, namely citrus black spot, citrus bacterial canker, and Huanglongbing.…”
Section: Related Work On Disease Detection
Recent years have seen an increased effort in detecting plant stresses and diseases using non-invasive sensors and deep learning methods. Nonetheless, no studies have addressed dense plant canopies, owing to the difficulty of automatically zooming into each plant, especially in outdoor conditions. Zooming in and out is necessary to focus on the plant stress and to precisely localize it within the canopy for further analysis and intervention. This work concentrates on tip-burn, a stress affecting lettuce grown in controlled environmental conditions, such as in plant factories. We present a new method for tip-burn stress detection and localization, combining classification and self-supervised segmentation to detect, localize, and closely segment the stressed regions. Starting from images of a dense canopy containing about 1,000 plants, the proposed method is able to zoom into the tip-burn region of a single plant, covering less than one-tenth of the plant itself. The method is crucial for replacing the manual phenotyping required in plant factories. Precisely localizing the stress within the plant, the plant within the tray, and the tray within the table canopy makes it possible to automatically deliver statistics and causal annotations. We tested our method on different datasets that provide no ground-truth segmentation masks, neither for the leaves nor for the stresses; the self-supervised segmentation results are therefore all the more impressive. Results show that the method is accurate and effective for both classification and self-supervised segmentation. Finally, the dataset used for training, testing, and validation is available on demand.
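The zoom-in step described in the abstract above can be caricatured as a bounding-box crop around the detected stress. The helper below is a hypothetical sketch assuming a binary stress mask is available (the paper obtains one via self-supervised segmentation); it is not the authors' implementation.

```python
import numpy as np

def zoom_to_stress(image: np.ndarray, stress_mask: np.ndarray, margin: int = 4):
    """Crop the tightest bounding box around stressed pixels, plus a margin.

    `image` is (H, W, C); `stress_mask` is a binary (H, W) array where 1
    marks a stressed pixel (e.g. a tip-burn region).
    """
    ys, xs = np.nonzero(stress_mask)
    if ys.size == 0:
        return image  # no stress detected: return the full view
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1]

# Toy canopy image with one small "stressed" patch.
canopy = np.zeros((100, 100, 3), dtype=np.uint8)
mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:45, 60:70] = 1
crop = zoom_to_stress(canopy, mask, margin=4)
print(crop.shape)  # → (13, 18, 3): far smaller than the full canopy
```

In the paper the same idea is applied hierarchically (stress within plant, plant within tray, tray within table), which this flat sketch does not attempt to reproduce.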
“…They used a LeNet architecture to diagnose rice blast disease based on rice leaf images, wherein every rice leaf image was divided into patches of 128 × 128 pixels, and each patch was then used to identify rice blast disease by an approximate LeNet-5 architecture [16], which adopts an SVM as the classifier. Hassan et al [45] have also proposed a CNN architecture for plant disease diagnosis which uses depthwise separable convolution to improve the Inception architecture. For diagnosing nutritional deficiencies of rice plants, Sharma et al [46] have combined classifiers such as InceptionResNetV2, Xception, DenseNet201, and VGG19 to extract different features and fuse them with an averaging strategy.…”
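As a back-of-the-envelope illustration of why depthwise separable convolution shrinks a model, compare the parameter counts of one layer. The layer sizes below are illustrative assumptions, not figures taken from [45]:

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Parameters of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution that mixes channels
    (bias ignored)."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

# Example layer: 3x3 kernel, 128 input channels, 256 output channels.
standard = conv_params(3, 128, 256)                   # 294,912
separable = depthwise_separable_params(3, 128, 256)   # 33,920
print(standard, separable, round(standard / separable, 1))  # ~8.7x fewer
```

The roughly 8.7x reduction in this example is the mechanism behind the lighter, faster models these citations describe.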
Section: Image Processing Based Rice Leaf Spots Identification
“…In this paper, the performances of the CNN architectures with multi-feature fusion proposed by Hassan et al [45] and Sharma et al [46]…”
Section: Comparison With Existing Well-known Model
Detecting rice diseases manually is time-consuming and labor-intensive. The purpose of this research is to develop a convolutional neural network (CNN)-based system that automatically detects rice leaves infected with rice leaf blast, helminthosporium leaf blight, and bacterial leaf blight. The sizes of rice leaf spots vary with the severity of the infection. A single-model CNN cannot effectively classify such images, especially those with small objects, multiple object scales, and complicated backgrounds. In this research, a multiscale serial convolutional neural network (MSSCNN) and a multiscale parallel convolutional neural network (MSPCNN) are proposed to identify diseased rice leaf spots based on multi-modal fusion, extracting different perceptual features and combining them to improve on the performance obtained with any single modality. Experimental results show that both MSSCNN and MSPCNN perform well in identifying diseased rice leaves. In the MSPCNN, the features of tiny spots on diseased leaves are fully preserved, so the MSPCNN outperforms the MSSCNN. Additionally, the MSPCNN architecture is well suited to parallel computing environments.
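The parallel multiscale idea can be sketched in a few lines of NumPy: several branches pool the image at different window sizes independently, and their features are fused by concatenation. The mean pooling and window sizes here are illustrative assumptions, not the actual MSPCNN layers:

```python
import numpy as np

def pool_branch(image: np.ndarray, window: int) -> np.ndarray:
    """One 'scale' branch: non-overlapping mean pooling with the given
    window size, flattened into a feature vector. Small windows keep
    fine detail (tiny spots); large windows capture coarse context."""
    h, w = image.shape
    h2, w2 = h // window, w // window
    blocks = image[: h2 * window, : w2 * window]
    blocks = blocks.reshape(h2, window, w2, window)
    return blocks.mean(axis=(1, 3)).ravel()

def multiscale_parallel_features(image: np.ndarray, windows=(2, 4, 8)):
    """Run all scale branches in parallel and fuse by concatenation."""
    return np.concatenate([pool_branch(image, w) for w in windows])

leaf = np.random.default_rng(0).random((32, 32))
feats = multiscale_parallel_features(leaf)
print(feats.shape)  # → (336,): 16*16 + 8*8 + 4*4 fused features
```

Because the branches have no serial dependency, each can run on its own worker, which mirrors the abstract's point that the parallel variant suits parallel computing environments.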