Timely detection of rice diseases can help farmers take the necessary action and thereby reduce yield loss substantially. Automatic recognition of rice diseases from rice leaf images using computer vision and machine learning can be beneficial compared with the manual method of disease recognition through visual inspection. In recent years, deep learning, a very popular and efficient class of machine learning methods, has shown great promise in image classification tasks. In this paper, a segmentation-based method using a deep neural network for classifying rice diseases from leaf images has been proposed. Disease-affected regions of the rice leaves have been segmented using a local segmentation method, and a Convolutional Neural Network (CNN) has been trained on those segmented images. The proposed method has been applied to three different datasets, including one created by us, which consists of rice leaf images collected from the Bangladesh Rice Research Institute (BRRI). Three state-of-the-art CNN architectures, VGG, ResNet and DenseNet, used in the proposed method, have been trained on these three datasets for classifying the diseases. The classification performance of the proposed method using these three CNN architectures on the three datasets has been analyzed and compared. The results show that the model is quite promising for classifying rice leaf diseases. The outcome of this research is an enhancement in the performance of rice disease classification, which is significant for the viability of transforming this work into a real-time application for farmers.
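The segment-then-classify pipeline described above can be illustrated with a minimal sketch. The abstract does not specify the local segmentation algorithm, so the colour-threshold rule, the `green_margin` parameter, and both function names below are illustrative assumptions: healthy leaf tissue is mostly green, so pixels whose green channel does not dominate are treated as candidate lesion pixels, and the image is cropped to the lesion's bounding box before being passed to a CNN classifier (not shown).

```python
import numpy as np

def segment_diseased_region(img, green_margin=20):
    """Boolean mask of pixels that are NOT predominantly green.

    Lesions on rice leaves tend to be brown/yellow, so any pixel whose
    green channel does not exceed both red and blue by `green_margin`
    is flagged as a candidate disease pixel.  (Illustrative stand-in
    for the paper's local segmentation step.)
    """
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return ~((g > r + green_margin) & (g > b + green_margin))

def crop_to_mask(img, mask):
    """Crop the image to the bounding box of the segmentation mask."""
    ys, xs = np.where(mask)
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# Toy example: a green "leaf" with one brown lesion patch.
leaf = np.zeros((64, 64, 3), dtype=np.uint8)
leaf[..., 1] = 180                 # healthy green background
leaf[20:30, 40:55] = (150, 75, 0)  # brown lesion
mask = segment_diseased_region(leaf)
patch = crop_to_mask(leaf, mask)
print(patch.shape)                 # (10, 15, 3): only the lesion region
```

In the full pipeline, the cropped patch (rather than the whole leaf image) would be resized and fed to VGG, ResNet, or DenseNet for disease classification, which is the design choice the abstract credits for the performance gain.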
Pedestrian detection is a well-established computer vision task. Pedestrian detection from color images has achieved robust performance, but at night or in poor lighting conditions it suffers from low detection accuracy. Thermal images are used for detecting people at night, in foggy weather, or in bad lighting situations where color images offer poor visibility. However, in the daytime, when the surroundings are as warm as or warmer than the pedestrians, thermal images yield lower accuracy. Hence, a thermal and color image pair can be a solution, but capturing color-thermal pairs is expensive, and misaligned imagery can cause low detection accuracy. We propose a network that achieves better accuracy by extending prior work, which introduced the use of saliency maps in pedestrian detection from thermal images, to instance-level segmentation. We worked on a subset of the KAIST Multispectral Pedestrian Detection Dataset [8] that has pixel-level annotations. We trained Mask R-CNN for the pedestrian detection task and report the added effect of saliency maps generated using PiCANet. We achieved an accuracy of 88.14% on day images and 91.84% on night images. Thus, our model reduces the miss rate by 24.1% and 23% over the existing state-of-the-art method on day and night images, respectively.
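One simple way to make a saliency map available to a detector is to stack it as an extra input channel alongside the thermal frame. The abstract does not describe the exact fusion scheme, so the sketch below is an assumption for illustration: the thermal channel is replicated and the PiCANet-style saliency map fills the third channel, producing the 3-channel input a standard Mask R-CNN backbone expects; `fuse_thermal_saliency` and `miss_rate` are hypothetical helper names.

```python
import numpy as np

def fuse_thermal_saliency(thermal, saliency):
    """Fuse a single-channel thermal frame with its saliency map by
    channel stacking.  The thermal channel is duplicated so the output
    is H x W x 3, matching a standard 3-channel detector backbone.
    (Illustrative fusion; the paper's exact scheme may differ.)
    """
    t = thermal.astype(np.float32) / 255.0
    s = saliency.astype(np.float32) / 255.0
    return np.stack([t, t, s], axis=-1)

def miss_rate(missed, total):
    """Miss rate: fraction of ground-truth pedestrians not detected."""
    return missed / total

# Toy frame at KAIST-like resolution.
rng = np.random.default_rng(0)
thermal = rng.integers(0, 256, (512, 640), dtype=np.uint8)
saliency = rng.integers(0, 256, (512, 640), dtype=np.uint8)
x = fuse_thermal_saliency(thermal, saliency)
print(x.shape)  # (512, 640, 3)
```

The fused array `x` would then be the per-image input to the Mask R-CNN training loop; the lower miss rates reported above are the metric this kind of saliency fusion is meant to improve.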