This paper presents an automated mango grading system with four stages: (1) pre-processing, (2) feature extraction, (3) optimal feature selection and (4) classification. Initially, the input image is subjected to the pre-processing phase, where reading, sizing, noise removal and segmentation take place. Subsequently, features are extracted from the pre-processed image. To make the system more effective, the optimal features are selected from the extracted features using a new hybrid optimization algorithm termed the lion-assisted firefly algorithm (LA-FF), which combines the lion algorithm (LA) and the firefly algorithm (FF). The optimal features are then passed to the classification process, where an optimized deep convolutional neural network (CNN) is deployed. As a major contribution, the configuration of the CNN is fine-tuned by selecting the optimal number of convolutional layers, which enhances the classification accuracy of the grading system. The LA-FF algorithm is used for this fine-tuning of the convolutional layers in the deep CNN, so that the classifier itself is optimized. The grading is evaluated on healthy/diseased, ripe/unripe and big/medium/very big cases with respect to type I and type II measures, and the performance of the proposed grading model is compared against other state-of-the-art models.

1 INTRODUCTION

Mango (Mangifera indica L.) belongs to the family Anacardiaceae. Mangoes are cultivated commercially and extensively in India, tropical Australia, Thailand, the Philippines, Hawaii, the lowlands of South-East Africa, and the lowlands of South and Central America. When exporting mangoes to other countries, grading [1-4] is essential for quality assurance. Conventionally, fruit grading is handled by trained inspectors, which is labour-intensive, time-consuming, and inefficient. The majority of countries consider the size feature for mango grading.
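The paper does not reproduce LA-FF pseudocode here, so the following is only a minimal, hypothetical sketch of firefly-style wrapper feature selection on toy data: a population of binary feature masks is attracted toward the current best mask, with occasional random bit flips for exploration. The fitness function, data, and all parameters below are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 40 samples, 12 extracted features; only the first 3 carry signal.
X = rng.normal(size=(40, 12))
y = (X[:, 0] + X[:, 1] + X[:, 2] > 0).astype(int)

def fitness(mask):
    """Score a feature subset: class-mean separation minus a subset-size penalty."""
    if not mask.any():
        return -np.inf
    sub = X[:, mask]
    gap = np.linalg.norm(sub[y == 1].mean(axis=0) - sub[y == 0].mean(axis=0))
    return gap - 0.05 * mask.sum()

# Population of candidate binary masks ("fireflies").
pop = rng.random((10, 12)) < 0.5
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    best = pop[scores.argmax()].copy()
    for i in range(len(pop)):
        # Attraction step: each candidate copies some bits of the current best,
        # plus a small random flip for exploration.
        adopt = rng.random(12) < 0.4
        pop[i][adopt] = best[adopt]
        flip = rng.random(12) < 0.05
        pop[i][flip] = ~pop[i][flip]
    pop[scores.argmax()] = best  # keep the elite unchanged

selected = pop[np.array([fitness(m) for m in pop]).argmax()]
print("selected features:", np.flatnonzero(selected))
```

The same wrapper pattern would apply to tuning the CNN's convolutional layer count: the mask is replaced by an integer layer count and the fitness by validation accuracy.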
Even so, it remains a complex task because of inappropriate grading. Therefore, an automatic grading process [5-7] is both necessary and helpful. When grading mangoes, features such as shape, size, firmness, maturity, and visual defects have to be considered. With the advancement of technology, grading can be performed effectively using image processing and computer vision systems [3-8].

This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
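The pre-processing stage named in the abstract (reading, sizing, noise removal, segmentation) can be sketched on a synthetic stand-in image; everything below (the box filter, the 2x block-average resize, the global threshold) is an illustrative assumption, not the filtering or segmentation method the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a mango photo: a bright fruit region on a dark
# background, corrupted with Gaussian noise (a real pipeline reads a camera image).
img = np.zeros((64, 64))
img[16:48, 16:48] = 0.8
img += rng.normal(scale=0.1, size=img.shape)

def resize_half(a):
    """Crude 2x downsampling by block averaging (the sizing step)."""
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))

def smooth(a):
    """3x3 box filter for noise removal (border pixels left unfiltered for brevity)."""
    out = a.copy()
    out[1:-1, 1:-1] = sum(
        a[1 + dy : a.shape[0] - 1 + dy, 1 + dx : a.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    return out

small = resize_half(img)      # sizing
clean = smooth(small)         # noise removal
mask = clean > clean.mean()   # simple global-threshold segmentation
print("foreground fraction:", mask.mean())
```

The segmented foreground mask is what the subsequent feature-extraction stage would operate on.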