Remote sensing technology has penetrated all natural resource segments, as it provides precise information in image form. Remote sensing satellites are currently the fastest-growing source of geographic information. With the continuous change of the earth's surface and the wide application of remote sensing, change detection is very useful for monitoring environmental and human needs. It is therefore necessary to develop automatic change detection techniques that improve quality and reduce the time required by manual image analysis. This work focuses on improving the classification accuracy of machine learning techniques by reviewing the training samples and comparing post-classification comparison against image differencing, an algebraic technique. Because Landsat data have medium spatial resolution, pixel-wise computation has been applied. Two change detection techniques have been studied: a decision tree algorithm using a separability matrix, and image differencing. The first, the separability-matrix technique, is a post-classification comparison in which the individual images are classified by a decision tree algorithm. The second is the image differencing change detection technique, in which changed and unchanged pixels are determined by applying the corner method to compute a threshold on the difference image. The performance of the machine learning algorithm has been validated by 10-fold cross-validation. The experimental results show that change detection using the post-classification method produced better results than image differencing from the algebraic family of change detection techniques.
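The pixel-wise image differencing described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fallback threshold here (mean plus one standard deviation of the difference image) is a common placeholder assumption, standing in for the corner method the paper actually uses.

```python
import numpy as np

def image_differencing_change_map(img_t1, img_t2, threshold=None):
    """Pixel-wise image differencing: flag pixels whose absolute
    difference between the two dates exceeds a threshold as 'changed'."""
    diff = np.abs(img_t2.astype(np.float64) - img_t1.astype(np.float64))
    if threshold is None:
        # Placeholder global threshold; the paper's corner method
        # would be substituted here to derive the threshold.
        threshold = diff.mean() + diff.std()
    return diff > threshold  # boolean change map

# Two tiny synthetic single-band scenes from different dates
t1 = np.zeros((4, 4))
t2 = np.zeros((4, 4))
t2[0, 0] = 10.0  # one strongly changed pixel
change = image_differencing_change_map(t1, t2)
```

A post-classification comparison would instead classify `t1` and `t2` independently and compare the two label maps pixel by pixel.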
The field of image processing is distinguished by the variety of functions it offers and its wide range of applications in biomedical imaging. Manual identification and categorization of a tumour is a difficult and time-consuming process for radiologists, as is delineating the affected tumour region in magnetic resonance (MR) images. The goal of this study is to improve the performance and reduce the complexity of the image segmentation process by investigating FCM-based image segmentation procedures. Furthermore, relevant features are extracted from each segmented tissue and supplied as input to the classifiers for automatic identification and classification of brain tumours, in order to increase the accuracy and quality rate of the neural network classifier. The experimental performance of the suggested approach has been evaluated, validated, and presented. This study presents a novel accelerated particle swarm optimization (APSO) based artificial neural network model (ANNM) for the classification of benign and malignant tumours, which allows automated identification and categorization of brain tumours. Using APSO training to tune the suggested ANNM's parameters offers a way to relieve radiologists of the stressful work of manually identifying brain tumours from MR images. The APSO-based ANNM for automated brain tumour classification is presented in order to demonstrate the robustness of the classification model.
The improved enhanced fuzzy c-means (IEnFCM) method has been utilised for image segmentation, while the gray-level co-occurrence matrix (GLCM) approach has been employed for feature extraction from magnetic resonance (MR) images.
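The IEnFCM segmentation builds on standard fuzzy c-means, which can be sketched as below. This is the baseline FCM update loop only (alternating centroid and membership updates); the paper's improvements and its GLCM feature stage are not reproduced here, and the synthetic "tissue intensity" data are illustrative.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal standard fuzzy c-means on a feature array X of shape (n, d).
    m is the fuzziness exponent; the paper's IEnFCM enhances this baseline."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)  # memberships for each point sum to 1
    for _ in range(iters):
        Um = U ** m
        # Cluster centers: membership-weighted means of the data
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Distances of every point to every center, shape (c, n)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # Standard FCM membership update:
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
    return centers, U

# Cluster synthetic 1-D "tissue intensities" into two groups
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fcm(X)
labels = U.argmax(axis=0)  # hard labels from the fuzzy memberships
```

In a segmentation pipeline, `X` would hold the per-pixel (or per-voxel) features of an MR slice, and `labels` reshaped back to the image grid gives the tissue map from which GLCM features are then extracted.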
Glaucoma is prevalent in many parts of the world, with the United States and Europe among the most affected. Glaucoma affected around 78 million people worldwide as of 2020, and by 2040 it is expected that there will be 111.8 million cases worldwide. In countries still building healthcare infrastructure adequate to cope with glaucoma, the ailment is misdiagnosed nine times out of ten. To aid in the early diagnosis of glaucoma, the creation of a detection system is necessary. In this work, the researchers propose using deep learning to identify and predict glaucoma before symptoms appear. The proposed deep learning algorithm analyses images from a glaucoma dataset. For the task of segmenting the optic cup, pretrained transfer learning models are integrated with the U-Net architecture. For feature extraction, the DenseNet-201 deep convolutional neural network (DCNN) is used, and the DCNN approach determines whether a person has glaucoma. The fundamental goal of this line of research is to recognize glaucoma in retinal fundus images, which aids in assessing whether a patient has the condition; since an image may or may not show glaucoma, the model's outcome is either positive or negative. Accuracy, precision, recall, specificity, and the F-measure (F-score) are the metrics used in the model evaluation process. An additional comparison study is performed to establish whether the suggested model is accurate, and the findings are compared to deep-learning-based convolutional neural network classification methods. The suggested model achieves an accuracy of 98.82 percent in training and 96.90 percent in testing.
All assessments show that the proposed model is more successful than the one currently in use.
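The evaluation metrics named above follow directly from a 2x2 confusion matrix. A minimal sketch, with an illustrative (not the paper's) confusion matrix for glaucoma (positive) versus healthy (negative):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # a.k.a. sensitivity
    specificity = tn / (tn + fp)       # true negative rate
    f_score = 2 * precision * recall / (precision + recall)
    return {
        "accuracy": accuracy,
        "precision": precision,
        "recall": recall,
        "specificity": specificity,
        "f_score": f_score,
    }

# Hypothetical counts for illustration only
m = binary_metrics(tp=90, fp=2, tn=95, fn=3)
```

Reporting specificity alongside recall matters here: in screening, a low specificity would flood clinicians with false glaucoma alarms even when accuracy looks high.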
Emotions play a critical role in our everyday lives, and it goes without saying that they are critical in the context of mobile-computer interaction. In social and mobile communication, it is vital to understand the influence of emotions on the way people interact with one another and with the material they access. This study investigates the relationship between a user's emotional state of mind and the efficacy of human-mobile interaction while accessing a variety of different sorts of material over the course of learning. In addition, the emotional hardness of individuals is taken into account in this research. Hardness is an important personality characteristic, and the material people access varies with how they engage with a mobile device. The study analyses the link between human-mobile interaction and a person's mental toughness in order to recommend suitable material in an appropriate manner. An explicit feedback selection method is used to gather information on the emotional state of mind of the participants. It is also shown that a person's emotional state of mind influences the human-mobile connection, with persons of varying levels of hardness accessing a variety of different sorts of material. It is hoped that this research will assist content producers in identifying engaging material that encourages mobile users to promote good content by studying their personality features.
Accurate identification of objects from the acquisition system depends on clear segmentation and classification of remote sensing images. With limited resources and high intra-class variations, earlier algorithms failed to handle sub-optimal datasets. Iteratively building an efficient training set with active learning (AL) approaches improves classification performance. Heuristics-based AL provides better results by inheriting contextual information and being robust to noise variations; however, the uncertainty present in pixel variations makes heuristics-based AL fail on remote sensing image classification. Previously, we focused on extracting clear textural pattern information using the extended differential pattern-based relevance vector machine (EDP-AL). This paper extends that work into a novel pixel-certainty active learning (PCAL) method based on the textural pattern information obtained from the extended differential pattern (EDP). Initially, distributed intensity filtering (DIF) is used to eliminate noise from the image, and then histogram equalization (HE) is used to improve image quality. The EDP is used to merge and classify different labels for each image sample, and this algorithm expresses the textural information. The PCAL technique then classifies the HSI patterns that are important in remote sensing applications using this pattern collection. Pavia University (PU) and Indian Pines (IP) are the datasets used to validate the performance of the proposed PCAL. A comparison with existing algorithms in terms of classification accuracy and the Kappa coefficient demonstrates the ability of PCAL to accurately categorize land cover types.
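The pool-based active learning loop at the heart of such methods can be sketched as follows. This is a generic uncertainty-sampling illustration only: the nearest-centroid scorer and the inverse-distance soft scores are placeholder assumptions standing in for the paper's PCAL pixel classifier and its certainty measure.

```python
import numpy as np

def least_confidence(probs):
    """Uncertainty score per sample: 1 - max class probability."""
    return 1.0 - probs.max(axis=1)

def active_learning_round(X_labeled, y_labeled, X_pool, n_query=1):
    """One round of pool-based active learning: fit a simple
    nearest-centroid scorer on the labeled pixels, then pick the
    n_query most uncertain pool pixels to send for labeling."""
    classes = np.unique(y_labeled)
    centroids = np.stack(
        [X_labeled[y_labeled == c].mean(axis=0) for c in classes]
    )
    # Soft class scores from inverse distance to each class centroid
    d = np.linalg.norm(X_pool[:, None, :] - centroids[None, :, :], axis=2) + 1e-9
    inv = 1.0 / d
    probs = inv / inv.sum(axis=1, keepdims=True)
    query_idx = np.argsort(-least_confidence(probs))[:n_query]
    return query_idx, probs

# Toy 1-D pixel features: the middle pool pixel is the most ambiguous
X_lab = np.array([[0.0], [1.0]])
y_lab = np.array([0, 1])
X_pool = np.array([[0.05], [0.5], [0.95]])
idx, probs = active_learning_round(X_lab, y_lab, X_pool)
```

Each round, the queried pixels and their new labels are appended to the training set and the classifier is refit, which is how AL builds an efficient training set iteratively.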