Edge detection is important for reliability and security, as it underpins object recognition in computer vision applications such as pedestrian detection, face detection, and video surveillance. This paper addresses two fundamental limitations encountered in edge detection, edge connectivity and edge thickness, which persist across various state-of-the-art developments. Optimal threshold selection for effective edge detection has long been a key challenge in computer vision. Therefore, a robust edge detection algorithm using a multiple-threshold approach (B-Edge) is proposed to address both limitations. The widely used Canny edge operator relies on the selection of two thresholds and still leaves gaps on the way to optimal results. To handle these shortcomings, our method selects three simulated thresholds that target the prime issues of edge detection: image contrast, effective edge pixel selection, error handling, and similarity to the ground truth. Qualitative and quantitative experimental evaluations demonstrate that our edge detection method outperforms competing algorithms on the issues mentioned. The proposed approach improves results for both grayscale and color images.

INDEX TERMS Edge, edge connectivity, edge detection, edge width uniformity, threshold.

SUDIPTA ROY received the Ph.D. degree in computer science and engineering from the Department of Computer Science and Engineering, University of Calcutta. He is currently with the Radiological Chemistry and Imaging Laboratory, Washington University in Saint Louis, USA. He has more than five years of experience in teaching and research. He is an author of more than 30 publications in refereed national and international journals and conferences, including the IEEE, Springer, Elsevier, and many others. He is an author of one book and many book chapters. He holds a U.S. patent in medical image processing and has filed an Indian patent on a smart agricultural system. His research interests include biomedical image analysis, image processing, steganography, artificial intelligence, big data analysis, machine learning, and big data technologies.
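The abstract does not specify how B-Edge simulates its triple thresholds, so the following is only a minimal Python sketch of the general idea of combining edge maps produced under several hysteresis threshold pairs; the OpenCV calls, the threshold values, the majority-vote rule, and the file names are assumptions made for illustration, not the authors' algorithm.

import cv2
import numpy as np

def multi_threshold_edges(gray, threshold_pairs=((50, 100), (100, 200), (150, 250))):
    # Run Canny at several low/high hysteresis pairs and keep pixels marked
    # by most runs; this tends to preserve connectivity while suppressing
    # spurious responses. All parameter values here are illustrative only.
    votes = np.zeros(gray.shape, dtype=np.uint8)
    for low, high in threshold_pairs:
        edges = cv2.Canny(gray, low, high)
        votes += (edges > 0).astype(np.uint8)
    # Keep a pixel if at least two of the three runs flagged it as an edge.
    return ((votes >= 2) * 255).astype(np.uint8)

if __name__ == "__main__":
    img = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    cv2.imwrite("edges.png", multi_threshold_edges(img))

In this toy version the per-pixel vote count acts as a crude stand-in for a third threshold: raising or lowering the required number of votes trades edge connectivity against edge thickness, the two limitations the paper targets.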
Fake news and its consequences can affect many different entities, from a citizen's lifestyle to a country's global relations. Although there are many related works on collecting and detecting fake news, no reliable system is commercially available. This study proposes a deep learning model that predicts the nature of an article given as input. It relies solely on text processing and is insensitive to the history and credibility of the author or the source. In this paper, the authors discuss and experiment with word embeddings (GloVe) for text pre-processing in order to construct a vector space of words and establish lingual relationships. The proposed model, a blend of convolutional and recurrent neural network architectures, achieves benchmark results in fake news prediction, with word embeddings complementing the model throughout. Further, to ensure prediction quality, various model parameters have been tuned and the best results recorded. Among other variations, the addition of a dropout layer reduces overfitting in the model and hence yields significantly higher accuracy. The model can be a better solution for this problem than existing approaches such as gated recurrent units, recurrent neural networks, or feed-forward networks, achieving a better precision of 97.21% while considering more input features.
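As a rough illustration of the kind of architecture the abstract describes (not the authors' exact configuration), the sketch below builds a small CNN + LSTM classifier in Keras with a frozen embedding layer and a dropout layer; the vocabulary size, sequence length, layer widths, dropout rate, and the use of random numbers in place of real GloVe vectors and real articles are all assumptions.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dropout, Dense
from tensorflow.keras.initializers import Constant

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 500        # assumed maximum article length in tokens
EMBED_DIM = 100      # e.g. 100-dimensional GloVe vectors

# In practice this matrix would be filled from pre-trained GloVe vectors
# (such as glove.6B.100d.txt); random values stand in here for brevity.
embedding_matrix = np.random.normal(size=(VOCAB_SIZE, EMBED_DIM)).astype("float32")

model = Sequential([
    Embedding(VOCAB_SIZE, EMBED_DIM,
              embeddings_initializer=Constant(embedding_matrix),
              trainable=False),               # frozen word-embedding lookup
    Conv1D(128, 5, activation="relu"),        # convolution captures local n-gram features
    MaxPooling1D(4),
    LSTM(64),                                 # recurrent layer models word order
    Dropout(0.5),                             # dropout layer to reduce overfitting
    Dense(1, activation="sigmoid"),           # fake vs. genuine article
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical usage: X holds padded token-id sequences, y holds 0/1 labels.
X = np.random.randint(0, VOCAB_SIZE, size=(8, MAX_LEN))
y = np.random.randint(0, 2, size=(8,))
model.fit(X, y, epochs=1, verbose=0)

Removing the Dropout layer from this sketch is a simple way to observe the overfitting effect the abstract mentions when training on a real corpus.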
Data mining is an essential task in most emerging computing technologies, as it reduces the complexity of datasets and renders better insight. It makes it possible to explore vast and heterogeneous datasets and thus extract substantial knowledge from the abundance of data through the pragmatic application of suitable algorithms, and there are many algorithms in the literature for this purpose. Clustering, in particular, is a widely used technique for analyzing data within the purview of data mining; this motivated the authors to rigorously survey the existing literature on the topic and to identify various key parameters so that improvements can be made when selecting the best-fit clustering algorithm for a specific problem domain. Since clustering, classification, and association rule mining are closely related and indispensable to data mining, the authors also cover the interrelation among these terms so that this work can be of substantial help to researchers working in the field. The present study also examines the challenges associated with clustering algorithms for two- and high-dimensional databases. In addition, this work identifies key parametric attributes for assessing clustering algorithms, which benefits existing work and paves the way for future research in this realm. This article is categorized under: Technologies > Structure Discovery and Clustering; Technologies > Classification; Technologies > Association Rules; Fundamental Concepts of Data and Knowledge > Big Data Mining.
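As a small, self-contained example of the kind of comparison the survey motivates (clustering algorithms assessed against a common quality criterion), the Python sketch below runs two algorithms on synthetic data and scores them with the silhouette coefficient; the dataset, the chosen algorithms, and every parameter value are arbitrary choices for illustration rather than recommendations from the surveyed literature.

from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, DBSCAN
from sklearn.metrics import silhouette_score

# Synthetic two-dimensional data with four compact groups.
X, _ = make_blobs(n_samples=500, centers=4, cluster_std=0.8, random_state=0)

candidates = {
    "k-means (k=4)": KMeans(n_clusters=4, n_init=10, random_state=0),
    "DBSCAN (eps=0.5)": DBSCAN(eps=0.5, min_samples=5),
}

for name, algorithm in candidates.items():
    labels = algorithm.fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # DBSCAN labels noise as -1
    score = silhouette_score(X, labels) if n_clusters > 1 else float("nan")
    print(f"{name}: {n_clusters} clusters, silhouette = {score:.3f}")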