2020
DOI: 10.5755/j01.itc.49.4.27118

Deep Learning based Semantic Similarity Detection using Text Data

Abstract: Similarity detection in text is a core task for a number of Natural Language Processing (NLP) applications. Because textual data is far larger in quantity and volume than numeric data, measuring textual similarity is an important problem. Most similarity detection algorithms are based on word-to-word matching, sentence/paragraph matching, or matching of the whole document. In this research, a novel approach is proposed using deep learning models, combining Lon…
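As a point of reference for the matching-based approaches the abstract mentions, a minimal bag-of-words cosine-similarity sketch could look like the following. This is illustrative only, not the paper's deep learning model (whose details are truncated above); the sentences and helper names are made up for the example.

```python
from collections import Counter
import math

def bow_vector(text):
    # Lowercased bag-of-words counts: a toy sentence representation.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

s1 = "deep learning detects semantic similarity in text"
s2 = "semantic similarity in text is detected with deep learning"
score = cosine_similarity(bow_vector(s1), bow_vector(s2))
```

Deep models essentially replace these count vectors with learned embeddings, so paraphrases with little literal word overlap can still score as similar.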

Cited by 16 publications (8 citation statements)
References 51 publications (54 reference statements)
“…To reduce complexity and obtain a compact set of frequent high-utility sequential patterns 43, in 2023 Huang et al. proposed an efficient algorithm called TMKU, based on the TargetUM algorithm, to discover the top-k Target High-Utility Itemsets (top-k THUIs). The research achievements and latest algorithms 44 mentioned have led to significant advancements in FHUPM algorithms, including algorithmic innovation, improved data structures, and performance enhancement.…”
Section: Related Work on Fuzzy High-Utility Pattern Mining (FHUPM)
confidence: 99%
“…2 . Table 4 presents the original data set from the Pima Indians data set 44, where the order of attributes represented by the data from left to right corresponds to the order of attributes in Table 3. On the right side of Table 4, 1 denotes diabetic patients and 0 denotes non-diabetic patients.…”
Section: FHUPM Algorithm and Framework
confidence: 99%
“…The work in reference [21] incorporated frequent itemsets with domain knowledge in the form of a taxonomy to mine negative association rules. Shaheen and Abdullah developed a series of algorithms for different fields, such as exploring positive and negative context-based association rules for conventional/characteristic data [24,25], and mining context-based association rules on microbial databases to extract interesting and useful associations of microbial attributes with the existence of hydrocarbon reserves [26][27][28][29]. It should be noted that contradictory rules may be mined when positive and negative rules are mined simultaneously; for example, A ⇒ B and A ⇒ ¬B can both be strong rules [30][31][32].…”
Section: PNARs Mining Techniques
confidence: 99%
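The contradiction this excerpt warns about is easy to reproduce. Below is a hypothetical toy example (the transaction set and the 0.5 threshold are assumptions for illustration, not taken from the cited works): with enough A-transactions split between containing and lacking B, both the positive and the negative rule clear the same confidence threshold.

```python
# Toy transaction database; the rules A ⇒ B and A ⇒ ¬B are evaluated on it.
transactions = [
    {"A", "B"}, {"A", "B"}, {"A"}, {"A"}, {"B"}, {"C"},
]

def confidence(antecedent, consequent, negated=False):
    # conf(A ⇒ B) = support(A ∪ B) / support(A); negated=True tests A ⇒ ¬B.
    has_a = [t for t in transactions if antecedent in t]
    if negated:
        hits = [t for t in has_a if consequent not in t]
    else:
        hits = [t for t in has_a if consequent in t]
    return len(hits) / len(has_a) if has_a else 0.0

pos = confidence("A", "B")        # conf(A ⇒ B)  = 2/4 = 0.5
neg = confidence("A", "B", True)  # conf(A ⇒ ¬B) = 2/4 = 0.5
# With a minimum-confidence threshold of 0.5, both rules qualify as
# "strong" simultaneously — the contradiction noted in the excerpt.
```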
“…Text classification (TC) is a machine learning challenge that tries to assign new written content to a conceptual group from a predetermined set of categories [1]. It is crucial in a variety of applications, including sentiment analysis [2,3], spam email filtering [4,5], hate speech detection [6], text summarization [7], website classification [8], authorship attribution [9], information retrieval [10], medical diagnostics [11], emotion detection on smartphones [12], online recommendations [13], fake news detection [14,15], crypto-ransomware early detection [16], semantic similarity detection [17], part-of-speech tagging [18], news classification [19], and tweet classification [20].…”
Section: Introduction
confidence: 99%