2020
DOI: 10.1109/access.2019.2954106

A Novel Class-Center Vector Model for Text Classification Using Dependencies and a Semantic Dictionary

Abstract: Automatic text classification is a research focus and core technology in information retrieval and natural language processing. Different from the traditional text classification methods (SVM, Bayesian, KNN), the class-center vector method is an important text classification method, which has the advantages of less calculation and high efficiency. However, the traditional class-center vector method for text classification has the disadvantages that the class vector is large and sparse, and its classification a…
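The class-center vector method the abstract describes can be illustrated with a minimal centroid classifier: one center vector is computed per class, and a new document is assigned to the class whose center it is most similar to. This is a sketch of the general technique only — the raw term-frequency vectors, toy vocabulary, and cosine-similarity choice here are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def class_centers(X, y):
    """Compute one center vector per class as the mean of its document vectors."""
    centers = {}
    for label in set(y):
        rows = X[np.array(y) == label]
        centers[label] = rows.mean(axis=0)
    return centers

def classify(doc_vec, centers):
    """Assign the class whose center has the highest cosine similarity."""
    def cos(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b) / denom if denom else 0.0
    return max(centers, key=lambda c: cos(doc_vec, centers[c]))

# Toy term-frequency vectors over a 4-word vocabulary.
X = np.array([
    [2.0, 1.0, 0.0, 0.0],  # sports document
    [3.0, 0.0, 1.0, 0.0],  # sports document
    [0.0, 0.0, 2.0, 3.0],  # finance document
    [0.0, 1.0, 1.0, 4.0],  # finance document
])
y = ["sports", "sports", "finance", "finance"]
centers = class_centers(X, y)
print(classify(np.array([1.0, 1.0, 0.0, 0.0]), centers))  # -> sports
```

Because classification reduces to a handful of dot products against precomputed centers, the method needs far less calculation at prediction time than instance-based methods such as KNN, which matches the efficiency advantage the abstract claims.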


Cited by 8 publications (5 citation statements)
References 20 publications
“…For example, if the user's budget is 15000 rupees, a positive 5% threshold would translate to an extended range of 15750 rupees. After leveraging the flexible feature values, the leveraged features are set as the high-impact features and the same are updated in the user preferences (R). It is then time to personalize the product recommendations [39] further. This personalization step aims to refine the existing weights of products within set R based on the user's unique purchase history and profile information.…”
Section: Extended Partial Match (mentioning)
confidence: 99%
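The budget extension in the quoted passage is simple arithmetic (15000 × 1.05 = 15750). A minimal sketch, assuming the percentage threshold simply scales the budget upward — the function name and interface are illustrative, not from the cited system:

```python
def extend_budget(budget, threshold_pct):
    """Extend a user's budget by a flexible percentage threshold.

    A positive threshold widens the acceptable price range, so products
    slightly above the stated budget can still be recommended.
    """
    return budget * (1 + threshold_pct / 100.0)

print(extend_budget(15000, 5))  # -> 15750.0
```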
“…WordNet is a large semantic dictionary of synonyms based on cognitive linguistics, designed and built by psychologists, linguists and computer engineers at Princeton University [35]. It is widely used for text classification [45] and semantic similarity calculation [46]. In our CQACD system, WordNet is used to provide richer semantic interpretation of student inputs based on the BCKO ontology.…”
Section: WordNet (mentioning)
confidence: 99%
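The role a synonym dictionary plays in classification — collapsing different surface words onto one shared concept so they contribute to the same feature — can be shown with a toy stand-in for WordNet. The mapping and words below are purely illustrative assumptions, not WordNet data:

```python
# Toy stand-in for a WordNet-style synonym dictionary: each surface word
# maps to a canonical concept, so synonyms share one classification feature.
SYNONYMS = {
    "car": "vehicle", "automobile": "vehicle", "truck": "vehicle",
    "happy": "glad", "joyful": "glad", "glad": "glad",
}

def canonicalize(tokens):
    """Replace each token with its canonical concept when one is known."""
    return [SYNONYMS.get(t, t) for t in tokens]

print(canonicalize(["the", "automobile", "was", "joyful"]))
# -> ['the', 'vehicle', 'was', 'glad']
```

Mapping synonyms to a single concept shrinks the vocabulary and densifies class vectors, which speaks directly to the large-and-sparse class-vector problem the abstract raises.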
“…However, it lacks the ability to combine textual semantics when interpreting the text content. To overcome these problems, this article follows [33], introduces dependencies, and uses an improved TF-IDF-based weight-calculation algorithm to understand and optimize text features.…”
Section: Dependency Graph (mentioning)
confidence: 99%
“…For a word i, we count the number of times word i appears in the text and set it as n. Then, according to the dependency syntax analysis produced by the Stanford Parser, we obtain the sentence component that the m-th occurrence (1 ≤ m ≤ n) of word i fills in the text. According to Table 2 in [33], the m-th occurrence of word i is classified into level k_{i,m} and assigned weight w_{i,m}. The improved frequency TF_i, the weight of word i in the text, is then calculated by formula (3).…”
Section: Dependency Graph (mentioning)
confidence: 99%
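Formula (3) itself is not reproduced in these excerpts, so the sketch below only illustrates the general scheme: each of a word's n occurrences gets a weight w_{i,m} from its dependency role, and the improved TF_i aggregates them (here, by summation — an assumption). The role names and weight values are placeholders, not the levels or weights from Table 2 of [33]:

```python
# Hypothetical dependency-level weights standing in for Table 2 of [33]:
# occurrences acting as core sentence components (subject, predicate)
# count more than occurrences in peripheral roles.
LEVEL_WEIGHT = {"subject": 1.0, "predicate": 0.9, "object": 0.8, "modifier": 0.5}

def improved_tf(occurrence_roles):
    """Aggregate per-occurrence weights w_{i,m} over a word's n occurrences.

    occurrence_roles: the dependency role of each of the n occurrences of
    word i, as produced by a dependency parse of the text.
    """
    return sum(LEVEL_WEIGHT.get(role, 0.5) for role in occurrence_roles)

# A word appearing 3 times: twice as subject, once as modifier.
print(improved_tf(["subject", "subject", "modifier"]))  # -> 2.5
```

Compared with a plain count (which would return 3 here), the role-weighted frequency lets occurrences in grammatically central positions dominate the feature weight, which is the stated goal of the dependency-based improvement.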