2022
DOI: 10.14569/ijacsa.2022.0130411
Extended Max-Occurrence with Normalized Non-Occurrence as MONO Term Weighting Modification to Improve Text Classification

Cited by 3 publications (2 citation statements)
References 0 publications
“…Fig. 1 illustrates the MONO TWS designed by [17] separated into seven (7) steps, and visualized by [18][19], which will be further explained in details.…”
Section: Supervised Term Weighting Scheme
Citation type: mentioning
Confidence: 99%
“…Deep learning involves learning representations directly from data, as opposed to traditional methods. Representation learning is a technique employed by machines to learn the relationships of raw, labeled or unlabeled data, which is essential in the classification process [7][8] [9][10]. This, in turn, leads to the creation of a more effective model to aid the deep learning model [11].…”
Section: Artificial
Citation type: mentioning
Confidence: 99%