2024
DOI: 10.1111/exsy.13553
Enhanced text classification through an improved discrete laying chicken algorithm

Fatemeh Daneshfar, Mohammad Javad Aghajani

Abstract: The exponential growth of digital text documents presents a significant challenge for text classification algorithms, as the vast number of words in each document can hinder their efficiency. Feature selection (FS) is a crucial technique that aims to eliminate irrelevant features and enhance classification accuracy. In this study, we propose an improved version of the discrete laying chicken algorithm (IDLCA) that utilizes noun‐based filtering to reduce the number of features and improve text classification performance. […]
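The pre-processing idea the abstract describes, keeping only the nouns of each document so that feature selection operates on a much smaller vocabulary, can be sketched briefly. The following is a minimal illustration under assumed tooling (NLTK's POS tagger and scikit-learn's TF-IDF vectorizer, neither of which the abstract names); it shows only the noun-based filtering step, not the discrete laying chicken metaheuristic that IDLCA applies on top of it.

```python
# Minimal sketch of noun-based filtering as a feature-reduction step for text
# classification. Illustrative only; this is not the authors' IDLCA code.
import nltk
from nltk import pos_tag, word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def keep_nouns(text: str) -> str:
    """Return only the noun tokens (tags NN, NNS, NNP, NNPS) of a document."""
    tokens = word_tokenize(text.lower())
    return " ".join(tok for tok, tag in pos_tag(tokens) if tag.startswith("NN"))

docs = [
    "The exponential growth of digital text documents challenges classifiers.",
    "Feature selection removes irrelevant features and improves accuracy.",
]

# Reduce each document to its nouns, then vectorize the smaller vocabulary.
noun_docs = [keep_nouns(d) for d in docs]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(noun_docs)
print(sorted(vectorizer.get_feature_names_out()))
```

In a full pipeline of this kind, a discrete metaheuristic such as IDLCA would then search over subsets of this already noun-restricted vocabulary to maximize classification accuracy.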
Cited by 5 publications (3 citation statements)
References 41 publications
“…Najy et al (2020) considered a novel and more realistic variant of the uncapacitated hub location problem where both flow-dependent economies of scale and congestion considerations are incorporated into the multiple-allocation version of the problem [17]. Daneshfar et al (2024) proposed an improved version of the discrete laying chicken algorithm (IDLCA) that utilizes noun-based filtering to reduce the number of features and improve text classification performance [18].…”
Section: Literature Review
confidence: 99%
“…According to O'Kelly (1987) and Campbell and O'Kelly (2012), the H-S network can take on quite many configurations, some of which are shown in Figure 1 [5,18]. The simplest H-S network in Figure 1a is also called a star network, in which one hub connects trips and transfers flows for all O-D pairs.…”
Section: Hub-and-Spoke Network (H-S Network)
confidence: 99%
“…Specifically, due to the higher number of samples in the majority class, the model is more likely to learn the features of the majority class during the training process, resulting in the classifier being biased towards the majority class during prediction [6]. Bias toward majority class samples makes the classifier less capable of identifying minority class samples, and may even completely ignore the importance of the minority class, which can lead to misclassification in practical applications [7,8]. Therefore, an in-depth study of imbalanced classification algorithms is crucial to improve the performance and generalization ability of classifiers, especially for minority class samples in practical applications.…”
Section: Introduction
confidence: 99%