2021
DOI: 10.24003/emitter.v9i1.571
Exploring the Time-efficient Evolutionary-based Feature Selection Algorithms for Speech Data under Stressful Work Condition

Abstract: Advances in Machine Learning (ML) aim at faster computation and lower resource use, while the curse of dimensionality burdens both computation time and resources. This paper describes the benefits of Feature Selection Algorithms (FSA) for speech data recorded under workload stress. FSA reduces both the data dimension and the computation time while retaining the speech information. We chose to use the robust Evolutionary Algorithm, Harmony Search, Principal Compone…

Cited by 2 publications (2 citation statements)
References 24 publications
“…, y_n, denotes response, and B denotes the bagging repetition. Samples with replacement contents are X_b, Y_b, whereas the amount of data is denoted by n [39]. The regression tree is denoted by f_b on X_b, Y_b, and after the training process, x' denotes the prediction results.…”
Section: Classification Methods
confidence: 99%
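The quoted passage describes bootstrap aggregation (bagging) of regression trees: draw B samples with replacement from the n data points, fit a regression tree f_b on each bootstrap sample (X_b, Y_b), and average the trees' predictions at a query point x'. A minimal sketch in Python, using a depth-1 regression stump as a self-contained stand-in for the full regression tree (all function and variable names here are illustrative, not from the cited paper):

```python
import numpy as np

def fit_stump(X, y):
    """Depth-1 regression tree (stump) on a 1-D feature: pick the split
    threshold minimising the squared error. Stand-in for f_b."""
    best = (np.inf, None, y.mean(), y.mean())  # (sse, threshold, left mean, right mean)
    for t in np.unique(X):
        left, right = y[X <= t], y[X > t]
        if len(left) == 0 or len(right) == 0:
            continue  # skip degenerate splits
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lmean, rmean = best
    return lambda x: lmean if (t is not None and x <= t) else rmean

def bagged_predict(X, y, x_prime, B=25, seed=0):
    """Draw B bootstrap samples (X_b, Y_b) with replacement, train one
    stump f_b per sample, and average the predictions at x'."""
    rng = np.random.default_rng(seed)
    n = len(y)  # amount of data n
    trees = []
    for _ in range(B):  # B bagging repetitions
        idx = rng.integers(0, n, size=n)  # sample indices with replacement
        trees.append(fit_stump(X[idx], y[idx]))
    return np.mean([f_b(x_prime) for f_b in trees])
```

Averaging over bootstrap replicates is what reduces the variance of the individual trees; a full random-forest implementation would additionally subsample features at each split.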
“…Machine Learning (ML) improvements aim to reduce computation time and resources. Due to their high performance and outstanding semiotic pattern identification, SVMs have become the most common classification algorithms [23]. In short, our experiment frames its contribution as showing that even an excellent Deep Learning model can only perform as well as its dataset allows.…”
Section: Originality
confidence: 99%