2021
DOI: 10.3233/ica-210664
A self-adaptive multi-objective feature selection approach for classification problems

Abstract: In classification tasks, feature selection (FS) can reduce the data dimensionality and may also improve classification accuracy, both of which are commonly treated as the two objectives in FS problems. Many meta-heuristic algorithms have been applied to solve the FS problems and they perform satisfactorily when the problem is relatively simple. However, once the dimensionality of the datasets grows, their performance drops dramatically. This paper proposes a self-adaptive multi-objective genetic algorithm (SaM…
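The two objectives named in the abstract (minimize the number of selected features, maximize classification accuracy) can be illustrated with a wrapper-style fitness evaluation over a bit-mask of features. This is a minimal sketch under my own assumptions — the `evaluate` helper, the leave-one-out 1-NN wrapper, and the toy data are illustrative, not the paper's SaM algorithm:

```python
def evaluate(mask, X, y):
    """Two objectives for a candidate feature subset, both minimized:
    (leave-one-out 1-NN error rate, number of selected features)."""
    idx = [i for i, bit in enumerate(mask) if bit]
    if not idx:
        return 1.0, 0  # empty subset: worst possible error

    def dist(a, b):
        # squared Euclidean distance restricted to the selected features
        return sum((a[i] - b[i]) ** 2 for i in idx)

    errors = 0
    for i, (xi, yi) in enumerate(zip(X, y)):
        # leave-one-out 1-NN prediction on the selected features
        j = min((k for k in range(len(X)) if k != i),
                key=lambda k: dist(xi, X[k]))
        errors += (y[j] != yi)
    return errors / len(X), len(idx)


# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [(0, 5), (0, 1), (1, 9), (1, 2)]
y = [0, 0, 1, 1]
print(evaluate([1, 0], X, y))  # informative feature only
print(evaluate([0, 1], X, y))  # noisy feature only
```

A genetic algorithm would evolve a population of such masks and keep the non-dominated trade-offs between the two returned objectives.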

Cited by 21 publications (10 citation statements)
References 89 publications
“…The mathematical transcription of this multi-objective optimization problem is as follows [28, 78-82]: the solutions to this multi-objective optimization problem are generally not unique and not optimal, because each represents a compromise between the two objectives to be achieved. Indeed, excessive reduction of the number of features could decrease the accuracy.…”
Section: Mathematical Modeling of the Wrapper Feature Selection in a …
confidence: 99%
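The non-uniqueness noted in this excerpt is the usual Pareto situation: with both objectives minimized (error rate, feature count), several feature subsets can be mutually non-dominated compromises. A minimal sketch of the dominance test — the function names are my own, not from the cited papers:

```python
def dominates(a, b):
    """True if a is no worse than b on every objective and strictly
    better on at least one (both objectives minimized here:
    classification error and number of selected features)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))


def pareto_front(points):
    """Keep only the non-dominated compromise solutions."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]


# (error rate, feature count) pairs; the last one is dominated.
candidates = [(0.10, 5), (0.20, 3), (0.05, 8), (0.25, 6)]
print(pareto_front(candidates))
```

None of the three surviving points beats another on both objectives at once, which is exactly why the solution is "not unique and not optimal" in the single-objective sense.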
“…This will prevent the model from growing excessively while still maintaining accuracy. Concerning space and time complexity, we will use unsupervised learning techniques combined with dimensionality reduction and feature selection techniques [59] to determine which and how many types of fixtures exist in a given network, according to their characteristics. Then, we will use their type instead of their unique identifier to identify them.…”
Section: Efficiency Pattern Classification on Optimal Dimming Range
confidence: 99%
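The cluster-fixtures-into-types idea in this excerpt can be sketched with a tiny k-means pass over fixture feature vectors. Everything here is an illustrative assumption, not the authors' pipeline: `kmeans`, `fixture_type`, and the deterministic initialization from the first k points are my own choices (a real system would seed more carefully, e.g. k-means++):

```python
def kmeans(points, k, iters=20):
    """Group fixture feature vectors into k type-centroids so each
    fixture can be addressed by a type id instead of a unique id."""
    # Deterministic initialization: first k points as centroids.
    centroids = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            groups[j].append(p)
        # recompute centroids; keep the old one if a group went empty
        centroids = [tuple(sum(col) / len(g) for col in zip(*g))
                     if g else centroids[j]
                     for j, g in enumerate(groups)]
    return centroids


def fixture_type(p, centroids):
    """Type id of a fixture: index of its nearest centroid."""
    return min(range(len(centroids)),
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(p, centroids[c])))


fixtures = [(0.0, 0.0), (5.0, 5.0), (0.1, 0.0), (5.1, 5.0)]
centers = kmeans(fixtures, 2)
print([fixture_type(f, centers) for f in fixtures])
```

Replacing each fixture's unique identifier with its type id reduces the identifier space from one entry per fixture to one per cluster, which is the space-complexity win the excerpt describes.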
“…Visual data are brain structural magnetic resonance imaging (sMRI) or brain functional magnetic resonance imaging (fMRI) scans. Training an ML algorithm for ASD diagnosis on numerical or visual data is ordinarily possible by determining the distinguishing features or by using an automated feature extraction technique [43-45]. These features may be structural gray matter (GM) values acquired from cortical thickness (CT) [46-48], GM density (GMd) values from voxel-based morphometry (VBM) [49], diffusion-weighted imaging (DWI) [fractional anisotropy (FA)] microstructural changes in white matter (WM) [50], connectivity matrices [51], parameters from network analysis [52-54], and resting/task-state fMRI information [55,56].…”
Section: Introduction
confidence: 99%