2020
DOI: 10.1007/s10710-020-09375-4

Multi-objective genetic programming for manifold learning: balancing quality and dimensionality

Abstract: Manifold learning techniques have become increasingly valuable as data continues to grow in size. By discovering a lower-dimensional representation (embedding) of the structure of a dataset, manifold learning algorithms can substantially reduce the dimensionality of a dataset while preserving as much information as possible. However, state-of-the-art manifold learning algorithms are opaque in how they perform this transformation. Understanding the way in which the embedding relates to the original high-dimensi…

Cited by 13 publications (7 citation statements)
References 40 publications
“…This genetic algorithm is based on the idea that neighbors in the new low-dimensional space should have a similar ordering to that in the original high-dimensional space. The algorithm, called GP-MaL-MO (Genetic Programming for Manifold Learning using a Multi-objective Approach) [21], is an extension of the GP-MaL algorithm [23]. It uses a multi-objective fitness function to build a Pareto front of solutions.…”
Section: B. GP-MaL-MO
confidence: 99%
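To make the neighbor-ordering idea concrete, the sketch below scores how well an embedding preserves the distance ordering of each point's nearest neighbors. The function name, the choice of k, and the pairwise ordering test are illustrative assumptions, not the exact GP-MaL-MO fitness of [21].

```python
import numpy as np

def neighbor_ordering_fitness(X_high, X_low, k=10):
    """Illustrative sketch (not the exact GP-MaL-MO measure): the
    fraction of nearest-neighbor pairs whose relative ordering by
    distance is the same in the embedding as in the original space."""
    n = X_high.shape[0]
    kept, total = 0, 0
    for i in range(n):
        d_high = np.linalg.norm(X_high - X_high[i], axis=1)
        d_low = np.linalg.norm(X_low - X_low[i], axis=1)
        nn = np.argsort(d_high)[1:k + 1]  # k nearest original neighbors
        for a in range(k):
            for b in range(a + 1, k):
                # nn[a] is closer than nn[b] in the original space;
                # count the pair if that still holds in the embedding.
                kept += d_low[nn[a]] < d_low[nn[b]]
                total += 1
    return kept / total  # 1.0 = neighbor ordering fully preserved
```

In a multi-objective setting, such a quality score would be traded off against the embedding dimensionality when building the Pareto front.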
“…The most common approach to the dimension-reduction problem is to use classical techniques such as PCA, when there are so many features that it is simply impossible to work with all of them, or t-SNE [18], when intuitive conclusions about a data distribution must be drawn by visualizing the feature space. On the other hand, there are a few works on transparent dimension reduction [19], [20], [21], [22]. They use an evolutionary approach to reduce the dimensionality of the data and obtain human-interpretable features.…”
Section: Introduction
confidence: 99%
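For reference, the two classical baselines mentioned in the statement can be run in a few lines with scikit-learn; the dataset and parameter values here are arbitrary illustrations.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X = load_digits().data  # 1797 samples, 64 features

# Linear reduction: fast, and gives an explicit linear mapping.
X_pca = PCA(n_components=2).fit_transform(X)

# Non-linear reduction for visualization: effective, but it embeds the
# training points directly rather than producing a reusable mapping.
X_tsne = TSNE(n_components=2, perplexity=30).fit_transform(X)

print(X_pca.shape, X_tsne.shape)  # (1797, 2) (1797, 2)
```

The lack of an explicit, inspectable mapping in methods like t-SNE is the gap the transparent, evolutionary approaches [19]-[22] aim to fill.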
“…These are commonly used as activation functions in neural networks, adding the capacity for non-linear learning. Existing work on GP for NLDR has also used these functions [12,14].…”
Section: GP Representation of Encoder
confidence: 99%
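One way such activation functions enter a GP function set is as unary primitives alongside the arithmetic operators, as sketched below with DEAP; the function set and arities are assumptions for illustration, not the exact sets used in [12,14].

```python
import math
import operator
from deap import gp

# Hypothetical function set for an encoder tree over 8 input features;
# tanh and sigmoid primitives give evolved mappings non-linear capacity.
pset = gp.PrimitiveSet("encoder", 8)
pset.addPrimitive(operator.add, 2)
pset.addPrimitive(operator.sub, 2)
pset.addPrimitive(operator.mul, 2)
pset.addPrimitive(math.tanh, 1)
pset.addPrimitive(lambda x: 1.0 / (1.0 + math.exp(-x)), 1, name="sigmoid")

# Generate and compile one random tree to show the mapping is callable.
expr = gp.PrimitiveTree(gp.genFull(pset, min_=1, max_=2))
encode = gp.compile(expr, pset)
print(expr, "->", encode(*[0.1 * i for i in range(8)]))
```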
“…GP has inherent potential for interpretability, because it evolves solutions that combine user-selected terminals and functions. GP has recently been demonstrated to be a capable NLDR technique which produces functional mappings [12,14]. Some of these approaches have used a multi-tree GP representation with a custom fitness function for evaluating embedding quality [12-14, 22, 24], and other work has looked into GP specifically for autoencoding [16,19].…”
Section: Introduction
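A minimal sketch of the multi-tree representation mentioned above: one individual holds d trees, and each tree maps the original features to one dimension of the embedding. The class and the hand-written stand-in trees are hypothetical, not the representation of any one cited paper.

```python
import numpy as np

class MultiTreeIndividual:
    """Hypothetical multi-tree GP individual: each tree is a callable
    mapping one feature vector to one embedding dimension."""

    def __init__(self, trees):
        self.trees = trees  # list of d evolved programs

    def embed(self, X):
        # Apply every tree to every sample: (n, features) -> (n, d).
        return np.array([[tree(x) for tree in self.trees] for x in X])

# Two hand-written "trees" standing in for evolved programs.
individual = MultiTreeIndividual([
    lambda x: np.tanh(x[0] - x[3]),  # embedding dimension 1
    lambda x: x[1] * x[2],           # embedding dimension 2
])
X = np.random.rand(5, 4)
print(individual.embed(X).shape)  # (5, 2)
```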
“…Many EC-based techniques have been proposed for multi-objective learning, such as the Non-dominated Sorting Genetic Algorithm II (NSGA-II) [19], the Strength Pareto Evolutionary Algorithm 2 (SPEA2) [20] and the Multi-objective Evolutionary Algorithm Based on Decomposition (MOEA/D) [21]. The use of these multi-objective techniques has been investigated in a variety of GP methods [22], [23], [24], [25], [26]. Bleuler et al. [22] proposed a multi-objective GP method using SPEA2 to tackle the bloat issue in GP, where the program/solution size and the training accuracy were treated as two independent objectives.…”
Section: Genetic Programming and the Multi-objective Variants
confidence: 99%
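All of the listed algorithms rest on Pareto dominance. To illustrate the two-objective setup of Bleuler et al. [22] (minimize program size, minimize training error), the sketch below applies a plain non-dominated filter; NSGA-II and SPEA2 add ranking and density estimation on top of this dominance test, and the example values are invented.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated (size, error) tuples from a population."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# (program size, training error) for five hypothetical GP programs.
pop = [(12, 0.30), (40, 0.10), (25, 0.10), (8, 0.45), (60, 0.09)]
print(pareto_front(pop))  # [(12, 0.3), (25, 0.1), (8, 0.45), (60, 0.09)]
```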