2016
DOI: 10.1007/978-3-319-45823-6_80
REMEDA: Random Embedding EDA for Optimising Functions with Intrinsic Dimension


Cited by 12 publications (26 citation statements) · References 13 publications
“…Besides building an explicit model of the objective function, as is the case with meta-modeling, approximation can be built into the problem representation [84], or be achieved by means of dimensionality reduction through transformation [85][86][87][88][89][90][91] or by finding the intrinsic dimensions of a problem [92]. Wang et al [93] combined the benefits of meta-models and dimensionality reduction by using auto-encoders to find lower-dimensional features of graph embedding problems and used them to construct a surrogate model approximating the robustness value of large-scale graph networks.…”
Section: Approximation and Surrogate Modeling
confidence: 99%
“…However, no technique has been tailored for sparse problems before. It is noteworthy that LOPs with a low intrinsic dimensionality [38] are similar to sparse LOPs to some extent, but the optimal solutions of these two types of problems are significantly different. For sparse LOPs, only a small portion of decision variables contribute to the optimal solutions, and the noncontributing decision variables should be set to 0.…”
Section: Sparse MOPs in Large-Scale Optimization Problems
confidence: 99%
“…MA-SW-Chain [15] is a hybrid algorithm that … In doing so, the method will mitigate the curse of dimensionality. A different way to exploit a hidden low-dimensional structure was proposed in [17], suggesting that a notion of intrinsic dimension of a function can replace the ambient dimension of its inputs, provided that the function has a special structure.…”
Section: Related Work in Large-Scale Black-Box Optimisation
confidence: 99%
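The random-embedding idea referenced above can be illustrated with a toy sketch (this is an assumption-laden illustration, not the paper's REMEDA implementation): a function with a large ambient dimension D but small intrinsic dimension is optimised through a random Gaussian embedding matrix `A`, so that a simple Gaussian EDA only has to search the low-dimensional embedded space. The function `f`, the dimensions, and the population/elite sizes below are all hypothetical choices for demonstration.

```python
import numpy as np

# Toy objective: ambient dimension D = 100, but intrinsic dimension 2 --
# only the first two coordinates of x affect the value.
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

rng = np.random.default_rng(0)
D, d = 100, 4                      # embedded search dimension d >= intrinsic dimension
A = rng.normal(size=(D, d))        # random Gaussian embedding: x = A @ y

# Plain Gaussian EDA (truncation selection) in the low-dimensional y-space.
mean, std = np.zeros(d), np.ones(d)
for _ in range(200):
    pop = mean + std * rng.normal(size=(50, d))        # sample population
    fitness = np.array([f(A @ y) for y in pop])        # evaluate in ambient space
    elite = pop[np.argsort(fitness)[:10]]              # keep the 10 best
    mean = elite.mean(axis=0)                          # refit the Gaussian model
    std = elite.std(axis=0) + 1e-12

best = f(A @ mean)
```

Because the objective only varies along a 2-dimensional subspace, a random d-dimensional embedding with d at least the intrinsic dimension almost surely intersects the optimum's subspace, so the search cost depends on d rather than on D.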