AIAA SCITECH 2022 Forum (2022)
DOI: 10.2514/6.2022-2042

Implementation of a Learning-Based Explicit Reference Governor for Constrained Control of a UAV

Cited by 6 publications (12 citation statements); References 20 publications
“…The regularization loss, on the other hand, is given by the Kullback-Leibler divergence of the latent representation and a standard normal distribution. This encourages the latent representation to be smooth and continuous, and moreover aims at having latent variables represent independent generative factors [22,25]. For a single data point this loss can be expressed as…”
Section: VAEs (mentioning)
Confidence: 99%
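The excerpt is cut off before the equation itself. For orientation, the standard closed-form of this per-sample KL term, assuming a diagonal-Gaussian encoder $q_\phi(z \mid x) = \mathcal{N}(\mu, \mathrm{diag}(\sigma^2))$ and a standard-normal prior (the cited paper's own notation may differ), is

$\mathcal{D}_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, \mathcal{N}(0, I)\big) = \tfrac{1}{2} \sum_{i=1}^{N} \left( \mu_i^2 + \sigma_i^2 - \log \sigma_i^2 - 1 \right),$

where the sum runs over the N latent variables.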
“…where i runs over the N latent variables. The hyperparameter β in equation (5) controls the impact of regularization on the overall optimization objective, regulating the trade-off between the effective encoding capacity of the latent space and the statistical independence of individual latent variables in the learned representation [25].…”
Section: VAEs (mentioning)
Confidence: 99%
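Equation (5) is not reproduced in the snippet. A common form of the β-weighted objective being described, following the standard β-VAE formulation (the exact expression in the cited work may differ), is

$\mathcal{L}(x) = \mathcal{L}_{\mathrm{rec}}(x, \hat{x}) + \beta \, \mathcal{D}_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, \mathcal{N}(0, I)\big),$

with β = 1 recovering the ordinary VAE and β > 1 trading effective encoding capacity for greater statistical independence among the latent variables.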
“…The minimization of this loss function ensures that the output will be as close as possible to the input, while the values of the latent space closely follow the chosen distribution. In addition, one can add a hyperparameter β [45] to control the importance of each term in the loss. The loss function can be written as:…”
Section: ConvAE Loss Function and Evaluation Metric (mentioning)
Confidence: 99%
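The loss function itself is truncated in the snippet. As a minimal illustrative sketch (in Python with PyTorch; the function name, the MSE reconstruction term, and the argument layout are assumptions, not taken from the cited paper), such a β-weighted reconstruction-plus-KL loss is typically computed as:

import torch
import torch.nn.functional as F

def beta_weighted_vae_loss(x, x_hat, mu, log_var, beta=1.0):
    # Illustrative sketch only: reconstruction term plus beta-weighted KL term.
    # Reconstruction: MSE is one common choice; the cited work may use another.
    recon = F.mse_loss(x_hat, x, reduction="sum")
    # Closed-form KL divergence between N(mu, sigma^2) and N(0, I),
    # summed over latent dimensions and the batch.
    kl = 0.5 * torch.sum(mu.pow(2) + log_var.exp() - log_var - 1.0)
    return recon + beta * kl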