2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00014
Out-of-distribution Detection and Generation using Soft Brownian Offset Sampling and Autoencoders

Cited by 18 publications (5 citation statements)
References 7 publications
“…For example, if a real-world energy dataset consists only of European energy data, the synthetic households will also reflect the characteristics of European households. This is referred to as the out-of-distribution (OOD) generalization problem [186], a well-known challenge when working with synthetic data [187]. The OOD problem describes a situation where the data distribution of the test dataset is not identical to the data distribution of the training dataset when developing ML models.…”
Section: Fairness · Citation type: mentioning · Confidence: 99%
“…The OOD problem describes a situation where the data distribution of the test dataset is not identical to the data distribution of the training dataset when developing ML models. Synthetic data allow augmenting the training data, thereby reducing the OOD problem [187].…”
Section: Fairness · Citation type: mentioning · Confidence: 99%
“…Some caution is needed with unknown data samples: the model has never processed such a sample, so its behavior on it cannot be predicted. Therefore, novel data samples, and the analysis of whether they belong to the already known data (in-distribution) or not (out-of-distribution) [41,42], are of great interest for model training and the validation of ML models.…”
Section: Novelty · Citation type: mentioning · Confidence: 99%
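The in-distribution vs. out-of-distribution decision mentioned in this excerpt is commonly operationalized through reconstruction error: a model trained only on in-distribution data reconstructs unfamiliar samples poorly. The sketch below illustrates this idea using PCA as a linear stand-in for an autoencoder; the class name, the quantile-based threshold, and the component count are illustrative assumptions, not part of the cited works.

```python
import numpy as np

class ReconstructionOODDetector:
    """Flag samples whose reconstruction error under a linear
    'autoencoder' (PCA projection) exceeds a threshold calibrated
    on the training data."""

    def __init__(self, n_components=2, quantile=0.99):
        self.k = n_components        # latent dimensionality
        self.q = quantile            # training-error quantile used as threshold

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        Xc = X - self.mean_
        # Principal directions via SVD; top-k rows of Vt span the latent space.
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        self.W_ = Vt[: self.k].T     # (n_features, k) encode/decode matrix
        self.tau_ = np.quantile(self._error(X), self.q)
        return self

    def _error(self, X):
        Xc = X - self.mean_
        recon = Xc @ self.W_ @ self.W_.T   # project into and out of latent space
        return np.linalg.norm(Xc - recon, axis=1)

    def predict_ood(self, X):
        # True where the reconstruction error exceeds the calibrated threshold.
        return self._error(X) > self.tau_
```

A nonlinear autoencoder would replace the projection with learned encoder/decoder networks, but the decision rule (threshold on reconstruction error) stays the same.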
“…The responses of the ML model should be consistent: similar inputs should not yield contradictory predictions. This behavior can also be applied to the feature space; for example, a variational autoencoder [41] that has learned a data representation should map similar inputs to similar representations.…”
Section: Inconsistency · Citation type: mentioning · Confidence: 99%
“…Möller et al. [15] propose a novel approach to generating out-of-distribution (OOD) datasets. The generated OOD samples are derived from in-distribution datasets using methods such as Gaussian Hyperspheric Offset and Soft Brownian Offset. The purpose is to enhance OOD detection and to validate the generalization performance of neural networks.…”
Section: Simulation Engines and Acquisition Strategies · Citation type: mentioning · Confidence: 99%
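The offset methods named in this excerpt can be sketched roughly as follows: starting from an in-distribution point, apply small random hyperspheric offsets, biased away from the dataset, until the point is at least some minimum distance from every training sample. This is a simplified illustration of the idea only, not the implementation of Möller et al.; the `step`, `softness`, and `max_iter` parameters are assumptions chosen for the sketch.

```python
import numpy as np

def gaussian_hyperspheric_offset(x, d, rng):
    """Offset point x by distance d in a uniformly random direction."""
    v = rng.normal(size=x.shape)
    return x + d * v / np.linalg.norm(v)

def soft_brownian_offset(X, d_min, step=0.1, softness=0.3, max_iter=1000, rng=None):
    """Generate one synthetic OOD sample from in-distribution data X:
    random-walk a point away from X until its nearest-neighbor distance
    reaches d_min. `softness` blends pure noise with a drift term that
    points away from the nearest in-distribution sample."""
    rng = np.random.default_rng() if rng is None else rng
    x = X[rng.integers(len(X))].astype(float)   # start at a random data point
    for _ in range(max_iter):
        diffs = X - x
        dists = np.linalg.norm(diffs, axis=1)
        if dists.min() >= d_min:                # far enough from all of X: done
            break
        away = -diffs[dists.argmin()]           # drift away from nearest neighbor
        n = np.linalg.norm(away)
        away = away / n if n > 0 else rng.normal(size=x.shape)
        noise = rng.normal(size=x.shape)
        noise /= np.linalg.norm(noise)
        direction = (1 - softness) * away + softness * noise
        x = x + step * direction / np.linalg.norm(direction)
    return x
```

With `softness=1.0` the walk degenerates to an unbiased Brownian-style walk; smaller values push the sample outward more directly, trading diversity of the generated OOD points for speed of escape.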