2021
DOI: 10.48550/arxiv.2108.04742
Preprint

The information of attribute uncertainties: what convolutional neural networks can learn about errors in input data

Natália V. N. Rodrigues,
L. Raul Abramo,
Nina S. Hirata

Abstract: Errors in measurements are key to weighting the value of data, but are often neglected in Machine Learning (ML). We show how Convolutional Neural Networks (CNNs) are able to learn about the context and patterns of signal and noise, leading to improvements in the performance of classification methods. We construct a model whereby two classes of objects follow an underlying Gaussian distribution, and where the features (the input data) have varying, but known, levels of noise. This model mimics the nature of sci…
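
To make the setup described in the abstract concrete, the following is a minimal sketch, assuming one-dimensional feature vectors per object: it draws two Gaussian classes, adds noise with known, varying per-feature amplitudes, and stacks the noisy values with their errors so a classifier such as a CNN could take both as input channels. The function name make_mock_data, the class means, the noise range, and the channel-stacking scheme are illustrative assumptions, not the authors' actual configuration.

# Illustrative sketch only -- the class means, noise range, and the
# value+error channel stacking below are assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def make_mock_data(n_per_class=1000, n_features=32):
    """Return (X, y): X has shape (N, n_features, 2), where channel 0 holds the
    noisy feature values and channel 1 the known per-feature noise levels."""
    X_parts, y_parts = [], []
    for label, mean in enumerate([-0.5, 0.5]):                 # two Gaussian classes
        clean = rng.normal(mean, 1.0, size=(n_per_class, n_features))
        sigma = rng.uniform(0.1, 1.0, size=(n_per_class, n_features))  # known, varying errors
        noisy = clean + rng.normal(0.0, sigma)                 # heteroscedastic noise
        X_parts.append(np.stack([noisy, sigma], axis=-1))      # value + uncertainty channels
        y_parts.append(np.full(n_per_class, label))
    return np.concatenate(X_parts), np.concatenate(y_parts)

X, y = make_mock_data()
print(X.shape, y.shape)   # (2000, 32, 2) (2000,)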

Cited by 1 publication (1 citation statement)
References 13 publications
“…One often overlooked source of uncertainty is the model's input (Tzelepis et al., 2017), as features and pixel values can be noisy, depending on the data source. Uncertainty in the input is an underexplored research area (Rodrigues et al., 2021; Hüllermeier, 2014; Depeweg et al., 2018), with most works about uncertainty estimation being about output uncertainty (Hüllermeier & Waegeman, 2021). Accounting for the uncertainty in the input can improve the final prediction, as well as its uncertainty.…”
Section: Introduction (mentioning)
confidence: 99%