Encyclopedia of Machine Learning 2011
DOI: 10.1007/978-0-387-30164-8_251
Empirical Risk Minimization

Cited by 200 publications (103 citation statements); references 0 publications.
“…In this part, we aim to study different factors influencing model performance: data amount, different augmentation strategies (e.g., random shift, mixup (Zhang et al., 2018)), the age correction technique (see Section 3.2), and the regressor (e.g., Multi‐Layer Perceptron (MLP), SVR). Table 3 shows the comparison results.…”
Section: Results
confidence: 99%
“…Finally, we employed different data augmentation techniques to alleviate the overfitting problem. Concretely, we randomly shifted a patch by $t \in \{-1, 0, 1\}$ voxels in each dimension (denoted as the random shift technique) and then applied mixup data augmentation (Zhang et al., 2018).…”
Section: Methods
confidence: 99%
“…Overfitting is a major challenge in DL models that can degrade generalization performance. To minimize overfitting, various techniques can be employed, including data augmentation [38], which transforms existing data in various ways; adversarial training [39], which employs adversarial examples during the training process; and dropout [40], which prevents the model from over‐relying on particular neurons.…”
Section: Results
confidence: 99%
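Of the techniques listed in the statement above, dropout is simple enough to sketch directly. The following is a minimal NumPy illustration of inverted dropout (the common formulation), not the implementation used by the cited work.

```python
import numpy as np

def dropout(activations, p=0.5, training=True, rng=None):
    """Inverted dropout: during training, zero each unit with probability p
    and rescale survivors by 1/(1-p) so the expected activation is
    unchanged; at test time, return the activations untouched."""
    if not training or p == 0.0:
        return activations
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= p  # True = unit is kept
    return activations * mask / (1.0 - p)
```

The rescaling during training is what lets inference skip any correction factor.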
“…MixUp [22] is a widely‐used augmentation technique that linearly combines two images at a randomly selected ratio. We applied MixUp to road / non‐road data to generate new X_R, X_C, and X_L according to randomly selected ratios of road data, r_R, r_C, and r_L, respectively.…”
Section: Methods
confidence: 99%
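The linear combination that MixUp performs, as described in the statement above, can be sketched in a few lines. This assumes the mixing ratio is drawn from a Beta(alpha, alpha) distribution, as in Zhang et al. (2018); the cited road/non-road pipeline may select its ratios differently.

```python
import numpy as np

def mixup(img_a, img_b, alpha=0.2, rng=None):
    """MixUp (Zhang et al., 2018): blend two images so img_a contributes
    a fraction lam of every pixel and img_b the remaining 1 - lam,
    with lam drawn from Beta(alpha, alpha). Returns the mix and lam."""
    rng = rng or np.random.default_rng()
    lam = float(rng.beta(alpha, alpha))
    return lam * img_a + (1.0 - lam) * img_b, lam
```

The returned ratio `lam` is typically reused to mix the two labels with the same weights.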
“…Several augmentation methods that fuse multiple data samples into a single training datum have been proposed [20–22]. Specifically, these techniques create a new single image by combining multiple images.…”
Section: Related Work
confidence: 99%