2020
DOI: 10.1016/j.engappai.2020.103554
Robust support vector data description for novelty detection with contaminated data

Cited by 21 publications (5 citation statements)
References 24 publications
“…Solutions to one-class classification and novelty detection either estimate the density of the inlier distribution [10,11] or determine a geometric property of the inliers, such as their boundary set [12][13][14][15]. When the inlier distribution is nicely approximated by a low-dimensional linear subspace, [16] proposes to distinguish between inliers and outliers via Principal Component Analysis (PCA).…”
Section: Previous Work
confidence: 99%
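The PCA-based approach this statement refers to can be sketched as follows. This is a minimal illustration, assuming scikit-learn; the synthetic subspace, noise level, and 95% threshold are hypothetical choices for demonstration, not the cited paper's setup. Points whose reconstruction error (distance to the fitted low-dimensional subspace) exceeds a threshold learned on inliers are flagged as novel:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Toy data: inliers lie near a 2-D linear subspace of R^5; outliers do not.
basis = rng.normal(size=(2, 5))
inliers = rng.normal(size=(200, 2)) @ basis + 0.05 * rng.normal(size=(200, 5))
outliers = rng.uniform(-3, 3, size=(10, 5))

# Fit PCA on (assumed clean) inlier data only.
pca = PCA(n_components=2).fit(inliers)

def reconstruction_error(X):
    # Distance from each point to its projection onto the PCA subspace.
    X_proj = pca.inverse_transform(pca.transform(X))
    return np.linalg.norm(X - X_proj, axis=1)

# Threshold at the 95th percentile of inlier reconstruction errors.
threshold = np.quantile(reconstruction_error(inliers), 0.95)
flags = reconstruction_error(outliers) > threshold  # True => flagged as novel
```

Because the outliers are drawn from a full-dimensional distribution, their orthogonal distance to the inlier subspace is typically large, so most are flagged.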
“…If there are also outliers (with a simple shape) among the inliers (with a complex shape), encoding the inlier distribution becomes even more difficult. Nevertheless, some previous works have already explored the possibility of a corrupted training set [14,15,19]. In particular, [14,19] test artificial instances with at most 5% corruption of the training set, and [15] considers ratios of 10%, but with very small numbers of training points.…”
Section: Previous Work
confidence: 99%
“…The one-class support vector machine (OC-SVM) 12 and support vector data description (SVDD) 13,14 are the two most common one-class classification (OCC) algorithms. The support vector machine (SVM) 15 was proposed to solve binary classification problems with small sample sizes and high sample dimensionality.…”
Section: Introduction
confidence: 99%
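A minimal sketch of the OC-SVM mentioned above, assuming scikit-learn's `OneClassSVM`; the RBF kernel parameters and the synthetic data are illustrative assumptions, not values from the cited works. The model is trained on inlier samples only and labels new points +1 (inlier) or -1 (novel):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
train = rng.normal(loc=0.0, scale=1.0, size=(300, 2))  # inlier samples only

# nu upper-bounds the fraction of training points treated as outliers.
oc_svm = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(train)

# Fresh inliers should mostly be labelled +1; distant points mostly -1.
inlier_pred = oc_svm.predict(rng.normal(loc=0.0, size=(50, 2)))
novel_pred = oc_svm.predict(rng.normal(loc=6.0, size=(50, 2)))
```

SVDD learns a minimum-volume hypersphere around the data instead of a separating hyperplane, but with an RBF kernel the two formulations yield closely related decision boundaries.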
“…To reduce the impact of outliers on modeling in single-mode processes, Chen et al. [23] proposed robust-SVDD by introducing a cutoff-distance-based local density for each data sample and an ε-insensitive loss function with negative samples. Wang and Lan [24] used SD outlyingness to assign lower weight values to outliers. For multimode processes, Zhao et al. utilized the weighted mean and standard deviation of each sample's neighbors to standardize the dataset and applied a weighted local standardization (WLS) strategy to wSVDD [25].…”
Section: Introduction
confidence: 99%
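The common idea in the weighting schemes cited above is to down-weight samples in sparse regions before fitting SVDD. The sketch below is a generic k-nearest-neighbor local-density weighting, assumed for illustration only; it is not the cutoff-distance density of [23] or the SD outlyingness of [24], and the choice of k and the weight formula are hypothetical:

```python
import numpy as np

def knn_density_weights(X, k=10):
    """Down-weight likely outliers via distance to the k-th nearest neighbor.

    Generic local-density weighting for illustration, not the exact
    cutoff-distance or SD-outlyingness schemes from the cited papers.
    """
    # Pairwise Euclidean distances between all samples.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Sorted row i starts with the self-distance 0, so column k is the
    # distance from sample i to its k-th nearest neighbor.
    kth = np.sort(d, axis=1)[:, k]
    w = 1.0 / (1.0 + kth)   # dense regions -> weight near 1
    return w / w.max()      # normalize weights into (0, 1]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(size=(100, 2)),          # dense inlier cluster
               rng.uniform(5, 10, size=(5, 2))])   # sparse outliers
w = knn_density_weights(X)
```

In a weighted SVDD, these weights would scale each sample's slack penalty, so sparse-region (likely contaminated) samples have less influence on the fitted hypersphere.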