2017
DOI: 10.1002/widm.1236

Anomaly detection by robust statistics

Abstract: Real data often contain anomalous cases, also known as outliers. These may spoil the resulting analysis but they may also contain valuable information. In either case, the ability to detect such anomalies is essential. A useful tool for this purpose is robust statistics, which aims to detect the outliers by first fitting the majority of the data and then flagging data points that deviate from it. We present an overview of several robust methods and the resulting graphical outlier detection tools. We discuss ro…
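The two-step recipe in the abstract, first fit the majority robustly and then flag points that deviate from that fit, can be sketched in the univariate case with the median and the MAD. This is only an illustration of the idea: the function name, the conventional cutoff of 3.0, and the 1.4826 consistency factor are common choices, not taken from the paper.

```python
import statistics

def flag_outliers(data, cutoff=3.0):
    """Fit the bulk of the data via median/MAD, then flag deviants.

    A minimal sketch of the robust two-step idea; cutoff and the
    1.4826 factor (consistency with the std. dev. under normality)
    are conventional choices, not this paper's specific method.
    """
    med = statistics.median(data)
    mad = statistics.median(abs(x - med) for x in data)
    scale = 1.4826 * mad
    return [x for x in data if scale > 0 and abs(x - med) / scale > cutoff]

# One gross outlier barely moves the median/MAD fit, so it is flagged:
print(flag_outliers([10.1, 9.8, 10.0, 10.2, 9.9, 50.0]))  # → [50.0]
```

Because the median and MAD are themselves barely affected by the outlier, the flagged point stands out clearly; a mean/standard-deviation fit would have been pulled toward it.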

Cited by 167 publications (117 citation statements)
References 83 publications
“…When using the z-score for outlier detection, outliers are the data points whose z-score lies outside the range between −2.5 and +2.5 (Rousseeuw & Hubert, 2017). These data are then dropped from the analysis, as suggested by Orr et al. (1991).…”
Section: B. Methods Of Data Analysis
confidence: 99%
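The z-score rule in the citing study above can be sketched as follows. The ±2.5 cutoff is theirs; the function name and the use of the sample standard deviation are assumptions for illustration.

```python
import statistics

def zscore_outliers(data, cutoff=2.5):
    """Flag points whose classical z-score lies outside +/- cutoff.

    Sketch of the rule in the citing study; the 2.5 cutoff is from
    that study, everything else here is an assumed implementation.
    """
    mean = statistics.fmean(data)
    sd = statistics.stdev(data)
    return [x for x in data if abs(x - mean) / sd > cutoff]

# One gross outlier among twenty points exceeds the cutoff:
print(zscore_outliers([10.0] * 19 + [100.0]))  # → [100.0]
```

Note that the mean and standard deviation are themselves inflated by outliers, so this classical rule can mask them; the robust alternatives surveyed by Rousseeuw & Hubert replace them with estimators such as the median and MAD.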
“…In [42], a method is presented for detecting anomalies with a long short-term memory (LSTM) neural network. The mean, median and M-score are often used in statistical tests for the detection of anomalies [43]. The aim of study [44] was to model the structure and functioning of a complex information system that takes operators and users into account [45].…”
Section: Literature Review
confidence: 99%
“…Peter J. Rousseeuw et al. introduced a method to diagnose outliers using the first principal component score and the orthogonal distance of each data point. Regular observations have both a small orthogonal distance and a small PCA score [14]. Heiko Hoffman proposes kernel PCA for novelty detection, a non-linear extension of PCA.…”
Section: Related Work
confidence: 99%
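The two diagnostics named in the excerpt above, the score within the principal-component subspace and the orthogonal distance to it, can be sketched with classical PCA via the SVD. This is an assumed plain-PCA illustration, not the robust PCA variant of Rousseeuw et al.; the function name and the toy data are also assumptions.

```python
import numpy as np

def pca_distances(X, k=1):
    """Score distance within the first k PCs and orthogonal distance
    to the k-dimensional PC subspace.

    Classical (non-robust) PCA; a sketch of the two diagnostics in
    the excerpt above, not the cited authors' exact method.
    """
    Xc = X - X.mean(axis=0)                    # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                     # coordinates in the PC subspace
    proj = scores @ Vt[:k]                     # reconstruction from k PCs
    od = np.linalg.norm(Xc - proj, axis=1)     # orthogonal distance
    sd = np.linalg.norm(scores, axis=1)        # (unscaled) score distance
    return sd, od

# Ten points on the diagonal plus one point far off it:
X = np.array([[i, i] for i in range(10)] + [[5.0, -5.0]])
sd, od = pca_distances(X, k=1)
print(int(np.argmax(od)))  # → 10  (the off-diagonal point has the largest OD)
```

Regular points, lying near the fitted subspace, get a small orthogonal distance; the off-subspace point stands out, matching the excerpt's remark that regular observations have both distances small.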