2019
DOI: 10.48550/arxiv.1904.02666
Preprint

Subject Cross Validation in Human Activity Recognition

Abstract: K-fold Cross Validation is commonly used to evaluate classifiers and tune their hyperparameters. However, it assumes that data points are Independent and Identically Distributed (i.i.d.) so that samples used in the training and test sets can be selected randomly and uniformly. In Human Activity Recognition datasets, we note that the samples produced by the same subjects are likely to be correlated due to diverse factors. Hence, k-fold cross validation may overestimate the performance of activity recognizers, i…
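
The abstract's core point, that randomly shuffled k-fold CV lets windows from the same subject leak into both training and test folds, can be illustrated with scikit-learn's group-aware splitters. The sketch below is not from the paper; the synthetic data, the subject-dependent feature offsets, and the classifier are assumptions chosen only to show how the two protocols differ.

```python
# Minimal sketch (assumptions, not the paper's code): contrast standard k-fold CV
# with subject-wise CV on synthetic HAR-style data in which class labels correlate
# with subject identity, mimicking subject-level correlation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_subjects, windows_per_subject, n_features = 10, 100, 20
subjects = np.repeat(np.arange(n_subjects), windows_per_subject)

# Each subject gets a characteristic feature offset, so windows from the same
# subject are correlated; labels are tied to the subject purely for illustration.
X = rng.normal(size=(n_subjects * windows_per_subject, n_features))
X += 2.0 * rng.normal(size=(n_subjects, n_features))[subjects]
y = rng.integers(0, 4, size=n_subjects)[subjects]  # 4 hypothetical activity classes

clf = RandomForestClassifier(n_estimators=50, random_state=0)

# Standard k-fold: windows of one subject end up in both train and test folds.
kfold_acc = cross_val_score(
    clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0)).mean()

# Subject-wise CV: every window of a held-out subject stays in the test fold.
subject_acc = cross_val_score(
    clf, X, y, cv=GroupKFold(n_splits=10), groups=subjects).mean()

print(f"10-fold CV accuracy:      {kfold_acc:.3f}")
print(f"subject-wise CV accuracy: {subject_acc:.3f}")
```

On data constructed this way, the k-fold estimate is inflated because the model can recognize the subject rather than the activity, while the subject-wise estimate exposes that failure.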

Cited by 6 publications (7 citation statements)
References 8 publications

“…They evaluate classification outcomes using 10-fold CV and LOUOCV, demonstrating that only the latter method is suitable for classifying unseen user data. Similarly, in [59], the authors explore the impact of the subject CV on the performance of human activity recognition. Their findings indicate that k-fold CV tends to overestimate system performance by approximately 16% when overlapping windows are utilized.…”
Section: Evaluation Methods (mentioning, confidence: 99%)
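
The LOUOCV protocol referred to in this excerpt (leave-one-user-out cross validation) maps onto scikit-learn's LeaveOneGroupOut splitter when the group labels are user IDs. The arrays, the number of users, and the classifier below are placeholder assumptions, not material from the cited study.

```python
# Sketch of Leave-One-User-Out CV (LOUOCV): each fold holds out every sample
# belonging to one user. X, y, and the user IDs are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))        # 300 windows, 12 features (illustrative)
y = rng.integers(0, 3, size=300)      # 3 activity classes
users = rng.integers(0, 6, size=300)  # 6 users; one is held out per fold

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=LeaveOneGroupOut(), groups=users)
print("per-user accuracies:", np.round(scores, 3))
```
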
“…Taking into account the evaluation value of the prediction result, such as the correct answer rate of the output classification and the regression value, we examine whether the model has actual versatility. Typical cross validation methods include the holdout method and the k-fold method [12,13]. The k-fold method is a cross validation method that divides the training data into several (k) pieces and repeats model construction and verification for the number of divided data (i.e., k times).…”
Section: Evaluation Of Predictive Models and Predictive Performance (mentioning, confidence: 99%)
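
The k-fold procedure described in this excerpt (divide the training data into k pieces, then build and verify the model k times, each time holding out a different piece) can be written out explicitly. The data and model below are assumptions for illustration only.

```python
# Explicit k-fold loop: partition the data into k folds, train on k-1 folds and
# validate on the remaining one, repeating k times. Data and model are synthetic.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, size=200)

k = 5
scores = []
for train_idx, test_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
    model = DecisionTreeClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))

print(f"mean accuracy over {k} folds: {np.mean(scores):.3f}")
```
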
“…Moreover, none of the existing studies reported perfect or near-perfect accuracy, which is important for such sensitive usage. In fact, the adoption of cross-validation alone to measure the performance of the models of almost all the studies [22]- [24], raise serious concern as to whether they truly reflect the models' realistic performances [35]. Additionally, the accuracy of HAR solution for Salat might hamper if a person performs any extra activity in Salat, that does not nullify prayer [36] as shown in Figure 1.…”
Section: Motivations Behind Our Study (mentioning, confidence: 99%)