2024
DOI: 10.3390/s24072199
A Novel Framework Based on Deep Learning Architecture for Continuous Human Activity Recognition with Inertial Sensors

Vladimiro Suglia,
Lucia Palazzo,
Vitoantonio Bevilacqua
et al.

Abstract: Frameworks for human activity recognition (HAR) can be applied in the clinical environment for monitoring patients’ motor and functional abilities either remotely or within a rehabilitation program. Deep Learning (DL) models can be exploited to perform HAR by means of raw data, thus avoiding time-demanding feature engineering operations. Most works targeting HAR with DL-based architectures have tested the workflow performance on data related to a separate execution of the tasks. Hence, a paucity in the literat…

Cited by 1 publication (1 citation statement)
References 73 publications
“…It is essential to highlight that the k-CV approach can require considerable computational resources, particularly with large datasets or when using high k values. Utilizing the k-CV technique aims to guarantee a just and impartial assessment of the model [47]. In our investigation, we opted for a 5-fold cross-validation (k = 5) to find a compromise between computational efficiency and accurate performance estimation.…”
Section: Cross-validation
confidence: 99%
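The citing passage above describes 5-fold cross-validation (k = 5) as a compromise between computational cost and reliable performance estimation. As a minimal sketch of how such a k-fold split can be generated, the following standard-library-only function yields train/test index pairs; the function name and the choice of a plain index generator (rather than a library utility such as scikit-learn's `KFold`) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of k-fold cross-validation index splitting (k = 5).
# Standard library only; the model training and scoring steps that would
# consume these indices are omitted.

def k_fold_indices(n_samples, k=5):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.

    Each sample lands in exactly one test fold, so every observation is
    used for evaluation once and for training k - 1 times.
    """
    # Distribute samples as evenly as possible: the first
    # n_samples % k folds get one extra sample.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    indices = list(range(n_samples))
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    for i in range(k):
        test_idx = folds[i]
        # Training set is the concatenation of all other folds.
        train_idx = [j for f, fold in enumerate(folds) if f != i
                     for j in fold]
        yield train_idx, test_idx

# With n_samples = 10 and k = 5, this produces 5 disjoint test folds
# of 2 samples each, covering the whole dataset exactly once.
splits = list(k_fold_indices(10, k=5))
```

Higher k values reduce the pessimistic bias of the performance estimate but multiply the number of training runs, which is the computational trade-off the quoted passage refers to.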