2017
DOI: 10.1186/s13673-017-0097-2
Feature extraction for robust physical activity recognition

Abstract: Introduction. With information obtained from sensors, computer-based systems can act more intelligently by adapting their behavior to context conditions. These days, thanks to the development of multi-sensor networks, related research areas have grown rapidly. Among those areas, Human Activity Recognition (HAR) based on wearable sensors (accelerometer, gyroscope, magnetometer, etc.) has recently received much attention due to its large number of promising applications. One of the most interestin…

Cited by 56 publications (41 citation statements) · References 29 publications
“…In terms of data processing, additional features can be added to the pool of those considered (e.g. "jerk" [41] or wavelet-based [19] features for wearable sensors, or additional representation domains for the radar data [42]), and additional feature selection methods and metrics for information fusion investigated. The application of deep learning methods may also be considered, in particular the challenge of using deep networks with small amount of experimental data available, for example through transfer learning approaches or through the generation of suitable simulation data.…”
Section: Discussion (mentioning)
confidence: 99%
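The "jerk" features mentioned in the excerpt above are statistics of the time derivative of acceleration. A minimal sketch of how such features could be computed from a windowed tri-axial accelerometer signal follows; this is an illustrative assumption, not the exact feature set of the cited work.

```python
import numpy as np

def jerk_features(accel, fs):
    """Simple jerk statistics for one accelerometer window.

    accel: array of shape (n_samples, 3), one column per axis.
    fs: sampling rate in Hz.
    Jerk (the time derivative of acceleration) is approximated by
    finite differences; mean and std of its magnitude are returned.
    """
    jerk = np.diff(accel, axis=0) * fs      # finite-difference derivative, (n-1, 3)
    mag = np.linalg.norm(jerk, axis=1)      # per-sample jerk magnitude
    return np.array([mag.mean(), mag.std()])

# Example: a synthetic 2-second window sampled at 50 Hz
rng = np.random.default_rng(0)
window = rng.normal(0.0, 1.0, size=(100, 3))
feats = jerk_features(window, fs=50.0)
```

In practice such statistics would be computed per sliding window and appended to the existing feature pool before feature selection.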
“…Moreover, active developments in gesture recognition techniques have led to the recent developments in virtual reality (VR) technology [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16]. The gesture recognition accuracy, which is a key metric to evaluate the effectiveness of gesture recognition systems, has witnessed steady improvements using input data from multiple sensors [17][18][19][20][21][22]. Moreover, researchers have studied visual recognition and learning systems for more intuitive learning of gesture recognition.…”
Section: Related Work (mentioning)
confidence: 99%
“…First, in the gesture registration stage, the proposed method obtains gesture data from the multi-sensor fusion. Using two or more sensors, various types of gestures can be learnt and precise and accurate gesture recognition can be realized [17][18][19][20][21][22]. Moreover, the proposed method allows the end user to limit the range of gestures to be registered for recognizing specific body parts, thereby improving the recognition accuracy.…”
Section: Overview Of the Proposed Generic Gesture Recognition And Lea… (mentioning)
confidence: 99%
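The multi-sensor fusion described in the excerpt above is often realized as early fusion: features extracted independently from each sensor are concatenated into a single vector before classification. A minimal sketch under that assumption (the function and variable names are illustrative, not from the cited paper):

```python
import numpy as np

def fuse_features(accel_feats, gyro_feats):
    """Early fusion by concatenation: per-sensor feature vectors are
    stacked into one vector that a single classifier then consumes."""
    return np.concatenate([accel_feats, gyro_feats])

# Hypothetical per-sensor feature vectors of different lengths
fused = fuse_features(np.array([0.1, 0.2]), np.array([0.3, 0.4, 0.5]))
```

Late fusion (combining per-sensor classifier outputs) is the common alternative; the excerpt does not specify which variant the cited method uses.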
“…However, this is a classification problem and we used supervised learning methods. The UCI HAR dataset, which is used as a benchmark dataset, is also used in several studies [14] [15]. In these works, mostly classical machine learning algorithms, such as Support Vector Machines (SVM) [16], k-Nearest Neighbour, Linear Discriminant Analysis (LDA) [17], and Multilayer Perceptron (MLP) [18], and deep learning methods are used on this dataset.…”
Section: Introduction (mentioning)
confidence: 99%
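The classical-classifier comparison mentioned in the excerpt above can be sketched with scikit-learn. The data here is synthetic stand-in material; the UCI HAR dataset itself supplies 561-dimensional feature vectors and six activity labels, and the hyperparameters below are defaults, not the settings of the cited studies.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Synthetic stand-in for HAR feature vectors and activity labels
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 10))
y = rng.integers(0, 3, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit each classical classifier and record its held-out accuracy
scores = {}
for name, clf in [("SVM", SVC()),
                  ("kNN", KNeighborsClassifier()),
                  ("LDA", LinearDiscriminantAnalysis())]:
    clf.fit(X_tr, y_tr)
    scores[name] = clf.score(X_te, y_te)
```

On real HAR features the same loop would be preceded by per-subject train/test splitting to avoid leaking one subject's windows into both sets.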