2020
DOI: 10.1109/access.2020.3032699

Human Activity Recognition Based on Gramian Angular Field and Deep Convolutional Neural Network

Abstract: With the development of the Internet of Things (IoT) and wearable devices, sensor-based human activity recognition (HAR) has attracted more and more attention from researchers due to its outstanding characteristics of convenience and privacy. Meanwhile, deep learning algorithms can extract high-dimensional features automatically, which makes end-to-end learning possible. The convolutional neural network (CNN) in particular has been widely used in the field of computer vision, while the infl…
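As a rough, hedged illustration of the GAF encoding mentioned in the abstract, the following Python sketch turns a single 1-D sensor window into a Gramian Angular Summation Field image that a 2-D CNN could consume. The window length, the synthetic signal, and the choice of the summation variant are assumptions for illustration, not details taken from the paper.

import numpy as np

def gramian_angular_summation_field(x: np.ndarray) -> np.ndarray:
    # Rescale the series into [-1, 1] so that arccos is well defined.
    x_scaled = 2.0 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1.0
    x_scaled = np.clip(x_scaled, -1.0, 1.0)
    # Map each sample to an angle and build the pairwise matrix
    # GASF[i, j] = cos(phi_i + phi_j).
    phi = np.arccos(x_scaled)
    return np.cos(phi[:, None] + phi[None, :])

# Hypothetical example: a 128-sample window (here synthetic) becomes a 128 x 128 image.
window = np.sin(np.linspace(0.0, 4.0 * np.pi, 128))
gaf_image = gramian_angular_summation_field(window)
print(gaf_image.shape)  # (128, 128)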

Cited by 59 publications (30 citation statements)
References 32 publications
“…The data was collected by smartphone, smartwatch and Bluetooth devices, and the experimental results show the reliability of the proposed method. In [25], inertial data are converted into GAF images, and then a multisensor data fusion network called Fusion-MdkResNet is used, which can process data collected by different sensors and fuse the data automatically.…”
Section: Related Work
confidence: 99%
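The statement above describes feeding per-sensor GAF images into a fusion network. As a minimal sketch of that multi-branch fusion idea only, and not of the Fusion-MdkResNet architecture itself, the following PyTorch code runs one small CNN per sensor and concatenates the branch features before a shared classifier; the branch layout, sensor names, and class count are assumptions.

import torch
import torch.nn as nn

class TwoBranchFusionNet(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        def branch() -> nn.Sequential:
            # One small 2-D CNN per sensor's GAF image (illustrative only).
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.accel_branch = branch()  # e.g. accelerometer GAF images
        self.gyro_branch = branch()   # e.g. gyroscope GAF images
        self.classifier = nn.Linear(32 * 2, num_classes)

    def forward(self, accel_img: torch.Tensor, gyro_img: torch.Tensor) -> torch.Tensor:
        # Fuse the two sensor streams by concatenating their feature vectors.
        fused = torch.cat([self.accel_branch(accel_img), self.gyro_branch(gyro_img)], dim=1)
        return self.classifier(fused)

# Hypothetical usage: batches of 128 x 128 single-channel GAF images from two sensors.
model = TwoBranchFusionNet()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 6])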
“…The size of the dance action image is 720 × 480 pixels. We compare against three other state-of-the-art action recognition methods: GPRAR (Zhang R. et al, 2018), PGCN-TCA (Xu et al, 2020), and MVD (Huynh and Alaghband, 2021). GPRAR is a graph convolutional network-based pose reconstruction and action recognition method for human trajectory prediction.…”
Section: Experiments and Analysis: Experiments Environment and Evaluation Index
confidence: 99%
“…Angular Field and Deep Convolutional Neural Network.(5) Hongji et al. proposed a new network, Mdk-ResNet, for large-scale time-series human behavior datasets including the aforementioned WISDM dataset,(4) which has higher recognition accuracy than the conventional methods. Especially for the aforementioned WISDM dataset,(4) the accuracy is 9.88% higher than that of the MLP-based method by Jennifer R. et al.…”
Section: Human Activity Recognition Based on Gramian Angular Field and Deep Convolutional Neural Network
confidence: 99%