2020
DOI: 10.1109/lgrs.2019.2930636

A Three-Dimensional Deep Learning Framework for Human Behavior Analysis Using Range-Doppler Time Points


Cited by 24 publications (23 citation statements)
References 24 publications
“…(3) MD-SVM [27], a machine learning method that extracts six statistical features from the micro-Doppler spectrogram and feeds them into a support vector machine; (4) R-SVM, a support vector machine that takes SIFT features [58] of the range-time domain as input instead of the micro-Doppler spectrogram; (5) MDR-SA [11], a recently proposed sparse autoencoder that takes both micro-Doppler spectrograms and range profiles as input, whose hidden features are fed into an MLP for classification; and (6) MD-SA and (7) R-SA, which use the same architecture as [11] but process the micro-Doppler spectrograms and range profiles separately. We also compare our model with its naive version, P-Net [15], which consumes the whole range-Doppler-time point sets with PointNet [36]. All these methods are tested on the CMU and UTD-MHAD datasets.…”
Section: B. Classification Performance Comparison (mentioning)
confidence: 99%
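To make the comparison concrete, below is a minimal sketch of an MD-SVM-style pipeline: statistical features are computed from a micro-Doppler spectrogram and classified with an SVM. The six features, array shapes, and names used here are illustrative assumptions, not the feature set defined in [27].

```python
# Sketch of an MD-SVM-style classifier: hand-crafted statistics of a
# micro-Doppler spectrogram fed into a support vector machine.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC

def spectrogram_features(spec: np.ndarray) -> np.ndarray:
    """Six simple statistics of a (doppler_bins, time_frames) spectrogram.
    The feature choice is an assumption for illustration."""
    envelope = spec.argmax(axis=0)           # dominant Doppler bin per frame
    return np.array([
        spec.mean(),                         # mean power
        spec.std(),                          # power spread
        skew(spec.ravel()),                  # asymmetry of the power distribution
        kurtosis(spec.ravel()),              # tailedness of the power distribution
        envelope.mean(),                     # mean Doppler of the dominant return
        envelope.std(),                      # Doppler bandwidth of the motion
    ])

# Hypothetical data: 100 spectrograms (128 Doppler bins x 64 frames), 5 classes.
train_specs = np.random.rand(100, 128, 64)
train_labels = np.random.randint(0, 5, 100)
clf = SVC(kernel="rbf").fit(
    np.stack([spectrogram_features(s) for s in train_specs]), train_labels
)
pred = clf.predict(spectrogram_features(train_specs[0])[None, :])
```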
“…The candidate methods are as follows: (1) PointNet-based methods, namely, P-Net (the three-dimensional deep learning framework proposed in [15]), P-Net+OSVM (an architecture proposed in [46] that combines a PointNet model with a one-class support vector machine), P-Net+Scale (a PointNet model that uses temperature scaling [48]), and P-Net+Adv (a PointNet model that uses temperature scaling and perturbation, as introduced in Eq. 3); and (2) hierarchical PointNet-based methods, namely, HP-Net (the hierarchical PointNet model introduced in Fig.…”
Section: Overlapping Echo Detection (mentioning)
confidence: 99%
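For reference, here is a hedged sketch of the temperature-scaling-plus-perturbation idea behind P-Net+Adv, in the spirit of ODIN-style out-of-distribution detection. The `pointnet` module, temperature `T`, and step size `eps` are placeholder assumptions, not the cited models or the exact Eq. 3.

```python
# Temperature scaling with an input perturbation that raises the scaled
# confidence of the predicted class; low resulting confidence flags an
# overlapping / out-of-set echo. A sketch under assumed names, not the
# cited implementation.
import torch
import torch.nn.functional as F

def scaled_confidence(pointnet, points, T=1000.0, eps=0.01):
    """Max softmax probability after temperature scaling and a small
    gradient-based perturbation of the input point set."""
    points = points.clone().requires_grad_(True)
    logits = pointnet(points) / T
    # Step the input against the loss gradient so the scaled softmax
    # score of the predicted class increases (the "Adv" perturbation).
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    perturbed = (points - eps * points.grad.sign()).detach()
    with torch.no_grad():
        probs = F.softmax(pointnet(perturbed) / T, dim=1)
    return probs.max(dim=1).values
```

Thresholding the returned score then separates in-distribution activities from overlapping echoes; the threshold itself would be tuned on validation data.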
“…Improving neural networks for human activity recognition and deriving detection-estimation theory are outside the scope of this study. Figure 1 summarises the state of the art in human activity recognition using radar with either simulated or measured data, considering the different domains of radar data representation based on [1,2,6,24,25]. The research community has mainly focused on mD signatures for human radar classification.…”
Section: Introduction (mentioning)
confidence: 99%
“…In terms of its technical principle, radar avoids the detection problems caused by complex environments, such as darkness, occlusion, and non-line-of-sight conditions [28]. Existing studies on radar sensors mainly use the micro-Doppler effect to classify the movement of human targets [29,30]. However, these studies can only judge the overall motion state of a human target and cannot obtain the spatial locations of its different body parts.…”
Section: Introduction (mentioning)
confidence: 99%
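As background for the micro-Doppler representations these studies rely on, here is a minimal sketch of how a micro-Doppler spectrogram is typically computed: a short-time Fourier transform over slow time for a single range bin. The radar parameters (PRF, window length) and the synthetic signal are assumptions for illustration.

```python
# Computing a micro-Doppler spectrogram from one range bin's slow-time
# returns via an STFT; parameters below are assumed, not from the paper.
import numpy as np
from scipy.signal import stft

prf = 1000.0                                  # pulse repetition frequency, Hz (assumed)
# Complex slow-time returns from one range bin (synthetic stand-in data).
slow_time = np.random.randn(4096) + 1j * np.random.randn(4096)

# STFT over slow time; two-sided output keeps negative Doppler frequencies.
f, t, Zxx = stft(slow_time, fs=prf, nperseg=128, noverlap=96,
                 return_onesided=False)
# Shift zero Doppler to the center and convert magnitude to dB.
micro_doppler = 20 * np.log10(np.abs(np.fft.fftshift(Zxx, axes=0)) + 1e-12)
# micro_doppler has shape (Doppler bins, time frames): the spectrogram
# that the classifiers cited above take as input.
```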