2013
DOI: 10.1016/j.bica.2013.05.008

3D flow estimation for human action recognition from colored point clouds

Cited by 26 publications (20 citation statements)
References 29 publications
“…The latter verifies the capabilities of the proposed descriptor in efficiently defining an appropriate local-coordinate system and also encoding spatial distribution/surface-related information, as detailed in Section 3. It must be noted that the global 3D flow descriptor of [17] (mentioned in Section 2.2) was not included in the conducted comparative evaluation. This is due to the descriptor of [17] being view-dependent, since it employs a static 3D space grid division that is defined according to the single Kinect sensor that is assumed to be present.…”
Section: Results (mentioning)
confidence: 99%
“…It must be noted that the global 3D flow descriptor of [17] (mentioned in Section 2.2) was not included in the conducted comparative evaluation. This is due to the descriptor of [17] being view-dependent, since it employs a static 3D space grid division that is defined according to the single Kinect sensor that is assumed to be present. Hence, the comparison with the view-invariant HOF3D and the proposed descriptor would not be fair.…”
Section: Results (mentioning)
confidence: 99%
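
The view-dependence noted above can be made concrete with a minimal sketch of a static, sensor-anchored grid division for accumulating 3D flow. The grid bounds, resolution, and the grid_flow_descriptor helper below are illustrative assumptions, not the formulation of [17]: because the cells are fixed in the Kinect camera frame, the same motion observed from a different viewpoint falls into different cells.

```python
import numpy as np

# Minimal sketch (assumed, not the code of [17]): a global 3D flow descriptor
# built on a static grid division of the space in front of a single Kinect.
# Grid bounds and resolution are illustrative values.
GRID_MIN = np.array([-1.5, -1.0, 0.5])   # metres, in the Kinect camera frame
GRID_MAX = np.array([ 1.5,  1.0, 4.5])
CELLS = (4, 4, 4)                         # static 4x4x4 division of that volume

def grid_flow_descriptor(points, flows):
    """Accumulate per-point 3D flow into a fixed camera-frame grid.

    points : (N, 3) point positions in the sensor frame
    flows  : (N, 3) 3D flow vectors for those points
    Returns a flat descriptor of mean flow per cell (4*4*4*3 values).
    """
    desc = np.zeros(CELLS + (3,))
    counts = np.zeros(CELLS)
    # Cell indices depend on the sensor's coordinate frame, which is what
    # makes this descriptor view-dependent.
    rel = (points - GRID_MIN) / (GRID_MAX - GRID_MIN)
    idx = np.clip((rel * CELLS).astype(int), 0, np.array(CELLS) - 1)
    for (i, j, k), f in zip(idx, flows):
        desc[i, j, k] += f
        counts[i, j, k] += 1
    nonzero = counts > 0
    desc[nonzero] /= counts[nonzero][:, None]
    return desc.ravel()
```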
“…Wang et al [33] extract Random Occupancy Patterns from depth sequences, use sparse coding to encode these features, and classify actions. Munaro et al [22] developed a 3D grid-based descriptor from point cloud data and recognize different actions using nearest neighbors.…”
Section: A Input Featuresmentioning
confidence: 99%
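
A hedged sketch of the pipeline this statement attributes to Munaro et al. [22]: one fixed-length grid descriptor per action sequence, classified with a nearest-neighbour classifier. It reuses the illustrative grid_flow_descriptor from the sketch above; the per-frame averaging in sequence_descriptor and the placeholder data names are assumptions, not the authors' exact method.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def sequence_descriptor(frames):
    """Average per-frame grid flow descriptors over one action sequence.

    frames : list of (points, flows) pairs, one pair per frame.
    """
    return np.mean([grid_flow_descriptor(p, f) for p, f in frames], axis=0)

# Hypothetical usage with placeholder train/test splits:
# X_train = np.stack([sequence_descriptor(s) for s in train_sequences])
# knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, train_labels)
# y_pred = knn.predict(np.stack([sequence_descriptor(s) for s in test_sequences]))
```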
“…In the literature, one can find a number of activity recognition approaches based on image sequences, point clouds or depth maps, where occupancy patterns are calculated [47] or different features are extracted such as spatio-temporal context distribution of interest points [48], histogram of oriented principal components [49] or oriented 4D normals [50], and 3D flow estimation [51]. However, the sparsity of Lidar point clouds (versus Kinect) becomes a bottleneck for extracting the above features.…”
Section: Action Recognition (mentioning)
confidence: 99%