Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1 2005
DOI: 10.1109/iccv.2005.28
Actions as space-time shapes

Abstract: Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent approach [14] for analyzing 2D shapes and generalize it to deal with volumetric space-time action shapes. Our method utilizes properties of the solution to the Poisson equation to extract space-time features such as local space-time saliency, action dyna…
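As a rough illustration of the Poisson-based analysis the abstract describes, the solution U of the equation ∇²U = −1 inside the binary space-time silhouette volume (with U = 0 outside) can be approximated by Jacobi iterations. The array layout, iteration count, and function name below are illustrative assumptions, not the authors' implementation; the mask is assumed not to touch the volume borders.

```python
import numpy as np

def poisson_solution(shape_mask, iters=200):
    """Jacobi iterations approximating U with laplacian(U) = -1 inside a
    binary space-time shape and U = 0 outside (Dirichlet boundary).
    shape_mask: 3D bool array (x, y, t), True on silhouette voxels;
    assumed not to touch the array borders (np.roll wraps around)."""
    U = np.zeros(shape_mask.shape, dtype=float)
    for _ in range(iters):
        # mean of the 6 axis-aligned neighbours; the +1/6 term comes from
        # the unit source on a unit-spaced grid: U = (sum(nbrs) + 1) / 6
        avg = (np.roll(U, 1, 0) + np.roll(U, -1, 0) +
               np.roll(U, 1, 1) + np.roll(U, -1, 1) +
               np.roll(U, 1, 2) + np.roll(U, -1, 2)) / 6.0
        U = np.where(shape_mask, avg + 1.0 / 6.0, 0.0)
    return U
```

Larger values of U mark voxels deep inside the shape, which is what makes the solution useful for saliency-style features.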

Cited by 1,390 publications (1,203 citation statements)
References 14 publications
“…Those silhouettes are extracted from an already known background. This simple procedure follows exactly the same approach as [3]. Note that there are more robust methods that can be used for silhouette extraction from video sequences, e.g.…”
Section: Action Features (mentioning)
confidence: 99%
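The known-background procedure the citation above mentions amounts to thresholding the difference between each frame and a static background image. A minimal sketch, assuming grayscale frames in [0, 1] and an arbitrary threshold value:

```python
import numpy as np

def background_subtract(frame, background, thresh=0.1):
    """Binary silhouette from a known, static background image.
    `thresh` is an assumed, dataset-dependent value, not one taken
    from the cited work."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    return diff > thresh
```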
“…Investigation of those methods is beyond the scope of this paper. Instead, we used the binary sequences directly available from [3]. 1 We found that for binary image sequences, a simple inter-frame differencing method performs well in our task.…”
Section: Action Features (mentioning)
confidence: 99%
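Inter-frame differencing, the simple baseline the citation above refers to, thresholds the absolute change between consecutive frames. A sketch under the assumption of grayscale frames in [0, 1]; the function name and threshold are illustrative:

```python
import numpy as np

def interframe_silhouettes(frames, thresh=0.1):
    """Motion masks by differencing consecutive frames.
    frames: sequence of same-shaped grayscale frames in [0, 1].
    Returns one binary mask per consecutive frame pair; `thresh`
    is an assumed value, not one taken from the cited work."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(frames[1:] - frames[:-1])
    return diffs > thresh
```

Because the inputs in the cited setting are already binary sequences, any nonzero difference marks a pixel the moving silhouette entered or left.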
“…In the past, benchmark datasets such as KTH (Schuldt et al 2004), Weizmann (Blank et al 2005) or Kinect-based datasets (Li et al 2010; Cheng et al 2012) have been invaluable in providing comparative benchmarks to examine how competing approaches perform in action recognition and detection. However, these staged datasets now routinely have reported performance rates over 90%, suggesting that they are reaching the end of their service to the community.…”
Section: Introduction (mentioning)
confidence: 99%