2010
DOI: 10.1007/978-3-642-17711-8_31

Variations of a Hough-Voting Action Recognition System

Cited by 51 publications (36 citation statements)
References 7 publications
“…However, only one among them succeeded in submitting results for the classification task, which we report in Tables 3 and 4. The team BIWI [19] used a Hough transform-based method to classify interaction videos. Their method is based on [21], which uses spatio-temporal voting with extracted local XYT features.…”
Section: Results
confidence: 99%
“…Team BIWI: The BIWI team from ETH Zurich proposes to use a Hough transform-based voting framework for action recognition [19]. They separate the voting into two stages to bypass the inherent high-dimensionality problem of the Hough transform representation.…”
Section: Results
confidence: 99%
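The quoted description above only gestures at how the two-stage vote works. As a rough illustration, the following is a minimal sketch of the general idea of splitting a high-dimensional spatio-temporal Hough vote into two lower-dimensional stages (first locating the actor in the image plane, then voting over time and action class). The codebook, offsets, feature values, and the function two_stage_hough_vote are toy stand-ins invented for this sketch; they do not reproduce the cited BIWI/ETH implementation.

```python
# Illustrative sketch only: a two-stage spatio-temporal Hough vote on toy data.
import numpy as np

rng = np.random.default_rng(0)

# Toy "codebook": each codeword maps a local XYT feature to a spatial offset
# towards the actor centre, a temporal offset towards the action centre, and
# a distribution over action classes. Purely illustrative values.
N_CODEWORDS, N_CLASSES = 32, 4
codebook_xy_offset = rng.normal(0, 5, size=(N_CODEWORDS, 2))
codebook_t_offset = rng.normal(0, 10, size=N_CODEWORDS)
codebook_class_p = rng.dirichlet(np.ones(N_CLASSES), size=N_CODEWORDS)

def two_stage_hough_vote(features, frame_shape=(120, 160), n_frames=200):
    """features: rows of (x, y, t, codeword_id) for one video clip."""
    x, y, t = features[:, 0], features[:, 1], features[:, 2]
    cw = features[:, 3].astype(int)

    # Stage 1: vote only in the 2-D image plane for the actor centre.
    votes_xy = np.stack([x, y], axis=1) + codebook_xy_offset[cw]
    h2d, _, _ = np.histogram2d(
        votes_xy[:, 0], votes_xy[:, 1], bins=frame_shape,
        range=[[0, frame_shape[0]], [0, frame_shape[1]]])
    cx, cy = np.unravel_index(np.argmax(h2d), h2d.shape)

    # Stage 2: keep features voting near that centre and accumulate votes
    # over (time, class) only, instead of a full (x, y, t, class) space.
    near = np.hypot(votes_xy[:, 0] - cx, votes_xy[:, 1] - cy) < 15
    t_votes = np.clip(t[near] + codebook_t_offset[cw[near]], 0, n_frames - 1).astype(int)
    h_tc = np.zeros((n_frames, N_CLASSES))
    np.add.at(h_tc, t_votes, codebook_class_p[cw[near]])

    t_star = int(h_tc.sum(axis=1).argmax())   # temporal centre of the action
    label = int(h_tc[t_star].argmax())        # most supported action class
    return (int(cx), int(cy)), t_star, label

# Toy usage: 500 random local features standing in for extracted XYT features.
feats = np.column_stack([
    rng.uniform(0, 120, 500), rng.uniform(0, 160, 500),
    rng.uniform(0, 200, 500), rng.integers(0, N_CODEWORDS, 500)])
print(two_stage_hough_vote(feats))
```

The point of the split is that two small accumulators (a 2-D spatial one and a frames-by-classes one) replace a single accumulator over x, y, t, and class, which would be far larger and sparser.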
“…Classification results are shown in Table 3 and compared with state-of-the-art recognition systems [22, 60–68]. Accordingly, the proposed method shows a substantial improvement in classification accuracy.…”
Section: Results
confidence: 99%
“…This is very challenging video material, with far greater variety in actors, shot editing, viewpoint, locations, lighting, and clutter than the typical surveillance videos used previously for classifying interactions [3,21,29], where there is a fixed camera and scene. We provide additional ground-truth annotation for the dataset, specifying which shots contain people looking at each other.…”
Section: Introduction
confidence: 99%