2016 IEEE International Symposium on Multimedia (ISM)
DOI: 10.1109/ism.2016.0072

Action Recognition in the Longwave Infrared and the Visible Spectrum Using Hough Forests

Abstract: Action recognition in surveillance systems has to work 24/7 under all kinds of weather and lighting conditions. Towards this end, most action recognition systems only work in the visible spectrum which limits their general usage to daytime applications. In this work Hough forests are applied to the longwave infrared spectrum which can capture humans both in the dark and in daylight. Further, Integral Channel Features which have shown promising results in the spatial domain are applied to the spatio-temporal do…
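The abstract mentions extending Integral Channel Features from the spatial to the spatio-temporal domain. The core trick behind such features is an integral video: cumulative sums over time, height, and width so that any spatio-temporal cuboid sum can be read off in constant time. The sketch below is an illustration only, not the authors' implementation; the choice of a gradient-magnitude channel and all function names are assumptions.

    import numpy as np

    def gradient_magnitude_channel(video):
        # Per-pixel spatial gradient magnitude for a (T, H, W) grayscale clip.
        # Just one example channel; the paper combines several channel types.
        gy, gx = np.gradient(video.astype(np.float64), axis=(1, 2))
        return np.sqrt(gx ** 2 + gy ** 2)

    def integral_video(channel):
        # Cumulative sums along t, y, x; a leading zero plane per axis
        # keeps the lookup formula below free of boundary special cases.
        iv = channel.cumsum(axis=0).cumsum(axis=1).cumsum(axis=2)
        return np.pad(iv, ((1, 0), (1, 0), (1, 0)))

    def cuboid_sum(iv, t0, t1, y0, y1, x0, x1):
        # Sum of the channel over [t0,t1) x [y0,y1) x [x0,x1)
        # via 3D inclusion-exclusion: eight lookups, O(1) per feature.
        return (iv[t1, y1, x1] - iv[t0, y1, x1] - iv[t1, y0, x1] - iv[t1, y1, x0]
                + iv[t0, y0, x1] + iv[t0, y1, x0] + iv[t1, y0, x0] - iv[t0, y0, x0])

    # Toy usage: random 16-frame clip, sum of one feature cuboid.
    clip = np.random.rand(16, 64, 64)
    iv = integral_video(gradient_magnitude_channel(clip))
    print(cuboid_sum(iv, 2, 10, 8, 24, 8, 24))

Such cuboid sums would then serve as the channel-feature responses fed to the Hough forest; how the forest is trained and how votes are cast is described in the paper itself.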

Cited by 10 publications (2 citation statements); references 20 publications.
“…In this section, we evaluate the proposed model on two challenging AR datasets: the infrared action recognition (InfAR) dataset [35] and the multispectral IOSB dataset [36]. These datasets were chosen because they allow the proposed model to be compared with others in the field and support investigation of the paper's multispectral fusion aspect.…”
Section: Results (mentioning)
confidence: 99%
“…This dataset consists of visible and IR action videos recorded on a sunny summer day, featuring ten people (eight males and two females) with a mean age of 31.2±5.7 years [36]. We test our proposed algorithm on three classes of the dataset, namely film, point, and throw.…”
Section: Results (mentioning)
confidence: 99%