2015
DOI: 10.1007/s00138-014-0653-y

Onboard monocular pedestrian detection by combining spatio-temporal HOG with structure from motion algorithm

Abstract: In this paper, we present a novel pedestrian detection framework for the advanced driver assistance system of a mobile platform in a normal urban street environment. Unlike conventional systems that focus on near-distance pedestrian detection by fusing multiple sensors (such as radar, laser, and infrared cameras), our system achieves pedestrian detection at all (near, middle, and long) distances on a normally driven vehicle (1–40 km/h) with a monocular camera under the street s…

Cited by 7 publications (3 citation statements)
References: 65 publications (94 reference statements)
“…Compared to traditional methods that use spatio-temporal features to describe human motion (such as HOGHOF [32], STHOG [33,34], STGGP [35] and 3DHOG [36]), recently developed GCNs can recognize human actions more accurately by using the key points extracted from their bodies as the input features. As illustrated in Figure 11, after obtaining the key skeleton points from a real UAV pilot, all of these sequential key points will be transmitted to the spatial-temporal GCN (ST-GCN [19]) module for the action recognition task.…”
Section: Action Recognition Module
confidence: 99%
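The statement above describes the ST-GCN pipeline only at a high level. As a rough, hedged illustration (not the implementation cited as [19]), the sketch below shows how a spatial graph convolution over skeleton joints can be paired with a temporal convolution over frames; the class name, channel sizes, and the identity adjacency matrix are placeholders, not values from the cited work.

```python
# Minimal sketch of a spatio-temporal graph convolution over skeleton
# key-point sequences (an assumption; not the cited ST-GCN code).
import torch
import torch.nn as nn

class STGraphConvBlock(nn.Module):
    """One spatial graph conv (over joints) followed by a temporal conv (over frames)."""
    def __init__(self, in_ch, out_ch, adjacency, t_kernel=9):
        super().__init__()
        # Normalized adjacency of the skeleton graph, shape (V, V), kept as a fixed buffer.
        self.register_buffer("A", adjacency)
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)        # mixes channels per joint
        self.temporal = nn.Conv2d(out_ch, out_ch,
                                  kernel_size=(t_kernel, 1),
                                  padding=((t_kernel - 1) // 2, 0))   # convolve along the time axis
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (N, C, T, V) = batch, coordinate channels, frames, joints
        x = self.spatial(x)
        x = torch.einsum("nctv,vw->nctw", x, self.A)  # aggregate features over neighbouring joints
        x = self.temporal(x)
        return self.relu(x)

# Usage: 2D key points (x, y) for 25 joints over 64 frames.
V = 25
A = torch.eye(V)                       # placeholder adjacency; a real skeleton graph is assumed
block = STGraphConvBlock(2, 64, A)
out = block(torch.randn(8, 2, 64, V))  # -> (8, 64, 64, 25)
```

Stacking several such blocks and pooling over time and joints before a classifier is the usual way such sequential key points are turned into an action label.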
“…Another approach is to augment HOG descriptors with temporal information obtained from video sequences. In [15], for example, the authors aggregate several descriptors obtained by different techniques to extract temporal information from images. The 3DHOG descriptor proposed in [21] for characterizing motion features with a co-occurrence spatio-temporal vector also belongs to this category.…”
Section: Related Work
confidence: 99%
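As a hedged illustration of the general idea of augmenting HOG with temporal information (not the specific descriptors of [15] or [21], and not the STHOG descriptor of the indexed paper), one simple variant is to concatenate per-frame HOG vectors computed over consecutive crops of the same detection window:

```python
# Illustrative only: a naive spatio-temporal HOG built by concatenating
# per-frame HOG descriptors of consecutive frames of a detection window.
import numpy as np
from skimage.feature import hog

def spatio_temporal_hog(frames, orientations=9, cell=(8, 8), block=(2, 2)):
    """frames: list of grayscale patches (H, W) cropped from consecutive video frames."""
    per_frame = [
        hog(f, orientations=orientations,
            pixels_per_cell=cell, cells_per_block=block)
        for f in frames
    ]
    # Concatenation keeps the spatial gradient structure of each frame and lets a
    # downstream classifier pick up how that structure changes over time.
    return np.concatenate(per_frame)

# Usage: three consecutive 128x64 pedestrian-window crops (placeholder data).
window = [np.random.rand(128, 64) for _ in range(3)]
descriptor = spatio_temporal_hog(window)
```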
“…Compared with other human communication methods like facial expression, eye tracking, or head movement, human gestures are more easily understood [18,19], and more complex information can be expressed with different gestures (as shown in Figure 2). Vision sensors are seldom affected by such electromagnetic interference, and vision-based methods have the advantages of convenient interaction, rich expression, and interactive nature.…”
Section: Introduction
confidence: 99%