2012 IEEE Intelligent Vehicles Symposium
DOI: 10.1109/ivs.2012.6232307

Frontal object perception using radar and mono-vision

Cited by 37 publications (20 citation statements)
References 10 publications
“…To take advantage of both stereo cameras and radar, Wu et al. [17] fused detection results from the different sensors by extended Kalman filtering (EKF), which addresses the problem of accurately estimating the location, size, pose, and motion information of a threat vehicle. Because pedestrians return a poor radar signal, Chavez-Garcia et al. [18] fused only the LiDAR and vision sensors for the final decision. However, three kinds of sensors, including radar, LiDAR, and vision, were used for vehicle detection in [18].…”
Section: Object Detection by Decision-Level Fusion
confidence: 99%
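The EKF-based fusion attributed to [17] is only described at a high level here. As a concrete illustration, below is a minimal Python sketch of one extended-Kalman correction step that refines a vision-initialized vehicle state with a radar detection of range, azimuth, and range rate. The state layout, prior, and all noise values are assumptions chosen for illustration, not the actual models of [17].

```python
import numpy as np

# Minimal EKF update sketch (illustrative, not the method of [17]).
# State: [x, y, vx, vy] of a tracked vehicle in the ego frame.
# Radar measurement: [range, azimuth, range_rate]; noise values are assumed.

def radar_measurement(state):
    """Nonlinear radar measurement model h(x)."""
    x, y, vx, vy = state
    r = np.hypot(x, y)
    return np.array([r, np.arctan2(y, x), (x * vx + y * vy) / r])

def radar_jacobian(state):
    """Jacobian H = dh/dx, linearizing the measurement model."""
    x, y, vx, vy = state
    r2 = x**2 + y**2
    r = np.sqrt(r2)
    rr = (x * vx + y * vy) / r
    return np.array([
        [x / r,                 y / r,                 0.0,   0.0],
        [-y / r2,               x / r2,                0.0,   0.0],
        [(vx - rr * x / r) / r, (vy - rr * y / r) / r, x / r, y / r],
    ])

def ekf_update(x_est, P, z, R):
    """One EKF correction step with a radar detection z."""
    H = radar_jacobian(x_est)
    y = z - radar_measurement(x_est)               # innovation
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi    # wrap the bearing residual
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    return x_est + K @ y, (np.eye(4) - K @ H) @ P

# Example: fuse one radar detection into a camera-initialized estimate.
x_est = np.array([20.0, 1.0, -5.0, 0.0])           # assumed prior from vision
P = np.diag([4.0, 0.25, 2.0, 2.0])
R = np.diag([0.5**2, 0.02**2, 0.25**2])            # assumed radar noise
z = np.array([21.0, 0.06, -4.8])
x_est, P = ekf_update(x_est, P, z, R)
```

A vision detection would be fused analogously, with its own measurement model and noise covariance.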
“…The first fusion scheme works at the decision level. The prediction results from the radar and vision sensors are fused to generate the final results [12]-[24]. However, different kinds of detection noise are involved in these two kinds of prediction results.…”
Section: Introduction
confidence: 99%
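To make the decision-level idea concrete: the sketch below fuses a radar position estimate and a vision position estimate of the same object by inverse-variance weighting, the simplest probabilistic way to combine predictions that carry different kinds of noise. All covariance values here are illustrative assumptions, not numbers from the cited works.

```python
import numpy as np

# Decision-level fusion sketch: combine two independent detections of the
# same object by inverse-variance weighting (equivalent to a product of
# Gaussians). Covariances below are illustrative assumptions.

def fuse_gaussian(mu_a, cov_a, mu_b, cov_b):
    """Fuse two Gaussian estimates of the same 2D position."""
    info_a = np.linalg.inv(cov_a)
    info_b = np.linalg.inv(cov_b)
    cov = np.linalg.inv(info_a + info_b)           # fused covariance
    mu = cov @ (info_a @ mu_a + info_b @ mu_b)     # fused mean
    return mu, cov

radar_pos = np.array([20.3, 1.4])                  # metres, ego frame
radar_cov = np.diag([0.2**2, 1.5**2])              # sharp in range, crude laterally
vision_pos = np.array([21.5, 1.1])
vision_cov = np.diag([2.5**2, 0.3**2])             # crude in range, sharp laterally
fused_pos, fused_cov = fuse_gaussian(radar_pos, radar_cov, vision_pos, vision_cov)
```

The fused estimate leans on each sensor exactly where that sensor is confident, which is why decision-level fusion helps even when the individual detectors are noisy in different ways.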
“…Radars have good longitudinal ranging coupled with crude lateral resolution; monocular vision can localize well in the camera's field of view but lacks ranging. The combination of the two can ameliorate the weaknesses of each sensor [160], [161]. In [162] and [163], information fusion between radar and vision sensors was used to probabilistically estimate the positions of vehicles and to propagate estimation uncertainty into decision making for lane-change recommendations on the highway.…”
Section: E. Fusing Vision With Other Modalities
confidence: 99%
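The complementary-accuracy point can be shown with a short worked example: take the range from radar and the bearing from the camera, then propagate both measurement variances into a Cartesian position covariance that downstream decision making can consume. The noise figures below are assumptions chosen for illustration, not values from [160]-[163].

```python
import numpy as np

# Sketch of the complementary-sensor idea: range from radar (accurate
# longitudinally), bearing from the camera (accurate laterally), with
# first-order propagation of both variances into Cartesian uncertainty.

r, sigma_r = 25.0, 0.3           # radar range (m) and its std dev (assumed)
theta, sigma_t = 0.05, 0.005     # camera bearing (rad) and its std dev (assumed)

# Polar-to-Cartesian conversion using the better sensor for each coordinate.
x = r * np.cos(theta)
y = r * np.sin(theta)

# Jacobian-based propagation of the measurement covariance.
J = np.array([[np.cos(theta), -r * np.sin(theta)],
              [np.sin(theta),  r * np.cos(theta)]])
cov_polar = np.diag([sigma_r**2, sigma_t**2])
cov_xy = J @ cov_polar @ J.T     # position covariance for downstream decisions
```

Carrying `cov_xy` forward, rather than a point estimate alone, is what lets systems like [162] and [163] propagate uncertainty into the lane-change decision itself.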
“…In [167], vehicles were detected with a boosted classifier using Haar and Gabor features and ranged using radar. In [160], camera and radar detections were projected into a common global occupancy grid, and vehicles were tracked using Kalman filtering in a global frame of reference. In [168], potential vehicles were detected using saliency operations on the inverse-perspective-mapped image and combined with radar.…”
Section: E. Fusing Vision With Other Modalities
confidence: 99%
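A minimal sketch of the common-occupancy-grid step described for [160]: detections from each sensor are transformed into a shared world frame and accumulated in grid cells. The grid size, resolution, ego pose, and the naive additive evidence update are all assumptions for illustration; real systems typically use a probabilistic (e.g., log-odds) update.

```python
import numpy as np

# Occupancy-grid projection sketch: each sensor's detections are mapped
# into one shared world-frame grid. All parameters are assumed values.

RES = 0.5                                   # metres per cell (assumed)
grid = np.zeros((200, 200))                 # 100 m x 100 m grid, origin at centre

def to_cell(p_world):
    """Map a world-frame (x, y) point to grid indices."""
    i = int(p_world[0] / RES) + grid.shape[0] // 2
    j = int(p_world[1] / RES) + grid.shape[1] // 2
    return i, j

def add_detection(p_sensor, pose):
    """Transform a sensor-frame detection into the world frame and mark it."""
    x, y, yaw = pose                        # ego pose in the world frame
    c, s = np.cos(yaw), np.sin(yaw)
    p_world = np.array([x + c * p_sensor[0] - s * p_sensor[1],
                        y + s * p_sensor[0] + c * p_sensor[1]])
    i, j = to_cell(p_world)
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] += 1.0                   # naive evidence accumulation

ego_pose = (5.0, 2.0, 0.1)                  # assumed ego position and heading
add_detection(np.array([20.0, 1.5]), ego_pose)   # radar detection
add_detection(np.array([20.4, 1.2]), ego_pose)   # camera detection
```

Once both sensors write into the same grid, cells supported by both modalities stand out, and a tracker (e.g., the Kalman filtering mentioned above) can be run on the resulting object hypotheses in the global frame.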
“…The authors in [56] used vision operations on the inverse-perspective-mapped image and ranged targets via radar. Camera and radar detections were projected into a common global occupancy grid; vehicles were tracked with Kalman filtering in a global frame of reference [52]. In [63], a radar-vision online learning framework was utilized for vehicle detection.…”
Section: Fusion of Sensors
confidence: 99%