2016
DOI: 10.3390/s16111947
Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring

Abstract: In order to enable robust 24-hour monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information of a scene. The contextual information…
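The abstract describes weighting the two modalities by contextual quality before combining them. As a minimal illustrative sketch only (the paper's actual fusion operates on segmentation output and uses richer context indicators; the function and weight names below are hypothetical), a per-frame quality-weighted blend of co-registered RGB and thermal frames could look like:

```python
import numpy as np

def fuse_frames(rgb, thermal, w_rgb, w_thermal):
    """Blend co-registered, same-size RGB and thermal frames
    using scalar quality weights (a toy stand-in for the paper's
    context-aware, quality-based fusion)."""
    total = w_rgb + w_thermal
    return (w_rgb * rgb + w_thermal * thermal) / total

# Toy example: at night the RGB stream is degraded, so the
# contextual weights favor the thermal stream.
rgb = np.full((2, 2), 0.2)      # dark, low-quality RGB frame
thermal = np.full((2, 2), 0.8)  # informative thermal frame
fused = fuse_frames(rgb, thermal, w_rgb=0.25, w_thermal=0.75)
# Every fused pixel is 0.25*0.2 + 0.75*0.8 = 0.65
```

In the paper itself, the weights would come from contextual indicators of each stream's quality (e.g. time of day or lighting conditions) rather than being fixed constants.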

Cited by 33 publications (12 citation statements)
References 31 publications
“…Alldieck et al [61] fuse RGB and thermal images from a video stream, using contextual information to assess the quality of each image stream and accurately fuse the information from the two sensors. In contrast, methods such as MFNet [62], RTFNet [63], PST900 [64], and FuseSeg [65] combine RGB and thermal images using CNN architectures for semantic segmentation of outdoor scenes, providing accurate segmentation results even in the presence of degraded lighting conditions.…”
Section: Scene Understanding
confidence: 99%
“…Multi-Object Tracking (MOT) is used by applications such as ITS, self-driving cars, and traffic-surveillance cameras to detect obstacles and pedestrians at intersections using infrastructure-based perception, 144 radar and Light Detection And Ranging (LiDAR) for 3D tracking, 145 video tracking, 146 or thermal imagery. 147 The goal of MOT is to estimate the trajectories of all objects in a dynamic scene. Different sensors such as video, radar, and LiDAR can be used for MOT.…”
Section: Multi-Object Tracking
confidence: 99%
“…The detection accuracy is shown to be further improved if we allow a context-aware, quality-based fusion. 19 Context-sensitive indicators are introduced 19 to weight the soft segmentation performed over individual RGB and infrared data.…”
Section: Related Work
confidence: 99%