2021
DOI: 10.1109/jsen.2021.3077029
Multi-Modal Sensor Fusion-Based Semantic Segmentation for Snow Driving Scenarios

Cited by 26 publications (7 citation statements)
References 47 publications
“…For example, by using a deep learning model to fuse the data streams emanating from a visual camera and a thermal sensor, semantic segmentation can be performed in snowy weather. 13 The vision-thermal fusion model detects persons with higher accuracy than a camera-only model in snowy scenarios. This example demonstrates that diverse sensor inputs can be used to compensate for various unsuitable weather conditions in the self-driving context.…”
Section: Self-driving Applications
confidence: 99%
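The fusion described above can be illustrated with a minimal sketch of early (channel-level) fusion: the RGB and thermal images are concatenated along the channel axis and mapped per pixel to class logits. This is an illustrative stand-in using random weights, not the architecture of the cited paper; the image size and class count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, NUM_CLASSES = 8, 8, 5  # tiny image and hypothetical class count

# Simulated sensor inputs: RGB (3 channels) and thermal (1 channel)
rgb = rng.random((H, W, 3))
thermal = rng.random((H, W, 1))

# Early fusion: concatenate the two modalities along the channel axis
fused = np.concatenate([rgb, thermal], axis=-1)  # shape (H, W, 4)

# Stand-in for a learned network: a single 1x1 "convolution",
# i.e. a per-pixel linear map from 4 fused channels to class logits
weights = rng.standard_normal((4, NUM_CLASSES))
logits = fused @ weights  # shape (H, W, NUM_CLASSES)

# Per-pixel argmax over classes yields the segmentation map
segmentation = logits.argmax(axis=-1)  # shape (H, W)
print(segmentation.shape)  # (8, 8)
```

In a real model the 1x1 map would be replaced by a trained encoder-decoder, but the fusion point (stacking modalities before the network) is the same idea the citation statement describes.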
“…One example of deep-learning -based satellite fusion is superresolution imagery, for which images from Landsat and Sentinel-2 are fused, and time series images with high resolution are generated. 13 Another example is the fusion of satellite images from homogeneous satellites but in different modes for detecting ice-wedge polygons across the Arctic region. 14 Superior Earth observations enable scientists to closely monitor Earth in real time and precisely model planetary changes to generate predictions.…”
Section: Satellite Applications Assisted By Remote Sensing Technology
confidence: 99%
“…Histogram of oriented gradient (HOG) and autocorrelation loss are used to enforce orientation consistency and suppress repetitive rain streaks. They trained the network on the full range of rain intensities, from drizzle to downpour. [Flattened table residue omitted: modality/reference pairs such as Fusion [110], LiDAR [152], Camera [157], ….]…”
Section: Rain
confidence: 99%
“…Vachmanus et al. [188] extended this idea to the autonomous-driving semantic segmentation task by adding thermal cameras as an additional modality. RGB camera input alone may not capture every pertinent object among the varied colors of the surroundings, such as pedestrians in a snow driving scenario, which is precisely the thermal camera's strong point.…”
Section: Snow
confidence: 99%
“…Multimodal sensing for mobile agents has focused only on indoor localization [12]. Studies addressing outdoor challenges such as snow support autonomous driving by learning from color and infrared images; however, this seems insufficient for recognizing abnormal situations in outdoor monitoring areas [13]. Multimodal sensors for object detection in marine environments rely on color-based learning and can only be used during the daytime [14].…”
Section: Introduction
confidence: 99%