2018
DOI: 10.1186/s13638-018-1062-0

Deep multimodal fusion for ground-based cloud classification in weather station networks

Abstract: Most existing methods utilize only visual sensors for ground-based cloud classification, which neglects other important characteristics of clouds. In this paper, we utilize the multimodal information collected from weather station networks for ground-based cloud classification and propose a novel method named deep multimodal fusion (DMF). In order to learn the visual features, we train a convolutional neural network (CNN) model to obtain the sum convolutional map (SCM) by using a pooling operation across al…
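The SCM construction described in the abstract can be illustrated with a short sketch: activations at the same spatial position are pooled (summed) across all feature maps of a deep convolutional layer, and the resulting single map is stretched into a visual feature vector. The backbone, the chosen layer, and the input size below are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18  # backbone choice is an assumption

# Truncated ResNet-18 as a stand-in CNN: keep everything up to the last
# convolutional stage, dropping the average pooling and the classifier head.
backbone = resnet18(weights=None)
features = nn.Sequential(*list(backbone.children())[:-2])

x = torch.randn(1, 3, 224, 224)         # one ground-based cloud image (dummy)
fmap = features(x)                      # (1, C, H, W) deep feature maps
scm = fmap.sum(dim=1)                   # pool across all C maps -> (1, H, W)
visual_feat = scm.flatten(start_dim=1)  # "stretch" the SCM into a vector
print(visual_feat.shape)                # torch.Size([1, 49]) for this backbone
```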

Cited by 24 publications (16 citation statements) · References 36 publications
“…Such a strategy thoroughly investigates the correlations between the visual features and the multi-modal information and takes into consideration the complementary and supplementary information between them as well as their relative importance for the recognition task.…”

[Recovered comparison table, classification accuracy in %: … 77.95; DMF [31] 79.05; DCAFs [25] 82.67; DCAFs + MI 82.97; CloudNet [26] 79.92; CloudNet + MI 80.37; JFCNN [32] 84.13; DTFN [62] 86.48; HMF [63] 87.90; MMFN 88.63 (the method name paired with 77.95 was cut off in extraction)]
Section: Comparison With Other Methods (mentioning; confidence: 99%)
“…Hence, instead of only focusing on cloud visual representations, it is more reasonable to enhance the recognition performance by combining ground-based cloud visual and multi-modal information. Liu and Li [31] extracted deep features by stretching the sum convolutional map obtained from pooling activations at the same position of all the feature maps in deep convolutional layers. Then the deep features are integrated with the multimodal features in a weighted manner.…”
Section: Introduction (mentioning; confidence: 99%)
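The DMF pipeline quoted above (stretch the SCM into a deep feature vector, then integrate it with the weather-station measurements under a weight) might look roughly as follows. The weight value, the L2 normalization, and the four measurement types are assumptions made for illustration, not the paper's exact choices.

```python
import torch
import torch.nn.functional as F

visual_feat = torch.randn(1, 49)  # stretched SCM from the CNN (see sketch above)
multimodal = torch.tensor([[23.5, 0.61, 1013.2, 3.4]])  # hypothetical weather-station
# readings, e.g. temperature, humidity, pressure, wind speed

w = 0.6  # relative importance of the visual cue (assumed value)
fused = torch.cat([w * F.normalize(visual_feat, dim=1),
                   (1 - w) * F.normalize(multimodal, dim=1)], dim=1)
print(fused.shape)  # (1, 53): the fused representation fed to a classifier
```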
“…Recently, feature fusion methods have also been employed to obtain complete representations of clouds. Liu and Li [10] directly fused the multimodal information with deep visual features in a concatenated manner for ground-based cloud classification. The Joint Fusion Convolutional Neural Network (JFCNN) [11] was presented to utilize a joint fusion layer to integrate the learned multimodal information and the learned visual information for cloud representation.…”
Section: B. Feature Fusion (mentioning; confidence: 99%)
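The two fusion styles mentioned above differ in where learning happens: direct concatenation glues pre-extracted features together, while a joint fusion layer (the JFCNN idea) is trained end-to-end to integrate the two modalities. The layer sizes and class count below are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class JointFusionNet(nn.Module):
    """Sketch of a joint-fusion design: both branches feed one learned layer."""
    def __init__(self, visual_dim=512, mm_dim=4, fused_dim=128, n_classes=7):
        super().__init__()
        self.mm_branch = nn.Sequential(nn.Linear(mm_dim, 64), nn.ReLU())
        # joint fusion layer: learns how to integrate the two modalities
        self.fusion = nn.Sequential(nn.Linear(visual_dim + 64, fused_dim), nn.ReLU())
        self.classifier = nn.Linear(fused_dim, n_classes)

    def forward(self, visual_feat, mm_feat):
        joint = torch.cat([visual_feat, self.mm_branch(mm_feat)], dim=1)
        return self.classifier(self.fusion(joint))

net = JointFusionNet()
logits = net(torch.randn(2, 512), torch.randn(2, 4))
print(logits.shape)  # (2, 7)
```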
“…The traditional methods for multimodal ground-based cloud classification [10], [11] usually apply shallow fusion strategies, which capture only partial correlations between heterogeneous features. In order to overcome this limitation, HMF is proposed to deeply mine the complex relationships between deep visual features and deep multimodal features.…”
Section: B. Hierarchical Multimodal Fusion (mentioning; confidence: 99%)
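One way to read "deeply mine the complex relationship", as opposed to a single shallow concatenation, is to fuse the modalities more than once, so that later stages can model interactions involving already-fused representations. The two-stage design below is a generic sketch of that idea, not HMF's actual architecture; all dimensions are assumed.

```python
import torch
import torch.nn as nn

class HierarchicalFusion(nn.Module):
    """Generic two-stage fusion: the multimodal vector is re-injected."""
    def __init__(self, visual_dim=512, mm_dim=4, hidden=128, n_classes=7):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Linear(visual_dim + mm_dim, hidden), nn.ReLU())
        # second stage sees both the fused code and the raw multimodal vector
        self.stage2 = nn.Sequential(nn.Linear(hidden + mm_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, visual_feat, mm_feat):
        h = self.stage1(torch.cat([visual_feat, mm_feat], dim=1))
        h = self.stage2(torch.cat([h, mm_feat], dim=1))
        return self.classifier(h)

net = HierarchicalFusion()
print(net(torch.randn(2, 512), torch.randn(2, 4)).shape)  # (2, 7)
```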