2020
DOI: 10.1609/aaai.v34i01.5369
Crisis-DIAS: Towards Multimodal Damage Analysis - Deployment, Challenges and Assessment

Abstract: In times of a disaster, the information available on social media can be useful for several humanitarian tasks as disseminating messages on social media is quick and easily accessible. Disaster damage assessment is inherently multi-modal, yet most existing work on damage identification has focused solely on building generic classification models that rely exclusively on text or image analysis of online social media sessions (e.g., posts). Despite their empirical success, these efforts ignore the multi-modal in…

Cited by 25 publications (18 citation statements); references 0 publications.
“…In the past few years, multimodal sentiment analysis has attracted much attention, spanning tasks such as hate speech detection [12], emotion recognition [16,44], and social media crisis handling [1,2]. Owing to the lack of large-scale datasets, datasets such as MOUD [30], MOSI [43], and MOSEI [44] were constructed from product review and recommendation videos on YouTube, each associated with a sentiment label.…”
Section: Related Work 2.1 Multimodal Sentiment Analysis
confidence: 99%
“…The exploration of multimodality has also received attention in the research community [2,1]. In [2], the authors explore different fusion strategies for multimodal learning. Similarly, in [1], a cross-attention-based network is exploited for multimodal fusion.…”
Section: Multimodality (Image and Text)
confidence: 99%
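The cross-attention fusion mentioned above can be sketched generically. The snippet below is an illustrative assumption, not the cited architecture: text-token features act as queries over image-region features via scaled dot-product attention, and the attended image summary is concatenated with the text features.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(text_feats, image_feats):
    """Text tokens (queries) attend over image regions (keys/values);
    the attended image summary is concatenated with the text features.

    text_feats:  (T, d) text-token features
    image_feats: (R, d) image-region features
    returns:     (T, 2d) fused features
    """
    d_k = text_feats.shape[-1]
    scores = text_feats @ image_feats.T / np.sqrt(d_k)  # (T, R) similarity
    attn = softmax(scores, axis=-1)                     # attention over regions
    attended = attn @ image_feats                       # (T, d) image summary
    return np.concatenate([text_feats, attended], axis=-1)

# toy example: 4 text tokens, 6 image regions, 8-dim features
rng = np.random.default_rng(0)
fused = cross_attention_fuse(rng.normal(size=(4, 8)), rng.normal(size=(6, 8)))
print(fused.shape)  # (4, 16)
```

In practice the queries, keys, and values would each pass through learned projections, and the fused representation would feed a damage-classification head; this sketch keeps only the attention-and-concatenate core.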
“…However, to further enhance human perception of urban and village form, environment, and infrastructure, Wang, Tao, et al. [33] present a new multi-task, multi-modal deep learning framework with automatic loss weighting to assess damage after disaster events. Agarwal, Leekha, et al. [34] proposed a multimodal damage analysis system, called Crisis-DIAS, addressing deployment, challenges, and assessment. To improve the human response to disaster events and extract as much detailed information as possible from limited data, Alam, Ofli, et al. [35] collected a large amount of multi-modal data (images and text) from Twitter, effectively mitigating the lack of labeled image data and improving the ability to respond to and manage disasters.…”
Section: The Deep Learning Image Perception of Cities and Villages
confidence: 99%