2019 IEEE Fifth International Conference on Multimedia Big Data (BigMM)
DOI: 10.1109/bigmm.2019.00-38
Multimodal Analysis of Disaster Tweets

Cited by 37 publications (15 citation statements) | References 33 publications
“…As shown in Table 9, the method using multimodality achieved better results than the method using unimodality. Compared to the models designed by Gautam [37] and Ofli [23], the architecture of the model designed in this paper was simpler and easier to train. Specifically, in the image feature extractor module, we reduced the output size of the second-to-last fully connected layer from the original 1000 to 500, which means that the image feature is simpler and that the multimodal fusion input is simpler as well.…”
Section: Discussion (mentioning)
confidence: 99%
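A minimal sketch, assuming a PyTorch implementation, of the kind of image feature extractor the statement above describes, with the second-to-last fully connected layer shrunk from 1000 to 500 units; the backbone choice, layer names, and sizes are assumptions, not the cited authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models

class ImageFeatureExtractor(nn.Module):
    """Image branch whose penultimate FC layer emits a 500-d vector (assumed)."""
    def __init__(self, feature_dim: int = 500):
        super().__init__()
        backbone = models.vgg16(weights="IMAGENET1K_V1")  # assumed backbone
        self.features = backbone.features
        self.pool = backbone.avgpool
        # Second-to-last FC layer reduced from 1000 to 500 units, so the
        # vector fed into multimodal fusion downstream is smaller.
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, feature_dim), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.pool(self.features(x)))
```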
“…For example, Dao et al [36] presented a context-aware data-fusion method for disaster image retrieval from social media, in which the system combined images with text. In 2019, using CNN, VGG, and long short-term memory (LSTM) models, Gautam et al [37] designed multimodal models to categorize the information found on Twitter, which could further improve the accuracy of the classification task. In 2020, Ofli et al [23] exploited both text and image modalities from social media and mined useful disaster information from them.…”
Section: Related Work (mentioning)
confidence: 99%
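A minimal sketch, in PyTorch, of the VGG-plus-LSTM style of multimodal tweet classifier the statement attributes to Gautam et al.; the embedding size, hidden size, feature dimensions, and class count are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultimodalTweetClassifier(nn.Module):
    """Concatenates a CNN image feature with an LSTM text feature (assumed sizes)."""
    def __init__(self, vocab_size: int, num_classes: int = 2):
        super().__init__()
        vgg = models.vgg16(weights=None)
        vgg.classifier[-1] = nn.Linear(4096, 500)        # 500-d image feature
        self.image_branch = vgg
        self.embed = nn.Embedding(vocab_size, 100)       # word embeddings
        self.lstm = nn.LSTM(100, 128, batch_first=True)  # 128-d text feature
        self.classifier = nn.Linear(500 + 128, num_classes)

    def forward(self, image: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_branch(image)              # (batch, 500)
        _, (h_n, _) = self.lstm(self.embed(token_ids))   # h_n: (1, batch, 128)
        return self.classifier(torch.cat([img_feat, h_n[-1]], dim=1))
```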
“…Under the multimodal analysis, Gautam et al [25] proposed a fusion method for classifying the Twitter data (text and images) of seven disasters in the CrisisMMD dataset [33] into two classes, informative and non-informative, and compared their model with unimodal models based on text-only and image-only modalities. For the text-only modality, they applied N-gram, LSTM, BiLSTM, and CNN+GloVe methods, and for the image-only modality, they used six pre-trained models (VGG-16, VGG-19, ResNet50, InceptionV2, Xception, and DenseNet) for transfer learning.…”
Section: Multimodal: Based on Both Text and Image Modalities (mentioning)
confidence: 99%
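A short sketch of the image-only transfer-learning baseline described above, assuming a PyTorch/torchvision setup: one of the six listed backbones (ResNet50 here) is loaded with pretrained weights, frozen, and given a new two-class head (informative vs. non-informative); the freezing strategy and head are assumptions.

```python
import torch.nn as nn
from torchvision import models

def build_image_baseline(num_classes: int = 2) -> nn.Module:
    """ResNet50 transfer-learning baseline with a fresh two-class head (assumed)."""
    model = models.resnet50(weights="IMAGENET1K_V2")   # pretrained backbone
    for p in model.parameters():                       # freeze pretrained weights
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # trainable head
    return model
```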
“…Under the multimodal category, the most common methods researchers use to fuse the features of the text and image modalities for classifying disaster-related data are early fusion and late fusion [25], [26], [27], [28], [29]. Early fusion, also called feature-based fusion, bases the final decision on a joint vector obtained by concatenating the features extracted from the individual modalities.…”
Section: Introduction (mentioning)
confidence: 99%
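A toy illustration of the two fusion strategies named above, with assumed feature dimensions: early (feature-level) fusion concatenates per-modality feature vectors before a single classifier, while late (decision-level) fusion combines the outputs of per-modality classifiers.

```python
import torch
import torch.nn as nn

text_feat, img_feat = torch.randn(8, 128), torch.randn(8, 500)  # assumed sizes

# Early (feature-level) fusion: one classifier over the concatenated vector.
early_clf = nn.Linear(128 + 500, 2)
early_logits = early_clf(torch.cat([text_feat, img_feat], dim=1))

# Late (decision-level) fusion: separate classifiers, averaged decisions.
text_clf, img_clf = nn.Linear(128, 2), nn.Linear(500, 2)
late_logits = (text_clf(text_feat) + img_clf(img_feat)) / 2
```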