2018
DOI: 10.1007/978-3-319-98932-7_1
Deep Multimodal Classification of Image Types in Biomedical Journal Figures

Cited by 15 publications (6 citation statements).
References 13 publications.
“…To automatically assign class label to individual panels, we build an image classifier. A pre-trained Convolutional Neural Network, VGG16 (Andrearczyk and Müller, 2018; Simonyan and Zisserman, 2015), is used for image classification. To train the classifier, we use the annotated image dataset that was introduced by Lopez et al (2013) based on the Molecular INTeraction database dataset (Licata et al, 2012).…”
Section: Methods
confidence: 99%
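The excerpt above describes the standard transfer-learning pattern: a pre-trained network (VGG16 in the cited work) acts as a frozen feature extractor, and only a lightweight classifier head is trained on the labelled panels. A minimal stdlib sketch of that pattern, with a toy feature function standing in for the VGG16 backbone and a nearest-centroid head standing in for the trained classifier (all names and data here are illustrative, not from the paper):

```python
def pretrained_features(image):
    """Stand-in for a frozen VGG16 backbone: maps an image to a feature vector."""
    # Toy features: mean intensity and intensity range of the flattened image.
    flat = [p for row in image for p in row]
    return (sum(flat) / len(flat), max(flat) - min(flat))

class NearestCentroidHead:
    """Trainable head fitted on frozen features (one centroid per class)."""
    def fit(self, feats, labels):
        self.centroids = {}
        for lab in set(labels):
            pts = [f for f, l in zip(feats, labels) if l == lab]
            self.centroids[lab] = tuple(sum(c) / len(pts) for c in zip(*pts))
        return self

    def predict(self, feat):
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        return min(self.centroids, key=lambda lab: dist(feat, self.centroids[lab]))

# Tiny labelled "panel" dataset: bright vs dark 2x2 images.
train = [([[0.9, 0.8], [0.9, 1.0]], "bright"),
         ([[0.1, 0.0], [0.2, 0.1]], "dark")]
feats = [pretrained_features(img) for img, _ in train]
head = NearestCentroidHead().fit(feats, [lab for _, lab in train])
print(head.predict(pretrained_features([[0.7, 0.9], [0.8, 0.8]])))  # → bright
```

The point of the pattern is that the expensive representation (the backbone) is reused as-is, so only the small head needs labelled training data.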
“…Several challenges on identifying the image types were run in ImageCLEF [15]. The currently best results for the really challenging and very unbalanced data set of over 30 classes reached over 90% [2].…”
Section: Extracting Content From Medical Images
confidence: 99%
“…In PMC, most articles have manually attached MeSH (Medical Subject Headings) terms. Text has been used in most retrieval applications [10] but has also obtained very good results in modality classification [17,2], as it is complementary to visual information. In the case of compound figure detection, a caption with several subparts can also be indicative for the presence of subfigures.…”
Section: Combining Text and Images For Data Analysis
confidence: 99%
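The excerpt above notes that text is complementary to visual information in modality classification. The simplest way to combine the two is late (feature-level) fusion: compute modality-specific feature vectors separately and concatenate them before classification. A minimal sketch, with toy stand-ins for the image and caption feature extractors (not the CNNs or text models used in the cited work):

```python
def image_features(pixels):
    # Toy visual features: mean intensity and pixel count.
    return [sum(pixels) / len(pixels), float(len(pixels))]

def text_features(caption, vocab=("mri", "ct", "graph", "chart")):
    # Toy textual features: bag-of-words counts over a tiny modality vocabulary.
    words = caption.lower().split()
    return [float(words.count(w)) for w in vocab]

def fuse(pixels, caption):
    # Late fusion: concatenate the two modality-specific feature vectors.
    return image_features(pixels) + text_features(caption)

vec = fuse([0.2, 0.4, 0.6], "Axial MRI slice with CT overlay")
print(len(vec))  # 2 visual + 4 textual = 6 dimensions
```

Any downstream classifier then sees a single joint vector, which is why a caption mentioning several subparts can help flag compound figures even when the image alone is ambiguous.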
“…This data set was changed strongly compared to the same task run in 2017 to reduce the diversity on the data and limit the number of compound figures. A subset of clinical figures was automatically obtained from the overall set of 5.8 million PMC figures using a deep multimodal fusion of Convolutional Neural Networks (CNN), described in [2]. In total, the dataset is comprised of 232,305 image-caption pairs split into disjoint training (222,305 pairs) and test (10,000 pairs) sets.…”
Section: Dataset
confidence: 99%
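The disjoint split described in the excerpt (232,305 image-caption pairs partitioned into 222,305 training and 10,000 test pairs, with no pair in both sets) can be sketched at a small scale as follows; the pair IDs and seed here are hypothetical stand-ins:

```python
import random

pairs = [f"pair_{i}" for i in range(1000)]   # stand-in for the 232,305 pairs
rng = random.Random(0)                       # arbitrary fixed seed
test = set(rng.sample(pairs, 100))           # stand-in for the 10,000 test pairs
train = [p for p in pairs if p not in test]  # remaining pairs form the training set

print(len(train), len(test))        # 900 100
print(set(train).isdisjoint(test))  # True
```

Sampling the test set first and defining the training set as its complement guarantees the two sets are disjoint by construction.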