2021
DOI: 10.1016/j.eswa.2020.114320
A semantic approach for document classification using deep neural networks and multimedia knowledge graph

Cited by 22 publications (11 citation statements)
References 32 publications
“…The papers delve into advanced techniques such as neural architecture search for multilingual text classification (Yan et al., 2023), disaster-related multilingual text classification using graph neural networks, and the utilization of graph neural networks in natural language processing (Liu & Wu, 2022). Some contributions focus on semantic parsing via multilingual translation (Procopio et al., 2021), generic frameworks for multilingual short text categorization (Enamoto et al., 2021), and semantic approaches for document classification using deep neural networks and multimedia knowledge graphs (Rinaldi et al., 2021). Additionally, there are studies addressing cross-lingual word sense disambiguation (Janu et al., 2022), semantic graph-based topic modeling for multilingual fake news detection (Mohawesh et al., 2023), and automatic Bangla knowledge graph construction with semantic neural graph filtering (Wasi et al., 2024).…”
Section: Related Work (mentioning)
confidence: 99%
“…For each entity, the crawler finds a description using Wikidata. The text obtained by combining entities and descriptions is analyzed with a semantic-based metric [35] to select the best sense for each term. In the image classification task, features are extracted from each image of a given webpage using convolutional neural networks (CNNs) [38], removing the last layer of the network.…”
Section: The Knowledge Graph (mentioning)
confidence: 99%
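The passage above describes the standard transfer-learning trick of dropping a CNN's final classification layer and using the penultimate-layer activations as a dense image feature vector. A minimal pure-Python sketch of that idea follows; the layer sizes, random weights, and input vector are illustrative assumptions, not values from the cited paper:

```python
import random

random.seed(0)

def make_layer(n_in, n_out):
    """Random dense-layer weights (toy stand-in for trained CNN weights)."""
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, x):
    """Apply one dense layer followed by ReLU."""
    return [max(0.0, sum(w * v for w, v in zip(row, x))) for row in layer]

# A toy three-layer network: the last layer would normally emit class scores.
hidden1 = make_layer(8, 6)
hidden2 = make_layer(6, 4)
classifier = make_layer(4, 2)   # the layer that gets removed

def extract_features(x):
    """Run the network but stop before the classifier: the penultimate
    activations serve as a feature vector for the downstream task."""
    return forward(hidden2, forward(hidden1, x))

image_vector = [0.5] * 8        # stand-in for preprocessed pixel data
features = extract_features(image_vector)
print(len(features))            # a 4-dimensional feature vector
```

In practice the same pattern is applied to a pretrained network (e.g. truncating a torchvision model before its fully connected head) rather than to random weights.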
“…The proper sense of a given term is the one that reaches the best score. In this work, we use the WSD algorithm proposed in [35], an innovative technique that differs from others present in the literature [17,25,27,46]. This WSD algorithm is ontology-based: it uses WordNet 3.0 and computes sense disambiguation with a Dynamic Semantic Network (DSN), which is built by extracting all hypernyms of a pre-selected input term; for each hyponym extracted from this term, all other WordNet semantic relationships are explored and, eventually, a similarity score is computed.…”
Section: The Storytelling Process (mentioning)
confidence: 99%
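The mechanism sketched in that statement, scoring each candidate sense of a word by how strongly its hypernym neighborhood overlaps the context, can be illustrated with a toy pure-Python example. The sense identifiers, the tiny hypernym table, and the overlap-count score are all hypothetical stand-ins for WordNet 3.0 and the DSN scoring of [35]:

```python
# Toy hypernym table: sense id -> direct hypernyms (WordNet stand-in).
HYPERNYMS = {
    "bank.n.01": ["financial_institution", "institution", "organization"],
    "bank.n.02": ["slope", "incline", "geological_formation"],
    "money.n.01": ["currency", "medium_of_exchange", "institution"],
}

def closure(sense):
    """Hypernyms reachable from a sense (one hop in this toy graph)."""
    return set(HYPERNYMS.get(sense, []))

def similarity(sense, context_senses):
    """Score a candidate sense by hypernym overlap with the context."""
    s = closure(sense)
    return sum(len(s & closure(c)) for c in context_senses)

def disambiguate(candidates, context_senses):
    """Pick the candidate sense that reaches the best score."""
    return max(candidates, key=lambda s: similarity(s, context_senses))

best = disambiguate(["bank.n.01", "bank.n.02"], ["money.n.01"])
print(best)  # bank.n.01: the financial sense wins in a money context
```

A real implementation would traverse full hypernym chains and the other WordNet relations (meronymy, similarity, etc.) rather than a one-hop overlap count.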
“…Feature selection aims at reducing the number of variables, keeping only the most relevant ones without changing the initial variables (Gomathi and Karlekar, 2019; Mendez et al., 2019). By contrast, feature extraction changes the initial variables, using the prior knowledge of the ontology to obtain relevant features (Kumar et al., 2020; Radovanovic et al., 2019; Evert et al., 2019; Agarwal et al., 2015; Radinsky et al., 2012; Greenbaum et al., 2019; Liu et al., 2021; Rinaldi et al., 2021; Castillo et al., 2008; Yilmaz, 2017; Hsieh et al., 2013; Rajput and Haider, 2011; Manuja and Garg, 2015; Ahani et al., 2021; Akila et al., 2021; Deepak et al., 2022; Messaoudi et al., 2021; Nayak et al., 2021; Pérez-Pérez et al., 2021; Zhao et al., 2021). In semantic embedding, still within the training-data step, raw data are both refined by semantic knowledge and transformed into vectors to be exploited by neural networks (Chen et al., 2021; Ren et al., 2020; Qiu et al., 2019; Ali et al., 2019; Zhang et al., 2019; Makni and Hendler, 2019; Benarab et al., 2019; Moussallem et al., 2019; Gaur et al., 2019; Jang et al., 2018; Hassanzadeh et al., 2020; Ali et al., 2021; Amador-Domínguez et al., 2021; Alexandridis et al., 2021; Niu et al., 2022), SVM…”
Section: Informed Machine Learning (mentioning)
confidence: 99%
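The semantic-embedding pattern described above, refining raw tokens with semantic knowledge before vectorizing them for a neural network, can be sketched in a few lines. The token-to-sense table and the bag-of-senses vectorization are assumed illustrations, not the pipeline of any cited work:

```python
# Tiny sense inventory: surface token -> canonical sense id (assumed mapping).
SENSES = {"car": "vehicle.n.01", "auto": "vehicle.n.01", "bank": "bank.n.01"}
VOCAB = sorted(set(SENSES.values()))

def embed(tokens):
    """Refine raw tokens to senses, then emit a bag-of-senses count
    vector that a neural network could consume as input."""
    vec = [0] * len(VOCAB)
    for t in tokens:
        sense = SENSES.get(t)
        if sense is not None:
            vec[VOCAB.index(sense)] += 1
    return vec

print(embed(["car", "auto", "bank"]))  # synonyms collapse onto one sense
```

The point of the refinement step is visible even at this scale: "car" and "auto" map to the same sense, so the vector encodes meaning rather than surface form.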
“…Table 8 presents the machine learning algorithms of each informed machine learning category. Articles were mainly published after 2017, and most of them concern neural networks (Hassanzadeh et al., 2020; Gaur et al., 2019; Ali et al., 2019; Jang et al., 2018; Zhang et al., 2019; Ali et al., 2021; Amador-Domínguez et al., 2021; Benarab et al., 2019; Chen et al., 2021; Wang et al., 2021b, 2010; Sabra et al., 2020; Pancerz and Lewicki, 2014; Yilmaz, 2017; Kumar et al., 2020; Rinaldi et al., 2021; Gomathi and Karlekar, 2019; Serafini et al., 2017; Kuang et al., 2021; Chung et al., 2020; Fu et al., 2015; Huang et al., 2019; Abdollahi et al., 2021; Ahani et al., 2021; Akila et al., 2021; Deepak et al., 2022; Messaoudi et al., 2021; Nayak et al., 2021; Zhao et al., 2021), especially Recurrent Neural Networks (Makni and Hendler, 2019; Ren et al., 2020; Moussallem et al., 2019; Zhang et al., 2019; Jang et al., 2018; Ali et al., 2021; Liu et al., 2021; Huang et al., 2019; Alexandridis et al., 2021; Niu…”
Section: Informed Machine Learning (mentioning)
confidence: 99%