2017
DOI: 10.1007/s10489-017-1109-7
Hybridizing metric learning and case-based reasoning for adaptable clickbait detection

Cited by 27 publications (23 citation statements)
References 18 publications
“…A lot of work has been done to combat clickbait titles. Some tools, available on several leading media sites, automatically block such articles [8,9]. Bauhaus-Universität Weimar organised a clickbait challenge 1 to detect clickbait by providing their datasets, which has drawn a lot of attention to this domain of research.…”
Section: Introduction
confidence: 99%
“…A deep generative variational autoencoder model was used for the classification of clickbaits (Zannettou, Chatzis, Papadamou, & Sirivianos, 2018). A new hybrid technique based on deep learning and metric learning, integrated with a case-based reasoning methodology, is proposed for adaptable clickbait detection (López-Sánchez, Herrero, Arrieta, & Corchado, 2018). A deep generative model (Liu, Le, Shu, Wang, & Lee, 2018) is proposed to address the non-availability of the large-scale labelled data required to train supervised learning models.…”
Section: Related Work
confidence: 99%
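The pairing described in that statement, a learned distance metric combined with case-based retrieval, can be illustrated with a minimal sketch. The snippet below is only an assumption-laden illustration of the general pattern, using scikit-learn's NeighborhoodComponentsAnalysis and a k-NN case base over randomly generated placeholder features; it is not the cited paper's pipeline, which learns its text representations with deep neural networks.

```python
# Illustrative sketch only: a generic metric-learning + case-based (nearest-case)
# classifier, NOT the cited authors' actual pipeline. Headline feature vectors are
# assumed to come from some text encoder (here: random placeholders).
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X_case_base = rng.normal(size=(200, 64))    # hypothetical headline embeddings
y_case_base = rng.integers(0, 2, size=200)  # 1 = clickbait, 0 = legitimate

# Learn a linear metric that pulls same-class cases together, then retrieve the
# nearest stored cases to label a new headline (the case-based reasoning step).
clf = Pipeline([
    ("metric", NeighborhoodComponentsAnalysis(n_components=16, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
clf.fit(X_case_base, y_case_base)

x_new = rng.normal(size=(1, 64))            # embedding of an unseen headline
print(clf.predict(x_new))                   # nearest-case decision
```

In a setup like this, adaptability comes from the case base itself: newly labelled headlines can be appended as cases and the metric periodically refitted, which is the usual way case-based reasoning systems absorb new evidence.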
“…The learned embeddings from the embedding layer should not be confused with the embeddings that GloVe [26] or word2vec [27] learn. Those pretrained embeddings are trained to capture semantic similarity, whilst the Embedding layer in this work outputs embeddings configured purely for classification on the dataset itself [28].…”
Section: -D CNN Architecture
confidence: 99%
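As a rough illustration of that distinction, the sketch below trains an Embedding layer jointly with a small 1-D CNN classifier, so the word vectors are shaped by the classification loss rather than by a pretrained semantic objective such as GloVe or word2vec. All sizes and layer choices are hypothetical and are not taken from the cited work.

```python
# Minimal sketch (hypothetical sizes): an Embedding layer trained from scratch,
# jointly with a 1-D CNN classifier, so the learned vectors serve classification
# on this dataset rather than general semantic similarity (GloVe/word2vec).
import tensorflow as tf

vocab_size, seq_len, embed_dim = 20000, 50, 128  # assumed values, not from the paper

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,), dtype="int32"),   # token id sequences
    tf.keras.layers.Embedding(vocab_size, embed_dim),  # randomly initialised, trained end-to-end
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # clickbait probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

Swapping the Embedding layer's randomly initialised weights for pretrained GloVe or word2vec vectors (optionally frozen) is the usual alternative when corpus-general, semantically oriented embeddings are preferred over task-specific ones.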