2021 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata52589.2021.9671592
Multi-Source Domain Adaptation with Weak Supervision for Early Fake News Detection

Cited by 14 publications (14 citation statements) · References 18 publications
“…have not been trained. Our work, as well as recent research in generalizability [44], weak labeling [37], foundation models [7], and rapid fake news detection [26] falls in this period. Our generalizability experiments in Section 3 show that fine-tuned models, while having lower performance on unseen data, do have better accuracy on some subsets.…”
Section: Dataset (mentioning)
confidence: 78%
“…Usually, this performance degradation is detected, and a new model is trained on new labeled data. Recently, the velocity and size of new data make obtaining labeled data quickly and at scale very expensive [26]. Updating models during data domain shift requires relying on weak labels, authoritative sources, and hierarchical models [26,37,38].…”
Section: Motivation (mentioning)
confidence: 99%
“…Then, a new classifier can be trained on this invariant representation for both source and target samples. Domain invariance is scalable to multiple source domains by fusing their latent representations with an adversarial encoder-discriminator framework (Li et al., 2021). For multi-source domain adaptation (MDA), classifiers for each source have different weights: static weights based on distance (Li et al., 2021) or per-sample weights based on the l2 norm (Suprem et al., 2020).…”
Section: Multi-domain Adaptation (mentioning)
confidence: 99%
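The per-sample weighting described in the statement above can be illustrated with a small sketch: each source domain contributes its own classifier, and a target sample's prediction is a weighted combination whose weights grow as the sample's latent code gets closer (in l2 distance) to that source. The centroid-based distances, the softmax temperature, and all names below are illustrative assumptions, not the exact schemes of Li et al. (2021) or Suprem et al. (2020).

```python
import numpy as np

# Minimal sketch of per-sample weighting across source-specific classifiers.
# Assumptions (not from the cited papers): each source k exposes a latent
# centroid mu_k and a classifier f_k returning P(fake | z); weights for a
# target sample are an inverse-distance softmax over l2 distances in a
# shared latent space.

def per_sample_weights(z, centroids, temperature=1.0):
    """Weight each source by how close the latent code z is to that
    source's centroid (smaller l2 distance -> larger weight)."""
    dists = np.array([np.linalg.norm(z - mu) for mu in centroids])
    logits = -dists / temperature        # closer sources get higher logits
    logits -= logits.max()               # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def mda_predict(z, centroids, classifiers):
    """Combine per-source predictions with per-sample weights."""
    w = per_sample_weights(z, centroids)
    preds = np.array([f(z) for f in classifiers])  # each f returns P(fake | z)
    return float(np.dot(w, preds))

# Toy usage with two hypothetical sources sharing a 2-D latent space.
if __name__ == "__main__":
    centroids = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
    classifiers = [lambda z: 0.9, lambda z: 0.2]   # stand-in source classifiers
    z = np.array([0.5, 0.2])                        # latent code of a target article
    print(mda_predict(z, centroids, classifiers))   # dominated by the nearer source
```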
“…To catch up with concept drift, the classification models need to be expanded to cover a wide variety of data sets (Kaliyar et al., 2021; Li et al., 2021; Suprem and Pu, 2022), or augmented with new knowledge on true novelty, such as the appearance of the Omicron variant (Pu et al., 2020). In this paper, we assume the availability of domain-specific authoritative sources such as CDC and WHO that provide trusted up-to-date information on the pandemic.…”
Section: Introduction (mentioning)
confidence: 99%