2021
DOI: 10.1007/s11042-021-11006-8
Adaptive Salp swarm optimization algorithms with inertia weights for novel fake news detection model in online social media

Cited by 35 publications (13 citation statements) · References 28 publications
“…As shown in Fig. 7, our proposed model underperforms the best model, achieving 94% accuracy, about 0.7% lower than the highest accuracy claimed by the best algorithm, which is 95% on this dataset [23]. Our model is more consistent across every epoch when tested on this dataset, but it cannot beat the accuracy of the existing model on this dataset, which we can improve in future work.…”
Section: ISOT (mentioning)
confidence: 72%
“…On the FakeNewsNet dataset, the BERT approach used in [35] gives the maximum accuracy among all the models. On the BUZZFEED dataset, the ASSO-OSSIW approach used in [23] gives the maximum accuracy among all the models; on the Weibo dataset, the CNN approach used in [42] gives consistent accuracy in every epoch; on the LIAR dataset, the NLP approach used in [43] gives the maximum accuracy; and on the PHEME dataset, the MN approach used in [42] gives the maximum accuracy among all the models.…”
Section: Related Work (mentioning)
confidence: 99%
“…To investigate the performance of OBDSSA, this subsection compares it with several SSA variants on a set of 18 widely used benchmarks listed in Table 4. The compared algorithms include the original SSA [37], the self‐adaptive SSA algorithm (ASSA) [51], the SSA algorithm with random replacement tactic and double adaptive weighting mechanism (RDSSA) [53], the chaotic mechanism‐based SSA algorithm (CSSA) [59], the enhanced SSA algorithm (ESSA) [60], the PSO‐based SSA algorithm (SSAPSO) [61], the lifetime mechanism‐based SSA algorithm (LSSA) [62], the multi‐subpopulation‐based SSA algorithm (MSNSSA) [63], the opposition‐based learning enhanced SSA algorithm (ISSA) [64], the Gaussian‐SSA algorithm (GSSA) [65], the enhanced opposition‐based SSA algorithm (OBSSA) [66], the adaptive SSA algorithm with non‐linear coefficient decreasing inertia weight (ASSO) [67], and the hybrid enhanced whale optimisation SSA algorithm (IWOSSA) [68]. In this experiment, mean value (Mean) and standard deviation (Std) were employed to evaluate the performance of the algorithms, and the dimension of the benchmark functions was set to 100.…”
Section: Experimental Results and Analysis (mentioning)
confidence: 99%
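The variants above all modify how the salp chain balances exploration and exploitation; the ASSO variant cited here does so via an inertia weight with a non-linearly decreasing coefficient. As a minimal sketch of that idea, the following Python implements the standard SSA leader/follower updates and applies an inertia weight to the follower step. The quadratic decay schedule `w(t)` and all parameter values are illustrative assumptions, not the exact coefficient from [67].

```python
import numpy as np

def sphere(x):
    """Benchmark objective: minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def ssa_inertia(obj, dim=10, n=30, lb=-10.0, ub=10.0, iters=200, seed=0):
    """Salp Swarm Algorithm with an inertia-weighted follower update.

    Sketch only: leader/follower equations follow the original SSA;
    the inertia schedule w(t) is an assumed non-linear (quadratic) decay.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))            # salp chain positions
    fit = np.array([obj(x) for x in X])
    best = X[fit.argmin()].copy()                # food source F
    best_f = fit.min()
    w_max, w_min = 0.9, 0.4                      # assumed inertia bounds
    for t in range(iters):
        c1 = 2 * np.exp(-(4 * t / iters) ** 2)   # SSA exploration coefficient
        w = w_max - (w_max - w_min) * (t / iters) ** 2  # assumed decay
        for i in range(n):
            if i == 0:                           # leader moves around F
                c2, c3 = rng.random(dim), rng.random(dim)
                step = c1 * ((ub - lb) * c2 + lb)
                X[i] = np.where(c3 >= 0.5, best + step, best - step)
            else:                                # inertia-weighted follower
                X[i] = w * 0.5 * (X[i] + X[i - 1])
            X[i] = np.clip(X[i], lb, ub)
            f = obj(X[i])
            if f < best_f:
                best_f, best = f, X[i].copy()
    return best, best_f

best, best_f = ssa_inertia(sphere)
```

Decreasing `w` shrinks the follower step over time, so the chain explores broadly early on and contracts around the food source late in the run, mirroring the intent of the inertia-weight variants compared in the excerpt above.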
“…In OGWO, a was nonlinearly decreased from 2 to 0. The compared algorithms include the multi-subpopulation-based SSA algorithm (MSNSSA) [63], the opposition-based learning enhanced SSA algorithm (ISSA) [64], the Gaussian-SSA algorithm (GSSA) [65], the enhanced opposition-based SSA algorithm (OBSSA) [66], the adaptive SSA algorithm with nonlinear coefficient decreasing inertia weight (ASSO) [67], and the hybrid enhanced whale optimisation SSA algorithm (IWOSSA) [68]. In this experiment, mean value (Mean) and standard deviation (Std) were employed to evaluate the performance of the algorithms, and the dimension of the benchmark functions was set to 100.…”
Section: Comparison With SSA and Improved SSA (mentioning)
confidence: 99%
“…Compared with shallow models, deep learning methods offer considerable advantages on large datasets in terms of interpretability, learning capacity, feature representation, number of parameters, and running time. Similarly, some miscellaneous techniques are used for detecting fake news, such as the reverse-tracking approach (Ko et al., June 2019), the honeycomb framework (Talwar et al. 2020), and ASSO-OSIW and GWO (Ozbay and Alatas 2021). The most used techniques, with their advantages and challenges, are listed in Table 3.…”
Section: Algorithmic Classification (mentioning)
confidence: 99%