2019 International Conference on Machine Learning and Cybernetics (ICMLC) 2019
DOI: 10.1109/icmlc48188.2019.8949309
An Empirical Study on the Classification of Chinese News Articles by Machine Learning and Deep Learning Techniques

Cited by 4 publications (1 citation statement) · References 11 publications
“…Bachman et al [42] proposed a contrastive representation learning approach that maximizes mutual information between features computed from multiple views of the data. Huang et al [44] presented a contrastive learning method that discovers sample-based neighborhoods to facilitate feature representation, emphasizing the importance of discriminative feature extraction during training. Zhuang et al [45] introduced a contrastive idea that trains embedding functions using a local aggregation metric, allowing similar data instances to cluster while separating dissimilar ones. These formulations of contrastive loss all emphasize the importance of capturing meaningful, discriminative representations from data.…”
Section: Contrastive Loss
Confidence: 99%
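The objectives summarized in the citation statement share one core mechanism: pull embeddings of matched views together while pushing all other samples apart. As an illustration only (not the exact formulation of any of the cited papers [42, 44, 45]), a minimal InfoNCE-style contrastive loss can be sketched as:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss (illustrative sketch).

    Row i of `anchors` and row i of `positives` are embeddings of the
    same sample under two views; every other row serves as a negative.
    """
    # L2-normalize so that dot products are cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    # Pairwise similarity logits, sharpened by the temperature.
    logits = a @ p.T / temperature

    # Cross-entropy against the diagonal: the matching pair for
    # anchor i is positive i; all off-diagonal entries are negatives.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Under this loss, perfectly aligned view pairs score lower than mismatched ones, which is the clustering-of-similar / separation-of-dissimilar behavior the statement attributes to these methods; the function name and signature here are assumptions for illustration.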