2023
DOI: 10.1016/j.patcog.2023.109448
Dual-channel graph contrastive learning for self-supervised graph-level representation learning

Cited by 8 publications (1 citation statement)
References 70 publications
“…They propose to derive positive and negative contrastive pairs from citation triplets and demonstrate the power of mining hard negatives. MICoL and CitationSum (Luo et al., 2023) apply contrastive learning to multi-label classification and summarization of scientific papers, respectively. As for multi-task learning, Luan et al. (2018) propose a multi-task scientific knowledge graph construction framework that jointly identifies entities, relations, and coreference; other work treats multiple biomedical named entity recognition datasets (each annotated with different entity types) as multiple tasks so that they mutually benefit.…”
Section: Related Work
Confidence: 99%