2019
DOI: 10.2196/preprints.17643
Preprint

A Graph Convolutional Network–Based Method for Chemical-Protein Interaction Extraction: Algorithm Development (Preprint)

Abstract: BACKGROUND Extracting the interactions between chemicals and proteins from the biomedical literature is important for many biomedical tasks such as drug discovery, precision medicine, and knowledge graph construction. Several computational methods have been proposed for automatic chemical-protein interaction (CPI) extraction. However, the majority of these proposed models cannot effectively learn semantic and syntactic information from complex sentences in biomedical texts. …


Cited by 1 publication (2 citation statements). References 25 publications.
“…For a feature matrix $H^{(t)} \in \mathbb{R}^{n \times m}$ at the $t$-th layer, the canonical graph convolution scheme in GCN (Kipf & Welling, 2017) gives $H^{(t+1)} = \bar{A} H^{(t)}$. Later, DGC (Wang et al., 2021b) generalizes it to $H^{(t+1)} = \big((1 - \Delta t) I + \Delta t\,\bar{A}\big) H^{(t)}$ with a flexible step size $\Delta t$ for the finite difference, which reduces to the canonical form when $\Delta t = 1$. Comparing it to the alignment update rule in Eq.…”
Section: Connections Between the Two Paradigms (mentioning)
confidence: 99%
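To make the two update rules in this statement concrete, here is a minimal NumPy sketch (not from the cited papers; the feature transforms, biases, and nonlinearities of a full GCN layer are omitted, and $\bar{A}$ is assumed to be the symmetrically normalized adjacency with self-loops, as in Kipf & Welling, 2017):

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops,
    A_bar = D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_propagate(H, A_bar):
    """Canonical graph convolution: H^{(t+1)} = A_bar @ H^{(t)}."""
    return A_bar @ H

def dgc_propagate(H, A_bar, dt):
    """DGC step: H^{(t+1)} = ((1 - dt) I + dt * A_bar) @ H^{(t)}."""
    return (1.0 - dt) * H + dt * (A_bar @ H)

# With dt = 1 the DGC step reduces to the canonical GCN step.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(3, 4))
A_bar = normalized_adjacency(A)
assert np.allclose(dgc_propagate(H, A_bar, dt=1.0), gcn_propagate(H, A_bar))
```

A step size $\Delta t < 1$ interpolates between keeping the current features and fully diffusing them over the graph, which is what makes the DGC form a finite-difference generalization of the canonical rule.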
“…Message Passing Graph Neural Networks (MP-GNNs) are the prevailing designs in modern Graph Neural Networks (GNNs), including numerous variants like GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), and even the Transformers (Vaswani et al., 2017). There is a vast literature studying their diffusion dynamics and representation power (Oono & Suzuki, 2020; Wang et al., 2021b; Li et al., 2022; Dong et al., 2021; Xu et al., 2019; Chen et al., 2022). Therefore, establishing a connection between contrastive learning (CL) and MP-GNNs will hopefully bring new theoretical and empirical insights for understanding and designing contrastive learning methods.…”
Section: Introduction (mentioning)
confidence: 99%
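As a rough illustration of the message-passing template these variants share, the sketch below uses plain mean aggregation over neighbors followed by a shared linear update; this is an illustrative assumption only, since GCN, GAT, and Transformer-style attention each realize the aggregation and edge weighting differently:

```python
import numpy as np

def message_passing_layer(H, neighbors, W):
    """One generic MP-GNN layer: every node AGGREGATEs its
    neighbors' features (mean here, including a self-loop),
    then UPDATEs them with a shared linear map and a ReLU."""
    H_new = np.empty((H.shape[0], W.shape[1]))
    for v in range(H.shape[0]):
        nbrs = neighbors[v] + [v]            # self-loop
        H_new[v] = H[nbrs].mean(axis=0) @ W  # aggregate, then transform
    return np.maximum(H_new, 0.0)

# Toy usage on a 3-node path graph (0 - 1 - 2).
neighbors = {0: [1], 1: [0, 2], 2: [1]}
H = np.ones((3, 4))
W = np.eye(4)
out = message_passing_layer(H, neighbors, W)  # shape (3, 4)
```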