2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00936
DeepGCNs: Can GCNs Go As Deep As CNNs?

Cited by 1,045 publications (684 citation statements)
References 30 publications
“…L2 regularization and dropout were found to be comparable to early stopping on validation, so early stopping was used instead. Model training was found to diverge without residual connections, which others have also observed [59]. Final layer numbers are K = 4, L = 3, J = 3.…”
Section: Model
confidence: 92%
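The stabilizing role of the residual connection described in this excerpt (the central idea of the cited DeepGCNs paper [59]) can be made concrete with a minimal sketch. PyTorch is an assumption, as is the dense normalized adjacency `a_hat` and the class name `ResGCNLayer`; this is illustrative, not the citing authors' implementation:

```python
import torch
import torch.nn as nn

class ResGCNLayer(nn.Module):
    """One GCN layer with a residual (skip) connection, in the spirit of
    DeepGCNs [59]. Assumes a precomputed, normalized dense adjacency a_hat."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x, a_hat):
        # Standard GCN propagation: aggregate neighbor features, then transform.
        h = self.act(self.linear(a_hat @ x))
        # Residual connection: without this skip, deep GCN stacks tend to
        # diverge during training, matching the observation quoted above.
        return x + h
```

Because the layer output is `x + F(x)`, stacking many such layers keeps an identity path through the network, which is what allows training to converge where a plain stack diverges.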
“…To capture higher-order dependencies, multiple dynamic graph convolution layers are chained and connected densely within a feature extraction unit [16,20,22,36]:…”
Section: Representation Encoder With Differentiable Pooling
confidence: 99%
“…One of the key factors for using the Residual Network (ResNet) is its scalability: the network's complexity, and hence its capacity, can be increased by adding additional layers. Unlike the classic Convolutional Neural Network (CNN), the Residual Network does not face the vanishing gradient problem [27].…”
Section: A Transfer Learning
confidence: 99%
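The reason the skip connection sidesteps vanishing gradients is visible in a minimal residual block: the gradient of `F(x) + x` always contains an identity term, so it never has to pass solely through the convolutional path. A sketch in PyTorch (an assumption; `ResidualBlock` and its channel layout are illustrative, not the cited architecture):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal ResNet-style block: the identity skip gives gradients a direct
    path to earlier layers, mitigating vanishing gradients in deep stacks."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        # y = F(x) + x: dy/dx includes the identity, so backpropagated
        # gradients reach early layers even when the conv path saturates.
        return self.act(self.body(x) + x)
```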