“…Additionally, we examine the effectiveness of our dual-attention GCN by comparing against ablated variants of our own model, and we experiment with the hop size K to determine an appropriate value.

Dataset   # Train   # Words   # Test   # Nodes   # Classes
20NG      11314     42757     7532     61603     20
MR        7108      18764     3554     29426     2
Ohsumed   3357      14157     4043     21557     23
R52       6532      8892      2568     17992     52
R8        5485      7688      2189     15362     8

We compare our proposed dual-attention GCN with multiple state-of-the-art text classification and embedding methods, following prior work, including TF-IDF+LR [26], CNN [12], LSTM [16], Bi-LSTM, PV-DBOW [14], PV-DM [14], PTE [22], fastText [11], SWEM [21], LEAM [19], Graph-CNN-C [6], Graph-CNN-S [4], Graph-CNN-F [9], and TextGCN. TF-IDF+LR is a bag-of-words model with term frequency-inverse document frequency (TF-IDF) weights and a logistic regression classifier.…”
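As a minimal sketch of the TF-IDF weighting behind the TF-IDF+LR baseline, the snippet below computes raw term frequency times inverse document frequency, idf(t) = log(N / df(t)). This is an illustrative variant only; the exact smoothing and normalization used in [26] may differ, and the function name `tfidf_weights` is ours.

```python
import math
from collections import Counter

def tfidf_weights(docs):
    """Compute TF-IDF weights for a list of tokenized documents.

    tf(t, d)  = raw count of term t in document d
    idf(t)    = log(N / df(t)), where df(t) is the number of
                documents containing t (no smoothing in this sketch).
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return weights

docs = [["graph", "text"], ["graph", "node"], ["text", "text", "class"]]
w = tfidf_weights(docs)
# "graph" occurs in 2 of 3 documents, so w[0]["graph"] = 1 * log(3/2);
# "node" occurs in 1 document, so w[1]["node"] = 1 * log(3).
```

In the full baseline, each document's weight vector over the vocabulary would then be fed to a logistic regression classifier as its feature representation.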