2021
DOI: 10.1016/j.neucom.2021.08.078

Automatic topic labeling using graph-based pre-trained neural embedding

Cited by 44 publications (67 citation statements)
References 8 publications
Citation statements (ordered by relevance):
“…This work proposes aggregating the related information and adding a cross-attention neural network layer. From the experimental results based on pre-trained language models such as DeBERTa-V3-Large [4], RoBERTa-Large [5], and BERT-for-Patent [6], we found that the proposed methods perform better than these general methods, with an average improvement of 2-4 percent, further improving the accuracy and performance of downstream tasks.…”
Section: Introduction (mentioning)
Confidence: 93%
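The excerpt above only names the idea: aggregate related information and fuse it with the main input through a cross-attention layer on top of a pre-trained encoder. The sketch below illustrates that general pattern; it is not the citing paper's code, and the checkpoint name, pooling choice, and head count are assumptions for illustration.

# Sketch: one cross-attention layer that lets the main document attend to
# "related information" encoded by the same pre-trained model.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class CrossAttentionClassifier(nn.Module):
    def __init__(self, model_name="roberta-large", num_labels=2, num_heads=8):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Query: main document tokens; key/value: related-information tokens.
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads, batch_first=True)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, doc_inputs, rel_inputs):
        doc = self.encoder(**doc_inputs).last_hidden_state  # (B, L_doc, H)
        rel = self.encoder(**rel_inputs).last_hidden_state  # (B, L_rel, H)
        fused, _ = self.cross_attn(query=doc, key=rel, value=rel)
        # Pool the fused sequence (here simply the first token) and classify.
        return self.classifier(fused[:, 0])

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = CrossAttentionClassifier()
doc = tokenizer(["a patent abstract"], return_tensors="pt", padding=True)
rel = tokenizer(["related claim text"], return_tensors="pt", padding=True)
logits = model(doc, rel)

In this sketch the same encoder embeds both inputs; swapping in DeBERTa-V3-Large or BERT-for-Patent only changes the checkpoint name.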
“…They found that implementing adversarial attack using the FGM proposed by Goodfellow et al. (2014) on the DeBERTa-large transformer (He et al., 2021) with a CRF layer performed the best on the development set. They trained 9 variations of this model using different random initializations to form their ensemble and leveraged majority voting for the final prediction.…” (Footnote 4: the Sub-task 2 baseline is available at: https://github.com/nl4opt/nl4opt-subtask2-baseline)
Section: Solutions (mentioning)
Confidence: 99%
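FGM-style adversarial training, as referenced in the excerpt above, perturbs the word-embedding weights along the gradient direction, adds the loss on the perturbed input, and then restores the weights. The sketch below shows the common pattern; it is a generic illustration rather than the cited team's code, and the epsilon value and embedding-layer name are assumptions.

# Sketch: FGM adversarial perturbation of the embedding layer during training.
import torch

class FGM:
    def __init__(self, model, emb_name="word_embeddings", epsilon=1.0):
        self.model, self.emb_name, self.epsilon = model, emb_name, epsilon
        self.backup = {}

    def attack(self):
        # Add a normalized gradient step to the embedding weights.
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0:
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self):
        # Undo the perturbation after the adversarial backward pass.
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# Typical training step with a token-classification model (e.g. DeBERTa + CRF):
#   loss = model(batch).loss; loss.backward()          # gradients on clean input
#   fgm.attack()                                       # perturb embeddings
#   adv_loss = model(batch).loss; adv_loss.backward()  # accumulate adversarial gradients
#   fgm.restore()                                      # restore original embeddings
#   optimizer.step(); optimizer.zero_grad()

The ensemble described in the excerpt then repeats this training 9 times with different random seeds and takes a majority vote over the per-token predictions.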
“…For the reranking we actually used a few different rerankers compared to the known languages track. For Yoruba, we add three models: a) the monoT0pp over the translated corpus we had used in English on the known languages track; b) RankT5 using ByT5 [14] as its pretrained language model and trained solely on mMARCO; and c) a cross-encoding reranker using DeBERTa [7] as its pretrained language model. Results are presented in Table 10.…”
Section: Rerankers (mentioning)
Confidence: 99%
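Option (c) in the excerpt above is a cross-encoding reranker: query and passage are concatenated and scored jointly by a sequence-classification head on top of DeBERTa. The sketch below shows that scoring pattern with Hugging Face transformers; the checkpoint name and the single-logit head are assumptions, and a usable reranker would first be fine-tuned on relevance data (e.g. mMARCO, as the excerpt describes for RankT5).

# Sketch: scoring (query, passage) pairs with a DeBERTa cross-encoder head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "microsoft/deberta-v3-base"  # assumed checkpoint; the cited work may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
model.eval()

def rerank(query, passages):
    # Encode each (query, passage) pair jointly and sort by the relevance logit.
    enc = tokenizer([query] * len(passages), passages,
                    padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = model(**enc).logits.squeeze(-1)
    order = torch.argsort(scores, descending=True).tolist()
    return [(passages[i], scores[i].item()) for i in order]

print(rerank("who wrote Things Fall Apart?",
             ["Chinua Achebe published the novel in 1958.",
              "Yoruba is a language spoken in West Africa."]))

Because the cross-encoder reads the full pair at once, it is typically applied only to the top candidates returned by a cheaper first-stage retriever, which matches its use as a reranker in the excerpt.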