2021
DOI: 10.48550/arxiv.2105.03791
Preprint

Enhancing Transformers with Gradient Boosted Decision Trees for NLI Fine-Tuning

Cited by 1 publication (1 citation statement)
References 0 publications

“…Previous studies have demonstrated benefits of utilizing learned representations from Transformer Networks as inputs for gradient boosting models, leading to improved outcomes compared to directly using the predictions of a Transformer Network 19,36. We thus follow a similar approach here…”
Section: ProSmith Feeds the Learned Representations to Gradient Boost…
confidence: 99%
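
The cited statement describes feeding a Transformer's learned representations into a gradient boosting model rather than relying on the Transformer's own predictions. The following is a minimal sketch of that general pattern, assuming a Hugging Face BERT encoder ("bert-base-uncased"), [CLS]-token pooling, scikit-learn's GradientBoostingClassifier, and toy NLI sentence pairs; these are illustrative choices, not the exact pipeline of the cited paper or of ProSmith.

# Sketch: use a Transformer's learned [CLS] representations as input features
# for a gradient boosted decision tree classifier (illustrative setup only).
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.ensemble import GradientBoostingClassifier

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def embed(premises, hypotheses, batch_size=16):
    """Encode premise/hypothesis pairs and return their [CLS] embeddings."""
    feats = []
    with torch.no_grad():
        for i in range(0, len(premises), batch_size):
            batch = tokenizer(
                premises[i:i + batch_size],
                hypotheses[i:i + batch_size],
                padding=True, truncation=True, return_tensors="pt",
            )
            out = encoder(**batch)
            # Take the [CLS] token's hidden state as the pair representation.
            feats.append(out.last_hidden_state[:, 0, :].cpu().numpy())
    return np.vstack(feats)

# Toy NLI-style data (1 = entailment, 0 = contradiction), for illustration only.
premises = ["A man is playing a guitar.", "A dog sleeps on the couch."]
hypotheses = ["Someone is making music.", "The dog is running outside."]
labels = [1, 0]

# Fit the gradient boosted trees on the learned representations,
# instead of using the Transformer's classification head directly.
X = embed(premises, hypotheses)
gbdt = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
gbdt.fit(X, labels)
print(gbdt.predict(embed(["A woman sings."], ["A person is singing."])))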