Proceedings of the Conference Recent Advances in Natural Language Processing - Deep Learning for Natural Language Processing, 2021
DOI: 10.26615/978-954-452-072-4_068
Multiple Teacher Distillation for Robust and Greener Models

Cited by 1 publication (1 citation statement)
References 21 publications
“…The reason could be that relying on a single metric can introduce a biased preference in models and a lack of diversity in the captured hallucinations. In general, multiple teacher models lead to a robust, unbiased process (Ilichev et al., 2021). Using diverse metrics in mFACT's training helps the classifier detect various hallucination types - our inverse transfer experiments (Table 2) also show mFACT's promising correlations with both intrinsic and extrinsic hallucination metrics.…”
Section: A8 Prompts Used for Multilingual LLMs' Summarisation
confidence: 81%
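The citing passage appeals to the general idea behind the paper: distilling a student from several teachers tends to be more robust and less biased than relying on a single teacher. A minimal sketch of one common way to realise this, assuming a PyTorch setup where the teachers' softened distributions are averaged with equal weights (the function name, the equal weighting, and the loss mix are illustrative assumptions, not the paper's exact method):

```python
import torch
import torch.nn.functional as F

def multi_teacher_distill_loss(student_logits, teacher_logits_list, labels,
                               temperature=2.0, alpha=0.5):
    # Average the teachers' softened class distributions. Equal weighting is
    # an assumption here; the cited work may combine teachers differently.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    # KL divergence between the student's softened predictions and the
    # averaged teacher target, scaled by T^2 as in standard distillation.
    distill = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        teacher_probs,
        reduction="batchmean",
    ) * temperature ** 2
    # Ordinary cross-entropy on the gold labels keeps the student grounded.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1 - alpha) * ce

# Example usage with three hypothetical teachers on a 4-class task.
student_logits = torch.randn(8, 4)
teachers = [torch.randn(8, 4) for _ in range(3)]
labels = torch.randint(0, 4, (8,))
loss = multi_teacher_distill_loss(student_logits, teachers, labels)
```

Averaging the teachers' distributions is only the simplest aggregation; weighted or per-example teacher selection schemes serve the same goal of reducing any single teacher's bias.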