“…The reason could be that relying on a single metric introduces biased preferences in models and reduces the diversity of captured hallucinations. In general, multiple teacher models lead to a more robust, unbiased process (Ilichev et al., 2021). Using diverse metrics in mFACT's training helps the classifier detect various hallucination types; our inverse transfer experiments (Table 2) also show mFACT's promising correlations with both intrinsic and extrinsic hallucination metrics.…”
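The passage above describes distilling several English faithfulness metrics into a single training signal for the mFACT classifier. A minimal sketch of one way such aggregation could work is below; the simple averaging and the 0.5 threshold are illustrative assumptions, not the paper's exact recipe, and the metric names are hypothetical.

```python
def aggregate_faithfulness(scores, threshold=0.5):
    """Combine per-metric faithfulness scores into one binary training label.

    `scores` maps metric names to values in [0, 1]. Averaging the teacher
    metrics is an illustrative choice; the paper's aggregation may differ.
    """
    mean = sum(scores.values()) / len(scores)
    return 1 if mean >= threshold else 0


# Hypothetical scores from three English teacher metrics for one summary:
label = aggregate_faithfulness(
    {"qa_metric": 0.8, "nli_metric": 0.6, "fact_overlap": 0.7}
)
```

Using several teachers this way means no single metric's bias dominates the labels the classifier learns from, which matches the motivation quoted above.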
Section: A8 Prompts Used For Multilingual LLMs' Summarisation
Hallucinations pose a significant challenge to the reliability of neural models for abstractive summarisation. While automatically generated summaries may be fluent, they often lack faithfulness to the original document. This issue becomes even more pronounced in low-resource languages, where summarisation requires cross-lingual transfer. With existing faithfulness metrics focusing on English, even measuring the extent of this phenomenon in cross-lingual settings is hard. To address this, we first develop a novel metric, mFACT, evaluating the faithfulness of non-English summaries, leveraging translation-based transfer from multiple English faithfulness metrics. Through extensive experiments in multiple languages, we demonstrate that mFACT is best suited to detect hallucinations compared to alternative metrics. With mFACT, we assess a broad range of multilingual large language models, and find that they all tend to hallucinate often in languages other than English. We then propose a simple but effective method to reduce hallucinations in cross-lingual transfer, which weights the loss of each training example by its faithfulness score. This method drastically increases both performance and faithfulness according to both automatic and human evaluation when compared to strong baselines for cross-lingual transfer such as MAD-X. Our code and dataset are available at https://github.com/yfqiu-nlp/mfact-summ.
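The abstract's proposed method weights each training example's loss by its faithfulness score. A minimal sketch of that idea, in plain Python over precomputed per-example losses, might look as follows; treating the scores as direct multiplicative weights is an illustrative assumption, and the paper may normalise or rescale them differently.

```python
def faithfulness_weighted_loss(per_example_nll, faithfulness_scores):
    """Batch loss where each example is weighted by its faithfulness score.

    per_example_nll: per-example negative log-likelihoods from the
    summarisation model.
    faithfulness_scores: scores in [0, 1] (e.g. from a classifier such as
    mFACT), so unfaithful examples contribute less to the batch loss.
    """
    assert len(per_example_nll) == len(faithfulness_scores)
    weighted = [s * l for s, l in zip(faithfulness_scores, per_example_nll)]
    return sum(weighted) / len(weighted)


# A fully faithful example (score 1.0) keeps its loss; a half-faithful
# example (score 0.5) is down-weighted:
batch_loss = faithfulness_weighted_loss([2.0, 4.0], [1.0, 0.5])
```

The effect is that noisy, hallucination-prone training examples exert less pull on the model's parameters during cross-lingual fine-tuning.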