Findings of the Association for Computational Linguistics: ACL 2023
DOI: 10.18653/v1/2023.findings-acl.525
Multi-lingual and Multi-cultural Figurative Language Understanding

Cited by 2 publications (2 citation statements)
References 0 publications
“…In cross-cultural communication, cultural differences cause misunderstandings of speakers' intentions (Thomas, 1983; Tannen, 1985; Wierzbicka, 1991). Recent work in NLP has studied differences in time expressions (Vilares and Gómez-Rodríguez, 2018; Shwartz, 2022), perspectives over news topics (Gutiérrez et al., 2016), pragmatic reference of nouns (Shaikh et al., 2023), culture-specific entities (Peskov et al., 2021; Yao et al., 2023), and figurative language (Kabra et al., 2023; Liu et al., 2023b). Our work connects the two lines of research by investigating how cultural knowledge affects language understanding.…”
Section: Cultural Factors and Norms
confidence: 99%
“…This generalization capability is further improved with various tuning methods, such as instruction tuning (Sanh et al., 2022; Wei et al., 2022a; Chung et al., 2022; Muennighoff et al., 2022). However, LLMs and their instruction-tuned variants face difficulties in generalizing across various languages, leading to a disparity in performance (Xue et al., 2021; Gehrmann et al., 2022; Scao et al., 2022; Chowdhery et al., 2022; Yong et al., 2023; Zhang et al., 2023; Asai et al., 2023; Kabra et al., 2023). Moreover, these models have limited language coverage, mostly in the Indo-European language family, as indicated in Figure 1.…”
Section: Introduction
confidence: 99%