2022
DOI: 10.1108/jd-12-2021-0238
An exploration of ethnic minorities' needs for multilingual information access of public digital cultural services

Abstract:
Purpose – Ethnic minorities (EMs), who make up a sizable proportion of multilingual users, are more likely to browse and search in their native language. It is helpful to identify multilingual users' information needs to provide public digital cultural services (PDCS) for making their life better.
Design/methodology/approach – The in-context interview is an efficient way to explore EMs' information needs and evoke their daily experience with PDCS. The material from 31 one-on-one interviews with EMs in China was recor…

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
1

Citation Types

0
1
0

Year Published

2024
2024
2024
2024

Publication Types

Select...
4
1

Relationship

0
5

Authors

Journals

Cited by 5 publications (1 citation statement)
References 56 publications
“…Comparing these two NER models, the XLMR model performed better on the criminal, victim, date/time, action, and root cause labels, while the WanchanBERTa model outperformed the other on the police, location, and worth labels in terms of F1-score. Since XLMR has a marginally better performance than WanchanBERTa and can support diverse languages due to being cross-lingual, facilitating applications that need multilingual information access and processing [61], it was chosen for integration into the system. It is interesting to note that while BiLSTM-CRF has been a popular choice for NER tasks in low-resource language settings [62,63,64], this has been proved otherwise for the Thai language, especially in our case study.…”
Section: Model Selection (citation type: mentioning)
confidence: 99%
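The citing study picked XLM-R over WanchanBERTa largely because its cross-lingual pretraining supports multilingual information access. As a minimal sketch of that capability (not the study's own code), the snippet below applies an XLM-R-based token-classification model for NER via the Hugging Face transformers library; the checkpoint name is an illustrative assumption, not the model evaluated in the cited paper.

```python
# Minimal sketch, assuming the Hugging Face transformers library is installed.
# The checkpoint name is illustrative only, not the one used in the cited study.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Davlan/xlm-roberta-base-ner-hrl",  # an XLM-R NER fine-tune (assumed checkpoint)
    aggregation_strategy="simple",            # merge word-piece tokens into entity spans
)

# XLM-R's cross-lingual pretraining lets a single model tag entities in many
# languages, provided the fine-tuning data covers them.
print(ner("Angela Merkel met Emmanuel Macron in Berlin."))
```

Language coverage still depends on the fine-tuning data, which is why the cited work compared a cross-lingual model against a Thai-specific one (WanchanBERTa) before integrating XLM-R.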