2023
DOI: 10.1017/pan.2023.3
Topic Classification for Political Texts with Pretrained Language Models

Abstract: Supervised topic classification requires labeled data. This often becomes a bottleneck, as high-quality labeled data are expensive to acquire. To overcome the data scarcity problem, scholars have recently proposed using cross-domain topic classification to take advantage of preexisting labeled datasets. Cross-domain topic classification only requires limited annotation in the target domain to verify its cross-domain accuracy. In this letter, we propose supervised topic classification with pretrained language m…

Cited by 10 publications (10 citation statements)
References 11 publications
“…These approaches aim to optimize the ranking of documents based on their relevance to a given query. In personalized recommendation, researchers [7,11,12,15,16,30] have investigated the potential of large language models in extracting user interests through techniques like prompt designing and in-context learning. These efforts have focused on harnessing the capabilities of large models to enhance the effectiveness of personalized recommendation systems.…”
Section: LLMs in Information Retrieval (mentioning)
confidence: 99%
“…The Transformer model performs its processing entirely in English, its native language (Vaswani et al., 2017; Wang et al., 2023). In countries where English is not the native language, the neural network and its integration modules add two translation steps: [1] on the input prompt, from the local language into English; [2] on the output of the task, from English back into the local language (OpenAI, 2023).…”
Section: Lesson 4: Non-Native Language Bias and Care in Promptin... (unclassified)
“…They also proposed two metrics, a vision-language relevance score and a vision-language bias score, from which they concluded that the state-of-the-art VLPMs under consideration not only encode stereotypical bias, but that this bias is more complex than language bias and needs further study. Several studies have proposed mitigation techniques to deal with bias (Hendricks et al. 2018; Amend, Wazzan, and Souvenir 2021; Zhao, Andrews, and Xiang 2023; Wang and Russakovsky 2023). As these studies show, different components and parts of the entire vision-language processing pipeline are put under consideration.…”
Section: Data and Bias (mentioning)
confidence: 99%