Introduction to Neural Transfer Learning With Transformers for Social Science Text Analysis
2022. DOI: 10.1177/00491241221134527

Abstract: Transformer-based models for transfer learning have the potential to achieve high prediction accuracies on text-based supervised learning tasks with relatively few training data instances. These models are thus likely to benefit social scientists who seek text-based measures that are as accurate as possible but have only limited resources for annotating training data. To enable social scientists to leverage these potential benefits for their research, this article explains how these methods work, why they mig…
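
The workflow the abstract describes can be pictured with a short sketch: fine-tuning a pretrained transformer on a small annotated dataset. This is a minimal illustration assuming the Hugging Face transformers library; the checkpoint name (bert-base-uncased), the label scheme, and the example texts are invented for illustration and are not taken from the article.

```python
# Minimal sketch of transfer learning for text classification:
# a pretrained encoder is fine-tuned on a small labeled dataset.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
import torch

# Hypothetical annotated data; in practice a few hundred instances
# can suffice because the encoder arrives pretrained.
texts = ["The bill expands healthcare coverage.", "Taxes were cut last year."]
labels = [0, 1]  # e.g., 0 = social policy, 1 = economic policy (assumed coding)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

class SmallDataset(torch.utils.data.Dataset):
    """Wraps tokenized texts and labels for the Trainer."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=SmallDataset(texts, labels),
)
trainer.train()  # updates all weights; only the classification head is new
```

Because the encoder is already pretrained on large unlabeled corpora, only the small classification head is initialized from scratch, which is why modest amounts of annotated data can yield accurate text-based measures.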

Cited by 14 publications (11 citation statements). References 108 publications.

“…This makes it straightforward to compare the performance of LLMs with other techniques. Extant research shows that LLMs using variants of the transformer architecture outperform conventional machine learning models at several social scientific text classification tasks (Bonikowski, Luo, and Stuhler 2022; Wankmüller 2022; Widmann and Wich 2022; Chae and Davidson 2023).…”
Section: Classification, Annotation, and Methodological Bricolage
confidence: 99%
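
The comparison the quoted passage refers to can be sketched as scoring a conventional bag-of-words model and a transformer on the same human-annotated held-out set. Everything below is an illustrative assumption: the toy data, the TF-IDF plus logistic regression baseline, and the off-the-shelf sentiment checkpoint standing in for a task-specific fine-tuned classifier.

```python
# Head-to-head comparison on one shared test set:
# conventional machine learning baseline vs. a transformer model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from transformers import pipeline

train_texts = ["A wonderful, hopeful speech.", "A dismal failure of a policy."]
train_labels = [1, 0]  # 1 = positive tone, 0 = negative tone (assumed coding)
test_texts = ["The reform was a great success.", "The rollout was a disaster."]
test_labels = [1, 0]

# Conventional baseline: TF-IDF features + logistic regression.
vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_texts), train_labels)
baseline_pred = clf.predict(vec.transform(test_texts))

# Transformer: an off-the-shelf sentiment checkpoint stands in here for a
# model fine-tuned on the researcher's own task.
llm = pipeline("text-classification",
               model="distilbert-base-uncased-finetuned-sst-2-english")
llm_pred = [1 if r["label"] == "POSITIVE" else 0 for r in llm(test_texts)]

print("baseline F1:   ", f1_score(test_labels, baseline_pred))
print("transformer F1:", f1_score(test_labels, llm_pred))
```
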
“…See Jurafsky and Martin (2023, Chapter 10) and Wankmüller (2022) for more technical introductions to transformer models.…”
Footnote 5: https://ai.meta.com/llama/
confidence: 99%
“…In light of the increasing use of large language models by communication researchers (Wankmüller, 2022), we want to reiterate in the strongest terms that validation is still an essential requirement for any measurement that involves CTAM. Admittedly, the results of our review indicate that most researchers in our sample did validate their measurement, with the exception that dictionaries were sometimes applied without validation.…”
Section: Always Validate CTAM That Measure Social Science Constructs
confidence: 99%
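
The validation step this passage insists on amounts, in practice, to comparing machine-produced labels against a human-coded gold standard. A minimal sketch with scikit-learn, using invented label vectors:

```python
# Validate a computational text analysis measure against human coding.
from sklearn.metrics import classification_report, cohen_kappa_score

human_labels = [0, 1, 1, 0, 1, 0, 0, 1]   # hand-annotated gold standard
model_labels = [0, 1, 0, 0, 1, 0, 1, 1]   # output of the text-analysis model

# Per-class precision, recall, and F1 against the human coders...
print(classification_report(human_labels, model_labels))
# ...plus chance-corrected agreement, as is customary in content analysis.
print("Cohen's kappa:", cohen_kappa_score(human_labels, model_labels))
```
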
“…We assemble and merge two datasets to test our theoretical expectations. Based on state-of-the-art transformer-based machine learning methods (Devlin et al., 2019; Wankmüller, 2022; Müller and Proksch, 2023), we identify policy emphasis in 48,877 statements from 1,270 candidate manifestos released during five lower house elections in Japan between 2003 and 2014. Afterwards, we combine our new measures of issue emphasis during campaigns with a novel dataset on the legislative posts of Members of Parliament (MPs) from 2003 to 2017.…”
Section: Introduction
confidence: 99%