The version presented here may differ from the published version. If citing, you are advised to consult the published version for pagination, volume/issue, and date of publication.

Highlights
• Trump voters are more concerned about vaccines than other Americans.
• This effect emerges via Trump voters' greater willingness to believe conspiracies.
• Reading Trump's antivaxx tweets increases vaccination concern among Trump voters.
• Trump's antivaxx tweets did not polarize liberal voters into being more provaxx.

RUNNING HEAD: Donald Trump and vaccination

Donald Trump and vaccination: The effect of political identity, conspiracist ideation and presidential tweets on vaccine hesitancy
Targeted syntactic evaluations have demonstrated the ability of language models to perform subject-verb agreement given difficult contexts. To elucidate the mechanisms by which the models accomplish this behavior, this study applies causal mediation analysis to pre-trained neural language models. We investigate the magnitude of models' preferences for grammatical inflections, as well as whether neurons process subject-verb agreement similarly across sentences with different syntactic structures. We uncover similarities and differences across architectures and model sizes; notably, larger models do not necessarily learn stronger preferences. We also observe two distinct mechanisms for producing subject-verb agreement depending on the syntactic structure of the input sentence. Finally, we find that language models rely on similar sets of neurons when given sentences with similar syntactic structure.

* Equal contribution.
† Work done while visiting Google Research.
‡ Supported by the Viterbi Fellowship in the Center for Computer Engineering at the Technion.
Figure 1: A data example with two Python programs in Līla. One program annotation uses a function construct, whereas the other is a plain script without functions. The instruction for each task, along with categories across four dimensions, is annotated in the development of Līla.