Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.476
Understanding the Language of Political Agreement and Disagreement in Legislative Texts

Abstract: While national politics often receive the spotlight, the overwhelming majority of legislation proposed, discussed, and enacted is done at the state level. Despite this fact, there is little awareness of the dynamics that lead to adopting these policies. In this paper, we take the first step towards a better understanding of these processes and the underlying dynamics that shape them, using data-driven methods. We build a new large-scale dataset, from multiple data sources, connecting state bills and legislator…

Cited by 8 publications (9 citation statements)
References 24 publications (14 reference statements)
“…We use two configurations to form the experimental dataset. (1) random: We set up an in-session experiment environment following Kornilova et al (2018); Davoodi et al (2020), where records of each two-year session is considered as an independent experiment set. This results in 4 experiment sets.…”
Section: Experiments Setup
Mentioning confidence: 99%
“…Some embedding methods (Kraft et al, 2016) also promote learning of legislators. More recently, external context information including party, sponsor and donors (Kornilova et al, 2018; Yang et al, 2020; Davoodi et al, 2020) have been introduced to better describe the legislative process.…”
Section: Related Work
Mentioning confidence: 99%
“…Recent works (Bhavan et al, 2020; Bhavan et al, 2019) have shown the presence of herd mentality in political stances through graph embeddings by identifying the linguistic similarity between members of the same political party over a set of 1,251 debates. (Davoodi et al, 2020) study the interactions between the content of a proposed bill and the legislative context in which it is presented. (Al Khatib et al, 2020) models debater characteristics to predict persuasiveness.…”
Section: Related Work
Mentioning confidence: 99%
“…The good news is that natural language processing (NLP) shows promise for analyzing voluminous political debates and breaking the understanding barrier towards political ideology to help make informed voting decisions (Davoodi et al, 2020; Eidelman et al, 2018). However, conventional language models (Hasan and Ng, 2013) may not generalize well on understanding the obscure linguistic styles of political debates.…”
Section: Introduction
Mentioning confidence: 99%
“…Of late, several works attempted to solve such tasks, such as analyzing relationships and their evolution (Iyyer et al, 2016; Han et al, 2019), analyzing political discourse on news and social media (Demszky et al, 2019; Roy and Goldwasser, 2020) and political ideology (Diermeier et al, 2012; Preoţiuc-Pietro et al, 2017; Kulkarni et al, 2018). Various political tasks such as roll call vote prediction (Clinton et al, 2003; Kornilova et al, 2018b; Patil et al, 2019; Spell et al, 2020a; Davoodi et al, 2020), entity stance detection (Mohammad et al, 2016; Fang et al, 2019), hyper-partisan/fake news detection (Li and Goldwasser, 2019; Palić et al, 2019; Baly et al, 2020) require a rich understanding of the context around the entities that are present in the text. But, the representations used are usually limited in scope to specific tasks and not rich enough to capture information that is useful across several tasks.…”
Section: Introduction
Mentioning confidence: 99%