Proceedings of the 5th Workshop on Argument Mining 2018
DOI: 10.18653/v1/w18-5214

Using context to identify the language of face-saving

Abstract: We created a corpus of face-saving utterances from parliamentary debates and use it to automatically analyze the language of reputation defence. Our proposed model, which incorporates information about threats to reputation, predicts reputation-defence language with high confidence. Further experiments and evaluations on different datasets show that the model generalizes to new utterances and can predict the language of reputation defence in a previously unseen dataset.
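The abstract does not describe the model's architecture, so the snippet below is only an illustrative sketch (not the authors' system) of one generic way a classifier could "incorporate information regarding threats to reputation": each response utterance is paired with the threat it reacts to, both texts are vectorized separately, and a linear classifier is trained on the combined features. The scikit-learn pipeline, the helper functions take_threat and take_response, and the toy data are all invented for illustration.

```python
# Hypothetical sketch of a context-aware reputation-defence classifier.
# NOT the paper's model: it only illustrates pairing each response
# utterance with the reputation threat it addresses.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import FunctionTransformer


def take_threat(pairs):
    # Select the threat (context) side of each (threat, response) pair.
    return [threat for threat, _ in pairs]


def take_response(pairs):
    # Select the response (candidate face-saving utterance) side.
    return [response for _, response in pairs]


# Separate TF-IDF views of the threat and the response, concatenated
# into one feature vector, with a linear classifier on top.
model = Pipeline([
    ("features", FeatureUnion([
        ("threat", Pipeline([
            ("select", FunctionTransformer(take_threat)),
            ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ])),
        ("response", Pipeline([
            ("select", FunctionTransformer(take_response)),
            ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ])),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Toy data, purely to show the expected (threat, response, label) shape.
train = [
    ("The minister misled the House on spending.",
     "I reject that characterization entirely.", 1),
    ("The minister misled the House on spending.",
     "The next item of business is the budget.", 0),
]
X = [(threat, response) for threat, response, _ in train]
y = [label for _, _, label in train]
model.fit(X, y)

print(model.predict([("You broke your promise to voters.",
                      "My record on this file speaks for itself.")]))
```

In practice the toy lists would be replaced by the labelled parliamentary data and the linear model by whatever encoder the paper actually uses; the point of the sketch is only the paired (threat, response) input.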

Cited by 8 publications (6 citation statements). References 27 publications.

Citation statements (ordered by relevance):
“…Progress in the last two decades includes attempts to standardise subareas of Pragmatics, such as discourse structure (ISO/TS 24617-5:2014), discourse relations (ISO 24617-8:2016), speech act annotation (ISO 24617-2:2020), dialogue acts (ISO 24617-2:2020), and semantic relations in discourse (ISO 24617-8:2016); and even to structure the whole field of Computational Pragmatics and pragmatic annotation (Pareja-Lora & Aguado de Cea 2010; Pareja-Lora 2014) and integrate it with other levels of Computational Linguistics and linguistic annotation (Pareja-Lora 2012). Further current research concerns, for instance, the polarity of speech acts, that is, their classification as neutral, face-saving or face-threatening acts (Naderi & Hirst 2018). However, as Archer, Culpeper & Davies (2008) indicate, "[u]nlike the computational studies concerning speech act interpretation, [...] corpus-based schemes are, in the main, applied manually, and schemes that are semi-automatic tend to be limited to specific domains" (e.g.…”
Section: Pragmatics: The Social Life of Words (mentioning)
confidence: 99%
“…This has resulted in increased insight, but has somewhat fragmented the field away from the kinds of unifying, standardised principles represented by Brown & Levinson (1987). But developers of machine learning systems need stability; and so, current studies in machine learning of politeness are in the slightly curious position of continuing to apply categories from Brown & Levinson (1987) to machine learning - for example, Li et al (2020) on social media posts in the US and China; Naderi & Hirst (2018) on a corpus from the official Canadian parliamentary proceedings; and Lee et al (2021) on interactions between robots and children. All these studies use categories from Brown & Levinson (1987) as a basis for machine learning.…”
Section: Politeness (mentioning)
confidence: 99%
“…- One of the papers proposed a method for automatically evaluating the persuasiveness of contributions in a comment thread (online comments convincingness evaluation) (Gu et al, 2018); - One of the papers proposed a method for automatically analyzing and predicting the arguments and language of public-relations campaigns in a reputation-defence setting (predict the language of reputation defence) (Naderi and Graeme, 2018); - Another proposed an argumentative dialogue agent capable of debating complex and controversial topics with humans, an agent named Dave the Debater (Thu Le et al, 2018); - One article proposed an AM method for automatically generating the premises and conclusions of an argument on a given topic by integrating contextual knowledge and a broader understanding of related subjects, a challenge in AM (Lawrence and Reed, 2017a); - The same researchers proposed, at another conference, an AM technique (Complex Argumentative Interaction (CAI)) for formally reconstructing the argumentative structure of a large-scale debate, for example the 2016 US presidential election, used as a corpus (Lawrence and Reed, 2017b); - A final paper classified and organized more than 200,000 political arguments collected online concerning the possibility of a new constitution in Chile (Fierro et al, 2017).…”
Section: Champs d'applications [Fields of Application] (unclassified)
“…Other previous work attempts to computationally model politeness, using politeness as a feature to identify conversations that appear to go awry in online discussions (Zhang et al, 2018a). Previous work has also explored indirect speech acts as potential sources of face-threatening acts through blame (Briggs and Scheutz, 2014) and as face-saving acts in parliamentary debates (Naderi and Hirst, 2018).…”
Section: Related Work (mentioning)
confidence: 99%