Key point analysis is the task of extracting a set of concise, high-level statements from a given collection of arguments, representing the gist of these arguments. This paper presents our approach to the Key Point Analysis shared task, co-located with the 8th Workshop on Argument Mining. The approach integrates two complementary components: one employs contrastive learning via a siamese neural network to match arguments to key points; the other is a graph-based extractive summarization model for generating key points. In both automatic and manual evaluation, our approach was ranked best among all submissions to the shared task.
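A minimal sketch of the matching component described above, assuming argument and key point embeddings have already been produced by a trained siamese encoder; the toy embeddings and the threshold value are illustrative placeholders, not the shared-task model:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def match_arguments(arg_embs, kp_embs, threshold=0.5):
    """For each argument, pick the best-matching key point,
    or None if no key point clears the similarity threshold."""
    matches = []
    for a in arg_embs:
        sims = [cosine(a, k) for k in kp_embs]
        best = int(np.argmax(sims))
        matches.append(best if sims[best] >= threshold else None)
    return matches

# Toy 3-d "embeddings": two arguments, two candidate key points.
args = np.array([[1.0, 0.1, 0.0], [0.0, 0.9, 0.1]])
kps = np.array([[0.9, 0.0, 0.1], [0.1, 1.0, 0.0]])
print(match_arguments(args, kps))  # [0, 1]
```

In the actual system, the siamese encoder would be trained with a contrastive objective so that matching argument/key point pairs land close in embedding space; the matching step itself then reduces to the nearest-neighbor lookup sketched here.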
Within the field of argument mining, an important task is to predict the frame of an argument, that is, to make explicit which aspects of a controversial discussion the argument emphasizes and which narrative it constructs. Many approaches so far have adopted the framing classification proposed by Boydstun et al. [3], consisting of 15 categories that were mainly designed to capture frames in media coverage of political issues. Besides being quite coarse-grained, these categories cover only a limited part of the breadth of topics that people debate. Other approaches rely on issue-specific and subjective (argumentation) frames indicated by users via labels in debating portals; these labels are overly specific and often do not generalize across topics. We present an approach that bridges coarse-grained and issue-specific inventories for classifying argumentation frames: a supervised approach that classifies the frames of arguments at a variable level of granularity by clustering issue-specific, user-provided labels into frame clusters and predicting the frame cluster that an argument evokes. We demonstrate how the approach supports frame prediction for varying numbers of clusters. We combine the two tasks, frame prediction with respect to the media frame categories and prediction of clusters of user-provided labels, in a multi-task setting, learning a single classifier that performs both. As our main result, we show that this multi-task setting improves classification on the single tasks: media frames classification by up to +9.9% accuracy and cluster prediction by up to +8% accuracy.
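The clustering-and-prediction pipeline above can be sketched as follows, assuming the user-provided frame labels have already been embedded as vectors. The tiny k-means loop, the toy label vectors, and the cluster count are illustrative only, not the paper's actual setup:

```python
import numpy as np

def kmeans(points, centroids, iters=10):
    """Plain k-means: alternate between assigning points to their
    nearest centroid and recomputing centroids as cluster means."""
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        assign = dists.argmin(axis=1)
        centroids = np.array([points[assign == c].mean(axis=0)
                              for c in range(len(centroids))])
    return centroids, assign

# Toy 2-d "embeddings" of six user-provided labels, clustered into 2 frames.
labels = np.array([[0.0, 0.1], [0.1, 0.0], [0.05, 0.05],
                   [1.0, 0.9], [0.9, 1.0], [1.0, 1.0]])
init = labels[[0, 3]]            # fixed initial centroids for determinism
centroids, assign = kmeans(labels, init)
print(assign.tolist())           # [0, 0, 0, 1, 1, 1]

# Predicting the frame cluster an argument evokes: nearest centroid.
arg_emb = np.array([0.95, 0.95])
print(int(np.linalg.norm(centroids - arg_emb, axis=1).argmin()))  # 1
```

Varying the number of centroids is what gives the variable granularity: few clusters approximate coarse media-frame categories, many clusters approach the original issue-specific labels.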
When assessing the similarity of arguments, researchers typically use approaches that do not provide interpretable evidence or justifications for their ratings. Hence, the features that determine argument similarity remain elusive. We address this issue by introducing novel argument similarity metrics that aim at both high performance and explainability. We show that Abstract Meaning Representation (AMR) graphs can be useful for representing arguments, and that novel AMR graph metrics can offer explanations for argument similarity ratings. We start from the hypothesis that similar premises often lead to similar conclusions, and extend an approach for AMR-based argument similarity rating by additionally estimating the similarity of conclusions that we automatically infer from the arguments used as premises. We show that AMR similarity metrics make argument similarity judgements more interpretable and may even support argument quality judgements. Our approach provides significant performance improvements over strong baselines in a fully unsupervised setting. Finally, we make first steps toward addressing the problem of reference-less evaluation of argumentative conclusion generation.
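As an illustration of how graph-based metrics can expose evidence for a similarity rating, here is a much-simplified triple-overlap score in the spirit of Smatch. Real AMR metrics also search over variable alignments, and the hand-written triples below stand in for parser output:

```python
def triple_f1(amr_a, amr_b):
    """F1 over shared (source, relation, target) triples of two AMR graphs.
    The shared triples themselves serve as an explanation of the score."""
    shared = amr_a & amr_b
    if not shared:
        return 0.0, shared
    p = len(shared) / len(amr_b)   # precision w.r.t. graph b
    r = len(shared) / len(amr_a)   # recall w.r.t. graph a
    return 2 * p * r / (p + r), shared

# Hand-written triples for two paraphrased arguments.
a = {("ban-01", ":ARG1", "smoke-02"), ("ban-01", ":location", "restaurant"),
     ("smoke-02", ":ARG0", "person")}
b = {("ban-01", ":ARG1", "smoke-02"), ("ban-01", ":location", "bar"),
     ("smoke-02", ":ARG0", "person")}
score, evidence = triple_f1(a, b)
print(round(score, 2))   # 0.67
print(sorted(evidence))  # the overlapping triples explain the rating
```

Unlike a single opaque similarity number, the returned `evidence` set points to the exact semantic material the two arguments share, which is the kind of interpretability the abstract argues for.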
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which an article is cited and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.