Proceedings of the 1st Workshop on Understanding Implicit and Underspecified Language 2021
DOI: 10.18653/v1/2021.unimplicit-1.4
UnImplicit Shared Task Report: Detecting Clarification Requirements in Instructional Text

Abstract: This paper describes the data, task setup, and results of the shared task at the First Workshop on Understanding Implicit and Underspecified Language (UnImplicit). The task requires computational models to predict whether a sentence contains aspects of meaning that are contextually unspecified and thus require clarification. Two teams participated and the best scoring system achieved an accuracy of 68%.
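The reported 68% figure is plain accuracy over binary clarification labels. As a minimal sketch of that metric (illustrative function and data, not the organizers' actual scorer):

```python
# Sketch of the shared task's evaluation metric: accuracy over binary
# labels (1 = sentence requires clarification, 0 = it does not).
# Names and example data are illustrative, not from the official scorer.

def accuracy(gold, predicted):
    """Ratio of correct predictions over all data instances."""
    assert len(gold) == len(predicted)
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

gold = [1, 0, 1, 1, 0]
pred = [1, 0, 0, 1, 1]
print(accuracy(gold, pred))  # 3 of 5 correct -> 0.6
```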

Cited by 4 publications (4 citation statements). References 4 publications (6 reference statements).
“…For our submitted predictions on the test set, which was evaluated by the organizers in terms of accuracy, measured as the ratio of correct predictions over all data instances (Roth and Anthonio, 2021), we achieved 66.3% accuracy for the mention-based system, which is higher than the logistic regression baseline provided by the organizers. [Footnote 6: Our original submission had a bug, leading to low scores. We thus report results for our updated submission, without this bug, which is also reported in Roth and Anthonio (2021).]…”
Section: Results and Analysis
“…The shared task on implicit and underspecified language (Roth and Anthonio, 2021) 1 aims to provide a binary classification for revision requirements to make a prediction of whether sentences in instructional texts require revision to improve understandability. Since instructional texts must be clear enough so that readers and machines can actually achieve the goal described by the instructions, this task focuses on modeling implicit elements that make the sentence more precise and clear.…”
Section: Introduction
“…Previous work, related to this involves a Shared Task (Roth and Anthonio, 2021) which was a binary classification task, in which systems had to predict whether a given sentence in context requires clarification or not. This shared task uses the same dataset that is the wikiHowToImprove dataset (Anthonio et al, 2020) but with some variations.…”
Section: Introduction
“…Two sentences are considered equivalent (non-divergent) at the sentence level if the same overall information is conveyed, even if there are minor meaning differences. Finer-grained differences are not widely considered in the detection of semantic divergences, despite the fact that implicit information can be critical to the understanding of the sentence (Roth and Anthonio, 2021).…”
Section: Introduction