Proceedings of the 11th Joint Conference on Lexical and Computational Semantics 2022
DOI: 10.18653/v1/2022.starsem-1.7

A Simple Unsupervised Approach for Coreference Resolution using Rule-based Weak Supervision

Abstract: Labeled data for the task of Coreference Resolution is a scarce resource, requiring significant human effort. While state-of-the-art coreference models rely on such data, we propose an approach that leverages an end-to-end neural model in settings where labeled data is unavailable. Specifically, using weak supervision, we transfer the linguistic knowledge encoded by Stanford's rule-based coreference system to the end-to-end model, which jointly learns rich, contextualized span representations and coreference c…
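A minimal sketch of the weak-supervision idea described in the abstract, not the authors' code: clusters produced by a rule-based system (e.g. Stanford's deterministic sieves) are treated as pseudo-gold labels, and the end-to-end span-ranking model is trained with the usual marginal log-likelihood over antecedents against those labels. Tensor shapes, function names, and the toy inputs below are illustrative assumptions.

```python
import torch


def antecedent_loss(antecedent_scores: torch.Tensor,
                    pseudo_gold: torch.Tensor) -> torch.Tensor:
    """Marginal log-likelihood loss over antecedents, computed against
    pseudo-gold clusters from a rule-based coreference system.

    antecedent_scores: [num_spans, num_antecedents + 1] scores, where
        column 0 is the "dummy" option (span starts a new cluster).
    pseudo_gold: boolean mask of the same shape; True marks antecedents
        that the rule-based system placed in the same cluster as the span
        (column 0 is True when the rules assign no earlier antecedent).
    """
    log_probs = torch.log_softmax(antecedent_scores, dim=-1)
    # Keep only pseudo-gold antecedents, then marginalise over them.
    gold_log_probs = log_probs.masked_fill(~pseudo_gold, float("-inf"))
    marginal = torch.logsumexp(gold_log_probs, dim=-1)
    return -marginal.mean()


if __name__ == "__main__":
    # Toy example: 3 candidate spans, 2 possible antecedents each (+ dummy).
    scores = torch.randn(3, 3, requires_grad=True)
    # Pseudo-gold mask as a rule-based labeler might produce it:
    # span 0 starts a cluster, span 1 corefers with span 0, span 2 is alone.
    gold = torch.tensor([[True, False, False],
                         [False, True, False],
                         [True, False, False]])
    loss = antecedent_loss(scores, gold)
    loss.backward()
    print(f"weakly supervised loss: {loss.item():.4f}")
```

The only change relative to fully supervised training is where the antecedent mask comes from: the rule-based system's output replaces human-annotated clusters, so no labeled coreference data is needed.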

Cited by 4 publications (1 citation statement) · References 18 publications

“…In this section, we discuss ML NLP tasks that can be performed on the RISCBAC dataset and those tasks that require additional work on the dataset before it can be used. The documents generated can be used for research on unsupervised automatic text summarization [41], unsupervised question answering [42] and unsupervised information retrieval [43], unsupervised legal text simplification [44], unsupervised machine translation [45], text anonymization [46], and coreference resolution of clauses [47,48]. In addition, it could also be used as a low-resource dataset for meta-learning tasks [49].…”
Section: Research Using RISCBAC
confidence: 99%