Proceedings of the Third Workshop on Structured Prediction for NLP, 2019
DOI: 10.18653/v1/w19-1504
Lightly-supervised Representation Learning with Global Interpretability

Abstract: We propose a lightly-supervised approach for information extraction, in particular named entity classification, which combines the benefits of traditional bootstrapping, i.e., use of limited annotations and interpretability of extraction patterns, with the robust learning approaches proposed in representation learning. Our algorithm iteratively learns custom embeddings for both the multi-word entities to be extracted and the patterns that match them from a few example entities per category. We demonstrate that…
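The abstract describes an iterative scheme in which entity and pattern embeddings are learned jointly from a handful of seed entities per category, and new entities are then promoted by similarity. Purely as an illustration of that kind of bootstrapping loop, and not the authors' implementation, the following Python sketch learns toy entity/pattern embeddings from co-occurrence pairs and promotes the candidates closest to each category centroid; the names (train_embeddings, bootstrap, the toy pattern strings) and the logistic objective are assumptions made for the example.

```python
import numpy as np

def train_embeddings(pairs, entities, patterns, dim=25, epochs=200, lr=0.05, seed=0):
    """Learn entity and pattern vectors with a logistic loss: observed
    (entity, pattern) pairs should score high, randomly drawn pairs low."""
    rng = np.random.default_rng(seed)
    E = {e: rng.normal(scale=0.1, size=dim) for e in entities}
    P = {p: rng.normal(scale=0.1, size=dim) for p in patterns}
    pat_list = list(patterns)
    for _ in range(epochs):
        for e, p in pairs:
            neg = pat_list[rng.integers(len(pat_list))]   # one random negative pattern
            for pat, label in ((p, 1.0), (neg, 0.0)):
                ve, vp = E[e].copy(), P[pat].copy()
                score = 1.0 / (1.0 + np.exp(-ve @ vp))    # sigmoid of dot product
                grad = label - score
                E[e] += lr * grad * vp
                P[pat] += lr * grad * ve
    return E, P

def bootstrap(seeds, pair_counts, rounds=3, k=1):
    """Iterative bootstrapping: retrain embeddings, then promote, for each
    category, the k unlabeled entities closest to the category centroid."""
    pairs = [(e, p) for (e, p), c in pair_counts.items() for _ in range(c)]
    entities = {e for e, _ in pairs}
    patterns = {p for _, p in pairs}
    pools = {cat: set(es) for cat, es in seeds.items()}
    for _ in range(rounds):
        E, _ = train_embeddings(pairs, entities, patterns)
        for cat, pool in pools.items():
            centroid = np.mean([E[e] for e in pool], axis=0)
            def cosine(e):
                return float(E[e] @ centroid) / (
                    np.linalg.norm(E[e]) * np.linalg.norm(centroid) + 1e-9)
            candidates = sorted((e for e in entities if e not in pool),
                                key=cosine, reverse=True)
            pool.update(candidates[:k])                   # promote the top-k candidates
    return pools

# Toy example: two categories, one seed entity each.
pair_counts = {
    ("london", "lives in @ENTITY"): 3, ("paris", "lives in @ENTITY"): 2,
    ("berlin", "moved to @ENTITY"): 2, ("acme corp", "works for @ENTITY"): 3,
    ("globex", "works for @ENTITY"): 2, ("initech", "hired by @ENTITY"): 1,
}
print(bootstrap({"LOC": {"london"}, "ORG": {"acme corp"}}, pair_counts))
```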

Cited by 13 publications (18 citation statements). References 16 publications.
“…Because the seed entities are sparse compared to the number of unexpanded entities, the learned BootstrapTeacher tends to be underfitting. Inspired by Zupon et al. (2019), and based on the intuition that the pattern and entity embeddings are similar to their neighbors but dissimilar to unrelated patterns or entities, we leverage the graph structure as a regularizer for the learning procedure, and let the BootstrapTeacher maximize the following unsupervised learning objective:…”
Section: Model Learning (mentioning; confidence: 99%)
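The objective itself is truncated in the statement above and is not reproduced here. Purely to illustrate the stated intuition, that embeddings of graph neighbors should be close while unrelated pairs stay apart, a generic contrastive graph penalty over an entity/pattern co-occurrence graph might look like the sketch below. The function name, margin form, and dictionary layout are assumptions for the example, not the citing paper's formulation, and this penalty would be minimized (or its negative added to an objective being maximized).

```python
import numpy as np

def graph_penalty(emb, edges, non_edges, margin=1.0):
    """Pull embeddings of connected nodes together; push a sample of
    unconnected pairs at least `margin` apart (hinge on Euclidean distance)."""
    pull = sum(float(np.sum((emb[a] - emb[b]) ** 2)) for a, b in edges)
    push = sum(max(0.0, margin - float(np.linalg.norm(emb[a] - emb[b]))) ** 2
               for a, b in non_edges)
    return pull + push

# Toy usage with 2-D vectors.
emb = {"paris": np.array([0.1, 0.2]),
       "lives in @ENTITY": np.array([0.1, 0.3]),
       "acme corp": np.array([0.9, -0.4])}
print(graph_penalty(emb,
                    edges=[("paris", "lives in @ENTITY")],
                    non_edges=[("paris", "acme corp")]))
```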
“…Datasets: We use two datasets, CoNLL and OntoNotes, constructed by Zupon et al. (2019). CoNLL is constructed from the CoNLL 2003 shared task dataset (Tjong Kim Sang and De Meulder 2003), which contains 4 entity types.…”
Section: Experiments / Experimental Setup (mentioning; confidence: 99%)