Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1029

Zero-shot Learning of Classifiers from Natural Language Quantification

Abstract: Humans can efficiently learn new concepts using language. We present a framework through which a set of explanations of a concept can be used to learn a classifier without access to any labeled examples. We use semantic parsing to map explanations to probabilistic assertions grounded in latent class labels and observed attributes of unlabeled data, and leverage the differential semantics of linguistic quantifiers (e.g., 'usually' vs 'always') to drive model training. Experiments on three domains show that the …
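To make the mechanism in the abstract concrete, the sketch below (not the authors' code) grounds quantifiers as target probabilities and scores how far a classifier's expectations on unlabeled data fall from an asserted probability; in the full model, many such assertions from different explanations are combined during training. The quantifier values, the assertion_penalty helper, and the mentions_deadline attribute are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: linguistic quantifiers grounded as target probabilities
# that constrain a classifier's expectations on unlabeled data.
import numpy as np

# Assumed differential semantics of quantifiers (values are illustrative).
QUANTIFIER_PROB = {"always": 0.95, "usually": 0.80, "often": 0.65,
                   "sometimes": 0.40, "rarely": 0.15, "never": 0.05}

def assertion_penalty(p_label, attribute, target):
    """Squared gap between the attribute's expected frequency among examples
    the model believes are positive and the quantifier's target probability.
    p_label: (n,) posteriors P(y=1|x); attribute: (n,) binary observations."""
    expected = (p_label * attribute).sum() / max(p_label.sum(), 1e-9)
    return (expected - target) ** 2

# 'Emails I reply to usually mention a deadline' -> target 0.80.
rng = np.random.default_rng(0)
p_label = rng.uniform(size=100)              # stand-in model posteriors
mentions_deadline = rng.integers(0, 2, size=100)
print(assertion_penalty(p_label, mentions_deadline, QUANTIFIER_PROB["usually"]))
```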

Cited by 48 publications (51 citation statements)
References 18 publications (17 reference statements)
“…Zhang et al. (2019) enrich the embedding representations by incorporating class descriptions, class hierarchy, and the word-to-label paths in ConceptNet. Srivastava et al. (2018) assume that some natural language explanations about new labels are available. Those explanations are then parsed into formal constraints, which are combined with unlabeled data to yield classifiers for the new labels through posterior regularization.…”
Section: Related Work
confidence: 99%
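The posterior regularization step this excerpt describes can be sketched for a single equality constraint over binary labels: KL-project the model's posteriors p(y=1|x) onto the set where the expected constraint feature equals a target b, which amounts to reweighting q(y=1|x) ∝ p(y=1|x)·exp(λφ(x)) and solving for the dual variable λ (by bisection here). The pr_project helper, the synthetic data, and the target b=0.3 are assumptions for illustration.

```python
# Minimal posterior-regularization projection for one equality constraint.
import numpy as np

def pr_project(p_pos, phi, b, lo=-20.0, hi=20.0, iters=60):
    """KL-project posteriors onto {q : E_q[phi(x) * 1[y=1]] = b}.
    q_lambda(y=1|x) is proportional to p_pos(x) * exp(lambda * phi(x));
    phi(x, y=0) = 0, so the y=0 weight stays (1 - p_pos)."""
    def posterior(lam):
        w = p_pos * np.exp(lam * phi)
        return w / (w + (1.0 - p_pos))
    for _ in range(iters):                    # bisect on the dual variable
        lam = 0.5 * (lo + hi)
        if (posterior(lam) * phi).mean() < b:
            lo = lam                          # need larger lambda
        else:
            hi = lam
    return posterior(0.5 * (lo + hi))

rng = np.random.default_rng(1)
p_pos = rng.uniform(0.2, 0.8, size=200)       # current model posteriors
attr = rng.integers(0, 2, size=200).astype(float)
q = pr_project(p_pos, attr, b=0.3)            # b is an assumed target
print(q[:5].round(3), (q * attr).mean().round(3))
```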
“…We base our learning framework on previous work by Srivastava et al. (2018), who train log-linear classifiers (with parameters θ) using natural language explanations of the individual classes and unlabeled data. Further, they use the semantics of linguistic quantifiers (such as 'usually', 'always', etc.)…”
Section: Learning Classifiers from a Mix of Observations and Explanations
confidence: 99%
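A minimal sketch of the setup this excerpt describes, under stated assumptions: a log-linear (here logistic) classifier with parameters θ is fit on unlabeled data so that the attribute frequency among examples it considers positive matches a quantifier's probability ('usually' ≈ 0.8, an assumed value, extending the penalty sketched under the abstract). Finite-difference gradient descent stands in for a proper optimizer to keep the sketch short.

```python
# Fitting a log-linear classifier so its expectations match a quantifier.
import numpy as np

def posteriors(theta, X):
    return 1.0 / (1.0 + np.exp(-X @ theta))   # P(y=1|x; theta)

def quantifier_loss(theta, X, attr, target):
    q = posteriors(theta, X)
    expected = (q * attr).sum() / max(q.sum(), 1e-9)
    return (expected - target) ** 2

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))                 # unlabeled data (synthetic)
attr = (X[:, 0] > 0).astype(float)            # an observed binary attribute
theta = np.zeros(5)
for _ in range(200):                          # crude finite-difference descent
    grad = np.array([(quantifier_loss(theta + 1e-4 * e, X, attr, 0.8)
                      - quantifier_loss(theta - 1e-4 * e, X, attr, 0.8)) / 2e-4
                     for e in np.eye(5)])
    theta -= 5.0 * grad
print(round(quantifier_loss(theta, X, attr, 0.8), 4))
```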
“…'), etc. Approaches such as Srivastava et al. (2018) map such language to data measurements that computational models can reason over. Statistical frameworks such as Generalized Expectation (Druck et al., 2008), Posterior Regularization (Ganchev et al., 2010) and Bayesian…”
Section: Challenges in Relation to Previous Work
confidence: 99%
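For comparison with the posterior regularization sketch above, here is a generalized-expectation-style criterion (Druck et al., 2008) in its simplest label-regularization form: the KL divergence between a reference label distribution stated in language and the model's expected label distribution on unlabeled data. The reference proportion 0.3 is an assumed 'data measurement' of the kind this excerpt mentions.

```python
# Generalized-expectation (label regularization) criterion, simplified.
import numpy as np

def ge_criterion(q_pos, ref_pos, eps=1e-9):
    """KL(reference || model expected label distribution), binary labels.
    q_pos: (n,) model posteriors P(y=1|x) on unlabeled data."""
    model = np.array([q_pos.mean(), 1.0 - q_pos.mean()])
    ref = np.array([ref_pos, 1.0 - ref_pos])
    return float((ref * np.log((ref + eps) / (model + eps))).sum())

# 'About 30% of emails are urgent' -> reference proportion 0.3 (assumed).
q_pos = np.random.default_rng(3).uniform(size=100)
print(round(ge_criterion(q_pos, ref_pos=0.3), 4))
```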
“…We envision enabling the agent to learn concepts (such as important emails) from a combination of explanations and examples of the concept. This is motivated by our recent research on using natural language to define feature functions for learning tasks (Srivastava et al., 2017), and also work on using declarative knowledge in natural language explanations to supervise training of classifiers (Srivastava et al., 2018). Using semantic parsing, we can map natural language statements to predicates in a logical language, which are grounded in sensor-effector capabilities of the personal agent.…”
Section: Grounding New Knowledge in Sensors and Effectors
confidence: 99%
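A toy stand-in (not the cited systems' parser) for the mapping this excerpt describes: a pattern-based parse from a natural language statement to an assertion over a latent label, a linguistic quantifier, and a grounded attribute. The pattern and the attribute vocabulary are invented for illustration; the actual work uses a trained semantic parser.

```python
# Toy pattern-based 'semantic parser' from a statement to a grounded assertion.
import re
from dataclasses import dataclass

@dataclass
class Assertion:
    label: str          # latent class the statement defines
    quantifier: str     # linguistic quantifier carrying probability semantics
    attribute: str      # grounded, sensor-observable predicate

PATTERN = re.compile(r"(\w+) emails (always|usually|often|rarely|never) (.+)")
ATTRIBUTES = {"mention a deadline": "mentions_deadline",
              "come from my advisor": "sender_is_advisor"}

def parse(statement):
    m = PATTERN.match(statement)
    if m and m.group(3) in ATTRIBUTES:
        return Assertion(m.group(1), m.group(2), ATTRIBUTES[m.group(3)])
    return None

print(parse("important emails usually mention a deadline"))
# -> Assertion(label='important', quantifier='usually',
#              attribute='mentions_deadline')
```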