Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1161
Joint Concept Learning and Semantic Parsing from Natural Language Explanations

Abstract: Natural language constitutes a predominant medium for much of human learning and pedagogy. We consider the problem of concept learning from natural language explanations, and a small number of labeled examples of the concept. For example, in learning the concept of a phishing email, one might say 'this is a phishing email because it asks for your bank account number'. Solving this problem involves both learning to interpret open-ended natural language statements, as well as learning the concept itself. We pres…

Cited by 66 publications (73 citation statements)
References 20 publications
“…For training this component, we use a CCG semantic parsing formalism, and follow the feature set from Zettlemoyer and Collins (2007), consisting of simple indicator features for occurrences of keywords and lexicon entries. This is also compatible with the semantic parsing formalism in Srivastava et al. (2017), whose data (and accompanying lexicon) are also used in our evaluation. For other datasets with predefined features, this component is learned easily from simple lexicons consisting of trigger words for features and labels.…”
Section: Semantic Parser Components
Confidence: 92%
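The indicator features described above can be sketched as follows. This is a minimal illustration, not the cited implementation: the function name and lexicon contents are hypothetical, and it only shows the idea of binary features that fire when a lexicon entry occurs in the input.

```python
# Minimal sketch (hypothetical) of indicator features for lexicon-entry
# occurrences, as used in CCG semantic parsing feature sets.

def indicator_features(tokens, lexicon):
    """Return a dict of binary features, one per lexicon entry seen in tokens."""
    feats = {}
    for word in tokens:
        if word in lexicon:
            # Indicator feature: fires (value 1.0) when the entry occurs.
            feats[f"has_lexicon_entry:{word}"] = 1.0
    return feats

# Toy lexicon of trigger words (illustrative only).
lexicon = {"subject", "mentions", "contains"}
print(indicator_features(["the", "subject", "contains", "CS100"], lexicon))
```

In a real parser, such features would be conjoined with the lexicon entry's logical form and scored by a log-linear model; the sketch only shows the feature extraction step.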
“…e.g., 'the subject of course-related emails usually mentions CS100' can map to a composite predicate like 'isStringMatch(field:subject, stringVal('CS100'))', which can be evaluated for different emails to reflect whether their subject mentions 'CS100'. Mapping language to executable feature functions has been shown to be effective (Srivastava et al., 2017). For the sake of simplicity, here we assume that a statement refers to a single feature, but the method can be extended to handle more complex descriptions.…”
Section: Mapping Language To Constraints
Confidence: 99%
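The composite predicate in the excerpt can be sketched as an executable feature function. This is an assumed rendering, not the cited system's code: the function name `is_string_match`, the email dictionary shape, and the case-insensitive matching are all illustrative choices.

```python
# Hypothetical sketch: a statement like
#   isStringMatch(field:subject, stringVal('CS100'))
# rendered as a Python closure that can be evaluated on email records.

def is_string_match(field, string_val):
    """Return a predicate testing whether an email's field mentions string_val."""
    def predicate(email):
        # Case-insensitive substring match; missing fields count as no match.
        return string_val.lower() in email.get(field, "").lower()
    return predicate

mentions_cs100 = is_string_match("subject", "CS100")
email = {"subject": "CS100 homework 3 due Friday", "body": "See attached."}
print(mentions_cs100(email))  # True: the subject mentions 'CS100'
```

Evaluating such predicates over a set of emails yields binary feature values that a downstream classifier can consume, which is the role "executable feature functions" play in the excerpt.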
“…In their framework, an annotator provides a natural language explanation for each labeling decision. Similar work has been presented by Srivastava et al. (2017) [26], but they jointly train a task-specific semantic parser and classifier instead of a rule-based parser. These systems, however, rely on a labelled set of training examples that is not available in most real-world applications.…”
Section: Related Work
Confidence: 99%
“…All conversational instructable agents need to map the user's inputs onto existing concepts, procedures and system functionalities supported by the agent, and to have natural language understanding mechanisms and training data in each task domain. Because of this constraint, existing agents limit their supported tasks to one or a few pre-defined domains, such as data science [11], email processing [3,46], or database queries [17].…”
Section: Natural Language Programming
Confidence: 99%
“…Other natural language programming approaches (e.g., [3,11,17,46]) restricted the problem space to specific task domains so that they could constrain the space and complexity of target program statements in order to enable understanding of flexible user utterances. Such restrictions are due to the limited capabilities of existing natural language understanding techniques: they do not yet support robust understanding of utterances across diverse domains without extensive training data and structured prior knowledge within each domain.…”
Section: Introduction
Confidence: 99%