Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.438

Learning Constraints for Structured Prediction Using Rectifier Networks

Abstract: Various natural language processing tasks are structured prediction problems where outputs are constructed with multiple interdependent decisions. Past work has shown that domain knowledge, framed as constraints over the output space, can help improve predictive accuracy. However, designing good constraints often relies on domain expertise. In this paper, we study the problem of learning such constraints. We frame the problem as that of training a two-layer rectifier network to identify valid structures or substructures…
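
To make the setup in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' code) of the first step it describes: training a small two-layer rectifier (ReLU) network to separate valid output structures from invalid ones, with each structure represented as a binary feature vector. The feature dimension, toy labeling rule, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: train a two-layer rectifier (ReLU) network that
# labels candidate output structures as valid (1) or invalid (0).
# Dimensions, data, and hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn

d, K = 16, 8                      # feature size, number of hidden (ReLU) units

net = nn.Sequential(
    nn.Linear(d, K),              # hidden pre-activations w_k . y + b_k
    nn.ReLU(),
    nn.Linear(K, 1),              # scalar validity score
)

# Toy data: random binary "structures"; pretend a structure is valid when
# at least half of its parts are active (a stand-in for real validity labels).
torch.manual_seed(0)
Y = (torch.rand(512, d) > 0.5).float()
t = (Y.sum(dim=1) >= d / 2).float().unsqueeze(1)

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(net(Y), t)
    loss.backward()
    opt.step()

# After training, (net(y) > 0) approximates the "is this structure valid?"
# check that the paper then turns into constraints usable at inference time.
with torch.no_grad():
    acc = ((net(Y) > 0).float() == t).float().mean().item()
print(f"training accuracy: {acc:.2f}")
```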

Cited by 9 publications (10 citation statements) | References 32 publications
“…Besides, we will also discuss unsupervised techniques for out-of-distribution (OOD) detection (Zhou et al., 2021b; Hendrycks et al., 2020), prediction with abstention (Dhamija et al., 2018; Hendrycks et al., 2018), and novelty class detection (Perera and Patel, 2019) that seek to help the IE model identify invalid inputs or inputs with semantic shifts during its inference phase. Specifically, to demonstrate how models can ensure the global consistency of the extraction, we will cover constraint learning methods that automatically capture logical constraints among relations (Wang et al., 2022c; Pan et al., 2020), and techniques to enforce the constraints in inference (Li et al., 2019a; Han et al., 2019; Lin et al., 2020). To assess if the systems give faithful extracts, we will also talk about the spurious correlation problems of current IE models and how to address them with counterfactual analysis (Qian et al., 2021).…”
Section: Robust Learning and Inference for IE [35min] (mentioning, confidence: 99%)
“…In terms of enforcing declarative constraints in neural models, early efforts (Roth and Yih, 2004) formulate the inference process as Integer Linear Programming (ILP) problems. Pan et al. (2020) also employ ILP to enforce constraints learned automatically from Rectifier Networks with strong expressiveness (Pan and Srikumar, 2016). Yet the main drawback of solving an ILP problem is its inefficiency in a large feasible solution space.…”
Section: Related Work (mentioning, confidence: 99%)
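
As a rough illustration of the ILP-based enforcement this excerpt refers to, the hedged sketch below encodes a tiny inference problem with PuLP: binary decision variables are scored by a model, and learned linear constraints (here, made-up coefficients standing in for constraints read off a trained rectifier network) restrict the feasible outputs. The variable names, scores, and coefficients are assumptions for illustration only.

```python
# Hypothetical sketch of constrained inference as an ILP (pip install pulp).
# Scores and constraint coefficients are made up; in the cited setting they
# would come from a trained model and from constraints extracted from a
# rectifier network.
import pulp

scores = [2.0, 1.5, -0.5, 0.8]             # model scores for 4 binary decisions

prob = pulp.LpProblem("constrained_inference", pulp.LpMaximize)
y = [pulp.LpVariable(f"y{i}", cat=pulp.LpBinary) for i in range(len(scores))]

# Objective: pick the highest-scoring assignment.
prob += pulp.lpSum(s * yi for s, yi in zip(scores, y))

# Learned linear constraints, each of the form  w . y + b >= 0,
# i.e. a half-space that valid structures must lie in.
learned = [
    ([ 1, -1,  0,  0], 0),                  # y0 >= y1
    ([-1, -1, -1, -1], 2),                  # at most 2 decisions switched on
]
for w, b in learned:
    prob += pulp.lpSum(wi * yi for wi, yi in zip(w, y)) + b >= 0

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(yi.value()) for yi in y])        # highest-scoring feasible structure
```

The inefficiency the excerpt mentions corresponds to the solve step: the feasible space grows exponentially with the number of binary decision variables, so the ILP can become the bottleneck for large structures.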
“…When we construct three-event subgraphs from documents, a binary label t for structure legitimacy is created for each subgraph. Inspired by how constraints are learned for several structured prediction tasks (Pan et al., 2020), we represent constraints for a given subgraph-label pair (X, t) as K linear inequalities. Formally, t = 1 if X satisfies constraint c_k for every k = 1, …, K, and the k-th constraint c_k is expressed by a linear inequality…”
Section: Learning Constraints (mentioning, confidence: 99%)
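
A minimal sketch of the check described in this excerpt, under assumed shapes: with the K learned inequalities stacked into a weight matrix W and offset vector b, a subgraph's feature vector x is labeled legitimate (t = 1) only when every inequality w_k · x + b_k >= 0 holds. The specific numbers below are placeholders, not learned values.

```python
# Hypothetical sketch: a subgraph is legitimate (t = 1) iff it satisfies
# all K learned linear inequalities  w_k . x + b_k >= 0.
# W, b, and the feature vector are placeholder values for illustration.
import numpy as np

K, d = 3, 5
W = np.array([[ 1.0, -1.0, 0.0, 0.0,  0.0],   # K x d constraint weights
              [ 0.0,  0.0, 1.0, 1.0, -2.0],
              [-1.0,  0.0, 0.0, 0.0,  1.0]])
b = np.array([0.0, 0.5, 1.0])                  # K offsets

def satisfies_constraints(x: np.ndarray) -> bool:
    """t = 1 only if every learned inequality holds for feature vector x."""
    return bool(np.all(W @ x + b >= 0))

x = np.array([1.0, 0.0, 1.0, 1.0, 1.0])        # one candidate subgraph's features
print(satisfies_constraints(x))                 # True here; any violated row flips it to False
```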