Proceedings of the 24th Conference on Computational Natural Language Learning 2020
DOI: 10.18653/v1/2020.conll-1.3

Neural Proof Nets

Abstract: Linear logic and the linear λ-calculus have a long-standing tradition in the study of natural language form and meaning. Among the proof calculi of linear logic, proof nets are of particular interest, offering an attractive geometric representation of derivations that is unburdened by the bureaucratic complications of conventional proof-theoretic formats. Building on recent advances in set-theoretic learning, we propose a neural variant of proof nets based on Sinkhorn networks, which allows us to translate parsing…
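For context on the Sinkhorn networks the abstract mentions: their core operation alternately normalises the rows and columns of a score matrix, driving it towards a doubly stochastic matrix, i.e. a differentiable relaxation of a permutation. The sketch below is a minimal NumPy illustration of that operator, not the authors' implementation; the function name and shapes are assumptions.

```python
import numpy as np

def sinkhorn(log_scores, n_iters=20):
    """Alternate row and column normalisation in log space; exp of the
    result converges towards a doubly stochastic matrix (a "soft"
    permutation) as n_iters grows."""
    log_alpha = np.array(log_scores, dtype=float)  # copy; don't mutate input
    for _ in range(n_iters):
        log_alpha -= np.logaddexp.reduce(log_alpha, axis=1, keepdims=True)  # rows sum to 1
        log_alpha -= np.logaddexp.reduce(log_alpha, axis=0, keepdims=True)  # columns sum to 1
    return np.exp(log_alpha)
```

In the proof-net reading, rows and columns would index the positive and negative occurrences of an atomic type, and a hard axiom-link matching can be decoded from the soft output at inference time (e.g. greedily or with the Hungarian algorithm).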

Cited by 8 publications (13 citation statements) · References 41 publications
“…We see that removing all planarity information (i.e., the link filtering, the planarity-aware attention, and the planarity loss term T1) is disastrous; this condition has by far the largest drop in coverage. This is especially notable as LCG proof nets must be half-planar due to the non-commutativity of L*; this useful constraint is not present in type-logical grammars that do not have this property, such as that employed by Kogkalidis et al. (2020).…”
Section: Discussion (mentioning, confidence: 99%)
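The half-planarity constraint discussed above has a simple combinatorial reading: axiom links drawn over a linearly ordered sequence of atom occurrences may not cross. The hypothetical checker below (from neither paper) makes that concrete.

```python
def is_half_planar(links):
    """Return True iff no two links cross. Links are (i, j) position
    pairs; after ordering each pair and sorting, links (i, j) and
    (k, l) with i < k cross exactly when i < k < j < l."""
    norm = sorted(tuple(sorted(pair)) for pair in links)
    for a, (i, j) in enumerate(norm):
        for k, l in norm[a + 1:]:
            if i < k < j < l:
                return False
    return True

# Nested links are fine, crossing links are not:
assert is_half_planar([(0, 3), (1, 2)])
assert not is_half_planar([(0, 2), (1, 3)])
```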
“…We train our models on LCGbank, a semi-automatic conversion of CCGbank to LCG (Fowler, 2016). This conversion necessitated adjusting for instances of CCG's crossing rules that are not permitted in LCG, as well as providing fully categorial parses for the cases in CCGbank where non-categorial rules are used (e.g., unary type-changing). LCGbank also omits features on its categories and includes… [Footnote:] Although Kogkalidis et al. (2020) describe their model's training as "end-to-end", their approach is perhaps better described as joint training. A truly end-to-end system would allow differentiation through the supertagger/proof frame construction, which remains a topic for further investigation.…”
Section: Data (mentioning, confidence: 99%)
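The footnote's distinction between joint and truly end-to-end training hinges on whether gradients can flow through the discrete choice of supertags. One standard (here purely illustrative) way to achieve that is a Gumbel-softmax relaxation of the supertag distribution; neither cited paper proposes this, and the function below is only a sketch of what "differentiation through the supertagger" could look like.

```python
import torch.nn.functional as F

def soft_supertags(tag_logits, tau=1.0):
    """Relax the argmax over supertags into a differentiable sample,
    so a downstream proof-frame/linking loss can backpropagate into
    the supertagger rather than training the two stages on separate
    losses. tag_logits: (num_words, num_tags)."""
    return F.gumbel_softmax(tag_logits, tau=tau, hard=False)
```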
“…In this paper I will present two different ways of combining proof net proof search with neural networks, each splitting the task into two subtasks in a different way. The first is the 'standard' approach, which has been applied to proof search in type-logical grammars in various forms (Kogkalidis, Moortgat & Moot 2020b; De Pourtales, Rabault, Kogkalidis & Moot 2023). Since this approach has been discussed elsewhere, I will only present it briefly, as a contrast to the second, novel approach, which is the main topic of the paper.…”
Section: Introduction (mentioning, confidence: 99%)