2018
DOI: 10.1613/jair.5714

Learning Explanatory Rules from Noisy Data

Abstract: Artificial Neural Networks are powerful function approximators capable of modelling solutions to a wide variety of problems, both supervised and unsupervised. As their size and expressivity increases, so too does the variance of the model, yielding a nearly ubiquitous overfitting problem. Although mitigated by a variety of model regularisation methods, the common cure is to seek large amounts of training data - which is not necessarily easily obtained - that sufficiently approximates the data distribution of the d…

Cited by 323 publications (394 citation statements): 0 supporting, 369 mentioning, 0 contrasting.
References 37 publications.
Citing publications span 2018–2023.
“…However, as argued in Section 1, we claim that E-reduction is not always the most suitable form of reduction because it can remove metarules necessary to learn programs with the appropriate specificity. To test this claim, we now conduct experiments that compare the learning performance of Metagol 2.3.0, the main MIL implementation, when given different reduced sets of metarules. We test the null hypothesis:…”
Section: Methods (mentioning)
confidence: 99%
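For context on the excerpt above (standard background from the MIL literature, not from the quoted paper): metarules are second-order clause templates that fix the form of the rules a meta-interpretive learner such as Metagol can construct, with P, Q, R standing for existentially quantified predicate variables. Two common examples:

    P(A,B) ← Q(A,C), R(C,B)    (chain)
    P(A,B) ← Q(B,A)            (inverse)

E-reduction can discard templates like these from a metarule set, which is why the authors argue it may remove metarules needed to learn programs of the appropriate specificity.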
“…Following prior work in the area of multi-stage neural NLG (Dušek and Jurcicek, 2016; Daniele et al., 2017; Puduppully et al., 2018; Hajdik et al., 2019; Moryossef et al., 2019), and inspired by more traditional pipeline data-to-text generation (Reiter and Dale, 2000; Gatt and Krahmer, 2018), we present a system which splits apart the typically end-to-end data-driven neural model into separate utterance planning and surface realization models using a symbolic intermediate representation. We focus in particular on surface realization and introduce a new symbolic intermediate representation which is based on an underspecified universal dependency tree (Mille et al., 2018b).…”
Section: Introduction (mentioning)
confidence: 99%
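As a reading aid for the pipeline described above, here is a minimal Python sketch of the two-stage split: a planning stage that maps input data to a symbolic intermediate representation (a toy dependency tree here), and a separate realization stage that linearizes that tree into text. All names (UDNode, plan, realize) and the trivial rule-based logic are illustrative assumptions; the cited system uses trained neural models for both stages and a richer underspecified universal dependency representation.

```python
# Sketch of a two-stage NLG pipeline: plan(data) -> symbolic tree,
# realize(tree) -> text. Hypothetical names; real systems learn both stages.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UDNode:
    """One node of the symbolic intermediate representation."""
    lemma: str
    deprel: str                                  # relation to the head node
    children: List["UDNode"] = field(default_factory=list)

def plan(record: dict) -> UDNode:
    """Utterance planning: select content and arrange it as a tree."""
    return UDNode(record["verb"], "root", [
        UDNode(record["subject"], "nsubj"),
        UDNode(record["object"], "obj"),
    ])

def realize(node: UDNode) -> str:
    """Surface realization: linearize the planned tree into a string.
    (Trivial subject-verb-object ordering; real realizers are learned.)"""
    subj = " ".join(realize(c) for c in node.children if c.deprel == "nsubj")
    obj = " ".join(realize(c) for c in node.children if c.deprel == "obj")
    return " ".join(part for part in (subj, node.lemma, obj) if part)

print(realize(plan({"verb": "serves", "subject": "The Eagle",
                    "object": "Italian food"})))
# -> The Eagle serves Italian food
```

The design point in the excerpt is that the explicit intermediate tree makes each stage separately trainable and inspectable, unlike a single end-to-end model.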
“…In addition, whereas some datasets use only dyadic concepts, such as kinship or string transformations, our dataset also requires learning programs with a mixture of predicate arities, such as input_jump/8 in Checkers and next_cell/4 in Sudoku. Learning programs with high-arity predicates is a challenge for some ILP approaches [14,38,24]. Moreover, because of our second main contribution, we can continually and automatically expand the dataset as new games are introduced into the GGP competition.…”
Section: Size and Diversity (mentioning)
confidence: 99%
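A note on notation (added for clarity, not from the quoted paper): the Prolog-style convention name/arity gives a predicate's name and its argument count, so next_cell/4 is a four-argument predicate and input_jump/8 an eight-argument one. Purely for illustration, with hypothetical argument roles not taken from the GGP dataset:

    next_cell(X1, Y1, X2, Y2)    (arity 4)

Higher arities enlarge the space of possible variable bindings per literal, which is why they stress some ILP systems [14,38,24].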
“…This reflects a major current issue in ILP, where systems are often given well-crafted language biases to ensure feasibility; however, this is not the only current challenge in ILP. For example, some ILP approaches target challenges such as learning from noisy data [62,24,49], probabilistic reasoning [19,20,66,3,67], non-determinism expressed through unstratified negation [63,48], and preference learning [46]. Future versions of this dataset could be extended to contain these features.…”
Section: More Evaluation Metrics (mentioning)
confidence: 99%