2019
DOI: 10.1353/lan.2019.0021
No free lunch in linguistics or machine learning: Response to Pater

Cited by 12 publications (6 citation statements); references 36 publications.
“…These models were heavily influenced by the early advances in neural network research (Alderete and Tupper, 2018a; Pater, 2019). Modeling linguistic data with neural networks has seen a rapid increase in the past few years (Alderete et al., 2013; Avcu et al., 2017; Kirov, 2017; Alderete and Tupper, 2018a; Dupoux, 2018; Mahalunkar and Kelleher, 2018; Weber et al., 2018; Prickett et al., 2019; for cautionary notes, see Rawski and Heinz, 2019). One of the promising implications of neural network modeling is the ability to test generalizations that the models produce without language-specific assumptions (Pater, 2019).…”
Section: Previous Work
Confidence: 99%
“…This paper also proposes a technique for establishing the Generator's internal representations. The inability to uncover networks' representations has been used as an argument against neural network approaches to linguistic data (among others, in Rawski and Heinz, 2019). We argue that the internal representation of a network can be, at least partially, uncovered.…”
Section: Introduction
Confidence: 99%
“…He argues that these architectures may supply the theory of learning that linguistics currently lacks. In response, Rawski and Heinz (2019) invoke the no-free-lunch theorems (Wolpert and Macready, 1997) and poverty-of-the-stimulus arguments (Chomsky, 1986) to question whether neural models actually have the right inductive biases.…”
Section: Introduction
Confidence: 99%