1995
DOI: 10.1007/978-1-4471-3579-1_2
Mapping across domains without feedback: A neural network model of transfer of implicit knowledge

Cited by 20 publications (35 citation statements)
References 19 publications
“…Besides the basic grammatical effect, the model has been shown to reproduce several additional effects such as the effects of similarity to training strings and grammaticality reported by Vokey and Brooks (1992; see Dienes, Altmann, & Gao, 1999) and effects of knowledge about the position of single letters (Kinder, 2000). A variant of the model has been demonstrated to explain transfer of grammar knowledge to a new letter set (e.g., Shanks, Johnstone, & Staggs, 1997; see Dienes, Altmann, & Gao, 1999). Furthermore, the model accounts for dissociations between classification and recognition performance in amnesic and control participants in the AGL paradigm (Kinder & Shanks, 2001).…”
Section: The SRN Model (mentioning)
confidence: 92%
“…Because this accuracy is higher for grammatical than for nongrammatical strings, the model correctly predicts the former to be endorsed more frequently than the latter. Dienes (1992) compared several variants of the autoassociator model with respect to their capability of reproducing empirical data obtained from Dienes, Broadbent, and Berry (1991) as well as Dulany, Carlson, and Dewey (1984). The variants differed with respect to several features, these being the learning rule (Hebb rule vs. Delta rule), the coding of the letter features (single letter coding vs. additional coding of bigrams), the coding of absent features (activation of zero vs. activation of −1 if the letter is absent), successive versus simultaneous prediction (successive prediction: each unit received input only from units representing preceding letter positions; simultaneous prediction: each unit received input from all other units, except from itself), and the amount of training (preasymptotic training: the model was trained with the identical number of trials as were the participants; asymptotic training: the model was trained until asymptote was reached).…”
Section: Introduction (mentioning)
confidence: 97%
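The passage above lists the design dimensions along which Dienes (1992) varied the autoassociator: learning rule, feature coding, coding of absent features, prediction scheme, and amount of training. As a minimal sketch of two of those dimensions only (Hebb vs. Delta rule, and 0 vs. −1 coding of absent features), the Python fragment below contrasts the two updates in a single-layer autoassociator; the letter set, string encoding, learning rate, and training strings are illustrative assumptions, not the published parameterization.

```python
import numpy as np

# Illustrative sketch (not the published model): a single-layer autoassociator
# over a letter-by-position code, contrasting two learning-rule variants
# mentioned in the quoted passage (Hebb vs. Delta).

LETTERS = "MTVRX"   # assumed letter set; the original grammars may differ
MAX_LEN = 6         # assumed maximum string length

def encode(string, absent=0.0):
    """One unit per (position, letter); absent features coded as `absent`
    (0 or -1, the two variants described in the quoted passage)."""
    v = np.full(MAX_LEN * len(LETTERS), absent)
    for pos, ch in enumerate(string):
        v[pos * len(LETTERS) + LETTERS.index(ch)] = 1.0
    return v

def train(strings, rule="delta", lr=0.05, epochs=50, absent=0.0):
    n = MAX_LEN * len(LETTERS)
    W = np.zeros((n, n))
    for _ in range(epochs):
        for s in strings:
            x = encode(s, absent)
            if rule == "hebb":
                W += lr * np.outer(x, x)          # Hebb: correlational update
            else:
                W += lr * np.outer(x - W @ x, x)  # Delta: error-driven update
            # "Simultaneous prediction": every unit is predicted from all
            # other units but never from itself.
            np.fill_diagonal(W, 0.0)
    return W

def reconstruction_error(W, s, absent=0.0):
    x = encode(s, absent)
    return float(np.sum((x - W @ x) ** 2))

if __name__ == "__main__":
    training = ["MTV", "MTTVX", "MVRX"]   # made-up strings, not grammar-derived
    W = train(training, rule="delta")
    # Lower reconstruction error for trained than for novel strings illustrates
    # how reconstruction accuracy can drive endorsement rates in the model.
    print(reconstruction_error(W, "MTV"), reconstruction_error(W, "XRVT"))
```

This ties to the first sentence of the quoted passage: in such a model, endorsement decisions can be read off reconstruction accuracy, which tends to be higher for strings resembling the training set.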
“…Several attempts have been made to investigate the kind of information which is stored in artificial grammar learning (Knowlton & Squire, 1994, 1996; Perruchet & Pacteau, 1990; Shanks, Johnstone, & Staggs, 1997). Parallel to the growing interest in this issue of research, computational models of artificial grammar learning have been proposed (Dienes, 1992; Dienes, Altmann, & Gao, 1999; Servan-Schreiber & Anderson, 1990). It is crucial for the validity of these models that they make correct predictions about the kind of information which is stored during training and used for categorization.…”
Section: Introduction (mentioning)
confidence: 99%