2009
DOI: 10.1111/j.1551-6709.2009.01024.x
Conceptual Hierarchies in a Flat Attractor Network: Dynamics of Learning and Computations

Abstract: The structure of people's conceptual knowledge of concrete nouns has traditionally been viewed as hierarchical (Collins & Quillian, 1969). For example, superordinate concepts (vegetable) are assumed to reside at a higher level than basic-level concepts (carrot). A feature-based attractor network with a single layer of semantic features developed representations of both basic-level and superordinate concepts. No hierarchical structure was built into the network. In Experiment and Simulation 1, the graded struct…

Cited by 44 publications (58 citation statements)
References 68 publications
“…One way in which objects may be related is taxonomically, or within categories of things that share semantic features (e.g., Collins and Loftus, 1975; Rosch and Mervis, 1975; Rogers and McClelland, 2004; O'Connor et al, 2009). For example, taxonomically-related zebras and lions share visual features (e.g., eyes, four legs) and encyclopedic features (e.g., live on the savanna).…”
Section: Introduction (mentioning, confidence: 99%)
“…We used the model developed by Cree and colleagues (Cree et al, 1999; see also O’Connor, Cree, & McRae, 2009), and we would expect similar behavior from other attractor dynamical models of semantic processing (e.g., Plaut & Booth, 2000; Rogers & McClelland, 2004). The model architecture is shown in Figure 1.…”
Section: Attractor Model Simulation (mentioning, confidence: 99%)
“…Following O’Connor et al. (2009), we set the learning rate to 0.01 and added momentum (0.9) after the first 10 training epochs. The model was trained using continuous recurrent backpropagation through time (Pearlmutter, 1995) until it correctly activated over 95% of the appropriate semantic feature units (i.e., the model activated over 95% of features that were produced by participants in the feature norming study; by this point, the model also correctly deactivated over 99% of nonproduced features), which was approximately 40 training epochs.…”
Section: Attractor Model Simulation (mentioning, confidence: 99%)
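The training recipe quoted above (learning rate 0.01, momentum 0.9 added after 10 epochs, a settle-then-backpropagate regime, and a stopping rule of over 95% of target features correctly activated) can be sketched as a toy attractor network. This is an illustrative stand-in, not the published model: the word set, feature dimensions, settling steps, and random targets below are invented, and the paper's continuous recurrent backpropagation through time (Pearlmutter, 1995) is simplified here to discrete-time BPTT.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 localist word-form inputs mapped to binary semantic feature
# vectors (hypothetical stand-ins for feature-norm targets).
X = np.eye(4)
Y = (rng.random((4, 20)) < 0.3).astype(float)

n_in, n_feat, T = 4, 20, 5          # T = discrete settling steps (illustrative)
W_in = rng.normal(0, 0.1, (n_in, n_feat))
W_rec = rng.normal(0, 0.1, (n_feat, n_feat))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

vel_in = np.zeros_like(W_in)
vel_rec = np.zeros_like(W_rec)
lr = 0.01                            # learning rate from the quoted passage

for epoch in range(1, 2001):
    mom = 0.9 if epoch > 10 else 0.0  # momentum only after the first 10 epochs
    # Forward: let the feature layer settle for T steps.
    states = [np.zeros((4, n_feat))]
    for t in range(T):
        states.append(sigmoid(X @ W_in + states[-1] @ W_rec))
    out = states[-1]
    # Backward: backpropagate the final-state error through time.
    g_in = np.zeros_like(W_in)
    g_rec = np.zeros_like(W_rec)
    delta = (out - Y) * out * (1 - out)
    for t in range(T, 0, -1):
        g_in += X.T @ delta
        g_rec += states[t - 1].T @ delta
        s = states[t - 1]
        delta = (delta @ W_rec.T) * s * (1 - s)
    vel_in = mom * vel_in - lr * g_in
    vel_rec = mom * vel_rec - lr * g_rec
    W_in += vel_in
    W_rec += vel_rec
    # Stopping rule analogous to the paper's: >=95% of target-on features
    # are active (here, above 0.5).
    hits = ((out > 0.5) == (Y > 0.5))[Y > 0.5].mean()
    if hits >= 0.95:
        break
```

On toy data this loop typically reaches the 95% criterion well before the epoch cap; the point is only to make the quoted hyperparameters concrete, not to reproduce the reported ~40-epoch convergence.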
“…Our behavioral results were broadly consistent with the claim made by Cree et al (2006) that rare features play a privileged role in semantic processing. We examined whether their model (Cree et al, 2006; see also Cree, McRae, & McNorgan, 1999; O’Connor, Cree, & McRae, 2009) correctly predicts this outcome. A schematic depiction of the model architecture is shown in Figure 2.…”
Section: Simulations (mentioning, confidence: 99%)