1992
DOI: 10.1016/s0893-6080(05)80007-3

CALM: Categorizing and learning module

Cited by 73 publications (60 citation statements)
References 53 publications
Citing publications span 1996 to 2014.
“…Geometrically speaking, the classification process orthogonalizes each of the patterns of a pair with reference to the other patterns in the training set, the subsequent association between those patterns being trivially accomplished without interfering with previously acquired associations. The general scheme is shown in Figure 9 and is functionally almost identical to the ARTMAP network, as well as to many other networks (e.g., Hecht-Nielsen 1987; Burton et al. 1990; McLaren 1993; Murre 1992; Murre et al. 1992). It is also functionally equivalent to a noisy version of the nearest-neighbour classification algorithm used in the machine-learning community, and structurally equivalent to more general psychological models, including the category-learning models described in previous sections and other models such as the one proposed by Bower (1996).…”
Section: Supervised Learning (mentioning)
confidence: 88%
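The nearest-neighbour reading of this scheme is easy to make concrete. The sketch below is a minimal, illustrative implementation assuming Euclidean distance and Gaussian noise on the distance scores; the function and parameter names are not taken from any of the cited papers.

```python
# Minimal sketch of the "noisy nearest-neighbour" reading of the scheme:
# stored exemplars act as localist category nodes; a probe is assigned the
# label of the closest exemplar, with Gaussian noise added to the distances.
# All names and parameter values here are illustrative, not from the paper.
import numpy as np

def noisy_nearest_neighbour(probe, exemplars, labels, noise_sd=0.1, rng=None):
    """Classify `probe` by its nearest stored exemplar under noisy distances."""
    rng = np.random.default_rng() if rng is None else rng
    distances = np.linalg.norm(exemplars - probe, axis=1)
    noisy = distances + rng.normal(0.0, noise_sd, size=distances.shape)
    return labels[int(np.argmin(noisy))]

# Toy usage: one exemplar per category, well separated.
exemplars = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array(["A", "B"])
print(noisy_nearest_neighbour(np.array([0.1, -0.2]), exemplars, labels))  # usually "A"
```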
“…In characterizing such a localist approach I have sought to generalize from a number of different models (e.g., Burton 1994; Carpenter & Grossberg 1987a, 1987b; Foldiak 1991; Kohonen 1984; Murre 1992; Murre et al. 1992; Nigrin 1993; Rumelhart & Zipser 1986). These models differ in their details but are similar in structure, and I shall attempt to draw together the best features of each.…”
Section: A Generalized Localist Model (mentioning)
confidence: 99%
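To make the shared structure of these localist models concrete, the sketch below implements a bare winner-take-all layer with a competitive weight update; the layer size, learning rate, and class name are illustrative assumptions rather than details of any cited model.

```python
# A minimal sketch of the structure common to the localist models listed above:
# a layer of localist units competes via winner-take-all, and only the winning
# unit's incoming weights move toward the current input (competitive learning
# in the Rumelhart & Zipser sense). Sizes and learning rate are illustrative.
import numpy as np

class LocalistLayer:
    def __init__(self, n_units, n_inputs, lr=0.1, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        self.w = rng.random((n_units, n_inputs))
        self.w /= self.w.sum(axis=1, keepdims=True)  # keep weight rows normalized
        self.lr = lr

    def step(self, x):
        """Winner-take-all competition followed by a local weight update."""
        winner = int(np.argmax(self.w @ x))                # best-matching unit wins
        self.w[winner] += self.lr * (x - self.w[winner])   # winner moves toward x
        return winner

layer = LocalistLayer(n_units=3, n_inputs=4)
for x in np.eye(4):          # repeatedly present four orthogonal input patterns
    layer.step(x)
```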
“…Cooperation between neurons coding for features of the same proto-object in different modules, in combination with competition between neurons coding for features of different proto-objects within modules (see also [37,118]), allows such incongruent resolutions. For example, neurons coding for a feature of a proto-object may 'win' in the colour module, while neurons coding for a feature of another proto-object win in the shape module.…”
Section: (I) From Visual Features To Proto-objects (mentioning)
confidence: 99%
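A minimal numerical sketch of this cooperation/competition scheme follows, assuming two modules (e.g., colour and shape), a handful of proto-objects, and a simple linear update; the values and the update rule are illustrative assumptions, not taken from the cited work.

```python
# Sketch of the scheme described above: within each module, units coding
# different proto-objects inhibit one another, while units coding the same
# proto-object excite one another across modules. Parameters are illustrative.
import numpy as np

n_modules, n_objects = 2, 3                         # e.g., colour and shape modules
act = np.random.default_rng(1).random((n_modules, n_objects))

for _ in range(50):
    coop = act.sum(axis=0) - act                     # support from the same object in other modules
    comp = act.sum(axis=1, keepdims=True) - act      # rivalry within each module
    act = np.clip(act + 0.1 * (coop - comp), 0.0, 1.0)

# With no external input the modules typically agree on one proto-object;
# conflicting bottom-up input to individual modules (not modelled here) is
# what can drive the incongruent winners described in the quote.
print(act.argmax(axis=1))
```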
“…In (22), the connection matrix of HN-COR corresponds to X. Thus, the eigenvectors of HN-COR are equivalent to those of 1-module CCHN, i.e., the dynamical features of HN-COR and 1-module CCHN are characterized in the same orthogonal subspace.…”
Section: Contribution Of a Single Interaction (mentioning)
confidence: 99%
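The eigenvector equivalence can be illustrated numerically: if both networks are governed by the same symmetric connection matrix X, they share one orthogonal eigenvector basis and their linearized dynamics decouple in it. The matrix below is a random symmetric stand-in, an assumption for illustration rather than the matrix defined in (22).

```python
# Numerical illustration of the stated equivalence: networks sharing the same
# symmetric connection matrix X share an orthogonal eigenvector basis, so
# their dynamics are characterized in the same orthogonal subspace.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
X = (A + A.T) / 2                        # symmetric stand-in connection matrix

eigvals, V = np.linalg.eigh(X)           # orthogonal eigenvectors of X
print(np.allclose(V.T @ V, np.eye(5)))   # True: V is an orthonormal basis

# Linear dynamics s' = X s decouple in this basis: each eigen-coordinate
# evolves independently with rate eigvals[i].
s = rng.standard_normal(5)
print(np.allclose(V.T @ (X @ s), eigvals * (V.T @ s)))  # True
```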
“…In this model, the states of a module are, in general, dynamically updated taking into account not only its given information processing but also the consistency of its relationship with other modules. Previous research has offered several approaches to interactive MNNs and their real-world applications (e.g., Carpenter and Grossberg 1988; Hassoun 1989; Kosko 1989; Tsutsumi 1989, 1990; Murre et al. 1992; Fukushima and Imagawa 1993; Happel and Murre 1994).…”
Section: Introduction (mentioning)
confidence: 99%
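As a rough sketch of this interactive updating, the code below lets each module combine its own processing of its input with a consistency term drawn from the other modules' states; the coupling rule and parameters are illustrative assumptions, not the update equations of any of the models cited above.

```python
# Illustrative interactive modular update: each module's new state mixes its
# own processing of its input with the average state of the other modules,
# nudging the whole system toward mutually consistent states.
import numpy as np

def interactive_update(states, inputs, W, coupling=0.3):
    """One synchronous update of all module states.

    states, inputs: (n_modules, dim) arrays; W: (dim, dim) shared weight matrix.
    """
    new_states = []
    for m, (s, x) in enumerate(zip(states, inputs)):
        own = np.tanh(W @ x)                                  # module's own processing
        others = np.delete(states, m, axis=0).mean(axis=0)    # other modules' consensus
        new_states.append((1 - coupling) * own + coupling * others)
    return np.array(new_states)

rng = np.random.default_rng(3)
W = rng.standard_normal((4, 4)) * 0.5
states = rng.standard_normal((3, 4))
inputs = rng.standard_normal((3, 4))
for _ in range(10):
    states = interactive_update(states, inputs, W)
```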