1998
DOI: 10.1037/0096-1523.24.1.322
Selective attention and the formation of linear decision boundaries: Reply to Maddox and Ashby (1998).

Abstract: …his comments on an earlier version of this article. My thanks also go to W. Todd Maddox for providing me with the stimulus coordinates that he used in his computer-simulation analyses reported in the appendix of the Maddox and Ashby (1998) commentary and for his patience in answering my questions concerning details of the simulation procedures.

Cited by 17 publications (20 citation statements) | References 59 publications
“…Good examples are colors varying in brightness and saturation, or tones varying in loudness and pitch. Nosofsky (1987, 1998) and McKinley and Nosofsky (1996) provided evidence that when observers learn to classify integral-dimension stimuli, they fail to form orthogonal linear boundaries along single dimensions, even when such boundaries would produce nearly optimal performance. Instead, the patterns of performance observed under such conditions are more consistent with the idea that similarity comparisons to stored exemplars drive classification (McKinley & Nosofsky, 1996; Nosofsky & Palmeri, 1997).…”
Section: Discussion
confidence: 99%
“…In the context of criticizing an article by McKinley and Nosofsky (1996), Maddox and Ashby (1998) have also argued that model fits involving only averaged data can be misleading. While expressing fundamental agreement with this general point, Nosofsky (1998) replied that the particular criticisms raised by Maddox and Ashby (1998) were misguided. See Maddox and Ashby (1998) and Nosofsky (1998) for details regarding this particular debate.…”
Section: Notes
confidence: 99%
“…A bivariate normal distribution is described by a mean and variance along each dimension, as well as a covariance term (μ_x, μ_y, σ_x², σ_y², cov_xy), where the subscripts x and y denote dimensions x and y. Figure 1 depicts hypothetical equal-likelihood contours for four …tifiable from the parameters that define the decision process (see Nosofsky, 1998, for a discussion). For example, the attention-weight parameter, w, in the generalized context model modifies the similarity relations among items in the psychological space and thus has a strong effect on the equal-similarity contour that separates the psychological space into two regions: one in which the probability of responding "A" is greater than .5 and the other in which the probability is less than .5.…”
Section: Effects of Perceptual Representation on Categorization Performance
confidence: 99%
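The role of the attention weight described in the excerpt above can be illustrated with a minimal exemplar-similarity sketch. This is an illustrative toy, not the fitted generalized context model from the cited articles: the function name, exemplar coordinates, and parameter values are all made up for demonstration, and it uses Shepard-style exponential similarity over an attention-weighted Euclidean distance.

```python
import math

def gcm_prob_A(probe, exemplars_A, exemplars_B, w=0.5, c=1.0):
    """Probability of responding "A" in a toy exemplar model:
    summed similarity of the probe to category-A exemplars,
    normalized by its summed similarity to all stored exemplars.
    w is the attention weight on dimension x (1 - w on dimension y);
    c is a similarity-gradient (sensitivity) parameter."""
    def sim(p, e):
        # Attention-weighted Euclidean distance, then exponential decay.
        d = math.sqrt(w * (p[0] - e[0]) ** 2 + (1 - w) * (p[1] - e[1]) ** 2)
        return math.exp(-c * d)

    s_a = sum(sim(probe, e) for e in exemplars_A)
    s_b = sum(sim(probe, e) for e in exemplars_B)
    return s_a / (s_a + s_b)

# A probe equidistant from the two exemplars sits on the equal-similarity
# contour (response probability .5) for any attention weight; shifting w
# toward one dimension moves that contour, which is the effect the
# excerpt attributes to the attention-weight parameter.
p_equal = gcm_prob_A((1.0, 5.0), [(0.0, 0.0)], [(2.0, 0.0)], w=0.5)
p_biased = gcm_prob_A((0.4, 10.0), [(0.0, 0.0)], [(2.0, 0.0)], w=1.0)
```

With all attention on dimension x (w = 1.0), the large y-distance is ignored and the probe's proximity to the A exemplar on x dominates, so `p_biased` exceeds .5 even though the probe is far from both exemplars overall.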
“…In the two category structures where both dimensions are relevant, there is overwhelming evidence for the implementation of the GCM using uniform priors. This implies that, in these conditions, participants do not seem to allocate their attention optimally over both of the stimulus dimensions (Nosofsky, 1998b; Nosofsky & Johansen, 2000). These conclusions are consistent with those of Nosofsky (1989), made on the basis of point parameter estimates found by fitting the GCM without explicit priors on parameters.…”
Section: Discussion
confidence: 99%