The interaction between vision and audition was investigated using a signal detection method. A light and a tone were presented either in the same location or in different locations along the horizontal plane, and the subjects responded with same-different judgments of stimulus location. Three modes of stimulus presentation were used: simultaneous presentation of the light and tone, tone first, and light first. For the latter two conditions, the interstimulus interval was either 0.7 or 2.0 sec. A statistical decision model was developed which distinguished between the perceptual and decision processes. The results, analyzed within the framework of this model, suggested that the apparent interaction between vision and audition is due to shifts in decision criteria rather than to perceptual change.

One of the most controversial issues in psychology concerns the genesis of perceptual abilities and the relations that exist among the sense modalities. Until recently, it was strongly believed that "vision is educated by touch." Many scholars, such as Berkeley and Helmholtz, argued that since visual perception is grossly different from the image on the retina, it must be acquired through tactual experience. Although many experiments were conducted to test the empiricistic doctrine of visual space perception (e.g., Stratton, 1897), there has been no unequivocal evidence supporting the position. Indeed, the argument has been challenged by recent studies of intersensory relations. Harris (1965), for example, demonstrated that vision "dominates" and modifies the proprioceptive sense when the two modalities are made to provide discrepant information. Thus, a person viewing the image of his hand through deflecting prism lenses comes to feel his hand where he sees it rather than where it actually is. Rock and Harris (1967) reported several experiments which further demonstrated the dominance of vision over touch. They concluded that rather than touch educating vision, the reverse appears to be true.

The finding of visual dominance over proprioception led other researchers to investigate conflict situations involving other sensory modalities. Pick, Warren, and Hay (1969) examined the interaction of vision and proprioception, proprioception and audition, and vision and audition. For vision and audition, the discrepancy was created by displacing
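The distinction the abstract draws between perceptual and decision processes is the standard one in signal detection theory: sensitivity (d') indexes the perceptual separation of the stimulus distributions, while the criterion (c) indexes response bias. A minimal sketch of that computation follows; the hit and false-alarm rates are hypothetical illustrations, not values from the study.

```python
# Hedged sketch: separating perceptual sensitivity (d') from the decision
# criterion (c) in a same-different judgment task. The hit/false-alarm
# rates below are hypothetical, not taken from the experiments described.
from statistics import NormalDist

def d_prime_and_criterion(hit_rate, fa_rate):
    """Compute sensitivity d' and criterion c from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Two hypothetical conditions with identical sensitivity but shifted
# criteria: the pattern the decision model attributes to response bias
# rather than to perceptual change.
d1, c1 = d_prime_and_criterion(0.84, 0.31)  # more liberal criterion
d2, c2 = d_prime_and_criterion(0.69, 0.16)  # more conservative criterion
```

Here both conditions yield the same d' (about 1.49) while c changes sign, which is how an apparent intersensory "interaction" in raw response rates can reflect a criterion shift alone.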
A group of 10 subjects participated in a memory search task and a visual search task in different sessions. The subjects searched for a given target letter in three-, four-, and five-letter words and pronounceable nonwords. There were no significant differences in either the reaction time (RT) data or the error rates between the two tasks. Mean RT increased linearly with the number of letters in the display or in the memory set. Word trials produced faster responses than nonwords by about 40 msec in all conditions. Errors also increased with set size and occurred more often as misses on positive trials than as false alarms. The overall similarity of the results from memory search and visual search tasks suggests that the component processes involved are the same.

Recognition involves the comparison of an encoded stimulus with information stored in memory to determine if a match can be found between the two. Insights into the nature of this comparison process have been gained from search tasks involving a single target character and a search set of one or more characters. The subject's task is to produce a positive response if the target is included in the search set and a negative response otherwise. Sternberg (1966) developed a memory search paradigm in which the search set is presented first and held in memory in some form until the target is presented. The visual search analog of this task was used by Atkinson, Holmgren, and Juola (1969), in which a single target letter was presented first, followed by a horizontal array of letters. The data from these two experiments were very similar, with mean reaction time (RT) increasing linearly with set size in a parallel fashion for positive and negative responses.

A direct comparison of visual search and memory search in a within-subjects design was reported by Townsend and Roos (1973). They used three subjects who participated in 11 sessions of memory search followed by 11 sessions of visual search.
The search sets in both tasks were strings of from one to five consonants. The results from the two tasks were very similar to each other and to data from comparable search experiments. Mean RT increased fairly linearly with set size, and the positive RT function was parallel to, but below, the negative function in both tasks. Townsend and Roos (1973) argued that the comparison process involving the target item and the search set items could take place in either an auditory or visual "form system." Both are presumed to be limited-capacity storage systems capable of handling echoic or iconic inputs from sensory processors as well as auditory or visual images derived from long-term memory. A limited-capacity translator can operate to ...

Note: This research was supported in part by funds from Biomedical Sciences Support Grant RR-07037 from the National Institutes of Health and National Science Foundation Grant No. BMS74-12801 to the second author. Requests for reprints should be addressed to James F. Juola, Department of Psychology, University of Kansas, Lawrence, Kansas 66045.
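The linear, parallel set-size functions described above can be summarized by fitting RT = a + b·n separately to positive and negative trials: equal slopes b with a constant offset between intercepts is the "parallel" pattern. A minimal least-squares sketch follows; the RT values are illustrative, Sternberg-like numbers, not data from the experiments cited.

```python
# Hedged sketch: fitting the linear set-size function RT = a + b*n that
# memory search and visual search both produce. The RT values below are
# hypothetical illustrations, not data from the studies discussed.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # (slope, intercept)

set_sizes = [1, 2, 3, 4, 5]
pos_rt = [430, 468, 506, 544, 582]  # hypothetical positive-trial means (msec)
neg_rt = [470, 508, 546, 584, 622]  # hypothetical negative-trial means (msec)

b_pos, a_pos = fit_line(set_sizes, pos_rt)
b_neg, a_neg = fit_line(set_sizes, neg_rt)
# Equal slopes with a constant intercept offset: parallel RT functions,
# the signature of the same per-item comparison rate on both trial types.
```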
Children in the fourth and sixth grades searched memory sets of two, three, or four items for the presence of a given word or picture probe. The memory sets were all of one form on any trial, being either words or easily nameable pictures, and the probe form was varied to match or mismatch the form of the memory items. Subjects responded more rapidly when the probe form and memory set form matched, an effect that did not interact with the number of memory set items. Presumably, stimulus form effects are limited to encoding processes which precede comparisons between the probe and memory set items. The comparison process itself appears to be independent of the form in which the probe is presented.
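The inference in this abstract follows additive-factors logic: if the form effect is confined to an encoding stage and the set-size effect to a later comparison stage, total RT is a sum of independent stage durations, so the two factors should not interact. A minimal sketch of that additive prediction follows; all durations are hypothetical.

```python
# Hedged sketch of the additive-stage interpretation: form effects confined
# to probe encoding, set-size effects confined to comparison. All stage
# durations below are hypothetical illustrations.
ENCODE = {"match": 300, "mismatch": 340}  # probe-encoding time (msec)
COMPARE_PER_ITEM = 45                      # per-item comparison time (msec)

def predicted_rt(form, set_size):
    """Total RT as a sum of independent encoding and comparison stages."""
    return ENCODE[form] + COMPARE_PER_ITEM * set_size

# The form effect is the same 40 msec at every set size: additivity,
# i.e., no interaction between probe form and number of memory items.
effects = [predicted_rt("mismatch", n) - predicted_rt("match", n)
           for n in (2, 3, 4)]
```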