In dichotic listening, subjects are apparently unable to attend simultaneously to two concurrent auditory speech messages. However, in two experiments reported here, it is shown that people can attend to and repeat back continuous speech at the same time as taking in complex, unrelated visual scenes, or even while sight-reading piano music. In both cases performance with divided attention was very good, and in the case of sight-reading was as good as with undivided attention. There was little or no effect of the dual task on the accuracy of speech shadowing. These results are incompatible with the hypothesis that human attention is limited by the capacity of a general-purpose central processor in the nervous system. An alternative, "multi-channel", hypothesis is outlined.
Successive brief visual stimuli falling within a critical time interval are phenomenally simultaneous. This paper examines two models of perceptual sampling which purport to account for phenomenal simultaneity. The first is Stroud's (1955) theory that the sensory input is quantized into successive, discrete summation periods or ‘moments’ (the Discrete Moment Hypothesis). An alternative model which has not generally been considered represents the ‘moment’ as a continuous, running sample of the input (the Travelling Moment Hypothesis). Two experiments on phenomenal simultaneity are reported which provide a critical test between these two hypotheses. The results were entirely incompatible with the discrete moment model, which is therefore rejected. The travelling moment model accounted well for the results. These also suggest a possible relation between the limits of phenomenal simultaneity and the critical duration of brightness summation.
Three experiments are described in which two pictures of isolated man-made objects were presented in succession. The subjects' task was to decide, as rapidly as possible, whether the two pictured objects had the same name. With a stimulus-onset asynchrony (SOA) of above 200 msec two types of facilitation were observed: (1) the response latency was reduced if the pictures showed the same object, even though seen from different viewpoints (object benefit); (2) decision time was reduced further if the pictures showed the same object from the same angle of view (viewpoint benefit). These facilitation effects were not affected by projecting the pictures to different retinal locations. Significant benefits of both types were also obtained when the projected images differed in size. However, in these circumstances there was a small but significant performance decrement in matching two similar views of a single object, but not if the views were different. Conversely, the object benefit, but not the viewpoint benefit, was reduced when the SOA was only 100 msec. The data suggest the existence of (at least) two different visual codes, one non-retinotopic but viewer-centred, the other object-centred.
This paper sets out to identify, in information-processing terms, the elementary functional components of the mental lexicon and their interrelations. In particular it is concerned with the independent status of lexical codes for written and spoken language, and their relations to each other and to a language-free cognitive representation. Our evidence is based on the performance of language transcoding tasks (such as reading aloud or writing to dictation) in brain-damaged adult subjects. We review evidence for the functional independence of non-linguistic, cognitive representations, and for word-specific, lexical codes in both phonological and orthographic form. The data rule out the hypothesis of a modality-free or abstract lexicon mediating communication between lexical and cognitive representations. The data also reject the dominance of phonological over orthographic codes in access to and from word meanings. We can find no satisfactory evidence for independent lexicons used in language reception and language production.
We present evidence that the visual analysis of Chinese characters by skilled readers is based upon well-defined orthographic constituents. These functional units are the recurrent, integral stroke-patterns, not the individual strokes as previously thought. The speed of simultaneous “same-different” comparisons of Chinese characters is affected by the number of these orthographic units and, for “different” judgements, by the proportion of mismatching units, but not by the number of individual strokes. We further define a category of orthographic unit, referred to here as the “lexical radical”, which requires strict positional regularity within each composite character. Violation of positional regularity results in illegal non-characters. In contrast, recombination of orthographic units (stroke patterns) with the lexical radical in its regular position forms a regular pseudocharacter. We show that real characters are matched faster than pseudocharacters and non-characters—a word superiority effect in Chinese. Pseudocharacters are matched faster than non-characters, a pseudoword advantage in Chinese. We also present evidence suggesting that individual stroke patterns may be better recognized in real characters than in pseudocharacters and non-characters—a word superiority effect in terms of unit recognition. These results support the hypothesis that the functional orthographic unit in the recognition of Chinese characters, comparable to the letter in alphabetic word recognition, is the recurring integral stroke pattern.