The computer can serve as a consciousness-raiser by highlighting and correcting students' errors and giving explanatory feedback. In the past, computer feedback was limited to simple error messages that told the learner little more than that he or she was wrong and should try again. Natural language processing, however, allows the computer to parse student responses grammatically and to provide feedback that is more detailed and informative than was formerly possible. This study investigates the effectiveness of two types of computer feedback: traditional feedback, which indicates only missing or unexpected words in the learner's response, and intelligent feedback, which provides further information about the nature of the errors in the form of metalinguistic rules. The results show that intelligent computer feedback is more effective than traditional computer feedback for improving learners' grammatical proficiency in the use of complex structures of the target language, supporting the value of metalinguistic instruction by computer.
Young deaf children use their vision to gather both language input and information about the environment. This dual requirement greatly complicates conversational turn-taking for the children and their parents, particularly when interaction centers on a visual focus such as a book. Data are presented here on the onset and maintenance of visual attention to signing in three profoundly deaf children, ages 2;9–3;7, while interacting with their hearing mothers about a story told through pictures. The data indicate that the children's visual attention in this situation was quite variable, although all of them experienced problems with the need to focus simultaneously on two sources of information: the mother's signs and the picture book. Suggestions for developing visual turn-taking skills are made, based on research on first-language acquisition and on the interactions of deaf mothers with their children.
To test whether deaf persons can read signs in peripheral vision, 12 profoundly deaf students, aged 15 to 18, in a residential school for the deaf, were seated between two signers, who presented common signs in random turns. Subjects responded by signing back to a video-camera, on which they were to fix their gaze. The tape recorded their responses as well as their eye movements, if any. Twenty-four signs were presented in each of two conditions: with the stimulus signs between 45° and 61° in the periphery, and with the signs between 61° and 77°. Mean performances, respectively, were 79.7% and 68%. The results support the supposition that peripheral vision may be linguistically and communicatively useful for deaf people, particularly as signs in isolation may be more difficult to read than signs in discourse.
Deaf children often have major difficulty learning the language of their parents, who in the majority of cases are hearing. The principal reason for these problems is the limitation of linguistic input reaching the children: The hearing loss itself acts as a drastic filter on the linguistic data, and information obtained from aided residual hearing, as well as from visual sources such as lipreading and signed representations of spoken language, is typically fragmentary. Beyond the limitations of input, the very difficulty of the task of learning an auditory language with severely restricted information is likely to lead to loss of motivation. A further complicating factor is language attitudes and the fact that the deaf community uses a visual-spatial language, American Sign Language (ASL), which deaf people acquire without effort and which provides a focus for cultural solidarity. Attitudes toward ASL are complicated by its identity as a minority language in a majority culture, whose standard language influences it to some extent. Attitudes toward English are complicated by the fact that the learning of English is imposed by an educational establishment run by hearing people and that ASL is not used as a language of instruction.