The goal of this study was to compare the processing of social information in deaf and hearing adolescents. A task was developed to assess the social information processing (SIP) skills of deaf adolescents, based on Crick and Dodge's reformulated six-stage model (1994; A review and reformulation of social information-processing mechanisms in children's social adjustment. Psychological Bulletin, 115, 74-101). It consisted of a structured interview conducted after participants watched 18 scenes depicting either participation in a peer group or provocation by peers. Participants were 32 deaf and 20 hearing adolescents and young adults aged 13 to 21 years. Deaf participants had lower scores than hearing participants on all steps of the SIP model (coding, interpretation, goal formulation, response generation, response decision, and representation). However, deaf girls and women had better scores on social adjustment and on some SIP skills than deaf male participants.
An eye-tracking experiment explored the gaze behavior of deaf individuals when perceiving language in spoken language only, sign language only, and sign-supported speech (SSS). Participants were deaf (n = 25) and hearing (n = 25) Spanish adolescents. The deaf students were either prelingually, profoundly deaf individuals with cochlear implants (CIs) used by age 5 or earlier, or prelingually, profoundly deaf native signers with deaf parents. The effectiveness of SSS has rarely been tested within the same group of children for discourse-level comprehension. Here, video-recorded texts, including spatial descriptions, were presented alternately in spoken language, sign language, and SSS. The study tested whether these communicative systems could bring comprehension in deaf participants to the level achieved by hearing participants with spoken language. Within-group analyses of deaf participants tested whether the bimodal linguistic input of SSS favored discourse comprehension compared with the unimodal languages. Deaf participants with CIs achieved comprehension equal to that of hearing controls in all communicative systems, while deaf native signers without CIs achieved comprehension equal to that of hearing participants when tested in their native sign language. Comprehension of SSS was no better than comprehension of spoken language, even when spatial information was communicated. Eye movements of deaf and hearing participants were tracked, and dwell times on the face and body areas of the sign model were analyzed. Within-group analyses focused on differences between native and non-native signers. Dwell times of hearing participants were distributed equally across the upper and lower areas of the face, whereas deaf participants looked mainly at the mouth area; this could enable information to be obtained from mouthings in sign language and from lip-reading in SSS and spoken language. Few fixations were directed toward the signs, although these were more frequent when spatial language was transmitted.
Both native and non-native signers looked mainly at the face when perceiving sign language, although non-native signers looked significantly more at the body than native signers. This distribution of gaze fixations suggested that deaf individuals – particularly native signers – mainly perceived signs through peripheral vision.
Lipreading proficiency was investigated in a group of hearing-impaired people, all of whom knew Spanish Sign Language (SSL). The aim of this study was to establish the relationships between lipreading and several other variables (gender, intelligence, audiological variables, participants' education, parents' education, communication practices, intelligibility, and use of SSL). The 32 participants were between 14 and 47 years of age. All had sensorineural hearing losses, ranging from severe to profound. The lipreading procedure comprised identification of words presented in isolation, all spoken by the same talker. Identification required participants to select their responses from a set of four appropriately labelled pictures. Lipreading was significantly correlated with intelligence and intelligibility. Multiple regression analyses were used to obtain a prediction equation for the lipreading measures. On the basis of this procedure, it was concluded that proficient deaf lipreaders were more intelligent and that their oral speech was more comprehensible to others.