Gary Behm has been teaching and directing the Center on Access Technology Innovation Laboratory at NTID for five years. He is a deaf engineer who retired from IBM after serving for 30 years. He is a content expert in development engineering and manufacturing, and he develops and teaches all related engineering courses. His responsibilities as director of the Center on Access Technology Innovation Laboratory include the planning, implementation, and dissemination of research projects related to accessibility needs. He received his BS from RIT and his MS from Lehigh University. His last assignment with IBM was as an Advanced Process Control project manager, in which he managed a team that delivered the next-generation Advanced Process Control solution, replacing the legacy APC system in the 300 mm semiconductor fabricator. Behm holds fifteen patents and has presented over 30 scientific and technical papers at professional conferences worldwide.

Dr. Raja S. Kushalnagar, Rochester Institute of Technology

Raja Kushalnagar is an Assistant Professor in the Information and Computing Studies Department at the National Technical Institute for the Deaf at the Rochester Institute of Technology in Rochester, NY. He teaches information and computing courses and tutors deaf and hard of hearing students in computer science and information technology courses. His research interests focus on the intersection of disability law, accessible and educational technology, and human-computer interaction, with an emphasis on enhancing educational access for deaf and hard of hearing students in mainstreamed classrooms. He worked in industry for over five years before returning to academia and disability law policy; toward that end, he completed a J.D. and LL.M. in disability law, and an M.S. and Ph.D. in Computer Science.

Enhancing Accessibility of Engineering Lectures for Deaf and Hard of Hearing Students: Real-time Tracking Text Display in Classrooms

Abstract

The introduction of Real-time Text Display (RTD), in which typists transcribe audio and display it to students in real time, has greatly increased the accessibility of lectures for deaf and hard of hearing (DHH) students, as evidenced by their increased graduation rates from post-secondary programs. However, significant but subtle barriers persist in current static RTD displays, especially in engineering, which makes heavy use of detailed visuals and explanations via sequential steps [1-3]. Hearing students can look at the visuals while simultaneously listening to the spoken explanation, combining the two effortlessly. By contrast, DHH students must constantly look away from the static RTD display to search for and observe details in the lecture visuals. As a result, they spend less time watching lecture visuals and gain less information than their hearing peers. We discuss the implementation and evaluation of an accessible technology system, Real-time Tracking Text Display (RTTD), that addresses the accessibility issues DHH engineering students face e...
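The abstract describes RTTD only at a high level, and its implementation is not published here. Purely as an illustration of the core idea (transcript text that tracks the lecture visuals so students need not look away), the following minimal Python/tkinter sketch moves a caption label so it stays near a tracked point of interest. The mouse pointer stands in for whatever point RTTD actually tracks, and every name in the sketch is hypothetical.

```python
# Illustrative sketch only, not the authors' RTTD implementation.
# Uses only the standard library; the mouse pointer is a stand-in for
# the tracked point of interest on the lecture visuals.
import tkinter as tk

CAPTION_OFFSET = (16, 24)  # keep the text just below and right of the pointer

root = tk.Tk()
root.title("RTTD sketch")
root.geometry("800x600")

caption = tk.Label(root, text="(real-time transcript would stream here)",
                   bg="black", fg="white", font=("Helvetica", 14))
caption.place(x=20, y=20)

def follow_pointer(event):
    # Reposition the caption near the tracked point, clamped to the window,
    # so the transcript stays within the student's field of view.
    x = min(event.x + CAPTION_OFFSET[0], root.winfo_width() - caption.winfo_width())
    y = min(event.y + CAPTION_OFFSET[1], root.winfo_height() - caption.winfo_height())
    caption.place(x=max(x, 0), y=max(y, 0))

root.bind("<Motion>", follow_pointer)
root.mainloop()
```

A real system would replace the pointer events with whatever signal marks the instructor's current focus (a laser pointer tracker, slide annotations, etc.) and stream live transcription into the label.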
Captions (subtitles) for television and movies have greatly enhanced accessibility for Deaf and hard of hearing (DHH) consumers who do not understand the audio but can otherwise follow by reading the captions. However, these captions fail to fully convey auditory information, due to the simultaneous delivery of aural and visual content and a lack of standardization in representing non-speech information. Viewers cannot simultaneously watch the movie scenes and read the visual captions; instead, they have to switch between the two and inevitably lose information and context while watching. In contrast, hearing viewers can simultaneously listen to the audio and watch the scenes. Most auditory non-speech information (NSI) is not easily represented by words, e.g., the description of a ring tone or the sound of something falling. We enhance captions with tactile and visual-tactile feedback. For the former, we transform auditory NSI into its equivalent tactile representation and convey it simultaneously with the captions. For the latter, we visually identify the location of the NSI. This approach can benefit DHH viewers by conveying more aural content to the viewer's visual and tactile senses simultaneously than visual captions alone. We conducted a study comparing DHH viewers' responses to video with visual captions, tactile captions, and visual-tactile captions. The viewers significantly benefited from visual-tactile and tactile captions.

Figure 1: Intertitle-scene temporal and spatial separation. The movie briefly narrates what will happen in the scene, and then displays the scene.
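The abstract above does not specify how the tactile channel was implemented, so the following is only a minimal Python sketch of the general idea under stated assumptions: caption cues carry hypothetical NSI tags, each tag maps to an assumed vibration pattern, and a stand-in vibrate() function substitutes for a real haptic actuator API.

```python
# Illustrative sketch only; the study's apparatus is not described here.
# Hypothetical NSI-tagged caption cues are displayed visually while their
# tactile equivalent is "delivered" at the same moment by a stand-in callback.
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cue:
    start: float               # seconds from video start
    text: str                  # caption text shown on screen
    nsi: Optional[str] = None  # non-speech-information tag, if any

# Assumed mapping from NSI category to (pulse_ms, pause_ms, repeats).
NSI_PATTERNS = {
    "phone_ringing": (200, 100, 3),
    "object_falling": (400, 0, 1),
}

def vibrate(pulse_ms, pause_ms, repeats):
    # Stand-in for a haptic actuator API (e.g., a wearable's vibration motor).
    for _ in range(repeats):
        print(f"  bzzt {pulse_ms} ms")
        time.sleep((pulse_ms + pause_ms) / 1000)

def play(cues):
    t0 = time.monotonic()
    for cue in sorted(cues, key=lambda c: c.start):
        time.sleep(max(0.0, cue.start - (time.monotonic() - t0)))
        print(f"[{cue.start:5.1f}s] {cue.text}")   # visual caption channel
        if cue.nsi in NSI_PATTERNS:                # simultaneous tactile channel
            vibrate(*NSI_PATTERNS[cue.nsi])

play([Cue(0.5, "[phone ringing]", "phone_ringing"),
      Cue(2.0, "Hello? Who is this?"),
      Cue(3.5, "[a vase shatters offscreen]", "object_falling")])
```

The design point this illustrates is that the tactile pattern is emitted at the same timestamp as the visual caption, so the viewer's eyes can stay on the scene while the NSI arrives through touch.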
Gary Behm is a Visiting Lecturer at NTID. He is a deaf engineer at IBM who received his BS from RIT and his MS from Lehigh University. He currently serves as a loaned executive at NTID/RIT, working in the Center on Access Technology and the Department of Engineering Studies. At IBM, he is a delivery project manager in the Rapid Application Development Engineering System. Behm holds six patents and has presented over 20 scientific and technical papers at various professional conferences.