Many accessibility features available on mobile platforms require applications (apps) to provide complete and accurate metadata describing user interface (UI) components. Unfortunately, many apps do not provide sufficient metadata for accessibility features to work as expected. In this paper, we explore inferring accessibility metadata for mobile apps from their pixels, as the visual interfaces often best reflect an app's full functionality. We trained a robust, fast, memory-efficient, on-device model to detect UI elements using a dataset of 77,637 screens (from 4,068 iPhone apps) that we collected and annotated. To further improve UI detections and add semantic information, we introduced heuristics (e.g., UI grouping and ordering) and additional models (e.g., recognize UI content, state, interactivity). We built Screen Recognition to generate accessibility metadata to augment iOS VoiceOver. In a study with 9 screen reader users, we validated that our approach improves the accessibility of existing mobile apps, enabling even previously inaccessible apps to be used.
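The abstract mentions heuristics for UI grouping and ordering without detail. As an illustration only (not the paper's actual method), here is a toy sketch of one such ordering heuristic: cluster detected bounding boxes into visual rows by vertical centre, then read each row left to right. The box format `(x, y, w, h)` and the `row_tol` parameter are assumptions for this sketch.

```python
def reading_order(boxes, row_tol=10):
    """Order detected UI elements top-to-bottom, then left-to-right.

    Boxes whose vertical centres fall within `row_tol` pixels of a row's
    anchor are grouped into that row -- a toy stand-in for the kind of
    ordering heuristic the paper describes. Each box is (x, y, w, h).
    """
    rows = []  # list of (anchor_centre_y, [boxes in row])
    for box in sorted(boxes, key=lambda b: b[1] + b[3] / 2):
        cy = box[1] + box[3] / 2
        if rows and abs(rows[-1][0] - cy) <= row_tol:
            rows[-1][1].append(box)  # same visual row
        else:
            rows.append((cy, [box]))  # start a new row
    ordered = []
    for _, row in rows:
        ordered.extend(sorted(row, key=lambda b: b[0]))  # left to right
    return ordered
```

A screen reader could then announce elements in this order, which usually matches sighted reading order for simple layouts; real screens (multi-column, overlapping elements) need richer grouping than this.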
CCS CONCEPTS • Human-centered computing → Accessibility technologies.
Many technologies currently exist that are capable of analyzing the surface of solid samples under ambient or vacuum conditions, but they are typically limited to smooth, planar surfaces. Those few that can be applied to nonplanar surfaces, however, require manual sampling and a high degree of human intervention. Herein, we describe a new platform, Robotic Surface Analysis Mass Spectrometry (RoSA-MS), for direct surface sampling of three-dimensional (3D) objects. In RoSA-MS, a sampling probe is attached to a robotic arm that has 360° rotation through 6 individual joints. A 3D laser scanner, also attached to the robotic arm, generates a digital map of the sample surface that is used to direct a probe to specific (x, y, z) locations. The sampling probe consists of a spring-loaded needle that briefly contacts the object surface, collecting trace amounts of material. The probe is then directed at an open port liquid sampling interface coupled to the electrospray ion source of a mass spectrometer. Material on the probe tip is dissolved by the solvent flow in the liquid interface and mass analyzed with high mass resolution and accuracy. The surface of bulky, nonplanar objects can thus be probed to produce chemical maps at the molecular level. Applications demonstrated herein include the examination of food sample surfaces, lifestyle chemistry, and chemical reactions on curved substrates. The modular design of this system also allows for modifications to the sampling probe and the ionization source, thereby expanding the potential of RoSA-MS for a great diversity of applications.
Augmented Reality (AR) technology creates new immersive experiences in entertainment, games, education, retail, and social media. AR content is often primarily visual, and its mix of virtual and real-world content makes non-visual access challenging. In this paper, we identify common constituent tasks in AR by analyzing existing mobile AR applications for iOS, and characterize the design space of tasks that require accessible alternatives. For each of the major task categories, we create prototype accessible alternatives that we evaluate in a study with 10 blind participants to explore their perceptions of accessible AR. Our study demonstrates that these prototypes make AR possible to use for blind users and reveals a number of insights to move forward. We believe our work sets forth not only exemplars for developers to create accessible AR applications, but also a roadmap for future research to make AR comprehensively accessible. CCS CONCEPTS • Human-centered computing → Human computer interaction (HCI); Accessibility technologies; Mixed / augmented reality.
current systems (up to 23%). Finally, we show three example applications that are facilitated by screen parsing: (i) UI similarity search, (ii) accessibility enhancement, and (iii) code generation from UI screenshots.
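The abstract lists UI similarity search as an application of screen parsing but does not specify how it works. As a hedged illustration only (not the paper's method), one minimal proxy is to compare two parsed screens by the cosine similarity of their element-type histograms; the element-type strings below are invented for the example.

```python
from collections import Counter
import math

def ui_similarity(elements_a, elements_b):
    """Cosine similarity between two screens' element-type histograms.

    A toy proxy for UI similarity search: screens with similar mixes of
    parsed element types (buttons, text fields, images, ...) score near
    1.0; screens with disjoint element types score 0.0.
    """
    a, b = Counter(elements_a), Counter(elements_b)
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

A real system would also account for element positions, sizes, and hierarchy, not just type counts; this sketch only shows the retrieval-by-similarity idea.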
SynchroWatch is a one-handed interaction technique for smartwatches that uses rhythmic correlation between a user's thumb movement and on-screen blinking controls. Our technique uses magnetic sensing to track the synchronous extension and reposition of the thumb, augmented with a passive magnetic ring. The system measures the relative changes in the magnetic field induced by the required thumb movement and uses a time-shifted correlation approach with a reference waveform for detection of synchrony. We evaluated the technique during three distraction tasks with varying degrees of hand and finger movement: active walking, browsing on a computer, and relaxing while watching online videos. Our initial offline results suggest that intentional synchronous gestures can be distinguished from other movement. A second evaluation using a live implementation of the system running on a smartwatch suggests that this technique is viable for gestures used to respond to notifications or issue commands. Finally, we present three demonstration applications that highlight the technique running in real-time on the smartwatch.
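The time-shifted correlation approach described above can be sketched in a few lines: standardize a window of the sensed magnetic signal, correlate it with the reference blink waveform at each candidate lag, and report synchrony when the best correlation clears a threshold. This is a simplified illustration under assumed parameters (`max_lag`, `threshold`), not the paper's implementation.

```python
import numpy as np

def max_lagged_correlation(signal, reference, max_lag):
    """Best Pearson correlation between `reference` and windows of
    `signal` shifted by up to +/- max_lag samples around the centre."""
    n = len(reference)
    centre = (len(signal) - n) // 2
    r = (reference - reference.mean()) / (reference.std() + 1e-9)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        start = centre + lag
        if start < 0 or start + n > len(signal):
            continue  # window would fall outside the recorded signal
        w = signal[start:start + n]
        w = (w - w.mean()) / (w.std() + 1e-9)  # standardize the window
        best = max(best, float(np.mean(w * r)))
    return best

def is_synchronous(signal, reference, threshold=0.8, max_lag=5):
    """Detect whether the user's thumb movement tracked the on-screen
    blinking control (correlation above threshold at some small lag)."""
    return max_lagged_correlation(signal, reference, max_lag) >= threshold
```

In the real system the signal would be the magnitude of the watch's magnetometer readings and the reference would match the blink rate of the selected on-screen control; the thresholding step is what separates intentional gestures from incidental motion.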