Assistive Technologies empower individuals to accomplish tasks they might not be able to do otherwise. Unfortunately, a large percentage of purchased Assistive Technology devices (35% or more) end up unused or abandoned [7,10], leaving many people with Assistive Technology that is inappropriate for their needs. Low acceptance rates of Assistive Technology occur for many reasons, but common factors include 1) failure to consider user opinion in device selection, 2) ease of obtaining devices, 3) poor device performance, and 4) changes in user needs and priorities [7]. We are working to help more people gain access to the Assistive Technology they need by empowering non-engineers to "Do-It-Yourself" (DIY): to create, modify, or build their own devices. This paper illustrates that it is possible to custom-build Assistive Technology, and argues that empowering users to make their own Assistive Technology can improve the adoption process (and subsequently adoption rates). We discuss DIY experiences and impressions from individuals who have either built Assistive Technology before or rely on it. We found that increased control over design elements, passion, and cost motivated individuals to make their own Assistive Technology instead of buying it. We discuss how a new generation of rapid prototyping tools and online communities can empower more individuals to do so. We synthesize our findings into design recommendations to help promote future DIY-AT success.
Consumer-grade digital fabrication such as 3D printing is on the rise, and we believe it can be leveraged to great benefit in special education. Although 3D printing is infiltrating mainstream education, little research has explored 3D printing in the context of students with special support needs. We describe our studies on this topic and the resulting contributions. We initially conducted a formative study exploring the use of 3D printing at three locations serving populations with varying abilities, including individuals with cognitive, motor, and visual impairments. We found that 3D design and printing perform three functions in special education: (1) STEM engagement, (2) creation of educational aids for accessible curriculum content, and (3) creation of custom adaptive devices. As part of our formative work, we also discuss a case study on the co-design of an assistive hand grip created with occupational therapists at one of our investigation sites. This work inspired further studies on the creation of adaptive devices using 3D printers. We identified the needs and constraints of these therapists and found implications for a specialized 3D modeling tool to support their use of 3D printers. We developed GripFab, 3D modeling software based on feedback from therapists, and used it to explore the feasibility of in-house 3D object design in support of accessibility (a rough sketch of such a parametric workflow appears below). Our contributions include case studies at three special education sites and discussion of obstacles to efficient 3D printing in this context. We have extended these contributions with a more in-depth look at the stakeholders and findings from GripFab studies. We have expanded our discussion to include suggestions for researchers in this space, in addition to refined suggestions from our earlier work for technologists creating 3D modeling and printing tools, therapists seeking to leverage 3D printers, and educators and administrators looking to implement these design tools in special education environments.
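GripFab's internals are not described in this abstract. As a rough, hypothetical illustration of the kind of parametric, therapist-facing workflow such a tool supports, the Python sketch below writes an OpenSCAD description of a simple cylindrical hand grip with finger grooves that could then be rendered to an STL for 3D printing. The dimensions, function names, and OpenSCAD-based approach are illustrative assumptions, not GripFab's actual implementation.

```python
# Hypothetical sketch: generate an OpenSCAD file for a simple parametric hand grip.
# This is NOT GripFab's implementation; parameters and approach are illustrative only.

def grip_scad(length_mm=100.0, diameter_mm=32.0, n_grooves=4, groove_depth_mm=3.0):
    """Return OpenSCAD source for a cylinder with spherical finger grooves subtracted."""
    groove_r = diameter_mm * 0.45          # radius of each spherical groove cut
    spacing = length_mm / (n_grooves + 1)  # even spacing of grooves along the grip
    cuts = "\n".join(
        f"    translate([{diameter_mm / 2 + groove_r - groove_depth_mm:.2f}, 0, "
        f"{(i + 1) * spacing:.2f}]) sphere(r={groove_r:.2f});"
        for i in range(n_grooves)
    )
    return (
        "difference() {\n"
        f"  cylinder(h={length_mm:.2f}, d={diameter_mm:.2f}, $fn=96);\n"
        "  union() {\n"
        f"{cuts}\n"
        "  }\n"
        "}\n"
    )

if __name__ == "__main__":
    # A therapist-adjustable parameter set; render hand_grip.scad to STL with OpenSCAD.
    with open("hand_grip.scad", "w") as f:
        f.write(grip_scad(length_mm=110, diameter_mm=35, n_grooves=4))
```

The appeal of this style of workflow is that a therapist only adjusts a few named measurements (grip length, diameter, number of grooves) rather than performing free-form 3D modeling.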
Motor thalamus (Mthal) comprises the ventral anterior, ventral lateral, and ventral medial thalamic nuclei in rodents. This subcortical hub receives input from the basal ganglia (BG), cerebellum, and reticular thalamus in addition to connecting reciprocally with motor cortical regions. Despite the central location of Mthal, the mechanisms by which it influences movement remain unclear. To determine its role in generating ballistic, goal-directed movement, we recorded single-unit Mthal activity as male rats performed a two-alternative forced-choice task. A large population of Mthal neurons increased their firing briefly near movement initiation and could be segregated into functional groups based on their behavioral correlates. The activity of "initiation" units was more tightly locked to instructional cues than movement onset, did not predict which direction the rat would move, and was anticorrelated with reaction time (RT). Conversely, the activity of "execution" units was more tightly locked to movement onset than instructional cues, predicted which direction the rat would move, and was anticorrelated with both RT and movement time. These results suggest that Mthal influences choice RT performance in two stages: short-latency, nonspecific action initiation followed by action selection/invigoration. We discuss the implications of these results for models of motor control incorporating BG and cerebellar circuits.

Motor thalamus (Mthal) is a central node linking subcortical and cortical motor circuits, though its precise role in motor control is unclear. Here, we define distinct populations of Mthal neurons that either encode movement initiation, or both action selection and movement vigor. These results have important implications for understanding how basal ganglia, cerebellar, and motor cortical signals are integrated. Such an understanding is critical to defining the pathophysiology of a range of BG- and cerebellum-linked movement disorders, as well as refining pharmacologic and neuromodulatory approaches to their treatment.
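The abstract does not reproduce the authors' statistical criteria for separating "initiation" from "execution" units. As a hedged sketch of how such a distinction could be operationalized, the hypothetical Python snippet below compares how tightly each unit's peak firing locks to cue onset versus movement onset across trials (lower trial-to-trial jitter means tighter locking). The window, bin size, and jitter criterion are illustrative assumptions, not the authors' analysis.

```python
import numpy as np

def peak_jitter(spike_times, event_times, window=(-0.2, 0.5), bin_s=0.02):
    """Std. dev. (s) of each trial's peak firing latency relative to an event.

    spike_times: 1-D array of spike times for one unit (s).
    event_times: 1-D array of per-trial event times (cue or movement onset, s).
    """
    latencies = []
    for t0 in event_times:
        rel = spike_times[(spike_times >= t0 + window[0]) & (spike_times <= t0 + window[1])] - t0
        if rel.size:
            counts, edges = np.histogram(rel, bins=np.arange(window[0], window[1], bin_s))
            latencies.append(edges[np.argmax(counts)])  # latency of the peak bin
    return np.std(latencies) if latencies else np.nan

def classify_unit(spike_times, cue_times, move_times):
    """Label a unit by whichever event its firing is more tightly locked to."""
    cue_jit = peak_jitter(spike_times, cue_times)
    move_jit = peak_jitter(spike_times, move_times)
    return "initiation-like" if cue_jit < move_jit else "execution-like"
```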
If applications were able to detect a user's expertise, then software could automatically adapt to better match that expertise. Detecting expertise is difficult because a user's skill changes as the user interacts with an application and differs across applications. This means that expertise must be sensed dynamically, continuously, and unobtrusively so as not to burden the user. We present an approach to this problem that operates without a task model, relying on low-level mouse and menu data that can typically be sensed across applications at the operating-system level. We have implemented and trained a classifier that can detect "novice" or "skilled" use of an image editing program, the GNU Image Manipulation Program (GIMP), at 91% accuracy, and tested it against real use. In particular, we developed and tested a prototype application that gives the user dynamic application information that differs depending on her performance.
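The abstract only summarizes the feature set and classifier. As a hedged illustration of the general approach, the hypothetical Python sketch below trains an off-the-shelf classifier on a few plausible low-level mouse and menu features. The feature names, the toy data, and the choice of model are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: plausible low-level features aggregated over a short window of use.
# Columns (illustrative only): mean menu dwell time (s), menu selections per minute,
# mean pointer velocity (px/s), fraction of menu navigations that were abandoned.
X = np.array([
    [2.8, 1.5, 180.0, 0.40],   # hesitant, exploratory use
    [0.9, 6.0, 420.0, 0.05],   # fast, confident use
    # ... many more labeled observation windows in a real dataset ...
])
y = np.array(["novice", "skilled"])  # one expertise label per window

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Predict expertise for a newly observed window of interaction.
print(clf.predict([[1.2, 4.0, 350.0, 0.10]]))
```

With a realistically sized labeled dataset, cross-validation over held-out windows would estimate the kind of accuracy figure reported in the abstract.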
An increasing number of online communities support the open-source sharing of designs that can be built using rapid prototyping to construct physical objects. In this paper, we examine the designs and motivations for assistive technology found on Thingiverse.com, the largest of these communities at the time of this writing. We present results from a survey of all assistive technology that has been posted to Thingiverse since 2008 and a questionnaire distributed to the designers exploring their relationship with assistive technology and their motivations for creating these designs. The majority of these designs are intended to be manufactured on a 3D printer and include assistive devices and modifications for individuals with disabilities and older adults, as well as aids for medication management. Many of these designs are created by the end users themselves or on behalf of friends and loved ones. These designers frequently have no formal training or expertise in the creation of assistive technology. This paper discusses trends within this community as well as future opportunities and challenges.
Information about the location and size of the targets that users interact with in real-world settings can enable new innovations in human performance assessment and software usability analysis. Accessibility APIs provide some information about the size and location of targets. However, this information is incomplete: the APIs do not support all targets found in modern interfaces, and the reported sizes can be inaccurate. These accessibility APIs access the size and location of targets through low-level hooks to the operating system or an application. We have developed an alternative solution for target identification that leverages visual affordances in the interface and the visual cues produced as users interact with targets. We have used our novel target identification technique in a hybrid solution that combines machine learning, computer vision, and accessibility API data to find the size and location of targets users select with 89% accuracy. Our hybrid approach outperforms the accessibility API alone: in our dataset of 1355 targets covering 8 popular applications, the API alone correctly identified only 74% of the targets.
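The exact hybrid pipeline is not spelled out in the abstract. As a hedged sketch of the general idea (query the accessibility API first, then fall back to visual cues around the click point), the hypothetical Python snippet below combines an API lookup stub with a simple edge-based bounding-box estimate from a screenshot using OpenCV. The function names, the OpenCV-based fallback heuristic, and the placeholder API stub are illustrative assumptions, not the authors' system.

```python
import cv2
import numpy as np

def target_from_accessibility_api(x, y):
    """Ask the platform accessibility API for the element under (x, y).

    Placeholder: on Windows this could use a UIA wrapper's element-from-point
    lookup and its reported rectangle; other platforms differ. Returns
    (left, top, width, height) or None if the API has no usable answer.
    """
    return None  # assume the API failed for this target

def target_from_screenshot(screenshot_bgr, x, y):
    """Fallback: estimate the target's bounding box from visual cues near the click."""
    gray = cv2.cvtColor(screenshot_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    # Keep the smallest contour whose bounding box contains the click point.
    boxes = [cv2.boundingRect(c) for c in contours]
    boxes = [(l, t, w, h) for (l, t, w, h) in boxes if l <= x <= l + w and t <= y <= t + h]
    return min(boxes, key=lambda b: b[2] * b[3]) if boxes else None

def identify_target(screenshot_bgr, x, y):
    """Hybrid lookup: trust the accessibility API when it answers, else use vision."""
    return target_from_accessibility_api(x, y) or target_from_screenshot(screenshot_bgr, x, y)
```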