Despite clear patient experience advantages, low specificity rates have thus far prevented swallowable capsule endoscopes from replacing traditional endoscopy for diagnosis of colon disease. One explanation for this is that capsule endoscopes lack the ability to provide insufflation, which traditional endoscopes use to distend the intestine for a clear view of the internal wall. To provide a means of insufflation from a wireless capsule platform, in this paper we use biocompatible effervescent chemical reactions to convert liquids and powders carried onboard a capsule into gas. We experimentally evaluate the quantity of gas needed to enhance capsule visualization and locomotion, and determine how much gas can be generated from a given volume of reactants. These experiments motivate the design of a wireless insufflation capsule, which is evaluated in ex vivo experiments. These experiments illustrate the feasibility of enhancing visualization and locomotion of endoscopic capsules through wireless insufflation.
With content rapidly moving to the electronic space, access to graphics for individuals with visual impairments is a growing concern. Recent research has demonstrated the potential for representing basic graphical content on touchscreens using vibrations and sounds, yet few guidelines or processes exist to guide the design of multimodal, touchscreen-based graphics. In this work, we seek to address this gap by synergizing our collective research efforts over the past eight years and implementing our findings into a compilation of recommendations, which we validate through an iterative design process and user study. We start by reviewing previous work and then collate findings into a set of design guidelines for generating basic elements of touchscreen-based multimodal graphics. We then use these guidelines to generate exemplary graphics in mathematics, specifically bar charts and geometry concepts. We discuss the iterative design process of moving from guidelines to actual graphics and highlight challenges. We then present a formal user study with 22 participants with visual impairments, comparing learning performance using touchscreen-rendered graphics to that using embossed graphics. We conclude with qualitative feedback from participants on the touchscreen-based approach and offer areas of future investigation as these recommendations are expanded to include more complex graphical concepts.
Introduction: The current work probes the effectiveness of multimodal touch screen tablets in conveying science, technology, engineering, and mathematics graphics via vibrations and sounds to individuals who are visually impaired (i.e., blind or low vision), and compares them with similar graphics presented in an embossed format. Method: A volunteer sample of 22 participants who are visually impaired, recruited from a summer camp and local schools for blind students, took part in the current study. Participants were first briefly (∼30 min) trained on how to explore graphics via a multimodal touch screen tablet. They then explored six graphic types (number line, table, pie chart, bar chart, line graph, and map) displayed via embossed paper and tablet. Participants answered three content questions per graphic type following exploration. Results: Participants were only 6% more accurate when answering questions about an embossed graphic as opposed to a tablet graphic. A paired-samples t test indicated that this difference was not significant, t(14) = 1.91, p = .07. Follow-up analyses indicated that presentation medium did not interact with graphic type, F(5, 50) = 0.43, p = .83, nor with visual ability, F(1, 13) = 0.00, p = .96. Discussion: The findings demonstrate that multimodal touch screen tablets may be comparable to embossed graphics in conveying iconographic science and mathematics content to individuals with visual impairments, regardless of the severity of impairment. The relative equivalence in response accuracy between mediums was unexpected, given that most students who participated were braille readers and had experience reading embossed graphics, whereas they were introduced to the tablet on the day of testing.
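The paired-samples t test reported above is computed from per-participant difference scores between the two conditions. As a minimal sketch (the participant scores below are hypothetical, not the study's data), assuming accuracy is scored per participant under each medium:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t statistic and degrees of freedom.

    Computes per-pair differences, then t = mean(d) / (sd(d) / sqrt(n)),
    with df = n - 1.
    """
    assert len(x) == len(y), "paired design requires equal-length samples"
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    t = mean(d) / (stdev(d) / sqrt(n))
    return t, n - 1

# Hypothetical per-participant accuracy (%) on embossed vs. tablet graphics
embossed = [90, 80, 92, 79, 85, 80, 86, 92, 84, 88]
tablet   = [89, 80, 90, 80, 84, 80, 85, 90, 84, 87]
t, df = paired_t(embossed, tablet)
```

The resulting t value is then compared against the t distribution with n − 1 degrees of freedom to obtain the p value (e.g., via `scipy.stats.ttest_rel`, which performs this whole computation directly).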
Implications for practitioners: This work illustrates that multimodal touch screen tablets may be an effective option for general education teachers or teachers of students with visual impairments to use in their educational practices. Currently, producing accessible graphics is time consuming and labor intensive, but such tablets offer a solution for “real-time” display of these graphics for presentation in class.
Extant literature illustrates that complementary efforts, such as Entrepreneurially Minded Learning, add an important dimension to the training of the next generation of engineers and innovators, providing them with multiple perspectives and a pathway for linking technical concepts to societal challenges. Nationwide initiatives, such as the Kern Entrepreneurial Engineering Network (KEEN), have focused specifically on infusing Entrepreneurially Minded Learning into curriculum content and delivery, training both faculty and students to have the know-why in addition to the know-how of engineering topics. KEEN has established a framework that supplements engineering skills already taught in classrooms with outcomes that support the development of an entrepreneurial mindset. The framework is rooted in fostering the 3Cs of entrepreneurial mindset: Curiosity, Connections, and Creating Value. In this study, we contribute a series of KEEN-inspired modules infused into a three-course sequence in Dynamics and Controls. We provide an overview of each of the modules, highlighting the KEEN framework objectives. We present postcourse student questionnaire responses illustrating student perceptions of entrepreneurial mindset and the 3Cs as they relate to engineering and addressing technological challenges. We provide lessons learned and sufficient detail of all modules for replication in other Dynamics and Controls course sequences, as well as supporting student data.