Machine learning (ML) is increasingly used to enhance intelligent products in the field of product design. However, ML has a never-ending lifecycle in which its capabilities and technical properties change iteratively as new annotated data are incorporated. This never-ending lifecycle (which includes data annotation, model training, and other steps) poses challenges for prototyping ML-enhanced products and demands a high level of ML literacy from designers. To facilitate the prototyping of ML-enhanced products and improve designers' ML literacy, we draw inspiration from a design method called Material Lifecycle Thinking (MLT), which regards ML as a continuously evolving design material. Based on MLT, we propose a cyclical prototyping workflow and develop inML Kit, a toolkit that enables designers to build functional ML prototypes and improve their ML literacy by involving them in the never-ending ML lifecycle. The toolkit was designed, iterated, and implemented through a participatory design process with experienced designers in this field. We evaluated inML Kit in a controlled user study that compared it with Google AIY. The results suggest that inML Kit helps designers build functional ML prototypes while improving their ML literacy.
Smart orthoses hold great potential for intelligent rehabilitation monitoring and training. However, most of these electronic assistive devices are too cumbersome for daily use and difficult to modify to accommodate variations in body shape and medical needs. For clinicians, the customization pipeline of these smart devices imposes significant learning costs. This paper introduces ThermoFit, an end-to-end design and fabrication pipeline for thermoforming smart orthoses that adheres to clinically accepted procedure. ThermoFit enables the shape and electronics placement of smart orthoses to conform to the body and allows rapid iteration by integrating low-cost Low-Temperature Thermoplastics (LTTPs) with custom metamaterial structures and electronic components. Specifically, three types of metamaterial structures are used in LTTPs to reduce the wrinkles caused by the thermoforming process and to permit adjustment of component positions and joint movement. A design tool prototype aids in generating metamaterial patterns and optimizing component placement and circuit routing. Three applications show that ThermoFit can be thermoformed on the body into different wearables. Finally, a hands-on study with a clinician verifies the user-friendliness of thermoforming smart orthoses, and technical evaluations demonstrate fabrication efficiency and electronic continuity.
Motion capture technologies reconstruct human movements and have wide-ranging applications. Mainstream motion capture research can be divided into vision-based methods and inertial measurement unit (IMU)-based methods. Vision-based methods capture complex 3D geometric deformations with high accuracy, but they rely on expensive optical equipment and suffer from line-of-sight occlusion. IMU-based methods are lightweight but struggle to capture fine-grained 3D deformations. In this work, we present a configurable self-sensing IMU sensor network to bridge the gap between vision-based and IMU-based methods. To achieve this, we propose a novel kinematic chain model based on the four-bar linkage to describe the minimum deformation process of 3D deformations. We also introduce three geometric priors, derived from the initial shape, material properties, and motion features, to assist the kinematic chain model in reconstructing deformations and to overcome the data sparsity problem. Additionally, to further enhance the accuracy of deformation capture, we propose a fabrication method for customizing 3D sensor networks to different objects. We apply origami-inspired thinking to this customization process, constructing 3D sensor networks through a 3D-2D-3D digital-physical transition. Experimental results demonstrate that our method achieves performance comparable to state-of-the-art methods.
Feet are the foundation of our bodies: they not only perform locomotion but also participate in expressing intent and emotion. Foot gestures are thus an intuitive and natural form of expression for interpersonal interaction. Recent studies have mostly introduced smart shoes as personal gadgets, while foot gestures used in multi-person foot interactions in social scenarios remain largely unexplored. We present Shoes++, which includes an inertial measurement unit (IMU)-mounted sole and an input vocabulary of social foot-to-foot gestures to support foot-based interaction. The gesture vocabulary was derived and condensed from a set of gestures elicited in a participatory design session with 12 users. We implement a machine learning model in Shoes++ that recognizes two-person and three-person social foot-to-foot gestures with 94.3% and 96.6% accuracy, respectively (N=18). In addition, the sole is designed to attach to and detach from various everyday shoes easily, supporting comfortable social foot interaction without taking the shoes off. Based on users' qualitative feedback, we also found that Shoes++ can support team collaboration and enhance emotion expression, making social interactions and interpersonal dynamics more engaging in an expanded design space.
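The abstract does not specify the model architecture used in Shoes++, so as a purely illustrative sketch, the pattern of recognizing a foot gesture from windowed IMU samples can be shown with hand-rolled feature extraction (per-axis mean and variance) and a nearest-centroid classifier. All function names and the synthetic "tap"/"slide" gestures below are hypothetical stand-ins, not the paper's actual pipeline.

```python
import math
import random

def features(window):
    """Per-axis mean and variance of a window of (ax, ay, az) IMU samples."""
    feats = []
    for axis in zip(*window):
        mean = sum(axis) / len(axis)
        var = sum((v - mean) ** 2 for v in axis) / len(axis)
        feats.extend([mean, var])
    return feats

def train_centroids(labelled_windows):
    """Average the feature vectors of each gesture class into a centroid."""
    sums, counts = {}, {}
    for label, window in labelled_windows:
        f = features(window)
        if label not in sums:
            sums[label], counts[label] = [0.0] * len(f), 0
        sums[label] = [s + v for s, v in zip(sums[label], f)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def classify(window, centroids):
    """Label a new window by its nearest class centroid in feature space."""
    f = features(window)
    return min(centroids, key=lambda lab: math.dist(f, centroids[lab]))

# Synthetic training data: a "tap" spikes on the z axis, a "slide" drifts on x.
random.seed(0)
tap = [[(0, 0, 5 + random.random()) for _ in range(20)] for _ in range(5)]
slide = [[(3 + random.random(), 0, 0) for _ in range(20)] for _ in range(5)]
centroids = train_centroids([("tap", w) for w in tap] +
                            [("slide", w) for w in slide])

print(classify([(0, 0, 5.2)] * 20, centroids))  # tap-like query window
```

A real system would use richer features (frequency-domain energy, inter-person timing) and a trained classifier, but the window-features-classify structure is the common skeleton of IMU gesture recognition.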