Fig. 1. Our stretch-sensing soft glove captures hand poses in real time and with high accuracy. It functions in diverse and challenging settings, such as heavily occluded environments or changing light conditions, and lends itself to various applications. All images shown here are frames from recorded live sessions.

We propose a stretch-sensing soft glove to interactively capture hand poses with high accuracy and without requiring an external optical setup. We demonstrate how our device can be fabricated and calibrated at low cost, using simple tools available in most fabrication labs. To reconstruct the pose from the capacitive sensors embedded in the glove, we propose a deep network architecture that exploits the spatial layout of the sensor itself. The network is trained only once, using an inexpensive off-the-shelf hand pose reconstruction system to gather the training data. The per-user calibration is then performed on the fly using only the glove. The glove's capabilities are demonstrated in a series of ablative experiments, exploring different models and calibration methods. Compared to commercial data gloves, we achieve a 35% improvement in reconstruction accuracy.
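As a hedged illustration of the regression idea in the abstract, a toy mapping from capacitive readings to per-joint angles might look as follows. The sensor count, layer sizes, and pose parametrization here are assumptions for the sketch, not the paper's actual architecture (which additionally exploits the sensor's spatial layout):

```python
import numpy as np

# Hypothetical sizes: 44 capacitive taxels in the glove, 16 joints x 3 Euler angles.
N_SENSORS, N_JOINTS = 44, 16

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy two-layer regressor standing in for the trained deep network;
# weights are random here, whereas the real model is trained once on
# data gathered with an off-the-shelf optical hand tracker.
W1 = rng.normal(scale=0.1, size=(N_SENSORS, 64))
W2 = rng.normal(scale=0.1, size=(64, N_JOINTS * 3))

def predict_pose(capacitance):
    """Map one frame of raw capacitance readings to per-joint angles (radians)."""
    h = relu(capacitance @ W1)
    return (h @ W2).reshape(N_JOINTS, 3)

reading = rng.normal(size=N_SENSORS)  # one frame of sensor data
pose = predict_pose(reading)
print(pose.shape)                     # (16, 3)
```

At capture time such a forward pass is cheap enough to run per frame, which is what makes interactive-rate pose reconstruction from the glove plausible.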
Fig. 1. Left to right: We propose a method for the fabrication of soft and stretchable silicone-based capacitive sensor arrays. The sensor provides dense stretch measurements that, together with a data-driven prior, allow for the capture of surface deformations in real time and without the need for line-of-sight.

We propose a hardware and software pipeline to fabricate flexible wearable sensors and use them to capture deformations without line of sight. Our first contribution is a low-cost fabrication pipeline to embed multiple aligned conductive layers with complex geometries into silicone compounds. Overlapping conductive areas from separate layers form local capacitors that measure dense area changes. Contrary to existing fabrication methods, the proposed technique only requires hardware that is readily available in modern fablabs. While area measurements alone are not enough to reconstruct the full 3D deformation of a surface, they become sufficient when paired with a data-driven prior. A novel semi-automatic tracking algorithm, based on an elastic surface geometry deformation, allows us to capture ground-truth data with an optical mocap system, even under heavy occlusions or partially unobservable markers. The resulting dataset is used to train a regressor based on deep neural networks, directly mapping the area readings to global positions of surface vertices. We demonstrate the flexibility and accuracy of the proposed hardware and software in a series of controlled experiments, and design a prototype of wearable wrist, elbow and biceps sensors, which do not require line-of-sight and can be worn below regular clothing.
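The abstract's key sensing principle is that overlapping conductive areas form local capacitors whose capacitance tracks the overlap area. In the standard parallel-plate model, C = ε₀ ε_r A / d, so a relative area change can be read directly from the capacitance ratio. A minimal sketch of this relation (the material constants and dielectric thickness are illustrative; in a real stretched silicone sensor the thickness d also varies, which this simple model ignores):

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 3.0        # assumed relative permittivity of the silicone dielectric
D = 1.0e-3         # assumed dielectric thickness, m (taken as constant here)

def capacitance(area_m2):
    """Parallel-plate model: C = eps0 * eps_r * A / d."""
    return EPS0 * EPS_R * area_m2 / D

def area_from_capacitance(c_farads):
    """Invert the model to recover the overlap area of one local capacitor."""
    return c_farads * D / (EPS0 * EPS_R)

rest_area = 1.0e-4                  # 1 cm^2 electrode overlap at rest
c_rest = capacitance(rest_area)
c_stretched = 1.25 * c_rest         # a 25% capacitance increase is observed
stretch = area_from_capacitance(c_stretched) / rest_area
print(stretch)                      # ~1.25: area change read off the capacitance
```

This is why the array yields dense *area* measurements rather than full 3D positions, and why the data-driven prior is needed to lift the readings to a surface deformation.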
We present a novel, modular mechanical device for animation authoring. The pose of the device is sensed at interactive rates, enabling quick posing of characters rigged with a skeleton of arbitrary topology. The mapping between the physical device and the virtual skeleton is computed semi-automatically, guided by sparse user correspondences. Our demonstration allows visitors to experiment with our device and software, choosing from a variety of characters to control.
Articulation of 3D characters requires control over many degrees of freedom: a difficult task with standard 2D interfaces. We present a tangible input device composed of interchangeable, hot-pluggable parts. Embedded sensors measure the device's pose at rates suitable for real-time editing and animation. Splitter parts allow branching to accommodate any skeletal tree. During assembly, the device recognizes topological changes as individual parts or pre-assembled subtrees are plugged and unplugged. A novel semi-automatic registration approach helps the user quickly map the device's degrees of freedom to a virtual skeleton inside the character. User studies report favorable comparisons to mouse and keyboard interfaces for the tasks of target acquisition and pose replication. Our device provides input for character rigging and automatic weight computation, direct skeletal deformation, interaction with physical simulations, and handle-based variational geometric modeling.
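Once the device's degrees of freedom are registered to a virtual skeleton, its joint angle readings can drive the character through forward kinematics. As a hedged sketch of that final step, here is planar forward kinematics over a chain with hypothetical bone lengths (the paper's actual mapping uses a semi-automatic registration over arbitrary skeletal trees, not this toy chain):

```python
import math

def forward_kinematics(angles, lengths):
    """Planar FK: accumulate joint angle readings along a chain of bones
    and return the 2D tip position of each bone."""
    x = y = 0.0
    theta = 0.0
    tips = []
    for a, l in zip(angles, lengths):
        theta += a                 # each joint rotates relative to its parent
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        tips.append((x, y))
    return tips

# Three hypothetical device joints driving a three-bone chain of unit length.
tips = forward_kinematics([math.pi / 2, -math.pi / 2, 0.0], [1.0, 1.0, 1.0])
print(tips[-1])  # end effector at roughly (2.0, 1.0)
```

Because readings arrive at interactive rates, recomputing this pass per frame suffices for direct skeletal deformation and for feeding physical simulations.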
Figure 1: Left to right: Taking a rigged 3D character with many degrees of freedom as input, we propose a method to automatically compute assembly instructions for a modular tangible controller consisting of only a small set of joints. A novel hardware joint parametrization provides a user experience akin to inverse kinematics. After assembly, the device is bound to the rig and enables animators to traverse a large space of poses via fluid manipulations. Here we control 110 bones in the dragon character with only 8 physical joints and 2 splitters. Detailed pose nuances are preserved by a real-time pose interpolation strategy.
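Interpolating between character poses in real time is commonly done per bone with quaternion slerp, which blends rotations at constant angular speed. As an illustrative sketch of such a strategy (the caption does not specify the paper's exact interpolation scheme, so this is an assumption):

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:              # flip one quaternion to take the short arc
        q1, dot = -q1, -dot
    if dot > 0.9995:           # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Blend one bone halfway between rest and a 90-degree rotation about Z.
q_rest = np.array([1.0, 0.0, 0.0, 0.0])
q_key = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
q_half = slerp(q_rest, q_key, 0.5)
print(np.round(q_half, 4))   # 45-degree Z rotation: [0.9239 0. 0. 0.3827]
```

Running such a blend per bone lets a handful of physical joints steer many virtual bones while preserving the nuances of the authored key poses.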