Figure 1: Twelve anatomical face simulation models automatically generated using our procedure. We activated the levator palpebrae muscles to open the eyes and the zygomaticus major and orbicularis oculi muscles to produce the smiles. These meshes were selected to represent a variety of features and characteristics: Asimov, Demon, Goblin, Artec Human, Kiran, Lincoln, Matthew, Ogre Head, Old Man, Orc, Spielberg, and Yoda.

Abstract: We present a fast, fully automatic morphing algorithm for creating simulatable flesh and muscle models for human and humanoid faces. Current techniques for creating such models require a significant amount of time and effort, making them impractical in most settings. In fact, the vast majority of research papers use only a floating mask with no inner lips, teeth, tongue, eyelids, eyes, head, ears, etc., and even those that build the full visual model typically still lack the cranium, jaw, muscles, and other internal anatomy. Our method requires only the target surface mesh as input and can create a variety of models in only a few hours with no user interaction. We start with a symmetric, high-resolution, anatomically accurate template model that includes auxiliary information such as feature points and curves. Given a target mesh, we automatically orient it to the template, detect feature points, and use these to bootstrap the detection of corresponding feature curves. These curve correspondences are used to deform the surface mesh of the template model to match the target mesh. The calculated displacements of the template surface mesh are then used to drive a three-dimensional morph of the full template model, including all interior anatomy. The resulting target model can be simulated to generate a large range of expressions that are consistent across characters using the same muscle activations. Full automation of this entire process makes it readily available to a wide range of users.
Figure 1: We present an end-to-end system for capturing well-composed footage of two subjects with a quadrotor in the outdoors. On the left, we show the quadrotor filming two subjects. To the right are static shots captured by our system, covering a variety of perspectives and distances. We demonstrate people using our system to film a range of activities, pictured here: taking a selfie, playing catch, receiving a diploma, and performing a dance routine.
As drones become ubiquitous, it is important to understand how cultural differences impact human-drone interaction. A previous elicitation study performed in the United States illustrated how users would intuitively interact with drones. We replicated this study in China to gain insight into how these user-defined interactions vary across the two cultures. We found that, as in the US study, Chinese participants chose to interact primarily using gestures. However, Chinese participants used multi-modal interactions more than their US counterparts. Agreement for many proposed interactions was high within each culture. Across cultures, there were notable differences despite similarities in interaction modality preferences. For instance, culturally specific gestures emerged in China, such as a T-shape gesture for stopping the drone. Participants from both cultures anthropomorphized the drone and welcomed it into their personal space. We describe the implications of these findings for designing culturally aware and intuitive human-drone interaction.
Personal drones are becoming more mainstream and are used for a variety of tasks, such as delivery and photography. The exposed blades in conventional drones raise serious safety concerns. To address this, commercial drones have been moving towards a safe-to-touch design or have increased safety by adding propeller guards. The affordances of safe-to-touch drones enable new types of touch-based human-drone interaction. Various applications have been explored, such as augmented sports and haptic feedback in virtual reality; however, it is unclear whether individuals feel comfortable using direct touch and manipulation when interacting with safe-to-touch drones. A previous elicitation study showed how users naturally interact with drones. We replicated this study with an unsafe and a safe-to-touch drone to find out whether participants would instinctively use touch as a means of interacting with the safe-to-touch drone. We found that 58% of the participants used touch, and across all tasks 39% of interactions were touch-based. The proposed touch interactions were in agreement for 67% of the tasks, and users reported that interacting with the safe-to-touch drone was significantly less mentally demanding than the unsafe drone. CCS Concepts: • Human-centered computing → Empirical studies in interaction design.