Figure 1: High-quality all-hex meshes generated by our method. (a) Our method, Jave = 0.936, Jmin = 0.609; (b) CubeCover, Jave = 0.902, Jmin = 0.073; (c) Our method, Jave = 0.947, Jmin = 0.658; (d) Volumetric PolyCube, Jave = 0.950, Jmin = 0.131. Comparisons with CubeCover [Nieser et al. 2011] and volumetric PolyCube [Gregson et al. 2011] demonstrate that the hex meshes produced by our method are superior in mesh quality (the minimum scaled Jacobian of the hexes is shown in the figure; bigger is better) and singularity placement (see the zoomed-in views for comparison).
Abstract: Decomposing a volume into high-quality hexahedral cells is a challenging task in geometric modeling and computational geometry. Inspired by the use of cross fields in quad meshing and the CubeCover approach in hex meshing, we present a complete all-hex meshing framework based on a singularity-restricted field, which is essential for inducing a valid all-hex structure. Given a volume represented by a tetrahedral mesh, we first compute a boundary-aligned 3D frame field inside it, then convert the frame field into a singularity-restricted one using effective topological operations. In our all-hex meshing framework, we apply the CubeCover method to achieve the volume parametrization. To reduce the degenerate elements that appear in the volume parametrization, we also propose novel tetrahedral split operations to preprocess singularity-restricted frame fields. Experimental results show that our algorithm generates high-quality all-hex meshes from a variety of 3D volumes robustly and efficiently.
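The minimum scaled Jacobian reported in Figure 1 is a standard hex-quality metric: at each of a cell's eight corners, take the three edge vectors emanating from that corner, normalize them, and compute the determinant of the resulting 3×3 matrix; the cell's score is the worst corner. A minimal NumPy sketch (the vertex ordering below is one common convention, not necessarily the one used by the authors):

```python
import numpy as np

# Corner connectivity for a hexahedron with vertices ordered as
# (0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1).
# Each tuple: (corner, then its three edge-connected neighbors).
CORNER_EDGES = [
    (0, 1, 3, 4), (1, 2, 0, 5), (2, 3, 1, 6), (3, 0, 2, 7),
    (4, 7, 5, 0), (5, 4, 6, 1), (6, 5, 7, 2), (7, 6, 4, 3),
]

def min_scaled_jacobian(verts):
    """Minimum scaled Jacobian over the 8 corners of a hex cell.

    verts: (8, 3) array of corner positions. Returns a value in [-1, 1];
    1 is a perfect cube corner, values <= 0 indicate inversion.
    """
    verts = np.asarray(verts, dtype=float)
    worst = 1.0
    for c, a, b, d in CORNER_EDGES:
        # Three edges emanating from corner c, normalized to unit length.
        e = np.stack([verts[a] - verts[c],
                      verts[b] - verts[c],
                      verts[d] - verts[c]])
        e /= np.linalg.norm(e, axis=1, keepdims=True)
        worst = min(worst, np.linalg.det(e))
    return worst
```

A unit cube scores exactly 1.0; shearing or collapsing the cell drives the score toward 0 and below.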
We present an interactive approach to semantic modeling of indoor scenes with a consumer-level RGBD camera. Using our approach, the user first takes an RGBD image of an indoor scene, which is automatically segmented into a set of regions with semantic labels. If the segmentation is not satisfactory, the user can draw some strokes to guide the algorithm to achieve better results. After the segmentation is finished, the depth data of each semantic region is used to retrieve a matching 3D model from a database. Each model is then transformed according to the image depth to yield the scene. For large scenes where a single image can only cover one part of the scene, the user can take multiple images to construct other parts of the scene. The 3D models built for all images are then transformed and unified into a complete scene. We demonstrate the efficiency and robustness of our approach by modeling several real-world scenes.
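The model-retrieval step can be pictured as nearest-neighbor search over descriptors computed from each segmented region's depth data. The sketch below uses a deliberately simple depth-histogram descriptor; the paper's actual matching is more involved, and every name here is illustrative:

```python
import numpy as np

def depth_descriptor(depth, bins=16):
    """Toy descriptor: normalized histogram of valid depth values.
    (Illustrative only -- stands in for the paper's matching features.)"""
    d = depth[np.isfinite(depth) & (depth > 0)]
    d = (d - d.min()) / (d.max() - d.min() + 1e-9)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def retrieve_model(region_depth, database):
    """Return the database key whose descriptor is closest in L2 distance."""
    q = depth_descriptor(region_depth)
    return min(database, key=lambda k: np.linalg.norm(q - database[k]))
```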
and target shapes to their latent spaces. We exploit a Generative Adversarial Network (GAN) to map deformed source shapes to deformed target shapes, both in the latent spaces, which ensures that the shapes obtained from the mapping are indistinguishable from the target shapes. This is still an under-constrained problem, so we further utilize a reverse mapping from target shapes to source shapes and incorporate a cycle-consistency loss, i.e., applying both mappings should recover the input shape. This VAE-Cycle GAN (VC-GAN) architecture is used to build a reliable mapping between shape spaces. Finally, a similarity constraint is employed to ensure the mapping is consistent with visual similarity, achieved by learning a similarity neural network that takes the embedding vectors from the source and target latent spaces and predicts the light field distance between the corresponding shapes. Experimental results show that our fully automatic method is able to obtain high-quality deformation transfer results with unpaired data sets, comparable to or better than existing methods that require strict correspondences.
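The cycle-consistency term can be sketched independently of the network details: for latent codes z_s and z_t and mappings F (source to target) and G (target to source), penalize the distance between G(F(z_s)) and z_s, and symmetrically for z_t. A minimal NumPy illustration with linear stand-ins for F and G (the real mappings are learned networks, so these names and maps are purely illustrative):

```python
import numpy as np

def cycle_consistency_loss(F, G, zs, zt):
    """L1 cycle loss on latent codes: G(F(zs)) should return to zs,
    and F(G(zt)) should return to zt. F and G map between the two
    latent spaces (here, plain functions on NumPy arrays)."""
    loss_s = np.abs(G(F(zs)) - zs).mean()
    loss_t = np.abs(F(G(zt)) - zt).mean()
    return loss_s + loss_t

# Toy check: when G is the exact inverse of F, the loss vanishes.
A = np.array([[2.0, 0.0], [0.0, 0.5]])
F = lambda z: z @ A.T
G = lambda z: z @ np.linalg.inv(A).T
```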
Figure 1: Example mechanical toy: Crocodile Feeding. (a) Input. The designer specifies the geometry and motion of the toy's features, in this case a boy and a crocodile object, forming two kinematic chains and four color-coded feature components. The feature base is colored orange. (b) Mechanical assembly synthesized by our system to generate the target motion. (c) Fabricated result. Overlaid arrows illustrate the motion, both input for features in (a) and output for the synthesized mechanism in (b), via the rules in [Mitra et al. 2010]. The canonical local coordinate system for the mechanical assembly is shown in (a). Please see the accompanying video for the full animation.
Abstract: We introduce a new method to synthesize mechanical toys solely from the motion of their features. The designer specifies the geometry and a time-varying rotation and translation of each rigid feature component. Our algorithm automatically generates a mechanism assembly located in a box below the feature base that produces the specified motion. Parts in the assembly are selected from a parameterized set including belt-pulleys, gears, crank-sliders, quick-returns, and various cams (snail, ellipse, and double-ellipse). Positions and parameters for these parts are optimized to generate the specified motion, minimize a simple measure of complexity, and yield a well-distributed layout of parts over the driving axes. Our solution uses a special initialization procedure followed by simulated annealing to efficiently search the complex configuration space for an optimal assembly.
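The simulated-annealing search over part positions and parameters follows the standard accept/reject scheme: a worse configuration is still accepted with probability exp(-ΔE/T), and the temperature T decays over time so the search settles into a good assembly. A generic sketch (the energy and proposal functions for real mechanism assemblies are the paper's; only the annealing loop is shown here):

```python
import math
import random

def simulated_annealing(energy, propose, x0,
                        t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Generic simulated annealing: accept worse states with probability
    exp(-dE / T) while the temperature T decays geometrically."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        y = propose(x, rng)
        ey = energy(y)
        # Always accept improvements; accept regressions with prob exp(-dE/T).
        if ey < e or rng.random() < math.exp(-(ey - e) / max(t, 1e-12)):
            x, e = y, ey
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e
```

For example, minimizing the 1-D energy (x - 3)² with a uniform ±0.5 proposal converges near x = 3.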
This paper presents a novel 2D shape deformation algorithm based on nonlinear least squares optimization. The algorithm aims to preserve two local shape properties: the Laplacian coordinates of the boundary curve and the local area of the shape interior, which are together represented in a non-quadratic energy function. An iterative Gauss-Newton method is used to minimize this nonlinear energy function. The result is an interactive shape deformation system that achieves physically plausible results that are hard to obtain with previous linear least squares methods. Besides preserving local shape properties, we introduce a scheme to preserve the global area of the shape, which is useful for deforming incompressible objects.
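The Gauss-Newton iteration used to minimize such a non-quadratic energy linearizes the residual at each step and solves the normal equations for the update. A generic sketch (in the paper the residuals encode Laplacian coordinates and local areas; here any user-supplied residual/Jacobian pair works):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Gauss-Newton for min ||r(x)||^2: repeatedly linearize r and solve
    the normal equations (J^T J) dx = -J^T r for the update dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:  # converged
            break
    return x
```

As a usage example, solving the square system x² + y = 2, x + y² = 2 from the start point (1.5, 1.5) converges to (1, 1).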
Human motions are the product of internal and external forces, but these forces are very difficult to measure in a general setting. Given a motion capture trajectory, we propose a method to reconstruct its open-loop control and the implicit contact forces. The method employs a strategy based on randomized sampling of the control within user-specified bounds, coupled with forward dynamics simulation. Sampling-based techniques are well suited to this task because of their lack of dependence on derivatives, which are difficult to estimate in contact-rich scenarios. They are also easy to parallelize, which we exploit in our implementation on a compute cluster. We demonstrate reconstruction of a diverse set of captured motions, including walking, running, and contact-rich tasks such as rolls and kip-up jumps. We further show how the method can be applied to physically based motion transformation and retargeting, physically plausible motion variations, and reference-trajectory-free idling motions. Alongside the successes, we point out a number of limitations and directions for future work.
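The derivative-free character of the reconstruction can be illustrated with a toy version: sample candidate open-loop controls within bounds, forward-simulate each, and keep the one whose trajectory best matches the reference. The paper samples in parallel and refines over time windows; the sketch below, a 1-D point mass, shows only the core idea and every name in it is illustrative:

```python
import numpy as np

def simulate(ctrl, x0=0.0, v0=0.0, dt=0.1):
    """Forward-simulate a 1-D point mass driven by a control sequence
    of accelerations, using semi-implicit Euler integration."""
    xs, x, v = [], x0, v0
    for u in ctrl:
        v += u * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)

def reconstruct_control(reference, bounds=(-2.0, 2.0), samples=200, seed=0):
    """Randomized-sampling reconstruction: draw control sequences within
    user-specified bounds, simulate forward, and keep the one whose
    trajectory best matches the reference (no derivatives needed)."""
    rng = np.random.default_rng(seed)
    n = len(reference)
    best_u, best_err = None, np.inf
    for _ in range(samples):
        u = rng.uniform(bounds[0], bounds[1], size=n)
        err = np.linalg.norm(simulate(u) - reference)
        if err < best_err:
            best_u, best_err = u, err
    return best_u, best_err
```

Each candidate is evaluated purely by simulation, which is why the approach parallelizes so naturally across a cluster.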
Inverse shape design for elastic objects greatly eases design effort by letting users focus on desired target shapes without thinking about elastic deformations. Solving this problem with classic iterative methods (e.g., Newton-Raphson), however, often suffers from slow convergence toward a desired solution. In this paper, we propose an asymptotic numerical method that exploits the underlying mathematical structure of specific nonlinear material models, and thus runs orders of magnitude faster than traditional Newton-type methods. We apply this method to compute rest shapes for elastic fabrication, where the rest shape of an elastic object is computed such that after physical fabrication the real object deforms into a desired shape. We illustrate the performance and robustness of our method through a series of elastic fabrication experiments.
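The asymptotic numerical method replaces Newton iteration with a high-order power-series expansion of the solution branch, whose coefficients are obtained order by order from the polynomial structure of the equations. A toy scalar illustration on u + u³ = λ, expanded about (u, λ) = (0, 0) (the paper applies the idea to full nonlinear elasticity systems; this example only shows the series mechanics):

```python
import numpy as np

def anm_series(order):
    """Series coefficients u_k of the solution branch of u + u^3 = lam,
    expanded about (u, lam) = (0, 0): u(lam) = sum_k u_k * lam^k.
    Each u_k follows from already-known lower-order coefficients,
    because the cubic term at order k involves only indices < k."""
    u = np.zeros(order + 1)
    for k in range(1, order + 1):
        # Coefficient of lam^k in u^3 (all indices >= 1, so < k).
        cube_k = sum(u[i] * u[j] * u[k - i - j]
                     for i in range(1, k) for j in range(1, k - i))
        u[k] = (1.0 if k == 1 else 0.0) - cube_k
    return u

def eval_series(u, lam):
    """Evaluate the truncated series at a given load parameter lam."""
    return sum(c * lam ** k for k, c in enumerate(u))
```

The first few coefficients come out as u(λ) = λ - λ³ + 3λ⁵ - …, matching the series inversion of λ = u + u³; one series evaluation then replaces many Newton iterations within the series' radius of validity.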