We reduce the material of a 3D kitten (left) by carving pores into the solid (mid-left), yielding a honeycomb-like interior structure that provides an optimal strength-to-weight ratio and relieves the overall stress, illustrated on a cross-section (mid-right). The 3D-printed hollowed solid is built to last using our interior structure (right).
Fine-grained image classification with a few-shot classifier is a highly challenging open problem at the core of numerous data-labeling applications. In this paper, we present the Few-shot Classifier Generative Adversarial Network as an approach to few-shot classification. We address the problem of few-shot classification by designing a GAN in which the discriminator and the generator compete to output labeled data in every case. In contrast to previous methods, our technique generates and then classifies images into multiple fake or real classes. A key innovation of our adversarial approach is to allow fine-grained classification using multiple fake classes with semi-supervised deep learning. A major strength of our technique lies in its label-agnostic character, in the sense that the system handles both labeled and unlabeled data during training. We validate our few-shot classifier quantitatively on the MNIST and SVHN datasets by varying the ratio of labeled to unlabeled data in the training set. Our quantitative analysis demonstrates that our technique achieves better classification performance when using multiple fake classes and larger amounts of unlabeled data.
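The multiple-fake-class idea can be sketched as a discriminator head whose softmax spans N real classes plus M fake classes: labeled real samples get cross-entropy on their real class, unlabeled real samples push probability mass away from the fake classes, and generated samples push mass onto them. The following minimal NumPy sketch is our own illustration of that loss split under these assumptions, not the paper's code:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def discriminator_probs(logits, num_real):
    """Split a (num_real + num_fake)-way softmax into per-real-class
    probabilities and the total probability mass on the fake classes."""
    p = softmax(logits)
    return p[..., :num_real], p[..., num_real:].sum(axis=-1)

def discriminator_loss(logits, num_real, labels=None, is_generated=False):
    """Illustrative semi-supervised loss terms:
    - generated sample: maximize total fake mass (any fake class may absorb it)
    - labeled real sample: cross-entropy on its true real class
    - unlabeled real sample: maximize total real mass (minimize fake mass)"""
    p_real, p_fake = discriminator_probs(logits, num_real)
    eps = 1e-12
    if is_generated:
        return -np.log(p_fake + eps)
    if labels is not None:
        return -np.log(p_real[np.arange(len(labels)), labels] + eps)
    return -np.log(1.0 - p_fake + eps)
```

Because unlabeled real images only need the real-vs-fake mass split, they contribute a training signal without any class label, which is what makes the setup label-agnostic.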
Full-body performance capture is a promising emerging technology that has been intensively studied in Computer Graphics and Computer Vision over the last decade. Highly detailed performance animations are easier to obtain using existing multi-view platforms, markerless capture, and 3D laser scanners. In this paper, we investigate the feasibility of extracting optimal reduced animation parameters without requiring an underlying rigid kinematic structure. We explore the potential of harmonic cage-based linear estimation and deformation as a post-process for current performance capture techniques used in 3D time-varying scene capture. We propose the first algorithm for cage-based tracking across time for vision and virtual reality applications. The main advantages of our novel approach are its linear, single-pass estimation of the desired surface, easy-to-reuse output cage sequences, and a reduction in the storage size of animations. Our results show that the estimated parameters allow sufficiently silhouette-consistent generation of the enclosed mesh under sparse frame-to-frame animation constraints and large deformations.
Cage-based deformation techniques are widely used to control the deformation of an enclosed fine-detail mesh. Achieving deformation based on vertex constraints has been extensively studied for the case of pure meshes, but few works specifically examine how such vertex constraints can be used to efficiently deform the template and estimate the corresponding cage pose. In this paper, we show that this can be achieved very efficiently with two contributions: (1) we provide a linear estimation framework for cage vertex coordinates; (2) the regularization of the deformation is expressed on the cage vertices rather than the enclosed mesh, yielding a computationally efficient solution that fully benefits from cage-based parameterizations. We demonstrate the practical use of this scheme for two applications: animation editing from sparse screen-space user-specified constraints, and automatic cage extraction from a sequence of meshes for animation re-editing.