The goal of example-based texture synthesis methods is to generate arbitrarily large textures from limited exemplars to fit the exact dimensions and resolution required for a specific modeling task. The challenge is to faithfully capture all of the visual characteristics of the exemplar texture, without introducing obvious repetitions or unnatural-looking visual elements. While existing non-parametric synthesis methods have made remarkable progress towards this goal, most such methods have been demonstrated only on relatively low-resolution exemplars. Real-world high-resolution textures often contain texture details at multiple scales, which these methods have difficulty reproducing faithfully. In this work, we present a new general-purpose and fully automatic self-tuning non-parametric texture synthesis method that extends Texture Optimization by introducing several key improvements that result in superior synthesis ability. Our method self-tunes its parameters and weights and addresses three challenging aspects of texture synthesis: (i) irregular large-scale structures are faithfully reproduced through the use of automatically generated and weighted guidance channels; (ii) repetition and smoothing of texture patches is avoided by new spatial uniformity constraints; (iii) a smart initialization strategy is used in order to improve the synthesis of regular and near-regular textures, without affecting textures that do not exhibit regularities. We demonstrate the versatility and robustness of our completely automatic approach on a variety of challenging high-resolution texture exemplars.
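As a concrete illustration of the Texture Optimization framework this abstract builds on (not the paper's full self-tuning pipeline), the sketch below alternates between matching each output patch to its nearest exemplar patch and blending the matched patches back into the output. The patch size, stride, and brute-force search are illustrative simplifications; the actual method adds guidance channels, uniformity constraints, and faster approximate search.

```python
import numpy as np

def extract_patches(img, size, stride):
    """Collect all size x size patches of an (H, W, C) image as flat vectors."""
    H, W, _ = img.shape
    return np.array([img[y:y+size, x:x+size].reshape(-1)
                     for y in range(0, H - size + 1, stride)
                     for x in range(0, W - size + 1, stride)])

def texture_optimization(exemplar, out_shape, size=16, stride=8, iters=10):
    """EM-style Texture Optimization: alternate nearest-exemplar-patch
    matching with overlap-averaged blending. A minimal sketch."""
    rng = np.random.default_rng(0)
    out = rng.random((*out_shape, exemplar.shape[2]))  # random initialization
    bank = extract_patches(exemplar, size, stride)     # exemplar patch bank
    for _ in range(iters):
        acc = np.zeros_like(out)
        wgt = np.zeros(out_shape)
        for y in range(0, out_shape[0] - size + 1, stride):
            for x in range(0, out_shape[1] - size + 1, stride):
                q = out[y:y+size, x:x+size].reshape(-1)
                # Nearest exemplar patch under L2 distance (brute force, for clarity).
                best = bank[np.argmin(((bank - q) ** 2).sum(axis=1))]
                acc[y:y+size, x:x+size] += best.reshape(size, size, -1)
                wgt[y:y+size, x:x+size] += 1.0
        covered = wgt[..., None] > 0
        out = np.where(covered, acc / np.maximum(wgt, 1.0)[..., None], out)
    return out
```

In this framing, the paper's guidance channels would enter as extra feature channels in the patch distance, and the self-tuning concerns choosing the per-channel weights automatically.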
Appearance reproduction is an important aspect of 3D printing. Current color reproduction systems use halftoning methods that create colors through a spatial combination of different inks at the object's surface. This introduces a variety of artifacts, especially when the object is viewed up close. In this work, we propose an alternative color reproduction method for 3D printing. Inspired by the inherent ability of 3D printers to layer different materials on top of each other, 3D color contoning creates colors by combining inks of various thicknesses inside the object's volume. Because the inks lie inside the volume, our technique yields a uniform color surface with virtually invisible spatial patterns. For color prediction, we introduce a simple and highly accurate spectral model that relies on a weighted regression of spectral absorptions. We fully characterize the proposed framework by addressing a number of problems, such as material arrangement, calculation of ink concentration, and 3D dot gain. We use a custom 3D printer to fabricate and validate our results.
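The "weighted regression of spectral absorptions" can be pictured with a Beer-Lambert-style sketch like the one below: absorbance is assumed to add linearly with ink thickness, so per-ink absorption spectra can be fit by weighted least squares from measured stacks. The function names and the additive-absorbance assumption are illustrative; the paper's actual model and fitting procedure may differ.

```python
import numpy as np

def fit_ink_absorptions(thicknesses, reflectances, weights=None):
    """Fit per-ink absorption spectra by weighted least squares, assuming
    additive absorbance: -log10 R(lam) ~ sum_i t_i * a_i(lam).

    thicknesses : (N, K) ink layer thicknesses for N measured samples
    reflectances: (N, L) measured reflectance spectra over L wavelengths
    weights     : (N,) optional per-sample regression weights
    """
    A = -np.log10(np.clip(reflectances, 1e-4, 1.0))  # (N, L) absorbances
    if weights is not None:
        w = np.sqrt(weights)[:, None]                # weighted least squares
        thicknesses, A = thicknesses * w, A * w
    # Solve thicknesses @ absorptions ~= A for the (K, L) absorption spectra.
    absorptions, *_ = np.linalg.lstsq(thicknesses, A, rcond=None)
    return absorptions

def predict_reflectance(t, absorptions):
    """Predict the spectrum of a new stack with ink thicknesses t (K,)."""
    return 10.0 ** -(t @ absorptions)
```

Effects such as the 3D dot gain mentioned above would show up as deviations from this linear-in-thickness idealization.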
In this paper, we study the associations between human faces and voices. Audiovisual integration, specifically the integration of facial and vocal information, is a well-researched area in neuroscience; prior work has shown that the overlapping information between the two modalities plays a significant role in perceptual tasks such as speaker identification. Through an online study on a new dataset we created, we confirm previous findings that people can associate unseen faces with corresponding voices, and vice versa, with greater-than-chance accuracy. We computationally model the overlapping information between faces and voices and show that the learned cross-modal representation contains enough information to identify matching faces and voices with performance similar to that of humans. Our representation exhibits correlations with certain demographic attributes and with features obtained from either the visual or the aural modality alone. We release our dataset of audiovisual recordings and demographic annotations of people reading short texts, as used in our studies.
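A common way to computationally model such overlapping information is a two-tower embedding trained with a contrastive objective, sketched below in PyTorch. The feature dimensions, encoder sizes, and InfoNCE-style loss are placeholders standing in for whatever architecture the paper actually uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tower(nn.Module):
    """Small MLP mapping precomputed unimodal features into a shared space."""
    def __init__(self, in_dim, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-length embeddings

face_enc = Tower(in_dim=512)   # e.g. features from a face CNN (placeholder)
voice_enc = Tower(in_dim=192)  # e.g. speaker embeddings (placeholder)

def contrastive_loss(face_feats, voice_feats, temperature=0.07):
    """Symmetric InfoNCE: matching face/voice pairs lie on the diagonal."""
    f, v = face_enc(face_feats), voice_enc(voice_feats)
    logits = f @ v.T / temperature     # (B, B) cosine-similarity matrix
    target = torch.arange(len(f))      # i-th face matches i-th voice
    return (F.cross_entropy(logits, target) +
            F.cross_entropy(logits.T, target)) / 2
```

At test time the matching task reduces to nearest-neighbor search in the shared space: embed an unseen face and pick the candidate voice with the highest cosine similarity.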
In architectural design, surface shapes are commonly subject to geometric constraints imposed by material, fabrication or assembly. Rationalization algorithms can convert a freeform design into a form feasible for production, but often require design modifications that might not comply with the design intent. In addition, they only offer limited support for exploring alternative feasible shapes, due to the high complexity of the optimization algorithm. We address these shortcomings and present a computational framework for interactive shape exploration of discrete geometric structures in the context of freeform architectural design. Our method is formulated as a mesh optimization subject to shape constraints. Our formulation can enforce soft constraints and hard constraints at the same time, and handles equality constraints and inequality constraints in a unified way. We propose a novel numerical solver that splits the optimization into a sequence of simple subproblems that can be solved efficiently and accurately. Based on this algorithm, we develop a system that allows the user to explore designs satisfying geometric constraints. Our system offers full control over the exploration process, by providing direct access to the specification of the design space. At the same time, the complexity of the underlying optimization is hidden from the user, who communicates with the system through intuitive interfaces.
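The flavor of a solver that splits the optimization into a sequence of simple subproblems can be conveyed with the local-global sketch below: each constraint independently projects its vertices onto its feasible set (which treats equalities and inequalities uniformly), and a weighted merge combines the projections. This is a deliberately simplified stand-in, with a diagonal merge step in place of the sparse linear solve a real solver would use, and with hard constraints approximated by large weights.

```python
import numpy as np

def solve_constrained(x0, constraints, iters=50):
    """Alternate per-constraint projections (local) with a weighted merge
    (global). A minimal sketch of a projection-splitting solver.

    x0          : (n, 3) initial vertex positions
    constraints : list of (indices, weight, project), where project maps
                  the selected vertices to their closest feasible layout
    """
    x = x0.copy()
    for _ in range(iters):
        acc = np.zeros_like(x)
        wgt = np.zeros((len(x), 1))
        for idx, w, project in constraints:
            acc[idx] += w * project(x[idx])  # local step: project onto constraint set
            wgt[idx] += w
        x = np.where(wgt > 0, acc / np.maximum(wgt, 1e-12), x)  # global merge
    return x

# Example equality constraint: keep an edge at unit length.
def unit_edge(p):                  # p: (2, 3) edge endpoints
    mid = p.mean(axis=0)
    d = p[1] - p[0]
    d /= np.linalg.norm(d) + 1e-12
    return np.stack([mid - d / 2, mid + d / 2])

# Usage: softly pull vertices 0 and 1 toward a unit-length edge.
# x = solve_constrained(x0, [(np.array([0, 1]), 1.0, unit_edge)])
```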
Figure 1: Samples of knitted garments designed and customized with our tool, and fabricated on a whole-garment industrial knitting machine. (A) Various garment prototypes on a 12-inch mannequin; (B) a glove with lace patterns on its palms; (C) an infinity scarf with noisy lace patterns; and (D) a sock with a ribbed cuff.