Abstract: Virtual environments are typically textured by manually choosing an image to apply to each surface, which means browsing through large sets of generic textures for each and every surface in the scene. We propose to facilitate this long and tedious process. Our algorithm assists users as they assign textures to surfaces: each time an image is chosen for a surface, the algorithm propagates this information throughout the entire environment. Our approach is based on a new surface similarity measure. We exploi…
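The abstract above describes propagating a user's texture choice to similar surfaces via a surface similarity measure. A minimal sketch of that idea, assuming a hypothetical per-surface feature vector (area, normal, height) and cosine similarity as the measure — the paper's actual similarity metric and features may differ:

```python
import numpy as np

def surface_features(surfaces):
    # Hypothetical feature vector per surface: area, normal (3D), height.
    return np.array([[s["area"], *s["normal"], s["height"]] for s in surfaces])

def propagate_texture(surfaces, assigned_idx, texture, threshold=0.99):
    """Suggest `texture` for every surface whose feature-space cosine
    similarity to the user-chosen surface exceeds `threshold`."""
    feats = surface_features(surfaces)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sims = feats @ feats[assigned_idx]          # cosine similarity to the chosen surface
    suggestions = {assigned_idx: texture}
    for i, s in enumerate(sims):
        if i != assigned_idx and s >= threshold:
            suggestions[i] = texture
    return suggestions
```

For example, assigning a brick texture to one of two identical walls would suggest the same texture for the other wall, but not for a dissimilar floor surface.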
“…Recently several works have been proposed to suggest materials/colors for indoor scenes. For example, Chajdas et al [CLS10] proposed an algorithm that assists users in assigning textures to surfaces by propagating a chosen image for a certain surface throughout an entire environment. Chen et al…”
This paper presents an interactive system for quickly designing and previewing colored snapshots of indoor scenes. Different from high-quality 3D indoor scene rendering, which often takes several minutes to render a moderately complicated scene under a specific color theme with high-performance computing devices, our system aims at improving the effectiveness of color theme design of indoor scenes and employs an image colorization approach to efficiently obtain high-resolution snapshots with editable colors. Given several pre-rendered, multi-layer, gray images of the same indoor scene snapshot, our system is designed to colorize and merge them into a single colored snapshot. Our system also assists users in assigning colors to certain objects/components and infers more harmonious colors for the unassigned objects based on pre-collected priors to guide the colorization. The quickly generated snapshots of indoor scenes provide previews of interior design schemes with different color themes, making it easy to determine the personalized design of indoor scenes. To demonstrate the usability and effectiveness of this system, we present a series of experimental results on indoor scenes of different types, and compare our method with a state-of-the-art method for indoor scene material and color suggestion and offline/online rendering software packages.

CCS Concepts: • Applied computing → Computer-aided design; • Computing methodologies → Graphics systems and interfaces; Rendering;
“…(Leifman and Tal 2012) colorize a 3D mesh model by propagating user input colors throughout the mesh. (Chajdas, Lefebvre, and Stamminger 2010) develop an algorithm to assist users in assigning textures to scenes. More recent work focuses on data-driven colorization.…”
Automatic generation of 3D visual content is a fundamental problem that sits at the intersection of visual computing and artificial intelligence. So far, most existing works have focused on geometry synthesis. In contrast, advances in the automatic synthesis of color information, which conveys rich semantic information about 3D geometry, remain rather limited. In this paper, we propose to learn a generative model that maps a latent color parameter space to a space of colorizations across a shape collection. The colorizations are diverse on each shape and consistent across the shape collection. We introduce an unsupervised approach for training this generative model and demonstrate its effectiveness across a wide range of categories. The key feature of our approach is that it only requires one colorization per shape in the training data, and utilizes a neural network to propagate the color information of other shapes to train the generative model for each particular shape. This characteristic makes our approach applicable to standard internet shape repositories.
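The abstract above describes a generative model mapping a latent color code to colorizations of a shape. A toy sketch of that mapping, assuming a hypothetical linear decoder over per-part RGB colors (the paper's network architecture and training procedure are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

class ColorGenerator:
    """Toy decoder: maps a latent color code z to one RGB color per
    segment of a shape. A hypothetical simplification of a per-shape
    generative colorization model."""
    def __init__(self, latent_dim=4, n_parts=6):
        self.W = rng.normal(size=(n_parts * 3, latent_dim))
        self.b = rng.normal(size=n_parts * 3)
        self.n_parts = n_parts

    def __call__(self, z):
        raw = self.W @ z + self.b
        colors = 1.0 / (1.0 + np.exp(-raw))   # sigmoid keeps RGB in (0, 1)
        return colors.reshape(self.n_parts, 3)

gen = ColorGenerator()
palette = gen(np.zeros(4))          # one colorization of the shape
variant = gen(rng.normal(size=4))   # sampling z yields a diverse variant
```

Sampling different latent codes z produces the diverse colorizations per shape that the abstract describes; consistency across a collection would come from the (omitted) training objective.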
“…In the context of 3D models Mertens et al [MKCD07] generate reflectance details by learning the geometric correlation from another 3D model with reflectance. In a similar fashion, Chajdas et al [CLS10] consider local geometric structure to assist the user in assigning textures. In contrast to our work, their transfer is from 3D to 3D and does not consider the resulting perceived appearance, but solely statistical physical qualities.…”
Figure 1: Automatic 3D material style transfer from different source images (insets) to a target 3D scene using our approach.
Abstract: This work proposes a technique to transfer the material style or mood from a guide source such as an image or video onto a target 3D scene. It formulates the problem as a combinatorial optimization of assigning discrete materials extracted from the guide source to discrete objects in the target 3D scene. The assignment is optimized to fulfill multiple goals: overall image mood based on several image statistics; spatial material organization and grouping; and geometric similarity between objects that were assigned similar materials. To be able to use common uncalibrated images and videos with unknown geometry and lighting as guides, a material estimation step derives perceptually plausible reflectance, specularity, glossiness, and texture. Finally, results produced by our method are compared to manual material assignments in a perceptual study.