We propose a novel data-driven technique for automatically and efficiently generating floor plans for residential buildings with given boundaries. Central to this method is a two-stage approach that imitates the human design process by locating rooms first and then walls, while adapting to the input building boundary. Based on the observation that a living room is present in almost all floor plans, our learning network begins by positioning the living room and then iteratively generates the other rooms. Walls are then determined by an encoder-decoder network and refined into vector representations using dedicated rules. To train our networks effectively, we construct RPLAN, a manually collected, large-scale, densely annotated dataset of floor plans from real residential buildings. Extensive experiments, including formative user studies and comparisons, illustrate the feasibility and efficacy of our approach. Comparing the plausibility of different floor plans, we observe that our method substantially outperforms existing methods, and in many cases our floor plans are comparable to human-created ones.
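The living-room-first, then-iterate loop described above can be sketched on a coarse grid. Everything below is a hypothetical stand-in: the paper places rooms with a trained network, whereas this sketch substitutes simple geometric heuristics (a centroid rule for the living room, a farthest-point rule for the remaining rooms) purely to show the two-stage control flow.

```python
def place_rooms(free_cells, room_types):
    """Sketch of the iterative room-positioning stage.
    free_cells: set of (x, y) grid cells inside the building boundary.
    room_types: list of room labels; the living room comes first,
    mirroring the paper's living-room-first ordering.
    The placement rules here are illustrative heuristics, not the
    learned predictions used in the actual method."""
    placed = {}
    cells = set(free_cells)
    for room in room_types:
        if not cells:
            break
        if not placed:
            # Living room: free cell nearest the centroid of the boundary.
            cx = sum(x for x, _ in cells) / len(cells)
            cy = sum(y for _, y in cells) / len(cells)
            pick = min(cells, key=lambda c: (c[0] - cx) ** 2 + (c[1] - cy) ** 2)
        else:
            # Other rooms: free cell farthest from the rooms placed so far.
            pick = max(cells, key=lambda c: min(
                (c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2
                for p in placed.values()))
        placed[room] = pick
        cells.discard(pick)
    return placed
```

On a 5x5 boundary, the living room lands at the center cell and the next room at a corner, illustrating how each placement conditions the next.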
Computing locally injective mappings with low distortion efficiently is a fundamental task in computer graphics. By revisiting the well-known MIPS (Most-Isometric ParameterizationS) method, we introduce an advanced MIPS method that inherits the local injectivity of MIPS, achieves distortion as low as that of state-of-the-art locally injective mapping techniques, and computes mesh-based mappings one to two orders of magnitude faster. The success of our method relies on two key components. The first is an enhanced MIPS energy function that strongly penalizes the maximal distortion and distributes distortion evenly over the domain, for both mesh-based and meshless mappings. The second is the use of an inexact block coordinate descent method for mesh-based mappings, which minimizes the distortion efficiently without being trapped early in local minima. We demonstrate the capability and superiority of our method in various applications, including mesh parameterization, mesh-based and meshless deformation, and mesh improvement.
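The classic MIPS energy that the method builds on has a compact closed form for a 2x2 Jacobian, and an exponentiated variant illustrates why large distortions are penalized so strongly. This is a minimal sketch; the paper's exact enhanced energy (its blend of distortion terms and its parameter choices) differs in detail, and the parameter `s` below is an assumption.

```python
import math

def mips_energy(J):
    """Classic 2D MIPS energy of a 2x2 Jacobian J (rows of J given as lists):
    ||J||_F^2 / det(J) = sigma1/sigma2 + sigma2/sigma1.
    It equals 2 for a similarity transform and blows up as det(J) -> 0+,
    which is the barrier enforcing local injectivity."""
    a, b = J[0]
    c, d = J[1]
    det = a * d - b * c
    if det <= 0:  # flipped or degenerate element: infinite barrier
        return math.inf
    return (a * a + b * b + c * c + d * d) / det

def exponentiated_mips(J, s=1.0):
    """Sketch of an exponentiated ('advanced') variant: moving the
    distortion into an exponent penalizes the maximal distortion far more
    than the average, so minimization spreads distortion evenly over the
    domain. The exact form and s used in the paper are not reproduced here."""
    return math.exp(s * mips_energy(J))
```

For the identity map the energy is exactly 2; an anisotropic scaling diag(2, 0.5) already pays 4.25, and any flipped element pays an infinite penalty.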
PolyCubes provide compact representations of closed, complex shapes and are essential to many computer graphics applications. Existing automatic PolyCube construction methods usually suffer from poor quality or time-consuming computation. In this paper, we provide a highly efficient method to compute volumetric PolyCube-maps. Given an input tetrahedral mesh, we use two novel normal-driven volumetric deformation schemes and a PolyCube-allowable mesh segmentation to drive the input toward a volumetric PolyCube structure. Our method robustly generates foldover-free, low-distortion PolyCube-maps in practice and provides flexible control over the number of PolyCube corners. Compared with state-of-the-art methods, our method is at least one order of magnitude faster and has better mapping quality. We demonstrate the efficiency and efficacy of our method in PolyCube construction and all-hexahedral meshing on various complex models.
[Figure: PolyCube-maps of Armadillo, Bimba, and Sphinx at increasing corner thresholds σ (from l_e up to 1.5–2.0 l_e); larger σ reduces the corner count (Armadillo 236 → 150, Bimba 68 → 34, Sphinx 72 → 16) at a modest cost in Jacobian quality (J_min, J_avg).] (X. Fu, C. Bai & Y. Liu / Efficient Volumetric PolyCube-Map Construction)
[Figure: comparison with [GSZ11], [LVS*13], and [HJS*14] on two models; our method attains the highest J_min (0.422 and 0.439) and the highest J_avg (0.942 and 0.943).]
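A common ingredient of normal-driven PolyCube construction is assigning each boundary normal to one of the six axis directions, since every face of a PolyCube is axis-aligned. The sketch below shows only that labeling step, as a plausible assumption about the pipeline rather than the paper's exact deformation or segmentation scheme.

```python
# The six admissible PolyCube face orientations: +-X, +-Y, +-Z.
AXES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def polycube_label(normal):
    """Assign a unit surface normal to the closest axis direction by
    maximizing the dot product. Such a labeling is the usual seed of a
    PolyCube-allowable segmentation; making the labeling valid (connected
    charts, legal corner configurations) and deforming faces onto their
    labels is beyond this sketch."""
    return max(range(6), key=lambda i: sum(n * a for n, a in zip(normal, AXES[i])))
```

A normal tilted mostly along -Y gets label 3 (the -Y axis), regardless of small components along the other axes.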
3D geometric features constitute the rich details of polygonal meshes. Their analysis and editing can yield vivid shape appearance and a better understanding of the underlying geometry for shape processing and analysis. Traditional mesh smoothing techniques focus mainly on noise filtering; they cannot distinguish features of different scales well and often mix them up. We present an efficient method for processing geometric features of different scales based on a novel rolling-guidance normal filter. Given a 3D mesh, our method iteratively applies a joint bilateral filter to face normals at a specified scale, which empirically smooths small-scale geometric features while preserving large-scale ones. Our method recovers the mesh from the filtered face normals by a modified Poisson-based gradient deformation that yields better surface quality than existing methods. We demonstrate the effectiveness and superiority of our method on a series of geometry processing tasks, including geometry texture removal and enhancement, coating transfer, mesh segmentation, and level-of-detail meshing.
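The iteration structure of a rolling-guidance joint bilateral filter on face normals can be sketched as follows. This is a deliberately simplified, brute-force version: all-pairs weights over a handful of faces, no mesh connectivity, and no Poisson-based vertex update; the spatial and range sigmas are assumed parameters standing in for the paper's scale control.

```python
import math

def rolling_guidance_normals(centroids, normals, sigma_s, sigma_r, iters=4):
    """Rolling-guidance filtering of face normals. The joint bilateral
    weight combines a spatial Gaussian on face-centroid distance with a
    range Gaussian on the *guidance* normals; each iteration re-filters
    the original normals and promotes its output to the next guidance.
    Features smaller than sigma_s are smoothed away, larger ones recover."""
    guide = [(0.0, 0.0, 0.0)] * len(normals)  # first pass degenerates to Gaussian smoothing
    for _ in range(iters):
        out = []
        for i, ci in enumerate(centroids):
            acc = [0.0, 0.0, 0.0]
            for j, cj in enumerate(centroids):
                d2 = sum((a - b) ** 2 for a, b in zip(ci, cj))               # spatial term
                r2 = sum((a - b) ** 2 for a, b in zip(guide[i], guide[j]))   # range term on guidance
                w = math.exp(-d2 / (2 * sigma_s ** 2) - r2 / (2 * sigma_r ** 2))
                acc = [a + w * n for a, n in zip(acc, normals[j])]           # always filter the input
            length = math.sqrt(sum(a * a for a in acc))
            out.append(tuple(a / length for a in acc) if length > 0 else guide[i])
        guide = out  # rolling guidance: previous output becomes the next guidance
    return guide
```

On three collinear faces where the middle normal is a small-scale outlier, the filtered middle normal is pulled toward its flat neighbors while all outputs remain unit length.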