3D modeling remains a notoriously difficult task for novices despite significant research effort to provide intuitive and automated systems. We tackle this problem by combining the strengths of two popular domains: sketch-based modeling and procedural modeling. On the one hand, sketch-based modeling exploits our ability to draw but requires detailed, unambiguous drawings to achieve complex models. On the other hand, procedural modeling automates the creation of precise and detailed geometry but requires the tedious definition and parameterization of procedural models. Our system uses a collection of simple procedural grammars, called snippets, as building blocks to turn sketches into realistic 3D models. We use a machine learning approach to solve the inverse problem of finding the procedural model that best explains a user sketch. We use non-photorealistic rendering to generate artificial data for training convolutional neural networks capable of quickly recognizing the procedural rule intended by a sketch and estimating its parameters. We integrate our algorithm in a coarse-to-fine urban modeling system that allows users to create rich buildings by successively sketching the building mass, roof, facades, windows, and ornaments. A user study shows that with our approach, non-expert users can generate complex buildings in just a few minutes.
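As a rough illustration of the inverse-problem framing described above (all snippet names and feature vectors here are hypothetical stand-ins, not the paper's implementation), one could score each grammar snippet against features extracted from a sketch and keep the best match; in the actual system, a CNN trained on non-photorealistic renderings plays the role of this feature comparison:

```python
import math

# Hypothetical sketch descriptors: each snippet is summarized by a small
# feature vector (e.g. stroke count, aspect ratio, curvature); the real
# system learns this mapping with a CNN instead of hand-picked features.
SNIPPET_FEATURES = {
    "gabled_roof":   [2.0, 1.4, 0.0],
    "flat_roof":     [1.0, 1.0, 0.0],
    "arched_window": [3.0, 0.8, 1.0],
}

def recognize_snippet(sketch_features):
    """Return the snippet whose feature vector is closest to the sketch."""
    return min(
        SNIPPET_FEATURES,
        key=lambda name: math.dist(SNIPPET_FEATURES[name], sketch_features),
    )

best = recognize_snippet([2.1, 1.3, 0.1])  # closest to "gabled_roof"
```

Parameter estimation would then refine the chosen snippet's continuous parameters rather than just picking a label.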
Figure 1: Procedural Modeling from a Single Image. a) Given an image and a silhouette of a building, b) our approach automatically estimates the camera parameters and generates a building mass grammar as a first step. Then, c) the façade image is rectified, and d) the façade grammar is generated. e) For each window non-terminal, the best window grammar is selected by maximum vote. f) Finally, the output grammar is constructed and a corresponding 3D geometry is generated. Abstract: Creating virtual cities is in demand for computer games, movies, and urban planning, but producing numerous 3D building models takes a great deal of time. Procedural modeling has become popular in recent years to overcome this issue, but creating a grammar that yields a desired output is difficult and time consuming even for expert users. In this paper, we present an interactive tool that allows users to automatically generate such a grammar from a single image of a building. The user selects a photograph and highlights the silhouette of the target building as input to our method. Our pipeline automatically generates the building components, from the large-scale building mass to fine-scale window and door geometry. Each stage of our pipeline combines convolutional neural networks (CNNs) and optimization to select and parameterize procedural grammars that reproduce the building elements of the picture. In the first stage, our method jointly estimates camera parameters and building mass shape.
Once known, the building mass enables the rectification of the façades, which are given as input to the second stage that recovers the façade layout. This layout allows us to extract individual windows and doors that are subsequently fed to the last stage of the pipeline, which selects procedural grammars for windows and doors. Finally, the grammars are combined to generate a complete procedural building as output. We devise a common methodology to make each stage of this pipeline tractable. This methodology consists of simplifying the input image to match the visual appearance of synthetic training data, and of using optimization to refine the parameters estimated by CNNs. We used our method to generate a variety of procedural models of buildings from existing photographs.
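The staged structure of this pipeline can be sketched as follows (a schematic with hypothetical function names and placeholder return values, not the paper's code): each stage consumes the previous stage's output, and in the actual method a CNN estimate would be refined by optimization at every step.

```python
def estimate_mass(image):
    # Stage 1: jointly estimate camera parameters and building mass shape.
    return {"camera": "estimated", "mass": "box"}

def recover_facade_layout(mass):
    # Stage 2: rectify the facades using the mass, then recover the
    # layout grid of windows and doors (placeholder 4x3 grid here).
    return {"rows": 4, "cols": 3}

def select_window_grammars(layout):
    # Stage 3: pick a window/door grammar for each element of the layout.
    return ["window_grammar"] * (layout["rows"] * layout["cols"])

def build_procedural_model(image):
    # Combine the stage outputs into one complete procedural building.
    mass = estimate_mass(image)
    layout = recover_facade_layout(mass)
    windows = select_window_grammars(layout)
    return {"mass": mass, "layout": layout, "windows": windows}

model = build_procedural_model("facade.jpg")
```

The key design point carried by this structure is that later stages never revisit the raw image directly; they operate on the simplified, rectified representations produced upstream.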
The objective of this paper is to investigate the respective influence of various urban pattern characteristics on inundation flow. A set of 2000 synthetic urban patterns was generated using an urban procedural model providing the locations and shapes of streets and buildings over a square domain of 1 km × 1 km. Steady two-dimensional hydraulic computations were performed over the 2000 urban patterns with identical hydraulic boundary conditions. To run such a large number of simulations, the computational efficiency of the hydraulic model was improved by using an anisotropic porosity model. This model computes on relatively coarse computational cells but preserves information from the detailed topographic data through porosity parameters. Relationships between urban characteristics and the computed inundation water depths were established using multiple linear regressions. Finally, a simple mechanistic model based on two district-scale porosity parameters, combining several urban characteristics, is shown to satisfactorily capture the influence of urban characteristics on inundation water depths. The findings of this study give guidelines for more flood-resilient urban planning.
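The regression step above can be illustrated with a small self-contained example (the numbers and predictor names are synthetic, not the study's data): fit water depth as a linear function of two urban characteristics, say building density and street width, by solving the normal equations.

```python
def fit_linear(xs, ys):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 via the normal equations."""
    # Design matrix with an intercept column.
    X = [[1.0] + list(x) for x in xs]
    n = len(X[0])
    # Normal equations: (X^T X) b = X^T y
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    rhs = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(n)]
    # Solve with naive Gauss-Jordan elimination (fine for tiny systems).
    for i in range(n):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        rhs[i] /= p
        for k in range(n):
            if k != i:
                f = A[k][i]
                A[k] = [vk - f * vi for vk, vi in zip(A[k], A[i])]
                rhs[k] -= f * rhs[i]
    return rhs  # [b0, b1, b2]

# Synthetic (density, street width) samples with an exact linear response.
xs = [(0.2, 10), (0.4, 10), (0.2, 20), (0.5, 15)]
ys = [0.5 + 2.0 * d - 0.01 * w for d, w in xs]
coef = fit_linear(xs, ys)  # recovers approximately [0.5, 2.0, -0.01]
```

In the study, regression coefficients of this kind quantify how strongly each urban characteristic drives the computed water depths.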
Aside from modeling geometric shape, three-dimensional (3D) urban procedural modeling has shown its value in understanding, predicting, and controlling the effects of shape on design and urban planning. In this paper, rather than constructing flood-resistance measures, we create a procedural generation system for designing urban layouts that passively reduce water depth during a flooding scenario. Our tool enables exploring designs that passively lower flood depth everywhere, or mostly in chosen key areas. Our approach tightly integrates a hydraulic model and a parameterized urban generation system with an optimization engine so as to find the least-cost modification to an initial urban layout design. Further, due to the computational cost of a fluid simulation, we train neural networks to assist in accelerating the design process. We have applied our system to several real-world locations and have obtained improved 3D urban models in just a few seconds.
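The optimization loop can be sketched schematically (the surrogate below is a made-up stand-in for the trained neural network, and the cost model and parameter ranges are hypothetical): search layout modifications for the least-cost one whose predicted flood depth stays under a target.

```python
def surrogate_depth(street_width, channel_count):
    # Stand-in for the neural network predicting flood depth (meters):
    # wider streets and more drainage channels both reduce depth.
    return max(0.0, 1.2 - 0.02 * street_width - 0.15 * channel_count)

def modification_cost(street_width, channel_count, base_width=10):
    # Hypothetical cost model: widening streets and adding channels
    # both cost money relative to the initial layout.
    return 3.0 * (street_width - base_width) + 20.0 * channel_count

def least_cost_design(target_depth=0.5):
    """Exhaustively search the (small) design space for the cheapest
    modification whose predicted flood depth meets the target."""
    best = None
    for width in range(10, 31):        # candidate street widths (m)
        for channels in range(0, 4):   # candidate drainage channels
            if surrogate_depth(width, channels) <= target_depth:
                cost = modification_cost(width, channels)
                if best is None or cost < best[0]:
                    best = (cost, width, channels)
    return best

cost, width, channels = least_cost_design()
```

With a fast surrogate in the inner loop, a real optimizer can evaluate thousands of candidate layouts in the time a single fluid simulation would take, which is what makes interactive design speeds feasible.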
Example-driven result: our interactive approach enables a user to quickly design a road network for an entire city. In this example, a) the user starts with a virtual city using roads from Jiangmen, China (one of the world's 10 fastest-growing cities). b) The user selects a target space for a new urban area, and a new road network is generated by growing and blending two road styles. c) Roads in the top right corner are replaced by a selected example of curved roads. d) Additionally, other interesting road network configurations are inserted. e) Finally, a 3D city model is created. All road network examples were obtained from OpenStreetMap and correspond to styles extracted from Madrid, San Francisco, Canberra, Tel Aviv, and London. Abstract: Synthesizing and exploring large-scale realistic urban road networks is beneficial to 3D content creation, traffic animation, and urban planning. In this paper, we present an interactive tool that allows untrained users to design roads with complex realistic details and styles. Roads are generated by growing a geometric graph. During a sketching phase, the user specifies the target area and the examples. During a growing phase, two types of growth are applied to generate roads in the target area: example-based growth uses patches extracted from the source example to generate roads that preserve interesting structures in the example road networks, while procedural-based growth uses the statistical information of the source example while adapting the roads to the underlying terrain and the already generated roads. User-specified warping, blending, and interpolation operations are used at will to produce new road network designs that are inspired by the examples. Finally, our method computes city blocks, individual parcels, and plausible building and tree geometries. We have used our approach to create road networks covering up to 200 km² and containing over 3,500 km of roads.
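The procedural-based growth idea above can be sketched in miniature (parameter values here are illustrative, not statistics extracted from any real city): starting from a seed node, repeatedly extend road segments with a typical segment length and a randomized heading perturbation, occasionally branching at right angles.

```python
import math
import random

def grow_roads(steps, seg_len=100.0, heading_sigma=0.3, seed=7):
    """Grow a road network as a geometric graph: nodes are 2D points,
    edges are road segments. Returns (nodes, edges)."""
    rng = random.Random(seed)
    nodes = [(0.0, 0.0)]
    edges = []
    frontier = [(0, 0.0)]  # (node index, heading in radians)
    for _ in range(steps):
        idx, heading = frontier.pop(0)
        # Perturb the heading using example-derived statistics
        # (here: a simple Gaussian with illustrative sigma).
        heading += rng.gauss(0.0, heading_sigma)
        x, y = nodes[idx]
        nodes.append((x + seg_len * math.cos(heading),
                      y + seg_len * math.sin(heading)))
        edges.append((idx, len(nodes) - 1))
        frontier.append((len(nodes) - 1, heading))
        if rng.random() < 0.3:  # occasionally branch at a right angle
            frontier.append((len(nodes) - 1, heading + math.pi / 2))
    return nodes, edges

nodes, edges = grow_roads(50)  # 51 nodes, 50 segments
```

A full system would additionally snap new segments to existing roads, respect the terrain, and mix in example-based patches, but the growth-on-a-geometric-graph core looks like this.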