In this paper, we propose a highly efficient method for synthesizing high-resolution (HR) smoke simulations based on deep learning. A major issue with physics-based HR fluid simulations is that they require large amounts of physical memory and long execution times. In recent years, this issue has been addressed by deep-learning-based super-resolution (SR) methods that convert low-resolution (LR) fluid simulation results into HR versions. However, these methods are inefficient because they perform operations even in areas with little or no density. In this paper, we propose a method that maximizes efficiency by introducing a downscaled and binarized adaptive octree. Even with an octree subdivision, the number of nodes grows large when the simulation space has high resolution, so we reduce the size of the space by multiscaling and simultaneously apply binarization to preserve density that would otherwise be lost in this process. The resulting octree has a structure similar to that of a multigrid solver: the octree computed at coarse resolution is restored to its original size and used for the HR representation. Finally, we apply the SR process only to those areas having significant density values. With the proposed method, the SR process is significantly faster and memory efficiency is improved. We compare the performance of our method with that of an existing SR method to demonstrate its efficiency.
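The core efficiency idea above — downscale with binarization so thin density features survive coarsening, then apply SR only where the mask flags significant density — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the grid sizes, threshold, and the nearest-neighbour upsample standing in for the learned SR network are all assumptions.

```python
import numpy as np

def downscale_binarize(density, factor=2, threshold=1e-3):
    """Downscale a 3D density grid by max-pooling, then binarize.

    Max-pooling (rather than averaging) keeps thin, low-density features
    visible in the coarse occupancy mask, mirroring the abstract's goal of
    preserving density that plain downscaling might lose.
    """
    d, h, w = density.shape
    coarse = density.reshape(d // factor, factor,
                             h // factor, factor,
                             w // factor, factor).max(axis=(1, 3, 5))
    return coarse > threshold  # boolean occupancy mask

def masked_sr(density, mask, factor=2, up=2):
    """Apply a (placeholder) SR step only to blocks the mask flags as dense."""
    d, h, w = density.shape
    out = np.zeros((d * up, h * up, w * up), dtype=density.dtype)
    for i, j, k in zip(*np.nonzero(mask)):
        blk = density[i*factor:(i+1)*factor,
                      j*factor:(j+1)*factor,
                      k*factor:(k+1)*factor]
        # Nearest-neighbour upsample is a hypothetical stand-in for the
        # learned SR network; empty blocks are skipped entirely.
        out[i*factor*up:(i+1)*factor*up,
            j*factor*up:(j+1)*factor*up,
            k*factor*up:(k+1)*factor*up] = \
            blk.repeat(up, 0).repeat(up, 1).repeat(up, 2)
    return out
```

Skipping empty blocks is where the speedup comes from: in typical smoke plumes most of the simulation volume carries no density, so most blocks never reach the (expensive) SR network.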
We propose a novel method for efficiently generating a highly refined normal map for screen-space fluid rendering. Because filtering the normal map is crucial to the quality of the final screen-space fluid rendering, we employ a conditional generative adversarial network (cGAN) as a filter that learns a deep normal map representation, thereby refining the low-quality normal map. In particular, we design a novel loss function dedicated to refining the normal map information, and we use a specific set of auxiliary features to train the cGAN generator so that it learns features that are more robust with respect to edge details. Additionally, we construct a dataset of six typical scenes to enable effective demonstrations of multitype fluid simulation. Experiments indicate that our generator infers clearer and more detailed features on this dataset than a basic screen-space fluid rendering method; in some cases, the results generated by our method are even smoother than those of the conventional surface reconstruction method. Our method improves the fluid rendering results via the high-quality normal map while preserving the advantages of screen-space fluid rendering and traditional surface reconstruction methods: the computation time is independent of the number of simulation particles, and the spatial resolution depends only on the image resolution.
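A loss "dedicated to refining the normal map information" plausibly combines a per-pixel term with an angular term that penalizes direction error (which is what edge detail is sensitive to), plus the usual adversarial term. The sketch below is an assumption for illustration only; the weights, the cosine formulation, and the function name `normal_refinement_loss` are not taken from the paper.

```python
import numpy as np

def normal_refinement_loss(pred, target, adv_term=0.0,
                           w_l1=1.0, w_cos=1.0, w_adv=0.01):
    """Sketch of a normal-map refinement loss (illustrative assumption).

    pred, target: (H, W, 3) arrays of unit normals.
    - L1 term: per-channel reconstruction error.
    - Cosine term: 1 - cos(angle) between predicted and target normals,
      which penalizes angular deviation directly.
    - adv_term: placeholder for the cGAN adversarial loss on the generator.
    """
    l1 = np.abs(pred - target).mean()
    cos = 1.0 - (pred * target).sum(axis=-1).mean()
    return w_l1 * l1 + w_cos * cos + w_adv * adv_term
```

With unit-length normals the cosine term is zero exactly when predicted and target directions agree everywhere, so it adds no gradient pressure on already-correct edges.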