Abstract. This paper presents PLDA, our parallel implementation of Latent Dirichlet Allocation on MPI and MapReduce. PLDA smooths out storage and computation bottlenecks and provides fault recovery for lengthy distributed computations. We show that PLDA can be applied to large, real-world applications and achieves good scalability. We have released MPI-PLDA to open source at
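The abstract's claim — distributing LDA so that storage and computation bottlenecks are smoothed out — is commonly realized by sharding documents across workers, running collapsed Gibbs sampling locally, and merging topic-word count deltas with an all-reduce. The sketch below simulates that scheme in plain NumPy under stated assumptions: two in-process "workers" stand in for MPI ranks, a Python sum of deltas stands in for `MPI_Allreduce`, and the tiny vocabulary, shards, and hyperparameters are invented for illustration; it is not PLDA's actual code.

```python
import numpy as np

def init_counts(docs, z, V, K):
    """Build topic-word, topic, and doc-topic count tables from assignments."""
    n_wt = np.zeros((V, K)); n_t = np.zeros(K); n_dt = np.zeros((len(docs), K))
    for d, doc in enumerate(docs):
        for w, k in zip(doc, z[d]):
            n_wt[w, k] += 1; n_t[k] += 1; n_dt[d, k] += 1
    return n_wt, n_t, n_dt

def gibbs_pass(docs, z, n_wt, n_t, n_dt, alpha, beta, rng):
    """One collapsed-Gibbs sweep over one worker's document shard (in place)."""
    V, K = n_wt.shape
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            n_wt[w, k] -= 1; n_t[k] -= 1; n_dt[d, k] -= 1     # remove token
            p = (n_wt[w] + beta) * (n_dt[d] + alpha) / (n_t + V * beta)
            p = np.clip(p, 1e-12, None)  # guard: stale merged counts can dip below zero
            k = int(rng.choice(K, p=p / p.sum()))             # resample topic
            z[d][i] = k
            n_wt[w, k] += 1; n_t[k] += 1; n_dt[d, k] += 1     # add token back

rng = np.random.default_rng(0)
V, K, alpha, beta = 5, 2, 0.5, 0.1
shards = [[[0, 1, 2], [1, 1, 3]],          # worker 0's documents (toy data)
          [[3, 4, 0], [2, 4, 4]]]          # worker 1's documents
zs = [[[int(rng.integers(K)) for _ in doc] for doc in shard] for shard in shards]

# every worker keeps a replica of the *global* topic-word counts
global_wt = sum(init_counts(s, z, V, K)[0] for s, z in zip(shards, zs))
global_t = global_wt.sum(axis=0)

for it in range(10):
    deltas = []
    for shard, z in zip(shards, zs):
        n_wt, n_t = global_wt.copy(), global_t.copy()
        _, _, n_dt = init_counts(shard, z, V, K)
        gibbs_pass(shard, z, n_wt, n_t, n_dt, alpha, beta, rng)
        deltas.append(n_wt - global_wt)    # local change, to be all-reduced
    global_wt += sum(deltas)               # stands in for MPI_Allreduce
    global_t = global_wt.sum(axis=0)
```

Because each sweep removes and re-adds every token exactly once, the merged counts always conserve the corpus size, which is what makes the delta-based all-reduce safe even though workers sample against slightly stale replicas.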
Figure 1: In less than 2 seconds per frame, our method simulates fluids through this detailed city with over 120 million voxels, a 4000× speedup compared to standard techniques. Our modular approach allows the user to rearrange building tiles at runtime.

Abstract. We present a new approach to fluid simulation that balances the speed of model reduction with the flexibility of grid-based methods. We construct a set of composable reduced models, or tiles, which capture spatially localized fluid behavior. We then precompute coupling terms so that these models can be rearranged at runtime. To enforce consistency between tiles, we introduce constraint reduction. This technique modifies a reduced model so that a given set of linear constraints can be fulfilled. Because dynamics and constraints can be solved entirely in the reduced space, our method is extremely fast and scales to large domains.
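One simple way to read "modifies a reduced model so that a given set of linear constraints can be fulfilled" is as a null-space projection baked into the reduced operators, so that time stepping never leaves the constraint manifold. The sketch below is an assumption-laden illustration of that idea, not the paper's actual constraint-reduction formulation: the reduced dynamics matrix `A`, the constraint matrix `C`, and the dimensions are all invented toy data.

```python
import numpy as np

rng = np.random.default_rng(1)
r, m = 6, 2                        # reduced dimension, number of constraints
A = np.eye(r) + 0.1 * rng.standard_normal((r, r))   # toy reduced dynamics
C = rng.standard_normal((m, r))    # linear constraints C q = 0 (hypothetical)

# precompute the orthogonal projector onto the null space of C; folding it
# into A means constraint satisfaction costs nothing at runtime
P = np.eye(r) - C.T @ np.linalg.solve(C @ C.T, C)
A_c = P @ A @ P                    # constrained reduced dynamics

q = P @ rng.standard_normal(r)     # project to a feasible initial state
for _ in range(5):
    q = A_c @ q                    # stepping stays on the constraint manifold
```

The appeal of this construction is that all work happens in the r-dimensional reduced space: the projector is precomputed once, and each step is a single small matrix-vector product regardless of the full grid resolution.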
Figure 1: Our method enables reduced simulation of fluid flow around this flying bird over 2000 times faster than the corresponding full simulation and reduced radiosity computation in this architectural scene over 113 times faster than the corresponding full radiosity.

Abstract. This paper extends Galerkin projection to a large class of nonpolynomial functions typically encountered in graphics. We demonstrate the broad applicability of our approach by applying it to two strikingly different problems: fluid simulation and radiosity rendering, both using deforming meshes. Standard Galerkin projection cannot efficiently approximate these phenomena. Our approach, by contrast, enables the compact representation and approximation of these complex non-polynomial systems, including quotients and roots of polynomials. We rely on representing each function to be model-reduced as a composition of tensor products, matrix inversions, and matrix roots. Once a function has been represented in this form, it can be easily model-reduced, and its reduced form can be evaluated with time and memory costs dependent only on the dimension of the reduced space.
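The tensor-product building block the abstract relies on can be seen in the standard polynomial case: for a quadratic nonlinearity, the Galerkin-projected function can be precomputed as a third-order tensor, after which evaluation costs depend only on the reduced dimension r, not the full dimension n. The sketch below shows only this polynomial building block (the paper's contribution is composing such blocks with inversions and roots to reach non-polynomial functions); the basis, sizes, and test vector are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 200, 4
U, _ = np.linalg.qr(rng.standard_normal((n, r)))   # orthonormal reduced basis

# precompute the third-order Galerkin tensor for f(x) = x * x (elementwise):
# T[i, j, k] = sum_n U[n, i] U[n, j] U[n, k]  -- an O(n r^3) one-time cost
T = np.einsum('ni,nj,nk->ijk', U, U, U)

def f_reduced(q):
    """Evaluate U^T f(U q) in O(r^3) time, independent of n."""
    return np.einsum('ijk,j,k->i', T, q, q)

q = rng.standard_normal(r)
full = U.T @ ((U @ q) * (U @ q))   # reference: lift, evaluate, project back
```

Here `f_reduced(q)` matches the lift-evaluate-project reference exactly, while never touching an n-dimensional vector at runtime — which is precisely the "costs dependent only on the dimension of the reduced space" property the abstract claims.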
Figure 1: A self-refining liquid control game uses player analytics to guide precomputation to the most visited regions of the liquid's state space. The game's quality continuously improves over time, ultimately providing a high-quality, interactive experience. AbstractData-driven simulation demands good training data drawn from a vast space of possible simulations. While fully sampling these large spaces is infeasible, we observe that in practical applications, such as gameplay, users explore only a vanishingly small subset of the dynamical state space. In this paper we present a sampling approach that takes advantage of this observation by concentrating precomputation around the states that users are most likely to encounter. We demonstrate our technique in a prototype self-refining game whose dynamics improve with play, ultimately providing realistically rendered, rich fluid dynamics in real time on a mobile device. Our results show that our analytics-driven training approach yields lower model error and fewer visual artifacts than a heuristic training strategy.
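At its simplest, "concentrating precomputation around the states that users are most likely to encounter" means allocating a fixed precomputation budget in proportion to observed visit frequencies rather than uniformly. The following is a minimal sketch of that allocation step only, with hypothetical analytics counts and budget; the paper's actual pipeline (state-space clustering, model fitting, refinement over time) is much richer than this.

```python
import numpy as np

# hypothetical player analytics: visit counts for 6 regions of state space
visits = np.array([940, 310, 120, 40, 8, 2])
budget = 100                        # total precomputed simulation samples

# allocate training samples proportionally to empirical visit frequency,
# instead of spreading them uniformly over the full state space
weights = visits / visits.sum()
alloc = np.floor(weights * budget).astype(int)
alloc[np.argmax(weights)] += budget - alloc.sum()   # remainder to the mode
```

The uniform alternative would spend most of the budget on regions players almost never reach; the frequency-weighted allocation is what lets model quality improve fastest exactly where it is seen.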