In this paper, we present new solutions for the interactive modeling of city layouts that combine the power of procedural modeling with the flexibility of manual modeling. Procedural modeling enables us to quickly generate large city layouts, while manual modeling allows us to hand-craft every aspect of a city. We introduce transformation and merging operators for both topology-preserving and topology-changing transformations based on graph cuts. In combination with a layering system, this allows intuitive manipulation of urban layouts using operations such as drag and drop, translation, and rotation. In contrast to previous work, these operations always generate valid, i.e., intersection-free, layouts. Furthermore, we introduce anchored assignments to ensure that modifications persist even if the whole urban layout is regenerated.
Nowadays, there is a strong trend towards rendering to higher‐resolution displays and at high frame rates. This development aims at delivering more detail and better accuracy, but it also comes at a significant cost. Although graphics cards continue to evolve with an ever‐increasing amount of computational power, the speed gain is easily counteracted by increasingly complex and sophisticated shading computations. For real‐time applications, the direct consequence is that image resolution and temporal resolution are often the first candidates to bow to performance constraints (e.g., although full HD is possible, the PS3 and Xbox often render at lower resolutions). In order to achieve high‐quality rendering at a lower cost, one can exploit temporal coherence (TC). The underlying observation is that a higher resolution and frame rate do not necessarily imply a much higher workload, but rather a larger amount of redundancy and a higher potential for amortizing rendering over several frames. In this survey, we investigate methods that make use of this principle and provide practical and theoretical advice on how to exploit TC for performance optimization. These methods not only allow incorporating more computationally intensive shading effects into many existing applications, but also offer exciting opportunities for extending high‐end graphics applications to lower‐spec consumer‐level hardware. To this end, we first introduce the notion and main concepts of TC, including an overview of historical methods. We then describe a general approach, image‐space reprojection, with several implementation algorithms that facilitate reusing shading information across adjacent frames. We also discuss data‐reuse quality and performance related to reprojection techniques. Finally, in the second half of this survey, we demonstrate various applications that exploit TC in real‐time rendering.
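The core idea of image-space reprojection can be illustrated with a minimal sketch: transform a shaded point into the previous frame's clip space and look up its old pixel location, re-shading only on a "cache miss." The function name, matrix layout (row-major view-projection), and resolution handling below are illustrative assumptions, not the survey's specific implementation.

```python
# Minimal sketch of image-space (reverse) reprojection. A surface point is
# mapped into the PREVIOUS frame's pixel grid so its cached shading can be
# reused; names and conventions here are illustrative assumptions.

def reproject(world_pos, prev_view_proj, width, height):
    """Map a world-space point into the previous frame's pixel grid.

    prev_view_proj: 4x4 row-major view-projection matrix of the last frame.
    Returns (x, y) pixel coordinates, or None if the point fell outside
    the previous frame (a cache miss that must be re-shaded).
    """
    x, y, z = world_pos
    # Homogeneous transform into the previous frame's clip space
    # (the depth row is omitted for brevity; a full version would also
    # compare reprojected depth to detect disocclusions).
    cx = sum(m * v for m, v in zip(prev_view_proj[0], (x, y, z, 1.0)))
    cy = sum(m * v for m, v in zip(prev_view_proj[1], (x, y, z, 1.0)))
    cw = sum(m * v for m, v in zip(prev_view_proj[3], (x, y, z, 1.0)))
    if cw <= 0.0:
        return None  # behind the previous camera
    ndc_x, ndc_y = cx / cw, cy / cw
    if not (-1.0 <= ndc_x <= 1.0 and -1.0 <= ndc_y <= 1.0):
        return None  # outside the previous view frustum
    # NDC [-1, 1] -> pixel coordinates [0, width/height].
    return (ndc_x * 0.5 + 0.5) * width, (ndc_y * 0.5 + 0.5) * height
```

In a real renderer this runs per pixel on the GPU, and a depth comparison against the cached depth buffer rejects disoccluded points before their shading is reused.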
Due to its versatility, speed, and robustness, shadow mapping has been a popular algorithm for fast hard shadow generation since its introduction in 1978, first for off-line film production and later increasingly so in real-time graphics. It is therefore not surprising that recent years have seen an explosion in the number of shadow-map-related publications. The last survey that encompassed shadow mapping approaches, but was mainly focused on soft shadow generation, dates back to 2003 [HLHS03], while the last survey on general shadow generation dates back to 1990 [WPF90]. No survey exists that describes all the advances made in hard shadow map generation in recent years. At the same time, shadow mapping is widely used in the game industry, in production, and in many other applications, and it is the basis of many soft shadow algorithms. Due to the abundance of articles on the topic, it has become very hard for practitioners and researchers to select a suitable shadow algorithm, and therefore many applications miss out on the latest high-quality shadow generation approaches. The goal of this survey is to rectify this situation by providing a detailed overview of the field. We provide a detailed analysis of shadow mapping errors and derive a comprehensive classification of the existing methods. We discuss the most influential algorithms, consider their benefits and shortcomings, and thereby provide readers with the means to choose the shadow algorithm best suited to their needs.
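The algorithm this survey builds on is the classic shadow-map depth test: render depth from the light, then shadow a point if something closer to the light was recorded at its texel. The sketch below assumes a simple orthographic light, a 2D depth grid, and an illustrative bias constant; it is a didactic sketch, not any particular paper's method.

```python
# Minimal sketch of the classic shadow-map depth test (in the spirit of
# Williams, 1978). The depth map holds, per texel, the depth of the surface
# closest to the light; names and the bias value are illustrative.

def in_shadow(depth_map, light_space_pos, bias=1e-3):
    """Return True if the point is occluded from the light.

    light_space_pos: (x, y, depth) with x, y already mapped to texel
    indices of depth_map (a 2D list of closest-to-light depths).
    The bias offsets the comparison to reduce self-shadowing ("acne").
    """
    x, y, depth = light_space_pos
    xi, yi = int(x), int(y)
    # Points outside the shadow map are treated as lit.
    if not (0 <= yi < len(depth_map) and 0 <= xi < len(depth_map[0])):
        return False
    # Shadowed if something closer to the light was recorded at this texel.
    return depth_map[yi][xi] < depth - bias

# Tiny 2x2 example depth map: an occluder at depth 0.5 covers texel (0, 0).
dm = [[0.5, 1.0],
      [1.0, 1.0]]
```

The errors this survey classifies (perspective aliasing, projection aliasing, incorrect self-shadowing) all arise from the finite texel grid and the depth comparison in this test.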
Background: Multi-institutional, international practice variation in pediatric anaphylaxis management by healthcare providers has not been reported. Objective: To characterize variability in epinephrine administration for pediatric anaphylaxis across institutions, including the frequency and types of medication errors. Methods: A prospective, observational study using a standardized in situ simulated anaphylaxis scenario was performed across 28 healthcare institutions in six countries. The on-duty healthcare team was called for a child (patient simulator) in anaphylaxis. Real medications and supplies were obtained from their actual locations. Demographic data about team members, institutional protocols for anaphylaxis, timing of epinephrine delivery, medication errors, and systems safety issues discovered during the simulation were collected. Results: Thirty-seven in situ simulations were performed. Anaphylaxis guidelines existed in 41% (15/37) of institutions. Teams used a cognitive aid for medication dosing in 41% (15/37) of simulations and for preparation in 32% (12/37). Epinephrine auto-injectors (EAIs) were not available in 54% (20/37) of institutions and were used in only 14% (5/37) of simulations. Median time to epinephrine administration was 95 seconds (IQR 77, 252) for EAI and 263 seconds (IQR 146, 407.5) for manually prepared epinephrine (p=.12). At least one medication error occurred in 68% (25/37) of simulations. Prior nursing experience with epinephrine administration for anaphylaxis was associated with fewer preparation (p=.04) and administration (p=.01) errors. Latent safety threats (LSTs) were reported by 30% (11/37) of institutions; more than half of these (6/11) involved a cognitive aid. Conclusion and Relevance: A multicenter, international study of simulated pediatric anaphylaxis reveals: 1) variation in management between institutions in usage of protocols,
We present a physically based real-time water simulation and rendering method that brings volumetric foam to the real-time domain, significantly increasing the realism of dynamic fluids. We do this by combining a particle-based fluid model that is capable of accounting for the formation of foam with a layered rendering approach that is able to account for the volumetric properties of water and foam. Foam formation is simulated through Weber number thresholding. For rendering, we approximate the resulting water and foam volumes by storing their respective boundary surfaces in depth maps. This allows us to calculate the attenuation of light rays that pass through these volumes very efficiently. We also introduce an adaptive curvature flow filter that produces consistent fluid surfaces from particles independent of the viewing distance.
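The foam-formation criterion mentioned above, Weber number thresholding, compares inertial forces to surface tension: We = ρv²L/σ, with a particle tagged as foam once We exceeds a threshold. The sketch below is a per-particle illustration; the threshold and the fluid constants in the test values are made-up examples, not the paper's tuned parameters.

```python
# Illustrative sketch of Weber-number thresholding for foam formation.
# The threshold value is an assumption for demonstration, not taken
# from the paper.

def weber_number(density, rel_speed, length_scale, surface_tension):
    """We = rho * v^2 * L / sigma: ratio of inertia to surface tension."""
    return density * rel_speed ** 2 * length_scale / surface_tension

def forms_foam(density, rel_speed, length_scale, surface_tension,
               threshold=10.0):
    # Above the threshold, inertial forces overcome surface tension
    # and the particle is tagged as foam.
    return weber_number(density, rel_speed, length_scale,
                        surface_tension) > threshold
```

In the particle simulation, rel_speed would be the relative velocity between neighboring fluid particles, so fast, turbulent regions cross the threshold while calm regions stay below it.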
Ninety-six children were admitted during a 9-year period to a pediatric level 1 trauma center for treatment of farm-related injuries. The age range was from 6 weeks to 17 years (median, 7.5 years; mean, 7.6 years; standard deviation, 4.4). Thirty-nine patients (40.6%) had an animal-related injury, including 36 children (37.5%) who had an injury associated with a horse. Amish children had an increased risk of horse-related injury when compared with non-Amish children (p=0.04; RR=2.09, 95% CI: 1.18