We present a real-time rendering scheme that reuses shading samples from earlier time frames to achieve practical antialiasing of procedural shaders. Using a reprojection strategy, we maintain several sets of shading estimates at subpixel precision, and incrementally update these such that for most pixels only one new shaded sample is evaluated per frame. The key difficulty is to prevent accumulated blurring during successive reprojections. We present a theoretical analysis of the blur introduced by reprojection methods. Based on this analysis, we introduce a nonuniform spatial filter, an adaptive recursive temporal filter, and a principled scheme for locally estimating the spatial blur. Our scheme is appropriate for antialiasing shading attributes that vary slowly over time. It works in a single rendering pass on commodity graphics hardware, and offers results that surpass 4×4 stratified supersampling in quality, at a fraction of the cost.
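The adaptive recursive temporal filter described above can be pictured as an exponentially weighted blend of the reprojected history with the newly shaded sample, where the blend weight rises with the locally estimated blur. The function names and the linear weight schedule below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def temporal_accumulate(history, new_sample, alpha):
    """Recursive temporal filter: blend the reprojected history with the
    freshly shaded sample. Small alpha keeps more history (less noise,
    more potential blur); large alpha favors the new sample."""
    return (1.0 - alpha) * history + alpha * new_sample

def adaptive_alpha(blur_estimate, alpha_min=0.1, alpha_max=1.0):
    """Hypothetical adaptive weight: where the locally estimated
    reprojection blur is high, raise alpha so blurred history is
    refreshed faster. blur_estimate is assumed to lie in [0, 1]."""
    return alpha_min + (alpha_max - alpha_min) * np.clip(blur_estimate, 0.0, 1.0)

# One frame of accumulation for a single (grayscale) pixel.
history = 0.5
sample = 1.0
alpha = adaptive_alpha(0.25)
history = temporal_accumulate(history, sample, alpha)
```

In a full renderer the same blend runs per pixel on reprojected sample sets; the scalar version shows only the recurrence.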
Figure 1: The design and fabrication by example pipeline: casual users design new models by composing parts from a database of fabricable templates. The system assists the users in this task by automatically aligning parts and assigning appropriate connectors. The output of the system is a detailed model that includes all components necessary for fabrication.

Abstract: We propose a data-driven method for designing 3D models that can be fabricated. First, our approach converts a collection of expert-created designs to a dataset of parameterized design templates that includes all information necessary for fabrication. The templates are then used in an interactive design system to create new fabricable models in a design-by-example manner. A simple interface allows novice users to choose template parts from the database, change their parameters, and combine them to create new models. Using the information in the template database, the system can automatically position, align, and connect parts: the system accomplishes this by adjusting parameters, adding appropriate constraints, and assigning connectors. This process ensures that the created models can be fabricated, saves the user from many tedious but necessary tasks, and makes it possible for non-experts to design and create actual physical objects. To demonstrate our data-driven method, we present several examples of complex functional objects that we designed and manufactured using our system.
We propose a workflow for spectral reproduction of paintings, which captures a painting's spectral color, invariant to illumination, and reproduces it using multi-material 3D printing. We take advantage of current 3D printers' ability to combine highly concentrated inks with a large number of layers to expand the spectral gamut of a set of inks. We use a data-driven method to both predict the spectrum of a printed ink stack and optimize for the stack layout that best matches a target spectrum. This bidirectional mapping is modeled using a pair of neural networks, which are optimized through a problem-specific multi-objective loss function. Our loss function helps find an ink layout that balances spectral reproduction and colorimetric accuracy under a multitude of illuminants. In addition, we introduce a novel spectral vector error diffusion algorithm based on combining color contoning and halftoning, which simultaneously solves the layout discretization and color quantization problems, accurately and efficiently. Our workflow outperforms the state-of-the-art models for spectral prediction and layout optimization. We demonstrate reproduction of a number of real paintings and historically important pigments using our prototype implementation that uses 10 custom inks with varying spectra and a resin-based 3D printer.
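The vector error diffusion idea can be illustrated with a minimal Floyd–Steinberg-style sketch that operates directly on spectra: each pixel's spectrum is snapped to the nearest printable spectrum in a small palette, and the spectral residual is pushed to unprocessed neighbors. The paper's algorithm additionally couples this with contoning and the learned layout optimizer, so this shows only the halftoning core under simplified assumptions.

```python
import numpy as np

def spectral_error_diffusion(image, palette):
    """Vector error diffusion in spectral space (Floyd-Steinberg weights).
    image: float array of shape (H, W, B), B spectral bands.
    palette: array of shape (K, B) holding K printable spectra.
    Returns an (H, W) array of palette indices."""
    img = image.astype(np.float64).copy()
    h, w, _ = img.shape
    out = np.zeros((h, w), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            s = img[y, x]
            # Nearest printable spectrum (Euclidean distance over bands).
            k = int(np.argmin(((palette - s) ** 2).sum(axis=1)))
            out[y, x] = k
            err = s - palette[k]
            # Diffuse the spectral residual to unvisited neighbors.
            if x + 1 < w:
                img[y, x + 1] += err * (7 / 16)
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * (3 / 16)
                img[y + 1, x] += err * (5 / 16)
                if x + 1 < w:
                    img[y + 1, x + 1] += err * (1 / 16)
    return out
```

On a uniform mid-gray spectral image with a two-entry palette, the output alternates indices so their average reproduces the target spectrum.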
We present a framework based on Genetic Programming (GP) for automatically simplifying procedural shaders. Our approach computes a series of increasingly simplified shaders that expose the inherent trade-off between speed and accuracy. Compared to existing automatic methods for pixel shader simplification [Olano et al. 2003; Pellacini 2005], our approach considers a wider space of code transformations and produces faster and more faithful results. We further demonstrate how our cost function can be rapidly evaluated using graphics hardware, which allows tens of thousands of shader variants to be considered during the optimization process. Our approach is also applicable to multi-pass shaders and perceptually based error metrics.
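The speed/accuracy trade-off exposed by the GP search amounts to keeping the non-dominated (Pareto-optimal) shader variants among all candidates evaluated. The cost/error numbers below are hypothetical, and the dominance test is a generic sketch, not the paper's implementation.

```python
def pareto_front(variants):
    """Given candidate shader variants as (cost, error) pairs, keep those
    not dominated by any other variant. A variant dominates another if it
    is no worse in both cost and error and strictly better in at least one."""
    front = []
    for i, (c, e) in enumerate(variants):
        dominated = any(
            (c2 <= c and e2 <= e) and (c2 < c or e2 < e)
            for j, (c2, e2) in enumerate(variants) if j != i
        )
        if not dominated:
            front.append((c, e))
    return sorted(front)

# Hypothetical variants: (render cost in ms, image error).
variants = [(10.0, 0.01), (6.0, 0.03), (6.5, 0.05), (3.0, 0.10), (9.0, 0.02)]
front = pareto_front(variants)
```

Here (6.5, 0.05) is dropped because (6.0, 0.03) is both cheaper and more accurate; the remaining variants form the simplification series offered to the user.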
Figure 1: Our multi-material 3D printer (left) and a set of fabricated materials and objects (right).

Abstract: We have developed a multi-material 3D printing platform that is high-resolution, low-cost, and extensible. The key part of our platform is an integrated machine vision system. This system allows for self-calibration of printheads, 3D scanning, and a closed-feedback loop to enable print corrections. The integration of machine vision with 3D printing simplifies the overall platform design and enables new applications such as 3D printing over auxiliary parts. Furthermore, our platform dramatically expands the range of parts that can be 3D printed by simultaneously supporting up to 10 different materials that can interact optically and mechanically. The platform achieves a resolution of at least 40 µm by utilizing piezoelectric inkjet printheads adapted for 3D printing. The hardware is low cost (less than $7,000) since it is built exclusively from off-the-shelf components. The architecture is extensible and modular: adding, removing, and exchanging printing modules can be done quickly. We provide a detailed analysis of the system's performance. We also demonstrate a variety of fabricated multi-material objects.
Rapid, accurate, and low-cost detection of SARS-CoV-2 is crucial to contain the transmission of COVID-19. Here, we present a cost-effective smartphone-based device coupled with machine learning-driven software that evaluates the fluorescence signals of the CRISPR diagnostic of SARS-CoV-2. The device consists of a three-dimensional (3D)-printed housing and low-cost optic components that allow excitation of fluorescent reporters and selective transmission of the fluorescence emission to a smartphone. Custom software equipped with a binary classification model has been developed to quantify the acquired fluorescence images and determine the presence of the virus. Our detection system has a limit of detection (LoD) of 6.25 RNA copies/μL on laboratory samples and produces a test accuracy of 95% and sensitivity of 97% on 96 nasopharyngeal swab samples with transmissible viral loads. Our quantitative fluorescence score shows a strong correlation with the quantitative reverse transcription polymerase chain reaction (RT-qPCR) Ct values, offering valuable information about the viral load and therefore presenting an important advantage over nonquantitative readouts.
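For reference, the reported test accuracy and sensitivity follow the standard confusion-matrix definitions. The counts below are hypothetical, chosen only to illustrate the arithmetic, and are not reconstructed from the study's data.

```python
def accuracy(tp, fp, tn, fn):
    """Fraction of all samples classified correctly."""
    return (tp + tn) / (tp + fp + tn + fn)

def sensitivity(tp, fn):
    """Fraction of truly positive samples detected (recall)."""
    return tp / (tp + fn)

# Hypothetical confusion counts for a 96-sample panel.
tp, fp, tn, fn = 48, 2, 44, 2
acc = accuracy(tp, fp, tn, fn)
sens = sensitivity(tp, fn)
```

With these illustrative counts, accuracy is 92/96 ≈ 0.958 and sensitivity is 48/50 = 0.96.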
materials (solids, thin-films, and liquids) and functions into a single seamless autonomous sensory composite by avoiding the use of a premade substrate, and 3D-printing all required materials without any external processing. We present a monolithic integration of strain-sensitive elements with an organic electrochemical transistor (OECT)-based amplifier and an electrochromic element powered at 1.5 V DC, fabricated using a low-temperature additive manufacturing approach (the additive manufacturing system is shown in Figure 1B; the integration scheme of the composite is in Figure 1C and a photograph is shown in Figure 1D). There are two challenges in material interface engineering that currently limit the creation of functional 3D composites using a single fabrication method. First, local control of surface energy and texture at material interfaces is essential to assemble multiple materials, specifically for confining solvent inks on solid layers with high fidelity. [22] Similarly, controlling the droplet deployment order is essential (see the Experimental Section and Figure S1, Supporting Information). Second, reducing the operating voltage in active electrical signal processors is strongly tied to achieving defect-free, extremely uniform, thin gate dielectrics in field-effect transistors or encapsulating electrolytes in OECTs. We solve this by using a drop-on-demand multimaterial inkjet-based 3D printing platform (see Figure 1B) that is capable of printing UV curable materials with a ≈35 µm lateral resolution [23] while simultaneously printing solvent-evaporated films and encapsulated liquids (images of droplets in flight and printed lines are in Figure S2, Supporting Information). Equipped with light-emitting diode (LED) arrays for UV curing, the system can digitally assemble UV curable polymers of varying elastic moduli and surface energies.
Droplets of different materials are deployed simultaneously to assemble multiple materials in the same layer, where the resolution is the size of each droplet. When solvent-based inks are printed, a compact ceramic heater on the printhead carriage enables rapid local forced-convection heating of the uppermost printed layers. Printed liquids can be confined by sidewalls and also be completely encapsulated inside UV curable matrices. The structure of the final composite to be printed is represented as voxels (volume elements), where the choice of the material for each voxel is made based on the required function. In total, six different materials are printed here (ink compositions are in Table S1, Supporting Information); full details of the materials and the printing process are in the Experimental Section and the Supporting Information. The mechanical matrix of the autonomous sensory composite is made from a basis of two UV curable acrylate polymer materials of varying mechanical stiffness: a rigid formulation (elastic modulus ≈637.76 MPa) and an elastic material (elastic modulus ≈678.5 kPa), which span three orders of magnitude in stiffness (Figure S3, Supporting Information). The active reg...
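The voxel representation can be pictured as a dense grid of material IDs assigned by function. The materials, IDs, and assignment rule below are illustrative assumptions for the sketch, not the authors' data structure.

```python
import numpy as np

# Hypothetical material IDs for the sketch.
RIGID, ELASTIC, CONDUCTIVE = 0, 1, 2

def assign_materials(shape, sensor_mask):
    """Toy voxel material assignment: the composite is a dense voxel grid
    where each voxel stores a material ID chosen by its function. Here a
    boolean mask marks strain-sensing voxels (conductive ink), a rigid
    shell caps the top and bottom layers, and everything else defaults
    to the elastic matrix."""
    grid = np.full(shape, ELASTIC, dtype=np.int8)
    grid[0, :, :] = grid[-1, :, :] = RIGID  # rigid top/bottom shells
    grid[sensor_mask] = CONDUCTIVE          # function overrides matrix
    return grid
```

A slicer would then rasterize each z-layer of this grid into per-material droplet deployment maps for the printheads.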
Multi-view autostereoscopic displays provide an immersive, glasses-free 3D viewing experience, but they require correctly filtered content from multiple viewpoints. This, however, cannot be easily obtained with current stereoscopic production pipelines. We provide a practical solution that takes a stereoscopic video as an input and converts it to multi-view and filtered video streams that can be used to drive multi-view autostereoscopic displays. The method combines phase-based video magnification and inter-perspective antialiasing into a single filtering process. The whole algorithm is simple and can be efficiently implemented on current GPUs to yield near real-time performance. Furthermore, the ability to retarget disparity is naturally supported. Our method is robust and works well for challenging video scenes with defocus blur, motion blur, transparent materials, and specularities. We show that our results are superior when compared to the state-of-the-art depth-based rendering methods. Finally, we showcase the method in the context of a real-time 3D videoconferencing system that requires only two cameras.