One of the most successful approaches to modern high-quality HDR video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have treated debayering, denoising, resampling (alignment) and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach that performs HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting local polynomial approximations to the observed sensor data. The method is easy to implement and allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in terms of flexibility and reconstruction quality.
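The reconstruction idea described above lends itself to a compact illustration. The following is a minimal sketch, assuming that the raw samples from all sensors have already been mapped into a common coordinate frame, scaled to relative radiance by their exposures, and assigned per-sample variances by the camera noise model; the function name and parameters are hypothetical, and this is not the paper's actual CUDA implementation.

```python
# A minimal sketch (not the authors' code) of spatially adaptive HDR
# reconstruction by fitting a local polynomial to raw sensor samples.
# Assumptions: xs, ys are sample positions aligned across sensors, values are
# samples converted to relative radiance (raw value / exposure), and variances
# come from a signal-dependent camera noise model.
import numpy as np

def lpa_radiance_estimate(x0, y0, xs, ys, values, variances, h=1.5):
    """Estimate the radiance at output position (x0, y0)."""
    dx, dy = xs - x0, ys - y0
    # Spatial weights (Gaussian window) combined with inverse-variance weights,
    # so noisy or near-saturated samples contribute less to the fit.
    w = np.exp(-(dx**2 + dy**2) / (2.0 * h**2)) / np.maximum(variances, 1e-12)

    # First-order (linear) polynomial basis: f(x, y) ~ c0 + c1*dx + c2*dy.
    A = np.stack([np.ones_like(dx), dx, dy], axis=1)

    # Weighted least-squares fit; the constant term c0 is the radiance
    # estimate at (x0, y0).
    sw = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(sw[:, None] * A, sw * values, rcond=None)
    return coeffs[0]
```

Combining a spatial window with inverse-variance weights is what makes such a fit spatially adaptive: samples that the noise model predicts to be unreliable contribute little, and the constant term of the fitted polynomial directly gives the radiance estimate at the chosen output position and resolution.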
The natural world presents our visual system with a wide, ever-changing range of colors and intensities. Existing video cameras can capture only a limited part of this range with sufficient resolution. High-dynamic-range (HDR) images can represent most of the real world's luminances, but until now capturing HDR images with a linear response function has been limited to static scenes. This demonstration showcases a novel complete HDR video solution. The system includes a unique HDR video camera capable of capturing a full HDTV video stream with a dynamic range of 20 f-stops at a resolution of 1920 x 1080 pixels and 30 frames per second; an encoding method for coping with the huge amount of data generated by the camera (achieving a compression ratio of up to 100:1 and real-time decompression); and a new 22-inch desktop HDR display for directly visualizing the dynamic HDR content. This HDR video solution should be of great interest to cinematographers. The camera accurately captures real-world lighting, from lions moving in deep shadow on the bright African veldt to surgery with its vast range of lighting, from dark body cavities to bright operating-theater lights. In addition, HDR video content can be incorporated into dynamic visualization systems, allowing virtual objects to be viewed under dynamic real-world settings. For example, rather than taking a physical mock-up of a proposed new car to a remote location to produce advertising material, a camera crew can take the HDR video system to the location and capture the desired lighting and environment, including any moving objects (such as clouds or people), then combine the video material with the car CAD model and paint BRDFs to produce highly compelling imagery.
We present an overview of our recently developed systems pipeline for capture, reconstruction, modeling and rendering of real-world scenes based on state-of-the-art high dynamic range video (HDRV). The reconstructed scene representation allows for photo-realistic image-based lighting (IBL) in complex environments with strong spatial variations in the illumination. The pipeline comprises the following essential steps:

1.) Capture - The scene capture is based on a 4 Mpixel global shutter HDRV camera with a dynamic range of more than 24 f-stops at 30 fps. The HDR output stream is stored as individual uncompressed frames for maximum flexibility. A scene is usually captured using a combination of panoramic light probe sequences [1] and sequences with a smaller field of view to maximize the resolution at regions of special interest in the scene. The panoramic sequences ensure full angular coverage at each position and guarantee that the information required for IBL is captured. The position and orientation of the camera are tracked during capture.

2.) Scene recovery - Taking one or more HDRV sequences as input, a geometric proxy model of the scene is built using a semi-automatic approach. First, traditional computer vision algorithms such as structure from motion [2] and Manhattan-world stereo [3] are used. If necessary, the recovered model is then modified using an interaction scheme based on visualizations of a volumetric representation of the scene radiance computed from the input HDRV sequence. The HDR nature of this volume also enables robust extraction of direct light sources and other high-intensity regions in the scene.

3.) Radiance processing - Once the scene proxy geometry has been recovered, the radiance data captured in the HDRV sequences are re-projected onto the surfaces and the recovered light sources (see the sketch below). Since most surface points have been imaged from a large number of directions, it is possible to reconstruct view-dependent texture maps at the proxy geometries. These 4D data sets describe a combination of detailed geometry that has not been recovered and the radiance reflected from the underlying real surfaces. The view-dependent textures are then processed and compactly stored in an adaptive data structure.

4.) Rendering - Once the geometric and radiometric scene information has been recovered, it is possible to place virtual objects into the real scene and create photo-realistic renderings as illustrated above. The extracted light sources enable efficient sampling and rendering times that are fully comparable to those of traditional virtual computer graphics light sources. No previously described method is capable of capturing and reproducing the angular and spatial variation in the scene illumination in comparable detail.

We believe that the rapid development of high-quality HDRV systems will soon have a large impact on both computer vision and graphics. Following this trend, we are developing theory and algorithms for efficient processing of HDRV sequences and using the abundance of radiance data that is going to...
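As a concrete illustration of the radiance re-projection in step 3, the sketch below collects view-dependent radiance samples for a single point on the proxy geometry. It is a minimal sketch under assumed conventions (per-frame world-to-camera poses R, t and intrinsics K), not the pipeline's actual code, and it omits visibility testing and interpolation.

```python
# A minimal sketch of re-projecting HDR radiance samples onto recovered proxy
# geometry to build a view-dependent texture. The frame layout (keys 'image',
# 'R', 't', 'K') is an assumption made for this illustration.
import numpy as np

def reproject_radiance(surface_point, frames):
    """Collect (view_direction, radiance) samples for one proxy surface point.

    surface_point -- 3D point on the proxy geometry, in world coordinates
    frames        -- list of dicts: 'image' (HDR float array, HxWx3),
                     'R', 't' (world-to-camera pose), 'K' (3x3 intrinsics)
    """
    samples = []
    for f in frames:
        # Transform the surface point into the camera frame.
        p_cam = f['R'] @ surface_point + f['t']
        if p_cam[2] <= 0:          # point is behind the camera
            continue
        # Perspective projection into pixel coordinates.
        uvw = f['K'] @ p_cam
        u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
        h, w, _ = f['image'].shape
        if not (0 <= u < w and 0 <= v < h):
            continue
        # Nearest-neighbour radiance lookup; a real pipeline would also test
        # visibility against the proxy geometry and interpolate.
        radiance = f['image'][int(v), int(u)]
        camera_center = -f['R'].T @ f['t']
        view_dir = camera_center - surface_point
        view_dir /= np.linalg.norm(view_dir)
        samples.append((view_dir, radiance))
    return samples
```

The resulting per-point sets of (view direction, radiance) pairs are what a view-dependent texture stores; compressing and indexing them efficiently is the role of the adaptive data structure mentioned in step 3.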
Stray light is the part of an image that is formed by misdirected light: an ideal optic would map a point in the scene onto a point in the image, but with real optics some of the light is misdirected. This is due to effects such as scattering at edges, Fresnel reflections at optical surfaces, scattering at parts of the housing, scattering from dust and imperfections on and inside the lenses, and other causes. These effects lead to errors in colour measurements made with spectral radiometers and other systems such as scanners. Stray light also limits the dynamic range that can be achieved with high-dynamic-range (HDR) techniques and can lead to the rejection of cameras on quality grounds. It is therefore of interest to measure, quantify and correct these effects. Our work aims at measuring the stray light point spread function (stray light PSF) of a system composed of a lens and an imaging sensor. In this paper we present a framework for the evaluation of PSF models that can be used for the correction of stray light. We investigate whether and how our evaluation framework can point out errors in these models and how these errors influence stray light correction.
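To make the role of a PSF model concrete, the following is a minimal sketch of a common first-order stray light correction scheme; it is an assumption about how such a model could be applied, not the evaluation framework or correction method of the paper itself.

```python
# A minimal sketch of first-order stray light correction using a measured
# stray light PSF. It follows the common model that the captured image equals
# the ideal image plus the ideal image convolved with a stray light kernel,
# and approximates the unknown ideal image by the captured one.
import numpy as np
from scipy.signal import fftconvolve

def correct_stray_light(captured, psf_stray):
    """Subtract an estimate of the stray light contribution from an image.

    captured  -- captured image (2D float array, linear intensity)
    psf_stray -- stray light kernel: the PSF with its central (direct) peak
                 removed and normalised to the stray light fraction
    """
    # First-order approximation: predict the stray light distribution by
    # convolving the captured image with the stray light kernel.
    stray_estimate = fftconvolve(captured, psf_stray, mode='same')
    corrected = captured - stray_estimate
    return np.clip(corrected, 0.0, None)
```

Errors in the PSF model propagate directly into the subtracted stray light estimate, which is why an evaluation framework for such models matters for the achievable correction quality.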