Exposure Fusion is a high dynamic range imaging technique that fuses a bracketed exposure sequence into a high-quality image, introduced in 2009 by Mertens et al. Contrary to most HDR imaging methods, exposure fusion does not construct an intermediate HDR image; it directly builds the final LDR one by seamlessly fusing the best regions of the input sequence, using the Laplacian pyramid. Since its publication, this method has received considerable attention, being both effective and efficient. We propose in this paper a precise description of the method and an analysis of its main limitation, an out-of-range artifact. Source Code: The source code, the code documentation, and the online demo are accessible at the web page of this article. It uses the Matlab implementation provided by T. Mertens, merely adapted for execution with Octave on the IPOL platform.
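To make the fusion scheme concrete, here is a minimal Python sketch of the Exposure Fusion idea, not the authors' reference Matlab code: per-pixel quality weights (contrast, saturation, well-exposedness) are normalized across the sequence, and the blending is done level by level between the Laplacian pyramids of the images and the Gaussian pyramids of the weights. Image range, pyramid depth, and the sigma parameter are assumptions for illustration.

```python
# Minimal sketch of Exposure Fusion (after Mertens et al.), assuming aligned
# float RGB images in [0, 1]. Parameter values are illustrative.
import numpy as np
import cv2

def weight_map(img, sigma=0.2):
    """Per-pixel quality: contrast * saturation * well-exposedness."""
    gray = cv2.cvtColor(img.astype(np.float32), cv2.COLOR_RGB2GRAY)
    contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
    saturation = img.std(axis=2)
    well_exposed = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)
    return contrast * saturation * well_exposed + 1e-12

def exposure_fusion(seq, levels=6):
    weights = np.stack([weight_map(im) for im in seq])
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    fused = None
    for im, w in zip(seq, weights):
        # Gaussian pyramids of the image and of its weight map
        gp_im, gp_w = [im.astype(np.float32)], [w.astype(np.float32)]
        for _ in range(levels - 1):
            gp_im.append(cv2.pyrDown(gp_im[-1]))
            gp_w.append(cv2.pyrDown(gp_w[-1]))
        # Laplacian pyramid of the image (band-pass levels + low-pass residual)
        lp = [gp_im[i] - cv2.pyrUp(gp_im[i + 1], dstsize=gp_im[i].shape[1::-1])
              for i in range(levels - 1)] + [gp_im[-1]]
        contrib = [l * g[..., None] for l, g in zip(lp, gp_w)]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
    # collapse the accumulated pyramid from coarse to fine
    out = fused[-1]
    for level in fused[-2::-1]:
        out = cv2.pyrUp(out, dstsize=level.shape[1::-1]) + level
    return np.clip(out, 0, 1)
```

Weighting the Laplacian coefficients rather than the pixels themselves is what makes the seams between differently exposed regions invisible in the result.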
Extended Exposure Fusion (EEF) is a high dynamic range imaging technique. It was recently proposed as an improvement of an earlier technique called Exposure Fusion (EF), which is widely used thanks to its very good results but suffers from two well-known artifacts: an out-of-range artifact and a low-frequency halo. The extended version removes both and delivers fused results with enhanced contrast everywhere in the image. We give in this paper a precise description, analysis and implementation of extended exposure fusion. We notably verify that the artifacts are removed and that the local contrast is improved compared to the original exposure fusion. Source Code: The Matlab/Octave source code, the code documentation, and the online demo are accessible at the web page of this article. Usage instructions are included in the README.txt file of the archive.
This paper proposes a cloud detection algorithm for Earth observation images acquired by pushbroom satellite imagers. The pushbroom technology induces an inter-band acquisition delay that produces a parallax effect for clouds. We propose a method that exploits this characteristic through the analysis of the inter-band disparity. Several other cloud-discriminating features are also defined, and all are merged to build a robust a contrario statistical decision. Experiments on scenes acquired by various pushbroom satellites such as Sentinel-2, RapidEye and WorldView-2 show the effectiveness of the proposed method. In particular, we demonstrate a balanced accuracy close to 98% for cloud and non-cloud classification on Sentinel-2 images.
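The parallax cue can be illustrated with a small sketch, which is not the paper's code: estimate the local shift between two spectral bands by block matching, since an elevated object such as a cloud appears displaced between bands acquired a fraction of a second apart. Window sizes and the search range are assumptions; the actual method combines this cue with others in an a contrario decision.

```python
# Illustrative inter-band disparity by exhaustive block matching, assuming
# the (y, x) neighborhood lies within both band images.
import numpy as np

def local_disparity(band_a, band_b, y, x, half=8, search=4):
    """Integer shift (dy, dx) maximizing normalized correlation around (y, x)."""
    ref = band_a[y-half:y+half+1, x-half:x+half+1].astype(np.float64)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = band_b[y+dy-half:y+dy+half+1,
                         x+dx-half:x+dx+half+1].astype(np.float64)
            win = (win - win.mean()) / (win.std() + 1e-9)
            score = (ref * win).mean()  # normalized cross-correlation
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift  # a large shift hints at an elevated (cloud) pixel
```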
Assessing ground visibility is a crucial step in automatic satellite image analysis. Nevertheless, several recent Earth observation satellite constellations lack specially designed spectral bands and use a frame camera, which precludes spectrum-based and parallax-based cloud detection methods. An alternative approach is to detect the parts of each image where the ground is visible. This can be done by locally comparing pairs of registered images in a temporal series: matching regions are necessarily cloud-free. Indeed, the ground has persistent patterns that can be observed repeatedly in the time series, while the appearance of clouds changes at each date. To reliably detect the “visible” ground, we propose an a contrario local image matching method coupled with an efficient greedy algorithm.
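The matching criterion can be sketched as follows, under assumed notation and thresholds that are not the paper's: two registered images of the same area match locally when a normalized patch distance is small, and a matching patch is declared cloud-free because clouds do not repeat across dates.

```python
# Hedged sketch of the temporal-matching idea; half-size and tau are
# illustrative, and the real method uses an a contrario test, not a
# fixed threshold.
import numpy as np

def patch_matches(img_t0, img_t1, y, x, half=4, tau=0.1):
    p0 = img_t0[y-half:y+half+1, x-half:x+half+1].astype(np.float64)
    p1 = img_t1[y-half:y+half+1, x-half:x+half+1].astype(np.float64)
    # normalize to tolerate affine radiometric changes between dates
    p0 = (p0 - p0.mean()) / (p0.std() + 1e-9)
    p1 = (p1 - p1.mean()) / (p1.std() + 1e-9)
    return np.mean((p0 - p1) ** 2) < tau

def visibility_at(series, y, x):
    """(y, x) is 'visible ground' at date t if it matches any other date."""
    n = len(series)
    return [any(patch_matches(series[t], series[s], y, x)
                for s in range(n) if s != t) for t in range(n)]
```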
Simulated Exposure Fusion (SEF) is a single-image contrast enhancement method. It is built upon a high dynamic range imaging technique called Exposure Fusion (EF), introduced in 2007 and widely used since then, which fuses a bracketed exposure sequence into a high-quality image. Simulated Exposure Fusion extends the initial method to the case where only one image is available and delivers an image with enhanced contrast. We propose in this paper an implementation of this method, along with its precise description and analysis. Its results are compared to state-of-the-art enhancement algorithms and appear to be artifact-free, even in extreme enhancement conditions. Furthermore, they inherit EF's celebrated natural aspect. Source Code: The Matlab/Octave source code, the code documentation, and the online demo are accessible at the web page of this article. Usage instructions are included in the README.txt file of the archive.
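The single-image idea can be sketched in a few lines: synthesize a bracketed sequence from the one available image and feed it to exposure fusion. The simple gain curves below are an assumption for illustration only; the actual SEF remapping functions are more elaborate.

```python
# Hedged sketch: simulate a bracketed sequence from a single image, then
# fuse it, e.g. with the exposure_fusion function sketched above.
import numpy as np

def simulated_sequence(img, gains=(0.5, 1.0, 2.0, 4.0)):
    """img: float RGB in [0, 1]; returns a list of re-exposed versions."""
    return [np.clip(img * g, 0.0, 1.0) for g in gains]

# Usage (illustrative):
# enhanced = exposure_fusion(simulated_sequence(dark_image))
```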
We propose a method for the relative radiometric normalization of long, multi-sensor image time series. This makes it possible to increase the effective revisit frequency under comparable radiometric conditions. Although relative radiometric normalization is a well-studied problem in the remote sensing community, the availability of an increasing number of images gives rise to new problems. For example, given long series spanning several years, finding features that persist through the whole period becomes arduous. Instead, we propose in this paper to use automatically detected reference images, chosen by maximizing a quality metric. For each image, two affine correction models are robustly estimated with random sample consensus (RANSAC), one for each of the two closest reference images; the final correction is obtained by linear interpolation between them. For each pair of source and reference images, pseudo-invariant features are obtained using a similarity measure that is invariant to radiometric changes. A final tone-mapping step outputs the images in the standard 8-bit range. The method is illustrated by fusing time series of Sentinel-2 images at correction levels 1C and 2A with Landsat-8 images. By using only the atmospherically corrected Sentinel-2 L2A images as anchors, the full output series inherits this atmospheric correction.
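The correction step can be illustrated with a sketch whose names and tolerances are assumptions, not the authors' code: a robust affine model v → a·v + b is fit between pseudo-invariant samples of a source image and a reference image, and the models obtained from the two temporally closest references are blended by linear interpolation.

```python
# Hedged sketch of the RANSAC affine radiometric correction and its
# temporal interpolation; iteration count and tolerance are illustrative.
import numpy as np

def ransac_affine(src, ref, iters=500, tol=0.02, seed=0):
    """src, ref: 1-D arrays of matched pseudo-invariant samples in [0, 1]."""
    rng = np.random.default_rng(seed)
    best_count, best_model = 0, (1.0, 0.0)
    for _ in range(iters):
        i, j = rng.choice(len(src), size=2, replace=False)
        if src[i] == src[j]:
            continue
        a = (ref[j] - ref[i]) / (src[j] - src[i])
        b = ref[i] - a * src[i]
        inliers = np.abs(a * src + b - ref) < tol
        if inliers.sum() > best_count:
            # refine by least squares on the inlier set
            A = np.vstack([src[inliers], np.ones(inliers.sum())]).T
            a, b = np.linalg.lstsq(A, ref[inliers], rcond=None)[0]
            best_count, best_model = inliers.sum(), (a, b)
    return best_model

def interpolated_correction(img, model_prev, model_next, alpha):
    """alpha in [0, 1]: temporal position between the two reference dates."""
    a = (1 - alpha) * model_prev[0] + alpha * model_next[0]
    b = (1 - alpha) * model_prev[1] + alpha * model_next[1]
    return np.clip(a * img + b, 0.0, 1.0)
```

Interpolating between the two nearest references keeps the correction smooth along the series instead of snapping each image to a single anchor.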