The advent of direct electron detectors has enabled the routine use of single-particle cryo-electron microscopy (EM) approaches to determine structures of a variety of protein complexes at near-atomic resolution. Here, we report the development of methods to account for local variations in defocus and beam-induced drift, and the implementation of a data-driven dose compensation scheme that significantly improves the extraction of high-resolution information recorded during exposure of the specimen to the electron beam. These advances enable determination of a cryo-EM density map for β-galactosidase bound to the inhibitor phenylethyl β-D-thiogalactopyranoside where the ordered regions are resolved at a level of detail seen in X-ray maps at ∼ 1.5 Å resolution. Using this density map in conjunction with constrained molecular dynamics simulations provides a measure of the local flexibility of the non-covalently bound inhibitor and offers further opportunities for structure-guided inhibitor design.
Abstract-The performance of multi-image alignment, i.e., bringing different images into one coordinate system, is critical in many applications with varied signal-to-noise ratio (SNR) conditions. A great deal of effort has been invested in developing methods to solve this problem. Several important questions thus arise, including: What are the fundamental limits of multi-image alignment performance? Does having access to more images improve the alignment? Theoretical bounds provide a fundamental benchmark for comparing methods and can help establish whether improvements can still be made. In this work, we tackle the problem of finding the performance limits in image registration when multiple shifted and noisy observations are available. We derive and analyze the Cramér-Rao and Ziv-Zakai lower bounds under different statistical models for the underlying image. The accuracy of the derived bounds is experimentally assessed through a comparison to the maximum likelihood estimator. We show the existence of different behavior zones depending on the difficulty of the problem, as given by the SNR conditions of the input images. We find that increasing the number of images is only useful below a certain SNR threshold, above which pairwise maximum likelihood estimation proves to be optimal. The analysis presented here brings further insight into the fundamental limitations of the multi-image alignment problem.

Index Terms-Multi-image alignment, performance bounds, Cramér-Rao bound, Ziv-Zakai bound, Bayesian Cramér-Rao, maximum likelihood estimator.
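As a concrete illustration of the kind of bound the abstract refers to, the sketch below computes the Cramér-Rao lower bound for a 1-D shift estimate under additive white Gaussian noise (the setup, signal, and function names are illustrative choices of ours, not taken from the paper): the Fisher information is the energy of the signal derivative divided by the noise variance, and it scales linearly with the number of independent observations.

```python
import numpy as np

def shift_crb(signal, sigma2, n_images=1):
    """CRB on a translation estimate of a known 1-D signal observed in
    additive white Gaussian noise of variance sigma2, from n_images
    independent shifted copies (toy model, sample-unit derivative)."""
    d = np.gradient(signal)                  # derivative w.r.t. the shift
    fisher = n_images * np.sum(d ** 2) / sigma2
    return 1.0 / fisher

t = np.linspace(0, 2 * np.pi, 512)
x = np.sin(t)
b1 = shift_crb(x, sigma2=0.1, n_images=1)
b8 = shift_crb(x, sigma2=0.1, n_images=8)
print(b1, b8)   # with 8 images the bound drops by a factor of 8
```

In this simplified Gaussian model the bound always improves with more images; the paper's point is that for an *unknown* image at low SNR the achievable performance departs from this idealized behavior.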
Abstract. Since the seminal work of Mann and Picard in 1994, the standard way to build high dynamic range (hdr) images from regular cameras has been to combine a reduced number of photographs captured with different exposure times. The algorithms proposed in the literature differ in the strategy used to combine these frames. Several experimental studies comparing their performance have been reported, showing in particular that maximum likelihood estimation yields the best results in terms of mean squared error. However, no theoretical study aiming at establishing the performance limits of the hdr estimation problem has been conducted. Another common aspect of all hdr estimation approaches is that they discard saturated values. In this paper, we address these two issues. More precisely, we derive theoretical bounds for the hdr estimation problem, and we show that, even with a small number of photographs, the maximum likelihood estimator performs extremely close to these bounds. As a second contribution, we propose a general strategy to integrate the information provided by saturated pixels into the estimation process, hence improving the estimation results. Finally, we analyze the sensitivity of the hdr estimation process to camera parameters, and we show that small errors in the camera calibration process may severely degrade the estimation results.

Key words. high dynamic range imaging, irradiance estimation, exposure bracketing, multi-exposure fusion, camera acquisition model, noise modeling, censored data, exposure saturation, Cramér-Rao lower bound.

1. Introduction. The human eye has the ability to capture scenes of very high dynamic range, retaining details in both dark and bright regions. This is not the case for current standard digital cameras. Indeed, the limited capacity of the sensor cells makes it impossible to record the irradiance from very bright regions for long exposures. Pixels saturate, incurring information loss in the form of censored data.
On the other hand, if the exposure time is reduced in order to avoid saturation, very few photons will be captured in the dark regions and the result will be masked by the acquisition noise. Therefore, the result of a single-shot picture of a high dynamic range scene, taken with a regular digital camera, contains pixels which are either overexposed or too noisy.

High dynamic range imaging (hdr for short) is the field of imaging that seeks to accurately capture and represent scenes with the largest possible irradiance range. The representation problem of how to display an hdr image or irradiance map in a lower-range image (for computer monitors or photographic prints) while retaining localized contrast, known as tone mapping, will not be addressed here. Due to technological and physical limitations of current optical sensors, nowadays the most common way to reach high irradiance dynamic ranges is by combining multiple low dynamic range photographs, acquired with different exposure times τ1, τ2, ..., τT. Indeed, for a given irradiance C and expos...
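A minimal sketch of the classical fusion estimator this line of work builds on, under a simplified model z_i = τ_i·C + n_i with Gaussian noise and with saturated samples discarded (the paper's actual noise model is richer and, as a contribution, also exploits saturated pixels; all names here are illustrative):

```python
import numpy as np

def mle_irradiance(z, tau, sigma2, z_sat):
    """Maximum-likelihood irradiance at one pixel from T exposures,
    censoring saturated samples (simplified affine-Gaussian model)."""
    z, tau, sigma2 = map(np.asarray, (z, tau, sigma2))
    keep = z < z_sat                       # discard censored (saturated) data
    w = tau[keep] ** 2 / sigma2[keep]      # per-sample Fisher weights
    return np.sum(w * z[keep] / tau[keep]) / np.sum(w)

# toy example: true irradiance C = 2.0, exposure times 1, 4 and 16,
# the longest exposure saturates at z_sat = 20 and is censored
C_hat = mle_irradiance(z=[2.1, 7.8, 20.0], tau=[1.0, 4.0, 16.0],
                       sigma2=[0.2, 0.2, 0.2], z_sat=20.0)
print(C_hat)   # close to the true value 2.0
```

Longer exposures get larger weights (τ² / σ²), which is why they dominate the estimate in dark regions; the paper's bounds quantify how close such an estimator can get to the theoretical optimum.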
Abstract-Recently, impressive denoising results have been achieved by Bayesian approaches which assume Gaussian models for the image patches. This improvement in performance can be attributed to the use of per-patch models. Unfortunately, such an approach is particularly unstable for most inverse problems beyond denoising. In this work, we propose the use of a hyperprior to model image patches, in order to stabilize the estimation procedure. There are two main advantages to the proposed restoration scheme: firstly, it is adapted to diagonal degradation matrices, and in particular to missing data problems (e.g. inpainting of missing pixels or zooming); secondly, it can deal with signal-dependent noise models, which are particularly suited to digital cameras. As such, the scheme is especially adapted to computational photography. In order to illustrate this point, we provide an application to high dynamic range imaging from a single image taken with a modified sensor, which shows the effectiveness of the proposed scheme.

Index Terms-Non-local patch-based restoration, Bayesian restoration, Maximum a Posteriori, Gaussian Mixture Models, hyper-prior, conjugate distributions, high dynamic range imaging, single shot HDR, hierarchical models.
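For a single Gaussian patch model, the restoration step underlying such schemes has a closed form. The sketch below (our own illustration, not the paper's code) applies the standard Gaussian MAP formula x̂ = μ + ΣAᵀ(AΣAᵀ + Σₙ)⁻¹(y − Aμ) to inpainting with a diagonal 0/1 degradation matrix:

```python
import numpy as np

def map_restore(y, A, mu, Sigma, Sigma_n):
    """MAP estimate of x from y = A x + n, with Gaussian prior
    x ~ N(mu, Sigma) and Gaussian noise n ~ N(0, Sigma_n)."""
    S = A @ Sigma @ A.T + Sigma_n
    return mu + Sigma @ A.T @ np.linalg.solve(S, y - A @ mu)

# toy inpainting: 4-pixel patch with exponentially decaying correlations;
# pixels 0 and 2 are observed (mask = rows of the identity), 1 and 3 missing
idx = np.arange(4)
Sigma = 0.9 ** np.abs(idx[:, None] - idx[None, :])
A = np.eye(4)[[0, 2]]
y = np.array([1.0, 1.0])
x_hat = map_restore(y, A, np.zeros(4), Sigma, 1e-4 * np.eye(2))
print(x_hat)   # missing pixels are filled in from correlated neighbors
```

With a Gaussian Mixture Model the same formula is applied per mixture component; the paper's hyperprior addresses the instability of estimating (μ, Σ) themselves from degraded data.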
Deeper exploration of the brain’s vast synaptic networks will require new tools for high-throughput structural and molecular profiling of the diverse populations of synapses that compose those networks. Fluorescence microscopy (FM) and electron microscopy (EM) offer complementary advantages and disadvantages for single-synapse analysis. FM combines exquisite molecular discrimination capacities with high speed and low cost, but rigorous discrimination between synaptic and non-synaptic fluorescence signals is challenging. In contrast, EM remains the gold standard for reliable identification of a synapse, but offers only limited molecular discrimination and is slow and costly. To develop and test single-synapse image analysis methods, we have used datasets from conjugate array tomography (cAT), which provides voxel-conjugate FM and EM (annotated) images of the same individual synapses. We report a novel unsupervised probabilistic method for detection of synapses from multiplex FM (muxFM) image data, and evaluate this method both by comparison to EM gold standard annotated data and by examining its capacity to reproduce known important features of cortical synapse distributions. The proposed probabilistic model-based synapse detector accepts molecular-morphological synapse models as user queries, and delivers a volumetric map of the probability that each voxel represents part of a synapse. Taking human annotation of cAT EM data as ground truth, we show that our algorithm detects synapses from muxFM data alone as successfully as human annotators seeing only the muxFM data, and accurately reproduces known architectural features of cortical synapse distributions. This approach opens the door to data-driven discovery of new synapse types and their density. We suggest that our probabilistic synapse detector will also be useful for analysis of standard confocal and super-resolution FM images, where EM cross-validation is not practical.
Building high dynamic range (HDR) images by combining photographs captured with different exposure times presents several drawbacks, such as the need for global alignment and motion estimation in order to avoid ghosting artifacts. The concept of spatially varying pixel exposures (SVE) proposed by Nayar et al. makes it possible to capture, in a single shot, a very large range of exposures while avoiding these limitations. In this paper, we propose a novel approach to generate HDR images from a single shot acquired with spatially varying pixel exposures. The proposed method relies on the assumption that the distribution of patches in an image is well represented by a Gaussian Mixture Model. Drawing on a precise modeling of the camera acquisition noise, we extend the piecewise linear estimation strategy developed by Yu et al. for image restoration. The proposed method reconstructs an irradiance image by simultaneously estimating saturated and under-exposed pixels and denoising existing ones, showing significant improvements over existing approaches.
High dynamic range (HDR) images are usually generated by combining multiple photographs acquired with different exposure times. This approach, while effective, suffers from various drawbacks. The irradiance estimation is performed by combining, for each pixel, different exposure values at the same spatial position. This estimation scheme does not take advantage of the redundancy present in most images. Moreover, images must be perfectly aligned and objects must be in the exact same position in all frames in order to combine the different exposures. In this work, we propose a new HDR image generation approach that simultaneously copes with these problems and exploits image redundancy to produce a denoised result. A reference image is chosen and a patch-based approach is used to find similar pixels, which are then combined for the irradiance estimation. This patch-based approach yields a denoised result and is robust to image misalignments and object motion. Results show significant improvements in terms of noise reduction over previous HDR image generation techniques, while being robust to motion and changes between the exposures.
Exemplar-based texture synthesis aims at creating, from an input sample, new texture images that are visually similar to the input but are not a plain copy of it. The Efros-Leung algorithm is one of the most celebrated approaches to this problem. It relies on a Markov assumption and generates new textures in a non-parametric way, directly sampling new values from the input sample. In this paper, we provide a detailed analysis and implementation of this algorithm. The code closely follows the algorithm description from the original paper. It also includes a PCA-based acceleration of the method, yielding results that are generally visually indistinguishable from those of the original method. To the best of our knowledge, this is the first publicly available implementation of this algorithm running in acceptable time. Even though numerous improvements have been proposed since this seminal work, we believe it is of interest to provide an easy way to test the initial approach of Efros and Leung. In particular, we provide the user with a graphical illustration of the innovation capacity of the algorithm. Experimentation often shows that the path between verbatim copy of the exemplar and garbage growing is somewhat narrow, and that in the most favorable cases the algorithm produces new texture images by stitching together entire regions from the exemplar.

Source Code
The ANSI C source code, the code documentation, and the online demo are accessible from the IPOL web page of this article.
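To make the non-parametric sampling step concrete, here is a toy 1-D version of the Efros-Leung idea (the real algorithm uses 2-D neighborhoods grown around an image seed; this 1-D simplification and all names are ours): to extend the output by one sample, compare the last k synthesized values against every length-k window of the exemplar, and sample the value that follows one of the near-best matches.

```python
import numpy as np

rng = np.random.default_rng(0)

def grow_one(exemplar, out, k, eps=0.1):
    """Append one sample: match the length-k context against the exemplar
    and sample the successor of a window within (1+eps) of the best match."""
    ctx = np.array(out[-k:], dtype=float)
    # all sliding windows that have a successor sample in the exemplar
    wins = np.lib.stride_tricks.sliding_window_view(exemplar, k)[:-1]
    d = np.sum((wins - ctx) ** 2, axis=1)          # SSD to the context
    cands = np.flatnonzero(d <= (1 + eps) * d.min())
    j = rng.choice(cands)                          # random near-best window
    return exemplar[j + k]                         # the sample that follows it

ex = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=float)
out = [0.0, 1.0]                                   # seed copied from the input
for _ in range(6):
    out.append(grow_one(ex, out, k=2))
print(out)   # continues the alternating pattern of the exemplar
```

The random choice among near-best candidates is what gives the algorithm its innovation capacity; shrinking eps pushes it toward verbatim copy, while enlarging it risks the "garbage growing" failure mode mentioned above.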