This paper develops a mathematical theory of super-resolution. Broadly speaking, super-resolution is the problem of recovering the fine details of an object (the high end of its spectrum) from coarse-scale information only (samples at the low end of the spectrum). Suppose we have many point sources at unknown locations in [0, 1] and with unknown complex-valued amplitudes. We only observe Fourier samples of this object up to a frequency cutoff f_c. We show that one can super-resolve these point sources with infinite precision, i.e., recover the exact locations and amplitudes, by solving a simple convex optimization problem, which can essentially be reformulated as a semidefinite program. This holds provided that the distance between sources is at least 2/f_c. This result extends to higher dimensions and other models. In one dimension, for instance, it is possible to recover a piecewise smooth function by resolving the discontinuity points with infinite precision as well. We also show that the theory and methods are robust to noise. In particular, in the discrete setting we develop some theoretical results explaining how the accuracy of the super-resolved signal is expected to degrade when both the noise level and the super-resolution factor vary.
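To make the convex program concrete, here is a minimal numerical sketch of the approach on a discretized grid, assuming the cvxpy package; the grid-based l1 program below stands in for the continuous total-variation formulation analyzed in the paper (and its semidefinite reformulation), and the grid size, cutoff, and spike positions are illustrative only.

```python
import numpy as np
import cvxpy as cp

# Illustrative sizes (not from the paper): a fine grid of candidate
# source locations in [0, 1) and a low-frequency cutoff f_c.
n_grid, f_c = 256, 20
grid = np.arange(n_grid) / n_grid
freqs = np.arange(-f_c, f_c + 1)

# Low-pass sensing matrix: row k collects the k-th Fourier
# coefficient of a measure supported on the grid.
F = np.exp(-2j * np.pi * np.outer(freqs, grid))

# Ground truth: a few spikes with separation well above 2/f_c.
x_true = np.zeros(n_grid, dtype=complex)
x_true[[30, 100, 180]] = [1.0, -0.7 + 0.4j, 0.5]
y = F @ x_true  # noiseless low-frequency samples

# Grid analogue of total-variation minimization: the l1 norm
# favors sparse (spiky) solutions consistent with the data.
x = cp.Variable(n_grid, complex=True)
cp.Problem(cp.Minimize(cp.norm1(x)), [F @ x == y]).solve()

print("recovered support:", np.nonzero(np.abs(x.value) > 1e-4)[0])
```

On such an instance the l1 program is expected to pinpoint the spikes and their amplitudes; the paper's result is the far stronger statement that the continuous program succeeds off any grid.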
This paper studies the recovery of a superposition of point sources from noisy bandlimited data. In the fewest possible words, we only have information about the spectrum of an object in the low-frequency band [−f_lo, f_lo] and seek to obtain a higher-resolution estimate by extrapolating the spectrum up to a frequency f_hi > f_lo. We show that as long as the sources are separated by 2/f_lo, solving a simple convex program produces a stable estimate, in the sense that the approximation error between the higher-resolution reconstruction and the truth is proportional to the noise level times the square of the super-resolution factor (SRF) f_hi/f_lo.
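Schematically, and suppressing constants and the precise choice of norm (the projection notation below is mine, not the paper's exact statement), the stability guarantee can be written as:

```latex
% Schematic stability bound: \hat{x} is the convex-program estimate,
% x the true signal, \delta a bound on the data noise, and
% P_{f_hi} a smoothing/projection at the target resolution f_hi.
\[
  \bigl\| P_{f_{\mathrm{hi}}}(\hat{x} - x) \bigr\|
  \;\lesssim\; \mathrm{SRF}^2 \, \delta,
  \qquad
  \mathrm{SRF} := \frac{f_{\mathrm{hi}}}{f_{\mathrm{lo}}}.
\]
```

In words: each doubling of the targeted resolution gain multiplies the worst-case error by about four, quantifying the intuition that extrapolating further beyond the observed band is progressively harder.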
Mapping single-cell sequencing profiles to comprehensive reference datasets represents a powerful alternative to unsupervised analysis. Reference datasets, however, are predominantly constructed from single-cell RNA-seq data, and cannot be used to annotate datasets that do not measure gene expression. Here we introduce 'bridge integration', a method to harmonize single-cell datasets across modalities by leveraging a multi-omic dataset as a molecular bridge. Each cell in the multi-omic dataset constitutes an element in a 'dictionary', which can be used to reconstruct unimodal datasets and transform them into a shared space. We demonstrate that our procedure can accurately harmonize transcriptomic data with independent single-cell measurements of chromatin accessibility, histone modifications, DNA methylation, and protein levels. Moreover, we demonstrate how dictionary learning can be combined with sketching techniques to substantially improve computational scalability, and harmonize 8.6 million human immune cell profiles from sequencing and mass cytometry experiments. Our approach aims to broaden the utility of single-cell reference datasets and facilitate comparisons across diverse molecular modalities. Availability: Installation instructions, documentation, and vignettes are available at http://www.satijalab.org/seurat
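A minimal numerical sketch of the bridge idea follows, assuming toy numpy arrays and plain least squares in place of the regularized dictionary learning the method actually uses; all matrix names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: n_bridge multi-omic cells measured in
# both modalities, plus a query dataset measured only in modality B.
n_bridge, n_genes, n_peaks, n_query = 500, 100, 300, 50
bridge_rna = rng.normal(size=(n_bridge, n_genes))   # modality A (RNA)
bridge_atac = rng.normal(size=(n_bridge, n_peaks))  # modality B (ATAC)
query_atac = rng.normal(size=(n_query, n_peaks))    # cells to map

# Each query cell is expressed as a weighted combination of bridge
# cells (the 'dictionary' elements), fit on the shared modality...
weights, *_ = np.linalg.lstsq(bridge_atac.T, query_atac.T, rcond=None)

# ...and the same weights carry the cell into RNA space, where it can
# be compared against an RNA-only reference.
query_in_rna_space = weights.T @ bridge_rna
print(query_in_rna_space.shape)  # (n_query, n_genes)
```

The design point is that the bridge cells, not the features, form the dictionary, so the two modalities never need a common feature space.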
During the coronavirus disease 2019 (COVID-19) pandemic, rapid and accurate triage of patients at the emergency department is critical to inform decision-making. We propose a data-driven approach for automatic prediction of deterioration risk using a deep neural network that learns from chest X-ray images and a gradient boosting model that learns from routine clinical variables. Our AI prognosis system, trained using data from 3661 patients, achieves an area under the receiver operating characteristic curve (AUC) of 0.786 (95% CI: 0.745–0.830) when predicting deterioration within 96 hours. The deep neural network extracts informative areas of chest X-ray images to assist clinicians in interpreting the predictions, and performs comparably to two radiologists in a reader study. To verify performance in a real clinical setting, we silently deployed a preliminary version of the deep neural network at New York University Langone Health during the first wave of the pandemic, which produced accurate predictions in real time. In summary, our findings demonstrate the potential of the proposed system for assisting front-line physicians in the triage of COVID-19 patients.
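The two-branch design can be sketched as follows, assuming scikit-learn and a random placeholder in place of the trained chest X-ray network's risk score; the data and the averaging fusion rule are illustrative, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Random stand-ins (so the printed AUC is meaningless): clinical
# variables plus the per-patient risk score an image model would emit.
n = 1000
clinical = rng.normal(size=(n, 10))
xray_score = rng.uniform(size=n)
label = rng.integers(0, 2, size=n)  # deterioration within 96 hours

X_tr, X_te, s_tr, s_te, y_tr, y_te = train_test_split(
    clinical, xray_score, label, test_size=0.25, random_state=0)

# Clinical branch: gradient boosting over routine tabular variables.
gbm = GradientBoostingClassifier().fit(X_tr, y_tr)
clin_te = gbm.predict_proba(X_te)[:, 1]

# Score-level fusion of the two branches (a simple average here).
combined = 0.5 * (clin_te + s_te)
print("held-out AUC:", roc_auc_score(y_te, combined))
```

Keeping the image and tabular branches separate lets each model type play to its strengths, and allows the X-ray network to be deployed and audited on its own, as in the silent deployment described above.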
Recent work has shown that convex programming makes it possible to recover a superposition of point sources exactly from low-resolution data, as long as the sources are separated by 2/f_c, where f_c is the cut-off frequency of the sensing process. The proof relies on the construction of a certificate whose existence implies exact recovery. This certificate has since been used to establish that the approach is robust to noise, and to analyze related problems such as compressed sensing off the grid and the super-resolution of splines from moment measurements. In this work we construct a new certificate that extends all these results to signals with minimum separations above 1.26/f_c. This is close to 1/f_c, the threshold at which the problem becomes inherently ill-posed, in the sense that signals with a smaller minimum separation may have low-pass projections with negligible energy.
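Concretely, the certificate is a low-frequency trigonometric polynomial that interpolates the sign pattern of the amplitudes on the support and stays strictly below one in modulus everywhere else; schematically (notation introduced here, following the standard formulation):

```latex
% Dual certificate for spikes at t_j in [0,1] with unit-modulus
% phases v_j (the signs of the amplitudes): a polynomial q with
% spectrum limited to |k| <= f_c such that
\[
  q(t) = \sum_{|k| \le f_c} c_k \, e^{2\pi i k t},
  \qquad
  q(t_j) = v_j \ \ \text{for all } j,
  \qquad
  |q(t)| < 1 \ \ \text{for } t \notin \{t_j\}.
\]
```

Shrinking the admissible minimum separation from 2/f_c to 1.26/f_c amounts to constructing such a q for much more tightly packed supports.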
Current state-of-the-art climate models solve geophysical fluid equations on horizontal grids of size 25 km and coarser. Models at this resolution cannot adequately resolve processes with physical length scales smaller than the model grid, for example, convection in the atmosphere and mesoscale eddies in the ocean. Since increases in computational power will likely not enable climate models to resolve these processes before the effects of climate change ensue (Fox-Kemper et al., 2014; Schneider et al., 2017), we must represent subgrid-scale (SGS) processes with closure models, also known as parameterizations. Yet these SGS models are some of the largest sources of bias and uncertainty in climate simulations: for example, insufficient representations of transient eddies cause biases in modeled currents and sea surface temperature in the ocean (Griffies et al., 2015; Hewitt et al., 2020), and the precipitation pattern is strongly sensitive to the choice of subgrid cloud closure, causing significant errors in climate projections (Stevens & Bony, 2013). Developing robust parameterizations therefore remains an important task toward reliable climate projections.
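As a textbook illustration of what such a closure looks like (a standard example, not taken from the works cited above), a downgradient eddy-diffusivity parameterization replaces the unresolved turbulent flux of a tracer T with a term the resolved state can compute:

```latex
% Downgradient (eddy-diffusivity) closure: overbars denote resolved
% grid-scale fields, primes the subgrid fluctuations, and \kappa_e
% the eddy diffusivity supplied by the parameterization.
\[
  \overline{u'\,T'} \;\approx\; -\,\kappa_e \, \nabla \overline{T},
\]
```

so the subgrid flux divergence entering the resolved tracer equation becomes \(\nabla \cdot (\kappa_e \nabla \overline{T})\), and the modeling burden shifts to choosing \(\kappa_e\) well.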