In this paper, we propose a simple and elegant patch-based machine learning technique for image denoising using the higher-order singular value decomposition (HOSVD). The technique groups similar patches from a noisy image (with similarity defined by a statistically motivated criterion) into a 3D stack, computes the HOSVD coefficients of this stack, manipulates these coefficients by hard thresholding, and inverts the HOSVD transform to produce the final filtered image. Our technique chooses all required parameters in a principled way, relating them to the noise model. We also discuss our motivation for adopting the HOSVD as an appropriate transform for image denoising. We experimentally demonstrate the excellent performance of the technique on grayscale as well as color images. On color images, our method produces state-of-the-art results, outperforming other color image denoising algorithms at moderately high noise levels. A criterion for optimal patch-size selection and noise variance estimation from the residual images (after denoising) is also presented.
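The pipeline described above (group similar patches into a stack, compute the HOSVD, hard-threshold the coefficients, invert) can be sketched for a single patch stack as follows. This is a minimal sketch, not the paper's implementation: the patch grouping is assumed to have happened upstream, and the threshold σ√(2 ln N) is one standard choice from the hard-thresholding literature.

```python
import numpy as np

def hosvd_denoise_stack(stack, sigma):
    """Denoise a 3-D stack of similar patches via HOSVD hard thresholding.

    stack : (p, p, K) array of K similar p x p noisy patches
    sigma : noise standard deviation
    """
    # Mode-n unfoldings; their left singular vectors are the HOSVD factors.
    U = []
    for mode in range(3):
        unfold = np.moveaxis(stack, mode, 0).reshape(stack.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfold, full_matrices=False)
        U.append(u)
    # Core tensor: project the stack onto the factor bases.
    core = np.einsum('ijk,ia,jb,kc->abc', stack, U[0], U[1], U[2])
    # Hard threshold; sigma * sqrt(2 ln N) with N = #coefficients is a
    # common choice (an assumption of this sketch).
    tau = sigma * np.sqrt(2.0 * np.log(core.size))
    core[np.abs(core) < tau] = 0.0
    # Invert the HOSVD transform.
    return np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])
```

In a full method, each denoised stack would be redistributed to its patch locations and overlapping estimates averaged.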
The COVID-19 pandemic has strained testing capabilities worldwide. There is an urgent need to find economical and scalable ways to test more people. We present Tapestry, a novel quantitative nonadaptive pooling scheme to test many samples using only a few tests. The underlying molecular diagnostic test is any real-time RT-PCR diagnostic panel approved for the detection of the SARS-CoV-2 virus. In cases where most samples are negative for the virus, Tapestry accurately identifies the status of each individual sample with a single round of testing in fewer tests than simple two-round pooling. We also present a companion Android application BYOM Smart Testing which guides users through the pipetting steps required to perform the combinatorial pooling. The results of the pooled tests can be fed into the application to recover the status and estimated viral load for each individual sample. NOTE: This protocol has been validated with in vitro experiments that used synthetic RNA and DNA fragments and additionally, its expected behavior has been confirmed using computer simulations. Validation with clinical samples is ongoing. We are looking for clinical collaborators with access to patient samples. Please contact the corresponding author if you wish to validate this protocol on clinical samples.
Blind compressive sensing (CS) is considered for reconstruction of hyperspectral data imaged by a coded aperture camera. The measurements are manifested as a superposition of the coded wavelength-dependent data, with the ambient three-dimensional hyperspectral datacube mapped to a two-dimensional measurement. The hyperspectral datacube is recovered using a Bayesian implementation of blind CS. Several demonstration experiments are presented, including measurements performed using a coded aperture snapshot spectral imager (CASSI) camera. The proposed approach is capable of efficiently reconstructing large hyperspectral datacubes. Comparisons are made between the proposed algorithm and other techniques employed in compressive sensing, dictionary learning and matrix factorization. Index Terms: hyperspectral images, image reconstruction, projective transformation, dictionary learning, non-parametric Bayesian, Beta-Bernoulli model, coded aperture snapshot spectral imager (CASSI).
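The CASSI measurement process referred to above (a coded aperture modulates each spectral band, a dispersive element shears the bands, and the detector integrates over wavelength) can be sketched as a toy forward model. The one-pixel shift per band is an assumption of this sketch, not a property of any particular instrument.

```python
import numpy as np

def cassi_forward(cube, code):
    """Simulate a CASSI snapshot: code each band, shear it by one pixel
    per spectral channel, and sum onto the 2-D detector.

    cube : (H, W, L) hyperspectral datacube
    code : (H, W) binary coded-aperture pattern
    """
    H, W, L = cube.shape
    det = np.zeros((H, W + L - 1))
    for l in range(L):
        # Band l is masked by the code, then shifted l pixels by dispersion.
        det[:, l:l + W] += code * cube[:, :, l]
    return det
```

Recovering `cube` from `det` is the ill-posed inverse problem that the Bayesian blind-CS machinery addresses.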
We present a new method for compact representation of large image datasets. Our method is based on treating small patches from a 2-D image as matrices as opposed to the conventional vectorial representation, and encoding these patches as sparse projections onto a set of exemplar orthonormal bases, which are learned a priori from a training set. The end result is a low-error, highly compact image/patch representation that has significant theoretical merits and compares favorably with existing techniques (including JPEG) on experiments involving the compression of ORL and Yale face databases, as well as a database of miscellaneous natural images. In the context of learning multiple orthonormal bases, we show the easy tunability of our method to efficiently represent patches of different complexities. Furthermore, we show that our method is extensible in a theoretically sound manner to higher-order matrices ("tensors"). We demonstrate applications of this theory to compression of well-known color image datasets such as the GaTech and CMU-PIE face databases and show performance competitive with JPEG. Lastly, we also analyze the effect of image noise on the performance of our compression schemes.
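The core encoding step above, treating a patch as a matrix and projecting it sparsely onto an orthonormal basis pair, can be sketched as follows. Here `U` and `V` are arbitrary orthonormal matrices standing in for the learned exemplar bases, and keeping the top-k coefficients by magnitude is one simple sparsification rule.

```python
import numpy as np

def sparse_project(patch, U, V, k):
    """Encode a patch as a k-sparse coefficient matrix in the orthonormal
    pair (U, V), then reconstruct.

    patch : (m, n) image patch
    U, V  : orthonormal bases, (m, m) and (n, n); in the paper these are
            learned from training data, here they may be any fixed pair
    k     : number of coefficients to retain
    """
    S = U.T @ patch @ V                       # transform coefficients
    # Keep the k largest-magnitude coefficients (ties may keep a few more).
    thresh = np.sort(np.abs(S).ravel())[-k]
    S_sparse = np.where(np.abs(S) >= thresh, S, 0.0)
    return U @ S_sparse @ V.T, S_sparse
```

Because `U` and `V` are orthonormal, the reconstruction error equals the energy of the discarded coefficients, which makes the rate-distortion trade-off easy to control per patch.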
We present a new geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information-theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume data sets, and show some theoretical properties of our density estimator. Our approach requires only the selection of an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows), which are unrelated to the structure of the image itself, nor does it rely on any form of sampling for density estimation.
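The area-between-isocontours idea can be made concrete for a piecewise-linear interpolant: a linear function on a triangle with sorted vertex values v1 ≤ v2 ≤ v3 induces an exact "tent" density on [v1, v3], zero at the endpoints and peaked at v2, with total mass equal to the triangle area. The sketch below accumulates these tents at bin centres; the discretisation onto bins is an approximation for display, whereas the estimator in the abstract is continuous.

```python
import numpy as np

def isocontour_density(img, bins=64):
    """Intensity density of an image under a piecewise-linear interpolant.

    Each pixel square is split into two triangles of area 1/2.  The linear
    interpolant on each triangle contributes a tent-shaped density on
    [v1, v3] with peak at v2; tents are sampled at the bin centres.
    """
    lo, hi = float(img.min()), float(img.max())
    width = (hi - lo) / bins
    centres = lo + (np.arange(bins) + 0.5) * width
    dens = np.zeros(bins)
    H, W = img.shape
    for i in range(H - 1):
        for j in range(W - 1):
            a, b = img[i, j], img[i, j + 1]
            c, d = img[i + 1, j], img[i + 1, j + 1]
            for tri in ((a, b, c), (b, c, d)):  # two triangles per square
                v1, v2, v3 = sorted(tri)
                if v3 == v1:                    # flat triangle: point mass
                    dens[np.argmin(np.abs(centres - v1))] += 0.5
                    continue
                peak = 1.0 / (v3 - v1)          # tent height for mass 1/2
                dens += width * np.interp(centres, [v1, v2, v3],
                                          [0.0, peak, 0.0])
    return dens / dens.sum(), centres
```

Note that no kernel bandwidth appears anywhere; the only modelling choice is the interpolant, as the abstract emphasises.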
This paper proposes a dictionary learning framework that combines the proposed block/group (BGSC) or reconstructed block/group (R-BGSC) sparse coding schemes with the novel Intra-block Coherence Suppression Dictionary Learning (ICS-DL) algorithm. An important and distinguishing feature of the proposed framework is that all dictionary blocks are trained simultaneously with respect to each data group, while the intra-block coherence is explicitly minimized as an important objective. We provide both empirical evidence and heuristic support for this feature, which can be considered a direct consequence of incorporating both the group structure for the input data and the block structure for the dictionary in the learning process. The optimization problems for both the dictionary learning and sparse coding can be solved efficiently using block-gradient descent, and the details of the optimization algorithms are presented. We evaluate the proposed methods using well-known datasets, and favorable comparisons with state-of-the-art dictionary learning methods demonstrate the viability and validity of the proposed framework.
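One concrete way to quantify the "intra-block coherence" that the objective suppresses is the energy in the off-diagonal entries of each block's Gram matrix. The helper below computes that quantity; it is an illustrative formulation, and the paper's exact functional and its block-gradient-descent updates are more elaborate.

```python
import numpy as np

def intra_block_coherence(D, block_size):
    """Sum of squared off-diagonal Gram entries within each dictionary block.

    D          : (d, K) dictionary whose columns are atoms, with K a
                 multiple of block_size
    block_size : number of atoms per block
    """
    total = 0.0
    for s in range(0, D.shape[1], block_size):
        B = D[:, s:s + block_size]
        G = B.T @ B                       # Gram matrix of one block
        total += np.sum((G - np.diag(np.diag(G))) ** 2)
    return total
```

A penalty of this form drives the atoms inside each block toward mutual orthogonality, which is what makes block-sparse codes well conditioned.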
We propose 'Tapestry', a novel approach to pooled testing with application to COVID-19 testing with quantitative Reverse Transcription Polymerase Chain Reaction (RT-PCR) that can result in shorter testing time and conservation of reagents and testing kits. Tapestry combines ideas from compressed sensing and combinatorial group testing with a novel noise model for RT-PCR used for generation of synthetic data. Unlike Boolean group testing algorithms, the input is a quantitative readout from each test and the output is a list of viral loads for each sample relative to the pool with the highest viral load. While other pooling techniques require a second confirmatory assay, Tapestry obtains individual sample-level results in a single round of testing, at clinically acceptable false positive or false negative rates. We also propose designs for pooling matrices that facilitate good prediction of the infected samples while remaining practically viable. When testing n samples out of which k ≪ n are infected, our method needs only O(k log n) tests when using random binary pooling matrices, with high probability. However, we also use deterministic binary pooling matrices based on combinatorial design ideas of Kirkman Triple Systems to balance between good reconstruction properties and matrix sparsity for ease of pooling. A lower bound on the number of tests with these matrices for satisfying a sufficient condition for guaranteed recovery is k√n. In practice, we have observed the need for fewer tests with such matrices than with random pooling matrices. This makes Tapestry capable of very large savings at low prevalence rates, while simultaneously remaining viable even at prevalence rates as high as 9.5%. Empirically we find that single-round Tapestry pooling improves over two-round Dorfman pooling by almost a factor of 2 in the number of tests required.
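The single-round decoding described above, combining combinatorial elimination with quantitative recovery, can be sketched as a two-stage decoder. The clipped least-squares step stands in for the paper's compressed-sensing solvers, and the small pooling matrix in the usage example is purely illustrative, not a Kirkman design.

```python
import numpy as np

def decode_pools(A, y, tol=1e-9):
    """Two-stage decoder sketch for quantitative pooled tests.

    A : (m, n) binary pooling matrix; A[t, s] = 1 if sample s is in test t
    y : (m,) quantitative readouts, ideally y = A @ x for a sparse,
        nonnegative vector x of viral loads

    Stage 1 (combinatorial, as in group testing): any sample appearing in
    a pool that reads zero must be negative.
    Stage 2 (compressed-sensing flavoured): least squares restricted to
    the surviving samples, clipped to be nonnegative.
    """
    m, n = A.shape
    surely_negative = A[y <= tol].sum(axis=0) > 0   # member of a zero pool
    support = np.flatnonzero(~surely_negative)
    x = np.zeros(n)
    if support.size:
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[support] = np.clip(sol, 0.0, None)
    return x
```

At low prevalence, stage 1 alone eliminates most samples, which is why so few tests suffice to pin down the remaining loads.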
We describe how to combine combinatorial group testing and compressed sensing algorithmic ideas together to create a new kind of algorithm that is very effective in deconvoluting pooled tests. We validate Tapestry in simulations and wet lab experiments with oligomers in quantitative RT-PCR assays. An accompanying Android application Byom Smart Testing makes the Tapestry protocol straightforward to implement in testing centres, and is made available for free download. Lastly, we describe use-case scenarios for deployment.