Your paper on the influence of medieval ornaments on contemporary art is due tomorrow. Luckily, you have the latest wireless modem for your laptop, and hundreds of pieces from the Metropolitan Museum of Art collection are displayed on its web site. But as you examine the pictures, your web browser repeatedly gets stuck with partially loaded web pages. You see a reliquary, but an empty box sits where a scepter should appear; you find a Modigliani, but compression artifacts cause a Magritte to appear wrongly cubist.

A lot of things could be going wrong, so many technological improvements could save you. Having more antennas on your laptop would make multipath a virtue instead of an impediment and could result in higher throughput and better reliability. More cellular base stations imply smaller cells and could lead to fewer conflicts with other users in your cell. Your wireless service provider could have higher capacities in the wired connections to its base stations. The whole wired infrastructure could be better, with fewer packets lost to buffer overflows. The museum web site could handle more simultaneous connections or could be cached closer to you. Each of these changes could improve your browsing experience.

This article focuses on the compressed representations of the pictures. The representation does not affect how many bits get from the web server to your laptop, but it determines the usefulness of the bits that arrive. Many different representations are possible, and there is more to the choice than merely selecting a compression ratio. The techniques presented here represent a single information source with several chunks of data ("descriptions") so that the source can be approximated from any subset of the chunks. By allowing image reconstruction to continue even after a packet is lost, this type of representation can prevent a web browser from becoming dormant.

Separate Layers, Separate Responsibilities

Network communication involves many separations of functions and levels of abstraction. This is both a cause and a product of assigning different design and implementation tasks to different groups of people. In networking, there is the canonical seven-layer Open Systems Interconnection (OSI) reference model. The layers range from the physical layer, characterized by voltage levels and physical connectors, to the application layer, which interacts with the user's software application. All of these layers are involved in the example of accessing the Met web site from an untethered laptop.

Beyond the OSI layering, there is a further separation that most people take for granted: the separation between generating the data to be transmitted (creating content) and delivering that content. The artistic aspect of content generation (writing, drawing, photographing, and composing) is not an engineering function. However, the engineer has great flexibility in creating representations of audio, images, and video to deliver an artistic vision. This article addresses the generation of content and how it is affec...
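As a rough illustration of the multiple-description idea sketched above (a toy even/odd-splitting scheme under my own assumptions, not the specific representations developed in the article): one description carries the even-indexed samples and the other the odd-indexed samples. If both arrive, reconstruction is exact; if one is lost, the missing samples are estimated from their received neighbors, so quality degrades gracefully instead of the decoder stalling.

import numpy as np

def encode_two_descriptions(signal):
    # Toy multiple-description encoder: description 0 carries the
    # even-indexed samples, description 1 the odd-indexed samples.
    return signal[0::2], signal[1::2]

def decode(desc0=None, desc1=None, length=None):
    # Reconstruct from whichever descriptions arrived.
    # Both descriptions -> exact reconstruction (interleave).
    # One description  -> approximate the missing samples by averaging
    #                     their received neighbors.
    if desc0 is not None and desc1 is not None:
        out = np.empty(len(desc0) + len(desc1))
        out[0::2], out[1::2] = desc0, desc1
        return out
    received = desc0 if desc0 is not None else desc1
    offset = 0 if desc0 is not None else 1
    out = np.empty(length)
    out[offset::2] = received
    # Fill the missing interleaved samples from their received neighbors.
    missing = np.arange(1 - offset, length, 2)
    left = missing - 1
    right = missing + 1
    left[left < 0] = right[left < 0]                  # no left neighbor at the edge
    right[right >= length] = left[right >= length]    # no right neighbor at the edge
    out[missing] = 0.5 * (out[left] + out[right])
    return out

x = np.sin(np.linspace(0, 4 * np.pi, 32))             # source signal
d0, d1 = encode_two_descriptions(x)
print(np.allclose(decode(d0, d1), x))                 # both arrive: exact
print(np.abs(decode(desc0=d0, length=32) - x).max())  # one lost: graceful degradation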
Bismuth telluride (Bi2Te3) and its alloys are the best bulk thermoelectric materials known today. In addition, stacked quasi-two-dimensional (2D) layers of Bi2Te3 were recently identified as promising topological insulators. In this Letter we describe a method for "graphene-inspired" exfoliation of crystalline bismuth telluride films only a few atoms thick. The atomically thin films were suspended across trenches in Si/SiO2 substrates and subjected to detailed materials characterization, including atomic force microscopy and micro-Raman spectroscopy. The presence of van der Waals gaps allowed us to disassemble the Bi2Te3 crystal into its quintuple building blocks, five monatomic sheets in the sequence Te(1)-Bi-Te(2)-Bi-Te(1). By altering the thickness and sequence of atomic planes, we were able to create "designer" nonstoichiometric quasi-2D crystalline films and to change their composition, doping, type of charge carriers, and other properties. The exfoliated quintuples and ultrathin films have low thermal conductivity, high electrical conductivity, and enhanced thermoelectric properties. The results pave the way for producing stacks of crystalline bismuth telluride quantum wells with strong spatial confinement of charge carriers and acoustic phonons, beneficial for thermoelectric devices. The developed technology for producing free-standing quasi-2D layers of Te(1)-Bi-Te(2)-Bi-Te(1) creates an impetus for the investigation of topological insulators and their possible practical applications.
Frames have been used to capture significant signal characteristics, provide numerical stability of reconstruction, and enhance resilience to additive noise. This paper places frames in a new setting, where some of the elements are deleted. Since proper subsets of frames are sometimes themselves frames, a quantized frame expansion can be a useful representation even when some transform coefficients are lost in transmission. This yields robustness to losses in packet networks such as the Internet. With a simple model for quantization error, it is shown that a normalized frame minimizes mean-squared error if and only if it is tight. With one coefficient erased, a tight frame is again optimal among normalized frames, both in average and worst-case scenarios. For more erasures, a general analysis indicates some optimal designs. Being left with a tight frame after erasures minimizes distortion, but considering also the transmission rate and possible erasure events complicates optimizations greatly.
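A small sketch of the setting above (the frame, quantization step, and erasure pattern are illustrative choices, not taken from the paper): three unit-norm vectors at 120-degree spacing form a tight frame for R^2, and after any single coefficient is erased the two surviving vectors still span the plane, so linear reconstruction from the remaining quantized coefficients is still possible.

import numpy as np

# Three unit-norm vectors in R^2 forming a tight frame (frame bound 3/2).
angles = np.array([np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3])
F = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # 3 x 2 analysis matrix

x = np.array([0.7, -0.3])                 # source vector
step = 0.05                               # quantization step (illustrative)
y = np.round(F @ x / step) * step         # quantized frame coefficients

# All three coefficients received: linear reconstruction via the pseudoinverse.
x_full = np.linalg.pinv(F) @ y

# One coefficient erased: the remaining two rows are still a frame for R^2,
# so linear reconstruction from the surviving coefficients remains possible.
keep = [0, 2]
x_erased = np.linalg.pinv(F[keep]) @ y[keep]

print(x, x_full, x_erased)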
Abstract-Coefficient quantization has peculiar qualitative effects on representations of vectors in R^N with respect to overcomplete sets of vectors. These effects are investigated in two settings: frame expansions (representations obtained by forming inner products with each element of the set) and matching pursuit expansions (approximations obtained by greedily forming linear combinations). In both cases, based on the concept of consistency, it is shown that traditional linear reconstruction methods are suboptimal, and better consistent reconstruction algorithms are given. The proposed consistent reconstruction algorithms were implemented in each case, and experimental results are included. For frame expansions, results are proven that bound distortion as a function of frame redundancy r and quantization step size for linear, consistent, and optimal reconstruction methods. Taken together, these suggest that optimal reconstruction methods yield O(1/r^2) mean-squared error (MSE) and that consistency is sufficient to ensure this asymptotic behavior. A result on the asymptotic tightness of random frames is also proven. The applicability of quantized matching pursuit to lossy vector compression is explored. Experiments demonstrate the likelihood that a linear reconstruction is inconsistent, the MSE reduction obtained with a nonlinear (consistent) reconstruction algorithm, and generally competitive performance at low bit rates.
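The sketch below illustrates consistent reconstruction in the frame-expansion setting using cyclic projections onto the quantization cells (a POCS-style approach; the frame, step size, and iteration count are illustrative assumptions, not the paper's exact algorithm). A consistent estimate is any vector whose frame coefficients fall in the same quantization cells as the observed coefficients; on a single trial it need not beat the linear estimate, but it typically does as redundancy grows.

import numpy as np

def consistent_reconstruction(F, y, step, iters=200):
    # POCS-style sketch: find x whose frame coefficients F @ x fall in the
    # same quantization cells as the observed coefficients y (cell width `step`).
    x = np.linalg.pinv(F) @ y          # start from the linear estimate
    half = step / 2.0
    for _ in range(iters):
        for f_i, y_i in zip(F, y):
            c = f_i @ x
            # Project onto the slab { x : |f_i @ x - y_i| <= step/2 }.
            if c > y_i + half:
                x += (y_i + half - c) / (f_i @ f_i) * f_i
            elif c < y_i - half:
                x += (y_i - half - c) / (f_i @ f_i) * f_i
    return x

# Toy demonstration with a random redundant frame (all choices illustrative).
rng = np.random.default_rng(0)
F = rng.standard_normal((8, 2))        # 8 frame vectors in R^2, redundancy 4
x_true = rng.standard_normal(2)
step = 0.3
y = np.round(F @ x_true / step) * step # quantized frame expansion

x_linear = np.linalg.pinv(F) @ y
x_consistent = consistent_reconstruction(F, y, step)
print(np.linalg.norm(x_linear - x_true), np.linalg.norm(x_consistent - x_true))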
Abstract-The replica method is a non-rigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an n-dimensional vector "decouples" into n scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero-norm-regularized estimation. In the case of lasso estimation the scalar estimator reduces to a soft-thresholding operator, and for zero-norm-regularized estimation it reduces to a hard-thresholding operator. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics, including mean-squared error and sparsity pattern recovery probability.
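For concreteness, the scalar operators mentioned above can be written as follows. The threshold t is left as a free parameter in this sketch; in the replica analysis it is determined by the noise level and measurement ratio of the decoupled scalar channel.

import numpy as np

def soft_threshold(z, t):
    # Scalar soft-thresholding: shrink toward zero by t, clipping at zero.
    # This is the scalar estimator associated with lasso (l1) regularization.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def hard_threshold(z, t):
    # Scalar hard-thresholding: keep the value if its magnitude exceeds t,
    # otherwise set it to zero (zero-norm-regularized estimation).
    return np.where(np.abs(z) > t, z, 0.0)

z = np.linspace(-3, 3, 7)
print(soft_threshold(z, 1.0))
print(hard_threshold(z, 1.0))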
This comprehensive and engaging textbook introduces the basic principles and techniques of signal processing, from the fundamental ideas of signals and systems theory to real-world applications.
• Introduces students to the powerful foundations of modern signal processing, including the basic geometry of Hilbert space, the mathematics of Fourier transforms, and essentials of sampling, interpolation, approximation, and compression.
• Discusses issues in real-world use of these tools, such as the effects of truncation and quantization, limitations on localization, and computational costs.
• Includes over 160 homework problems and over 220 worked examples, specifically designed to test and expand students' understanding of the fundamentals of signal processing.
• Accompanied by extensive online materials designed to aid learning, including Mathematica resources and interactive demonstrations.
The problem of detecting the sparsity pattern of a k-sparse vector in R^n from m random noisy measurements is of interest in many areas such as system identification, denoising, pattern recognition, and compressed sensing. This paper addresses the scaling of the number of measurements m with the signal dimension n and the sparsity level k for asymptotically reliable detection. We show that a necessary condition for perfect recovery at any given SNR for all algorithms, regardless of complexity, is m = Ω(k log(n − k)) measurements. Conversely, it is shown that this scaling of Ω(k log(n − k)) measurements is sufficient for a remarkably simple "maximum correlation" estimator. Hence this scaling is optimal and does not require more sophisticated techniques such as lasso or matching pursuit. The constants in both the necessary and sufficient conditions are precisely defined in terms of the minimum-to-average ratio of the nonzero components and the SNR. The necessary condition improves upon previous results for maximum likelihood estimation. For lasso, it also provides a necessary condition at any SNR and, for low SNR, improves upon previous work. The sufficient condition provides the first asymptotically reliable detection guarantee at finite SNR.
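A minimal sketch of the "maximum correlation" estimator referred to above (problem sizes, SNR, and the random measurement ensemble are illustrative assumptions): correlate the measurements with each column of the measurement matrix and declare the k largest correlations to be the support. With enough measurements, the estimated and true supports typically coincide.

import numpy as np

def max_correlation_support(A, y, k):
    # "Maximum correlation" sparsity-pattern estimate: correlate the
    # measurement vector with each column of A and keep the k largest.
    scores = np.abs(A.T @ y)
    return np.sort(np.argsort(scores)[-k:])

# Toy instance (dimensions and noise level chosen only for illustration).
rng = np.random.default_rng(1)
n, m, k = 200, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = 1.0
y = A @ x + 0.05 * rng.standard_normal(m)

print(np.sort(support), max_correlation_support(A, y, k))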
Imagers that use their own illumination can capture three-dimensional (3D) structure and reflectivity information. With photon-counting detectors, images can be acquired at extremely low photon fluxes. To suppress the Poisson noise inherent in low-flux operation, such imagers typically require hundreds of detected photons per pixel for accurate range and reflectivity determination. We introduce a low-flux imaging technique, called first-photon imaging: a computational imager that exploits the spatial correlations found in real-world scenes and the physics of low-flux measurements. Our technique recovers 3D structure and reflectivity from the first detected photon at each pixel. We demonstrate simultaneous acquisition of sub-pulse-duration range and 4-bit reflectivity information in the presence of high background noise. First-photon imaging may be of considerable value to both microscopy and remote sensing.
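As a toy illustration of the underlying principle (not the authors' algorithm; the scene, the proportionality between detection probability and reflectivity, and the use of a median filter as a stand-in for a principled spatial regularizer are all assumptions, and SciPy is assumed available): the number of illumination pulses until the first detected photon at a pixel is approximately geometrically distributed, so its reciprocal gives a single-photon reflectivity estimate, and exploiting spatial correlations suppresses the resulting noise.

import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(2)

# Toy scene reflectivity in (0, 1]: a bright square on a dim background.
reflectivity = np.full((64, 64), 0.05)
reflectivity[20:44, 20:44] = 0.4

# Detection probability per pulse taken proportional to reflectivity
# (background counts and detector efficiency are ignored in this sketch).
p_detect = reflectivity

# Number of pulses until the first detected photon at each pixel.
pulses_to_first = rng.geometric(p_detect)

# Pixelwise maximum-likelihood reflectivity estimate from one photon per pixel.
ml_estimate = 1.0 / pulses_to_first

# Stand-in for exploiting spatial correlations: a simple median filter.
smoothed = median_filter(ml_estimate, size=5)

print(np.abs(ml_estimate - reflectivity).mean(), np.abs(smoothed - reflectivity).mean())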