Auditory neurons preserve exquisite temporal information about sound features, but we do not know how the brain uses this information to process the rapidly changing sounds of the natural world. Simple arguments for effective use of temporal information led us to consider the reassignment class of time-frequency representations as a model of auditory processing. Reassigned time-frequency representations can track isolated simple signals with accuracy unlimited by the time-frequency uncertainty principle, but lack of a general theory has hampered their application to complex sounds. We describe the reassigned representations for white noise and show that even spectrally dense signals produce sparse reassignments: the representation collapses onto a thin set of lines arranged in a froth-like pattern. Preserving phase information allows reconstruction of the original signal. We define a notion of "consensus," based on stability of reassignment to time-scale changes, which produces sharp spectral estimates for a wide class of complex mixed signals. As the only currently known class of time-frequency representations that is always "in focus," this methodology has general utility in signal analysis. It may also help explain the remarkable acuity of auditory perception. Many details of complex sounds that are virtually undetectable in standard sonograms are readily perceptible and visible in reassignment.

auditory | reassignment | spectral | spectrograms | uncertainty

Time-frequency analysis seeks to decompose a one-dimensional signal along two dimensions, a time axis and a frequency axis; the best-known time-frequency representation is the musical score, which notates frequency vertically and time horizontally.
These methods are extremely important in fields ranging from quantum mechanics (1-5) to engineering (6, 7), animal vocalizations (8, 9), radar (10), sound analysis and speech recognition (11-13), geophysics (14, 15), shaped laser pulses (16-18), the physiology of hearing, and musicography.

A central question of auditory theory motivates our study: what algorithms does the brain use to parse the rapidly changing sounds of the natural world? Auditory neurons preserve detailed temporal information about sound features, but we do not know how the brain uses it to process sound. Although it is accepted that the auditory system must perform some type of time-frequency analysis, we do not know which type. The many inequivalent classes of time-frequency distributions (2, 3, 6) require very different kinds of computations: linear transforms include the Gabor transform (19); quadratic transforms [known as Cohen's class (2, 6)] include the Wigner-Ville (1) and Choi-Williams (20) distributions; and transforms of higher order in the signal include multitapered spectral estimates (21-24), the Hilbert-Huang distribution (25, 26), and the reassigned spectrograms (27-32) whose properties are the subject of this article.
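Since reassigned spectrograms are the focus of what follows, a minimal numerical sketch may help fix ideas. The function below is our own illustration, not code from this article: it uses a Gaussian analysis window and the standard auxiliary-window reassignment formulas (Auger-Flandrin style), in which two extra short-time Fourier transforms, taken with a time-weighted window and with the window's derivative, relocate each bin's energy from the center of its time-frequency cell to the signal's instantaneous time and frequency. All names and parameter choices are illustrative assumptions.

```python
import numpy as np

def reassigned_spectrogram(x, fs, n_win=256, hop=64):
    """Sketch of time-frequency reassignment with a Gaussian window.

    Returns the STFT magnitude together with reassigned time (s) and
    frequency (Hz) coordinates for every time-frequency bin.
    """
    # Gaussian window, its time-weighted version, and its derivative.
    t = np.arange(n_win) - n_win / 2
    sigma = n_win / 8.0
    h = np.exp(-0.5 * (t / sigma) ** 2)   # analysis window
    th = t * h                            # time-weighted window
    dh = -(t / sigma ** 2) * h            # derivative of the window

    # Frame the signal and take the three short-time transforms.
    starts = np.arange(0, len(x) - n_win, hop)
    frames = np.stack([x[s:s + n_win] for s in starts])
    S_h = np.fft.rfft(frames * h, axis=1)
    S_th = np.fft.rfft(frames * th, axis=1)
    S_dh = np.fft.rfft(frames * dh, axis=1)

    mag2 = np.abs(S_h) ** 2 + 1e-12       # guard against division by zero

    # Nominal bin centers: frame-center times and FFT bin frequencies.
    t0 = (starts + n_win / 2)[:, None] / fs
    f0 = np.fft.rfftfreq(n_win, 1.0 / fs)[None, :]

    # Reassignment operators: shift each bin to the local phase-derived
    # time and frequency (signs match the STFT convention used above).
    t_hat = t0 + np.real(S_th * np.conj(S_h)) / mag2 / fs
    f_hat = f0 - np.imag(S_dh * np.conj(S_h)) / mag2 * fs / (2 * np.pi)
    return np.abs(S_h), t_hat, f_hat
```

For a pure tone, the reassigned frequencies at the peak bins land on the tone's true frequency rather than the nearest FFT bin, illustrating how reassignment localizes simple signals more sharply than the bin spacing of the underlying spectrogram.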
Results and Discussion

The auditory nerve preserves information about phases of oscillations much more accurately than info...