The sound received at the ears is processed by humans using signal processing that separates the signal along the dimensions of intensity, pitch, and timbre. Conventional Fourier-based signal processing, while endowed with fast algorithms, cannot easily represent a signal along these attributes. In this paper we use a recently proposed cortical representation to represent and manipulate sound. We briefly overview algorithms for obtaining, manipulating, and inverting the cortical representation of a sound, and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are first used to create the sound of an instrument between a "guitar" and a "trumpet". Applications to creating maximally separable sounds in auditory user interfaces are discussed.

Partial support of ONR grant N000140110571 is gratefully acknowledged.

1. INTRODUCTION

When a natural source such as a human voice or a musical instrument produces a sound, the resulting acoustic wave is generated by a time-varying excitation pattern of a possibly time-varying channel, and the sound characteristics depend both on the excitation signal and on the production system. The production system (e.g., the human vocal tract, the guitar box, or the flute tube) has its own characteristic response; varying the excitation parameters produces a sound signal with different frequency components that still retains the perceptual characteristics unique to the production instrument (the identity of the person, or the type of instrument: piano, violin, etc.). When one is asked to characterize such a sound source using descriptions based on Fourier analysis, one discovers that concepts such as frequency and amplitude are insufficient to explain its characteristics. Human linguistic descriptions instead characterize the sound in terms of pitch and timbre. The perceived pitch of a sound is closely coupled with its harmonic structure.
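The excitation/production-system picture above can be sketched as a minimal source-filter synthesis: an impulse-train excitation (whose rate sets the pitch) is convolved with a fixed channel impulse response (which carries the "instrument identity"). The channel used here (a decaying 1 kHz sinusoid) and all parameter values are illustrative assumptions, not the paper's model.

```python
import numpy as np

def source_filter(f0, fs=8000, dur=0.5, decay=200.0):
    """Excitation (impulse train at pitch f0) convolved with a fixed
    'production system' impulse response."""
    n = int(fs * dur)
    excitation = np.zeros(n)
    excitation[::int(fs / f0)] = 1.0       # glottal-pulse-like impulse train
    t = np.arange(n) / fs
    # Hypothetical resonant channel: decaying sinusoid around 1 kHz
    channel = np.exp(-decay * t) * np.sin(2 * np.pi * 1000 * t)
    return np.convolve(excitation, channel)[:n]

# Two pitches through the same channel: the harmonic spacing changes,
# but the spectral envelope (the channel response) stays the same.
low = source_filter(110.0)
high = source_filter(220.0)
```

Varying `f0` changes the frequency components of the output while the channel, and hence the perceived source identity, is unchanged.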
The timbre of the sound, on the other hand, is broadly defined as everything other than the pitch, loudness, and spatial location of the sound. For example, two musical instruments playing the same note have the same pitch, but it is their different timbre that allows us to distinguish between them. Specifically, the spectral envelope in frequency and its variation in time are related to the timbre percept. Most conventional techniques of sound manipulation change the pitch and timbre simultaneously and cannot be used to assess the effects of the pitch and timbre dimensions independently. A goal of this paper is the development of controls for independent manipulation of the pitch and timbre of a sound source using a cortical sound representation that was introduced in [1] and used for the assessment of speech intelligibility and for the prediction of the cortical response to an arbitrary stimulus. We simulate the multiscale audio representation and processing believed to occur in the primate brain (supported by recent psychophysiological papers [2]), and while our sound decomposition is p...
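The pitch/timbre distinction drawn above can be illustrated directly: two tones built from the same harmonic series (same fundamental, hence same pitch) but with different spectral envelopes differ only in timbre. The two envelope shapes below are arbitrary illustrative choices, not the envelopes manipulated by the paper's algorithms.

```python
import numpy as np

fs, f0, dur = 8000, 220.0, 0.5
t = np.arange(int(fs * dur)) / fs

def tone(envelope):
    """Sum of the first 10 harmonics of f0, weighted by a spectral
    envelope: identical pitch, envelope-dependent timbre."""
    return sum(envelope(k) * np.sin(2 * np.pi * k * f0 * t)
               for k in range(1, 11))

bright = tone(lambda k: 1.0 / k)       # slow harmonic roll-off
dull = tone(lambda k: 1.0 / k ** 2)    # fast harmonic roll-off

# Both spectra peak at the same fundamental (220 Hz), so the pitch is
# shared; only the harmonic weights, i.e. the timbre, differ.
```

Changing the envelope function reshapes the spectrum without moving the harmonic locations, which is the kind of independent timbre control the paper develops in the cortical domain.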