2002
DOI: 10.1201/b19597
Real Sound Synthesis for Interactive Applications

Cited by 190 publications
(94 citation statements)
References 0 publications
“…In general, the sound design of these approaches can be distinguished between sample-based implementations using recordings of real-life footsteps and synthesized sounds. These are further classified into models aiming to simulate real-world walking sounds on different ground textures [11,12,18,19,30] and the design of abstract sounds for the purpose of providing additional information about gait characteristics to the recipient [21,22]. Bresin et al. [7,8] analyzed the impact of acoustically augmented footsteps on walkers and investigated how far their emotional state was represented by the audio recordings of their walking movements.…”
Section: Sonification Of Force Sensor Data
confidence: 99%
“…On the other end of the spectrum, data-driven methods focus on using, or reproducing characteristics of, recorded data when the underlying physical systems are either too expensive to simulate or methods simply cannot produce convincing sound [Cook 2002; Picard et al. 2009; Peltola et al. 2007]. These techniques often utilize general synthesis algorithms, such as inverse-FFT synthesis [Rodet et al. 1992; Marelli et al. 2010], and additive and subtractive synthesis [Serra and Smith 1990].…”
Section: Related Work
confidence: 99%
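The additive synthesis named in the statement above is the simplest of these general algorithms: a sound is built as a sum of sinusoidal partials. A minimal sketch, assuming NumPy is available; the partial frequencies and amplitudes below are illustrative and not taken from any of the cited systems:

```python
import numpy as np

def additive_synth(freqs, amps, duration=1.0, sr=44100):
    """Additive synthesis: sum sinusoidal partials, then normalize."""
    t = np.arange(int(duration * sr)) / sr
    signal = np.zeros_like(t)
    for f, a in zip(freqs, amps):
        signal += a * np.sin(2 * np.pi * f * t)
    # normalize so the peak amplitude is 1.0, avoiding clipping on playback
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal

# a rough bell-like tone from four inharmonic partials (hypothetical values)
tone = additive_synth([220.0, 523.0, 831.0, 1370.0], [1.0, 0.6, 0.4, 0.25])
```

Subtractive synthesis works in the opposite direction, starting from a spectrally rich source (e.g. noise or a pulse train) and filtering components away.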
“…A goal of this paper is the development of controls for independent manipulation of pitch and timbre of a sound source using a cortical sound representation that was introduced in [1] and used for assessment of speech intelligibility and for prediction of the cortical response to an arbitrary stimulus. We simulate the multiscale audio representation and processing believed to occur in the primate brain (supported by recent psychophysiological papers [2]). While our sound decomposition is partially similar to existing pitch and timbre separation and sound morphing algorithms (in particular, the MFCC decomposition algorithm in [3], the sinusoid-plus-noise model and effects generated with it in [4], and parametric source models using LPC and physics-based synthesis in [5]), the neuromorphic framework allows us to view the processing from a different perspective, supply supporting evidence to justify the procedure performed, tailor it to the way the human nervous system processes auditory information, and extend the approach to include decomposition in the time domain in addition to frequency.…”
Section: Introduction
confidence: 99%
“…In musical instrument synthesis, synthesizers often use sampled sounds that have to be pitch-shifted to produce different notes [5], or combined to generate a new instrument with a perceptual timbre lying in between two known instruments. Development of advanced auditory user interfaces requires mapping of arbitrary data streams into auditory percepts, and is commonly called "sonification" [6].…”
Section: Introduction
confidence: 99%
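The pitch-shifting of sampled sounds mentioned above is done, in its most naive form, by resampling: playing the sample back faster raises the pitch but also shortens it, which is why practical samplers pair resampling with time-stretching. A minimal sketch, assuming NumPy; the function name and parameters are illustrative, not from the cited paper:

```python
import numpy as np

def pitch_shift_resample(sample, semitones):
    """Naive pitch shift by linear-interpolation resampling.

    Raising the pitch by n semitones reads the sample 2**(n/12) times
    faster, so the output is correspondingly shorter.
    """
    ratio = 2.0 ** (semitones / 12.0)
    n_out = int(len(sample) / ratio)
    read_positions = np.arange(n_out) * ratio
    return np.interp(read_positions, np.arange(len(sample)), sample)

sr = 44100
note = np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)  # one second of A4
note_up = pitch_shift_resample(note, 12)               # one octave up, half as long
```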