Articulatory-to-acoustic mapping for inverse problem (1996)
DOI: 10.1016/0167-6393(96)00028-3

Cited by 8 publications (2 citation statements); references 8 publications.
“…Charpentier [7], Sorokin [8] and Ouni [9] use first-order polynomial interpolation, which they describe as local linear functions, or piece-wise linear functions in the case of Sorokin. Charpentier uses an interesting method in which the articulatory space is subdivided according to the curvature of the acoustic images along specific articulatory trajectories, the points of highest curvature defining reference points, and the rest of the articulatory space being interpolated using the Jacobian matrix around these reference points.…”
Section: Hypercuboid Structure
confidence: 99%
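The Jacobian-based local-linear scheme attributed to Charpentier above can be sketched as follows. This is a minimal illustration, not Charpentier's actual implementation: the two-dimensional articulatory space, the reference configurations, their acoustic images (here, two formant-like values), and the Jacobian matrices are all invented for the example. The mapping is approximated around the nearest reference point by a first-order expansion, f(x) ≈ f(x_r) + J_r (x − x_r).

```python
import numpy as np

# Hypothetical reference points: articulatory configurations x with known
# acoustic images f(x) and local Jacobians J (all numbers illustrative).
refs = [
    {"x": np.array([0.0, 0.0]),
     "f": np.array([500.0, 1500.0]),      # e.g. two formant frequencies (Hz)
     "J": np.array([[200.0, -50.0],
                    [-80.0, 300.0]])},
    {"x": np.array([1.0, 0.5]),
     "f": np.array([700.0, 1200.0]),
     "J": np.array([[150.0, -30.0],
                    [-60.0, 250.0]])},
]

def acoustic_estimate(x):
    """Piece-wise linear (first-order) approximation of the articulatory-to-
    acoustic map: pick the nearest reference point and extrapolate with its
    Jacobian, f(x) ~ f(x_r) + J_r @ (x - x_r)."""
    r = min(refs, key=lambda ref: np.linalg.norm(x - ref["x"]))
    return r["f"] + r["J"] @ (x - r["x"])

# Query a configuration close to the first reference point.
print(acoustic_estimate(np.array([0.1, 0.1])))  # → [ 515. 1522.]
```

In the full method the reference points would be placed where the acoustic images curve most sharply along articulatory trajectories, so that the linear pieces are short exactly where the mapping is least linear; the nearest-reference selection here stands in for that subdivision.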
“…Genetic algorithms that do not use a codebook have also been used [24]. VT outlines estimated by inversion for static vowels and fricatives have been compared against XRMB measurements of gold pellets placed on the tongue [18,25], and VT outlines estimated for static vowels have been compared against real VT shapes from x-ray images [26]. Simultaneously recorded articulatory and acoustic data that are publicly available include the XRMB speech production database from the University of Wisconsin, Madison [27], and the Edinburgh multi-channel articulatory (MOCHA) database.…”
Section: Introduction and Review of Previous Work
confidence: 99%