2014 | DOI: 10.1121/1.4862880
Co-registration of speech production datasets from electromagnetic articulography and real-time magnetic resonance imaging

Abstract: This paper describes a spatio-temporal registration approach for speech articulation data obtained from electromagnetic articulography (EMA) and real-time magnetic resonance imaging (rtMRI), motivated by the potential for combining the complementary advantages of both types of data. The registration method is validated on EMA and rtMRI datasets obtained at different times, but using the same stimuli. The aligned corpus offers the advantages of high temporal resolution (from EMA) and a complete mid-sagittal view (from rtMRI). …
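The abstract names the two ingredients of the approach, temporal and spatial registration, without carrying the details here. As a purely illustrative sketch of what such a pipeline can look like, the Python below aligns two streams in time with dynamic time warping (DTW) and in space with Procrustes analysis; the feature sequences, landmark sets, and function names are assumptions for illustration, not the paper's actual algorithm.

```python
# Illustrative two-stage co-registration sketch: DTW for time, Procrustes for
# space. All data below is synthetic; this does not reproduce the paper's method.
import numpy as np
from scipy.spatial import procrustes


def dtw_path(x, y):
    """Dynamic time warping between two 1-D feature sequences.

    Returns the optimal warping path as (i, j) index pairs, mapping the
    high-rate stream (e.g., EMA) onto the lower-rate one (e.g., rtMRI frames).
    """
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    # Backtrack from the end of both sequences to recover the path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]


# --- Temporal stage: align parallel feature contours (synthetic stand-ins
# for, e.g., acoustic energy extracted from each recording session).
rng = np.random.default_rng(0)
t_ema = np.linspace(0, 1, 200)                      # EMA: high frame rate
t_mri = np.linspace(0, 1, 40)                       # rtMRI: lower frame rate
ema_feat = np.sin(2 * np.pi * 3 * t_ema)
mri_feat = np.sin(2 * np.pi * 3 * (t_mri ** 1.1))   # slightly warped timing
path = dtw_path(ema_feat, mri_feat)

# --- Spatial stage: Procrustes alignment of matched 2-D landmarks (standing
# in for EMA sensor positions vs. the same anatomical points traced in the
# mid-sagittal MRI plane).
ema_pts = rng.normal(size=(6, 2))
angle = 0.3
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
mri_pts = 1.5 * ema_pts @ R.T + np.array([2.0, -1.0])  # rotated, scaled, shifted
aligned_mri, aligned_ema, disparity = procrustes(mri_pts, ema_pts)
print(f"{len(path)} warped frame pairs; spatial disparity after Procrustes: {disparity:.2e}")
```

A real pipeline would replace the synthetic contours with features extracted from the parallel recordings and the synthetic landmarks with actual EMA sensor coordinates and their traced counterparts in the mid-sagittal MRI frames.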

Cited by 14 publications (7 citation statements) | References 9 publications (12 reference statements)
“…Electromagnetic articulography provides only a spatially sparse representation of tongue movements. Co-registration methods for EMA and real-time magnetic resonance imaging (rtMRI) data (Kim et al. 2014) provide richer spatio-temporal data with which to animate tongue movements and build a better 3D tongue model. The proposed method could be applied to USC-TIMIT (Narayanan et al. 2014), an extensive database of multimodal (EMA, rtMRI, acoustics) speech production data.…”
Section: Discussion (mentioning)
Confidence: 99%
“…Multimodality: data provided by different modalities that concern similar phenomena, or that provide complementary information, might benefit from joint analysis. For example, several technologies that support speech production studies (e.g., EMA, Kim et al. 2014, and ultrasound, Laprie et al. 2014) are used in combination with MRI (Scott et al. 2014). Regardless of how the different data are analysed, if their individual contributions to the understanding of specific phenomena could be gathered into joint representations, this might motivate a generalization of multimodal studies and an easier interpretation of the data.…”
Section: Challenges (mentioning)
Confidence: 99%
“…In addition, tagged-MRI (Parthasarathy et al. 2007) allows us to observe internal tissue-point motion, thereby refining our understanding of the role of internal muscles during speech. Further, recent advances in various MRI methods have enabled new image and motion analyses, such as segmentation of the tongue (Harandi et al. 2014; Lee et al. 2014) and internal muscles (Ibragimov et al. 2015), internal motion tracking (Parthasarathy et al. 2007), motion clustering (Woo et al. 2014), and registration (Woo et al. 2015c; Kim et al. 2014), for various applications.…”
Section: Introduction (mentioning)
Confidence: 99%