2015
DOI: 10.1007/s10579-015-9299-2
Multimodal corpus of multiparty conversations in L1 and L2 languages and findings obtained from it

Abstract: To investigate the differences in communicative activities by the same interlocutors in Japanese (their L1) and in English (their L2), an 8-h multimodal corpus of multiparty conversations was collected. Three subjects participated in each conversational group, and they had conversations on free-flowing and goal-oriented topics in Japanese and in English. Their utterances, eye gazes, and gestures were recorded with microphones, eye trackers, and video cameras. The utterances and eye gazes were manually annotated…

Cited by 10 publications (7 citation statements)
References 26 publications
“…A multimodal corpus created by Yamamoto et al [17] was used for comparing the effect of eye gaze on selection of the next speaker between native- and second-language conversations. Three subjects participated in a conversational group, sitting in a triangular formation around a table as shown in Figure 1.…”
Section: Multimodal Corpus
confidence: 99%
“…They argued that this was because video transmitting facial information and gestures helped the non-native pairs to negotiate a common ground, whereas it did not provide significant help for the native pairs. These observations suggest that eye gaze and visual information play more important roles in establishing mutual understanding in L2 conversations than in L1 conversations. To quantitatively and precisely analyze the difference in eye gaze between L1 and L2 conversations, Yamamoto et al [17] created a multimodal corpus of three-party conversations for two different conversation topics in L1 and L2. In this way, it was possible to compare the features of utterance, eye gaze, and body posture in L1 and L2 conversations conducted by the same interlocutors [17].…”
Section: Introduction
confidence: 99%
“…To quantitatively and precisely analyze the difference in eye gaze between L1 and L2 conversations, Yamamoto et al (2015) created a multimodal corpus of three-party conversations for two different conversation topics in L1 and L2. In this way, it was possible to compare the features of utterance, eye gaze, and body posture in L1 and L2 conversations conducted by the same interlocutors (Yamamoto et al 2015). To compare the features of eye gaze in L1 and L2 conversations, they used two metrics: (1) how long the speaker was gazed at by other participants during her or his utterance (listener’s gazing ratio) and (2) how long the speaker gazed at other participants during her or his utterance (speaker’s gazing ratio).…”
Section: Introduction
confidence: 99%
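The two gazing-ratio metrics quoted above can be read as simple interval computations over the corpus annotations. A minimal sketch follows; this is not the authors' code, and the data structures and function names are hypothetical. It assumes each utterance and each gaze event is annotated as a (start, end) time interval, and it does not merge overlapping gaze spans from multiple listeners, so it is an illustration of the definitions rather than a faithful reimplementation.

```python
# Hypothetical sketch of the two metrics described above:
#   listener's gazing ratio = time the speaker is gazed at by others
#                             during the utterance / utterance duration
#   speaker's gazing ratio  = time the speaker gazes at others
#                             during the utterance / utterance duration

def overlap(a, b):
    """Length of the overlap between two (start, end) intervals, in seconds."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def gazing_ratios(utterance, speaker, gaze_events):
    """Return (listener_ratio, speaker_ratio) for one utterance.

    utterance:   (start, end) interval of the utterance.
    speaker:     identifier of the person speaking.
    gaze_events: list of (gazer, target, (start, end)) tuples.
    Note: simultaneous gazes from several listeners are summed, not merged.
    """
    duration = utterance[1] - utterance[0]
    listener_time = sum(
        overlap(utterance, span)
        for gazer, target, span in gaze_events
        if target == speaker and gazer != speaker
    )
    speaker_time = sum(
        overlap(utterance, span)
        for gazer, target, span in gaze_events
        if gazer == speaker and target != speaker
    )
    return listener_time / duration, speaker_time / duration

# Example: a 4-second utterance by A; B gazes at A for 2 s, A gazes at C for 1 s.
events = [("B", "A", (0.0, 2.0)), ("A", "C", (1.0, 2.0))]
print(gazing_ratios((0.0, 4.0), "A", events))  # (0.5, 0.25)
```

Computing these ratios separately for the L1 and L2 sessions of the same group is what lets the corpus support the within-subject comparison the citing papers describe.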