2022
DOI: 10.48550/arxiv.2204.02905
Preprint
EMMT: A simultaneous eye-tracking, 4-electrode EEG and audio corpus for multi-modal reading and translation scenarios

Cited by 2 publications (3 citation statements) | References 0 publications
“…In this direction, Bhattacharya et al. (2022) conducted an experiment on human participants with translation enhanced by the image modality, which parallels our experiment with language modelling using the same modality. Finally, some researchers (Hladká et al., 2009, 2011) have utilized the Games With a Purpose methodology (Von Ahn and Dabbish, 2008) to frame tasks that are difficult for computers but relatively easy for humans as games.…”
Section: Introduction
confidence: 99%
“…We analyze the Eyetracked Multi-Modal Translation (EMMT) corpus (Bhattacharya et al., 2022). It consists of observations from a set of psycholinguistic experiments involving translation under multi-modal settings with a number of Czech native speakers with an advanced level of English proficiency.…”
Section: Introduction
confidence: 99%
“…The experimental design used during data collection combined the sight translation, reading aloud and thinking aloud (Tirkkonen-Condit, 1990) protocols. Bhattacharya et al. (2022) intend to compare participants' behavioural data when they read out a textual stimulus (sentence) or looked at a combination of text and visual (image) stimuli versus when they actually translated the textual stimulus. The experiment thus followed four stages, one per stimulus: Stage 1 (READ: reading the source English sentence), Stage 2 (TRANSLATE: translating the English sentence into Czech), Stage 3 (SEE: observing the corresponding image) and Stage 4 (UPDATE: producing the final translation of the English sentence given the image).…”
Section: Introduction
confidence: 99%
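The four-stage trial protocol quoted above can be sketched as a small data structure. This is only an illustrative model, not code from the EMMT paper: the stage names follow the quoted description, while the function name and stimulus representation are assumptions made here for clarity.

```python
from enum import Enum

class Stage(Enum):
    """The four EMMT trial stages, in the order described in the paper."""
    READ = "read the source English sentence"
    TRANSLATE = "translate the English sentence into Czech"
    SEE = "observe the corresponding image"
    UPDATE = "produce the final translation given the image"

def run_trial(sentence: str, image_id: str) -> list[tuple[str, str]]:
    """Return the ordered (stage, stimulus) pairs for one trial.

    Hypothetical helper: pairs each stage with the stimulus the
    participant sees at that point (text, text, image, text+image).
    """
    stimuli = {
        Stage.READ: sentence,
        Stage.TRANSLATE: sentence,
        Stage.SEE: image_id,
        Stage.UPDATE: f"{sentence} + {image_id}",
    }
    # Enum iteration preserves definition order, so stages come out
    # as READ, TRANSLATE, SEE, UPDATE.
    return [(stage.name, stimuli[stage]) for stage in Stage]

trial = run_trial("A dog runs on the beach.", "img_042")
print([name for name, _ in trial])
# → ['READ', 'TRANSLATE', 'SEE', 'UPDATE']
```

Representing the stages as an `Enum` makes the fixed ordering explicit, which matters because the behavioural comparisons described above (reading vs. translating, text-only vs. text+image) depend on which stimuli the participant has already seen.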