Interspeech 2011
DOI: 10.21437/interspeech.2011-819
ELAN - aspects of interoperability and functionality

Cited by 42 publications (45 citation statements)
References 2 publications
“…Samples were recorded using an H1n Zoom Handy Recorder for interviews in person and using Zoom for virtual assessments. Audio files of each discourse sample were imported and transcribed in the EUDICO Language Annotator (ELAN; Sloetjes & Wittenburg, 2008). Recordings were fully transcribed orthographically and included all verbal behaviours such as fillers.…”
Section: Methods (mentioning, confidence: 99%)
“…The corpus consisted of pairs of acquaintances holding a casual conversation for 1 h while being recorded. Questions and responses, social actions of questions (e.g., information requests, which ask for new information of factual or specific nature), question types (polar or content, as well as types of polar questions), and facial signals (like eyebrow frowns and raises) of the speakers in the corpus were manually transcribed by Dutch speakers using ELAN (5.5; Sloetjes & Wittenburg, 2008; for more details on the corpus conventions, see Nota et al., 2021, 2022; Trujillo & Holler, 2021). The transcription of questions and responses largely followed the coding scheme of Stivers and Enfield (2010), with additional rules to account for the complexity of the corpus data.…”
Section: Methods (mentioning, confidence: 99%)
“…After that, all visual signals started before or at the onset of the verbal utterance (determined by the timing in the corpus), and had a gradual fade in and fade out that was largely based on the original fade lengths of those signals. Facial signal fades were coded in ELAN (5.5; Sloetjes & Wittenburg, 2008) from the first evidence of movement until the movement peak, or from the movement peak until the last evidence of movement. Fades under two frames were changed to 80 ms, to make the gradual build-up of the visual signals look more natural.…”
Section: Design (mentioning, confidence: 99%)
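
The excerpt above describes coding fade intervals on an ELAN tier and clamping any fade shorter than two video frames to 80 ms. The following is a minimal Python sketch of how such fade annotations could be read from an ELAN .eaf file (an XML format) and clamped; the tier name "FacialSignalFades", the file name, and the 25 fps frame rate (where two frames equal 80 ms) are illustrative assumptions, not details taken from the cited studies.

# Hypothetical sketch: read aligned fade annotations from an ELAN .eaf file
# and extend any fade shorter than 80 ms (two frames at an assumed 25 fps)
# to exactly 80 ms, mirroring the rule described in the excerpt above.
import xml.etree.ElementTree as ET

MIN_FADE_MS = 80  # two frames at the assumed 25 fps frame rate

def read_fades(eaf_path, tier_id="FacialSignalFades"):
    """Return (start_ms, end_ms, label) tuples for aligned annotations on one tier."""
    root = ET.parse(eaf_path).getroot()

    # Map TIME_SLOT ids to their millisecond values.
    slots = {
        ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE"))
        for ts in root.find("TIME_ORDER")
        if ts.get("TIME_VALUE") is not None
    }

    fades = []
    for tier in root.findall("TIER"):
        if tier.get("TIER_ID") != tier_id:
            continue
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            start = slots[ann.get("TIME_SLOT_REF1")]
            end = slots[ann.get("TIME_SLOT_REF2")]
            label = (ann.findtext("ANNOTATION_VALUE") or "").strip()
            fades.append((start, end, label))
    return fades

def clamp_short_fades(fades, min_ms=MIN_FADE_MS):
    """Extend any fade shorter than min_ms to min_ms, keeping its onset."""
    return [
        (start, start + max(end - start, min_ms), label)
        for start, end, label in fades
    ]

if __name__ == "__main__":
    fades = read_fades("stimulus_01.eaf")  # assumed file name
    print(clamp_short_fades(fades))

The same tiers can equally be read with a dedicated library such as pympi-ling; the plain-XML version is shown here only to make the EAF structure (TIME_ORDER slots referenced by ALIGNABLE_ANNOTATION elements) explicit.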
“…Manual video annotation, such as using the ELAN tool [55], has long been in existence, but recent years have also seen automatic methods that could be used for scalable evaluation based on video. EgoScanning [28] processes first-person (egocentric) videos to detect important passages and adapts playback speed accordingly.…”
Section: Data-driven Evaluation in HCI and Visualization (mentioning, confidence: 99%)