2014
DOI: 10.1007/s12193-013-0140-1

Annotation and interpretation of prosodic data in the HuComTech corpus for multimodal user interfaces

Cited by 6 publications (2 citation statements)
References 9 publications
“…Audio was annotated for the classes of intonation phrase, emotions and discourse, in addition to phonetic events within speech, such as silence, hesitation, restart, non-phonemic sounds, and noise. Automatic methods were applied to the annotation of the phonetic features of the sound track: in addition to marking the absolute values for F0 and intensity, a special algorithm (Szekrényes, 2014, 2015) was used to annotate stylized intonation and intensity contours of speech in order to capture the contribution of speech prosody to the multimodal expression of the pragmatic content of the interaction. The pragmatic levels of annotation included the classes of turn management, attention, agreement, deixis and information structure.…”
Section: The HuComTech Corpus: Its Structure and Annotation Scheme
Confidence: 99%
“…And this is where one meets the challenge: how can we capture those data that are significant for our perception and disregard those that are not? For the HuComTech project an annotation tool, ProsoTool, was developed [12] to produce the automatic prosody annotation of the corpus, with the aim of modeling human perception. As for pitch annotation, five levels of fundamental frequency were assigned to the pitch range of the given speaker, named, from the deepest tone space to the highest tone space, as L2, L1, M, H1, H2.…”
Section: Prosody: Pitch Movement (Intonation)
Confidence: 99%
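The five-level labeling described above (L2, L1, M, H1, H2, assigned within a given speaker's pitch range) can be illustrated with a minimal sketch. This is not a reproduction of ProsoTool's actual algorithm (Szekrényes, 2014, 2015); it simply assumes, for illustration, that the speaker's F0 range is split into five equal bands on a semitone scale and each F0 value is labeled by the band it falls into.

```python
import math

# The five tone spaces named in the text, from deepest to highest.
LEVELS = ["L2", "L1", "M", "H1", "H2"]

def semitones(f0_hz, ref_hz=50.0):
    """Convert a frequency in Hz to semitones above a fixed reference."""
    return 12.0 * math.log2(f0_hz / ref_hz)

def label_f0(f0_hz, speaker_min_hz, speaker_max_hz):
    """Assign one of the five level labels to an F0 value, given the
    speaker's observed pitch range. Equal-width semitone bands are an
    assumption of this sketch, not ProsoTool's published method."""
    lo = semitones(speaker_min_hz)
    hi = semitones(speaker_max_hz)
    pos = (semitones(f0_hz) - lo) / (hi - lo)   # 0.0 .. 1.0 within the range
    band = min(4, max(0, int(pos * 5)))          # clamp into the 5 bands
    return LEVELS[band]

# Example: a speaker whose F0 spans 80-300 Hz
print(label_f0(80, 80, 300))    # bottom of the range -> "L2"
print(label_f0(150, 80, 300))   # mid-range -> "M"
print(label_f0(300, 80, 300))   # top of the range -> "H2"
```

Normalizing per speaker, as the quoted passage describes, is what makes the labels comparable across speakers with very different absolute pitch ranges.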