IISA 2014, the 5th International Conference on Information, Intelligence, Systems and Applications
DOI: 10.1109/iisa.2014.6878749


Citing publications span 2016–2023.

Cited by 8 publications (12 citation statements: 0 supporting, 12 mentioning, 0 contrasting). References 12 publications.
“…Through the corresponding communication channels, i.e., audio and visual, we communicate with other people, express our thoughts and ideas, entertain ourselves and others, and perceive knowledge of our surroundings. Along with these, we also discern, transmit, and elicit emotions [3][4][5]. Focusing on sound, it is reported that there are three types of audio stimuli: (i) speech, (ii) music, and (iii) non-verbal and non-musical sounds, termed general sounds, everyday sounds, or sound events (e.g., environmental sound events such as a car passing by, a dog barking, etc.)…”
Section: Introduction (mentioning)
Confidence: 94%
“…We utilized two datasets with emotionally annotated sound events: one without spatial information of the source, i.e., the IADS [19], and the Binaural Emotionally Annotated Digital Sounds (BEADS), which consists of binaurally rendered (i.e., with spatial information) versions of the sound events present in IADS [20]. Both of these datasets employ the widely adopted Arousal-Valence (AV) space, with clustering according to Self-Assessment Manikin (SAM) values [21].…”
Section: Papers (mentioning)
Confidence: 99%