Extracting moods from pictures and sounds: towards truly personalized TV
Hanjalic (2006)
DOI: 10.1109/msp.2006.1621452

Cited by 213 publications (151 citation statements). References 19 publications.
“…However, such detectors are available only for a small number of concepts and often provide low accuracy rates. While concepts such as 'water', 'sky', 'cars', 'faces' or 'outdoors' are detected relatively well, concepts such as 'entertainment', or aspects such as preferences or moods (Hanjalic, 2006), are far from being correctly identified. To cope with the challenges of automatic concept detection, the multimedia communities are currently establishing concept lexicons (Naphade et al., 2006; Snoek, Worring, van Gemert, Geusebroek & Smeulders, 2006), focusing on the concepts that are feasible for automatic detection (Hauptmann et al., 2007; Snoek, Worring, Geusebroek, Koelma, Seinstra & Smeulders, 2006; Yang & Hauptmann, 2006; Deselaers et al., 2004; Jiang et al., 2007).…”
Section: Features and Descriptors
confidence: 99%
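The excerpt above distinguishes concepts that lend themselves to automatic detection ('water', 'sky', 'faces') from those that do not ('entertainment', preferences, moods). As a minimal sketch of why, assuming scikit-learn and a set of labeled training images (the feature choice and the helper names are illustrative, not from any cited system): a concept detector learns a mapping from low-level image statistics to a binary concept label, which can only succeed when the concept correlates with those statistics.

    import numpy as np
    from sklearn.svm import SVC

    def color_histogram(image, bins=8):
        """Low-level feature: a joint RGB histogram, flattened to a vector."""
        hist, _ = np.histogramdd(
            image.reshape(-1, 3), bins=(bins, bins, bins),
            range=((0, 256), (0, 256), (0, 256)))
        return hist.ravel() / hist.sum()

    def train_concept_detector(images, labels):
        """Fit a binary detector; labels[i] is 1 if the concept is present."""
        X = np.stack([color_histogram(img) for img in images])
        return SVC(probability=True).fit(X, labels)

A concept like 'sky' is largely a matter of color statistics, so such a detector can work; 'entertainment' or a mood has no comparably direct signature in the features, which is one reading of the low accuracy rates the excerpt mentions.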
“…Such an answer is difficult to obtain, because retrieval quality is a problem that goes beyond the choice of descriptors. However, we can say that retrieval based on low-level features is already mature (Flickner et al., 1995; Rehatschek et al., 2004; Smith & Chang, 1996; Wactlar et al., 1996), while inferring high-level features is still a challenging task (Gevers & Smeulders, 2004; Hanjalic, 2006; Huijbregts et al., 2007; Smeulders et al., 2000; Xiong et al., 2006). As high-level feature extraction depends, to some extent, on low-level features, we expect that the use of low-level features will continue to grow (Burghouts & Geusebroek, 2009).…”
Section: Features and Descriptors
confidence: 99%
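The excerpt above calls low-level retrieval mature while inferring high-level features remains challenging. One way to see the asymmetry, sketched below on the assumption that per-image feature vectors (e.g. histograms) have already been computed: low-level retrieval reduces to a distance ranking in feature space, whereas a high-level concept or mood admits no such direct metric.

    import numpy as np

    def retrieve(query_features, database_features, k=10):
        """Rank database items by L1 distance to the query feature vector."""
        dists = np.abs(database_features - query_features).sum(axis=1)
        return np.argsort(dists)[:k]  # indices of the k nearest items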
“…While CBIR systems are conventionally designed to recognize objects and scenes such as plants, animals and people, an Emotional Semantic Image Retrieval (ESIR) [17] system aims at incorporating emotional responses to enable queries like "beautiful flowers", "lovely dogs" or "happy faces". By analogy with the semantic gap, which reflects the limitations of image recognition, the emotional gap can be defined as "the lack of coincidence between the measurable signal properties, commonly referred to as features, and the expected affective state in which the user is brought by perceiving the signal" [6].…”
Section: Introduction
confidence: 99%
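The quoted definition places the emotional gap between measurable features and the affective state they induce in the user. A hedged sketch of one way to attack it, loosely in the spirit of the arousal modeling in Hanjalic's work (the equal weights and the smoothing window below are illustrative assumptions, not values from the paper): combine low-level signal properties such as per-frame motion activity and audio energy into a smoothed time curve that serves as a proxy for expected viewer arousal.

    import numpy as np

    def arousal_curve(motion_activity, audio_energy, window=25):
        """Combine per-frame low-level features into a smoothed arousal proxy."""
        def normalize(x):
            # Map each feature to [0, 1] so the weights are comparable.
            x = np.asarray(x, dtype=float)
            span = x.max() - x.min()
            return (x - x.min()) / span if span > 0 else np.zeros_like(x)

        combined = 0.5 * normalize(motion_activity) + 0.5 * normalize(audio_energy)
        # Temporal smoothing: affect changes slowly relative to the frame rate.
        kernel = np.ones(window) / window
        return np.convolve(combined, kernel, mode="same")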
“…Intensive research efforts in the field of multimedia content analysis over the past 15 years have resulted in an abundance of theoretical and algorithmic solutions for extracting content-related information from audiovisual data [1]. However, due to the inscrutable nature of human emotions and the seemingly broad affective gap between low-level features and emotions, video affective content analysis has seldom been addressed [2].…”
Section: Introduction
confidence: 99%