2012 IEEE International Conference on Multimedia and Expo
DOI: 10.1109/icme.2012.75
A Synaesthetic Approach for Image Slideshow Generation

Cited by 2 publications (2 citation statements)
References 15 publications
“…They separately extracted hand-crafted features, learned emotion classifiers, and composited images and music based on the predicted emotions. Many methods follow this pipeline [34,48,53,61,86]. They (1) extracted more discriminative emotion features, such as low-level color [9,34,48,53,61] and mid-level principles-of-art [86] for image; (2) employed different emotion representation models, from categorical states [9,34,53,61] to dimensional space [48,86]; (3) correspondingly learned different classifiers, from Support Vector Machine [9], Naive Bayes, and Decision Tree [53] to Support Vector Regression [86]; and (4) used different composition strategies to match image and music, from emotion category comparison [9,34,53,61] to Euclidean distance [48,86].…”
Section: Related Work
confidence: 99%
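The matching step of the pipeline quoted above can be sketched as follows: map each modality into a shared dimensional emotion space (e.g. valence-arousal), then pair each image with the music track whose emotion point lies at the smallest Euclidean distance. The coordinates and names below are invented purely for illustration; the cited methods would obtain such values from trained regressors (e.g. Support Vector Regression).

```python
import math

# Hypothetical (valence, arousal) predictions for illustration only;
# in the cited pipeline these come from learned emotion regressors.
images = {"sunset": (0.7, 0.3), "storm": (-0.5, 0.8)}
music = {"calm_piano": (0.6, 0.2), "heavy_metal": (-0.4, 0.9)}

def match_music(image_va, music_library):
    """Return the track whose emotion point is nearest to the image's,
    measured by Euclidean distance in valence-arousal space."""
    return min(music_library,
               key=lambda name: math.dist(image_va, music_library[name]))

# Pair every image with its closest-emotion music track.
pairs = {img: match_music(va, music) for img, va in images.items()}
```

The categorical alternative mentioned in the quote would simply compare discrete emotion labels for equality instead of computing distances in a continuous space.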
“…Such emotion-based matching is essential for various applications [57], such as affective cross-modal retrieval, emotion-based multimedia slideshow, and emotion-aware recommendation systems. The early emotion-based matching methods mainly employ a shallow pipeline [9,34,48,53,61,86], i.e. extracting hand-crafted features and training matching classifiers (or training emotion classifiers for both modalities and then learning matching similarities).…”
Section: Introduction
confidence: 99%