2019
DOI: 10.1145/3232233

AttentiveVideo

Abstract: Understanding a target audience's emotional responses to a video advertisement is crucial to evaluating the advertisement's effectiveness. However, traditional methods for collecting such information are slow, expensive, and coarse-grained. We propose AttentiveVideo, a scalable intelligent mobile interface with corresponding inference algorithms to monitor and quantify the effects of mobile video advertising in real time. Without requiring additional sensors, AttentiveVideo employs a combination of implicit phot…

Cited by 7 publications (5 citation statements)
References 50 publications
“…This was done given that Circumplex (continuous) and SAM (discrete) are the most widely used, and have been shown to exhibit good usability. Fourth, we restricted our work to mobile video trailers from the MAHNOB database [82,97], and did not test other types of content (e.g., MOOC videos [109] or advertisements [78]). While MAHNOB is widely used and contains validated emotion annotation labels, it does limit the…”
[Extraction spilled the citing paper's CHI 2020 page header and its Algorithm 1 (Annotation Fusion) box into this quote: Input P ∈ ℝ^(I×J); for j = 1 to J (annotation samples), for i = 1 to I (participants), calculate D_j of P_ij using Eq. …]
Section: Limitations and Future Work
mentioning, confidence: 99%
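The spilled pseudocode is too truncated to recover its defining equation, but its loop structure is clear: given an annotation matrix P ∈ ℝ^(I×J) with I participants and J annotation samples, compute one statistic D_j per sample from that sample's column of P. A minimal Python sketch of that skeleton follows; the function name `annotation_fusion` and the use of the per-sample standard deviation for D_j are assumptions made purely for illustration, since the actual "Eq." is elided in the source.

```python
import numpy as np

def annotation_fusion(P: np.ndarray) -> np.ndarray:
    """Skeleton of the 'Annotation Fusion' loop spilled into the quote above.

    P: (I, J) array of annotations, I participants x J annotation samples.
    Returns D with one value per annotation sample. The defining equation
    is elided in the source ("using Eq. ..."); the per-sample standard
    deviation across participants is a placeholder, not the cited method.
    """
    n_participants, n_samples = P.shape
    D = np.empty(n_samples)
    for j in range(n_samples):          # for j = 1 to J in the pseudocode
        # Placeholder for the elided equation: spread of the I
        # participants' annotations at sample j (the inner "for i"
        # loop is folded into this column-wise operation).
        D[j] = np.std(P[:, j])
    return D

# Usage: 5 participants annotating 100 samples with random values.
P = np.random.rand(5, 100)
D = annotation_fusion(P)
```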
“…Our work attempts to tie together multiple research areas: small mobile form factor, mobility context, capturing emotion experience, and designing for divided attention. One can ask: why not automatically sense behavioral signals (e.g., facial emotional expressions [78]), given that smartphones have front-facing cameras and would require no annotation from users at all? While scientists generally agree that facial movements convey a range of information that serves to express emotional states, using facial expressions as the sole indicator of emotion is misleading.…”
Section: Designing for Momentary Self-reports While Mobile
mentioning, confidence: 99%
“…There is an increasing amount of research on supporting advertisement with computational methods, including modeling audiences' visual interests [64], attention flow [49], and emotional responses to mobile ads [51], and understanding image and video ads [26,62,27]. Beyond commercial ads, VisiBlends introduced techniques to combine objects based on their semantic visuals and constraints in graphical design, which can be used to convey a marketing message [15].…”
Section: Design Understanding
mentioning, confidence: 99%