2018
DOI: 10.3390/sym10090387

Smart Doll: Emotion Recognition Using Embedded Deep Learning

Abstract: Computer vision and deep learning are clearly demonstrating a capability to create engaging cognitive applications and services. However, these applications have been mostly confined to powerful Graphic Processing Units (GPUs) or the cloud due to their demanding computational requirements. Cloud processing has obvious bandwidth, energy consumption and privacy issues. The Eyes of Things (EoT) is a powerful and versatile embedded computer vision platform which allows the user to develop artificial vision and dee…

Cited by 11 publications (12 citation statements)
References 19 publications
“…In [70], a motivational interview provided by a social robot was used to elicit qualitative data from participants, including their assessment of the robot's usability during the interaction and its effect on their motivation. In [71], a smart doll with advanced vision capabilities achieved a high level of engagement among children of all ages who interacted with it, demonstrating that such capabilities can lead to novel, engaging products. A comprehensive environment was offered to children (1-5 years old) and early-education teachers; it includes a set of robotic assistants designed as stuffed toys, a smartphone application, a knowledge base of educational activities and an expert system to capture children's attention naturally [72].…”
Section: Motivation-based Educational Robot
mentioning confidence: 99%
“…[82] Custom colour ball dataset (Primary): composed of approximately 500 photos that have been gathered and labelled. [27], [94], [75], [9], [84], [31], [45], [46], [33], [41], [42], [43], [50], [38], [39], [53] Extended Cohn-Kanade (CK+) (Secondary): a database of facial expressions that contains 327 annotated video sequences from 123 participants in eight different expression states. The dataset's subjects are diverse in age, gender and ethnic origin, making it one of the most preferred datasets for FER research.…”
Section: Name of Dataset, Type, Description
mentioning confidence: 99%
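
For readers unfamiliar with CK+, the excerpt's description maps onto a simple label-loading routine. The sketch below is a minimal, hypothetical example that assumes the standard CK+ distribution layout (an Emotion/ directory holding one *_emotion.txt file per labelled sequence, containing a single emotion code); the paths and the code-to-name mapping are assumptions about the dataset's packaging, not details from the cited paper.

```python
from pathlib import Path

# CK+ emotion codes as commonly documented (0 is neutral). This mapping
# and the directory layout below are assumptions about the standard
# CK+ distribution, not details taken from the cited paper.
CKPLUS_EMOTIONS = {
    0: "neutral", 1: "anger", 2: "contempt", 3: "disgust",
    4: "fear", 5: "happiness", 6: "sadness", 7: "surprise",
}

def load_ckplus_labels(root: str) -> dict:
    """Collect {(subject, sequence): emotion_name} from CK+'s Emotion/ tree.

    Only 327 sequences ship with an emotion label file, which is why the
    excerpt speaks of 327 annotated video sequences.
    """
    labels = {}
    for txt in Path(root, "Emotion").glob("S*/*/*_emotion.txt"):
        code = int(float(txt.read_text().strip()))  # file holds e.g. "3.0000000e+00"
        subject, sequence = txt.parts[-3], txt.parts[-2]
        labels[(subject, sequence)] = CKPLUS_EMOTIONS[code]
    return labels

if __name__ == "__main__":
    labels = load_ckplus_labels("CK+")  # hypothetical dataset root
    print(f"{len(labels)} labelled sequences")  # expected: 327
```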
“…However, there were 2 studies (Rosenstein and Oster, 1988; Brown et al., 2014) where it was unclear if the AUs were coded as defined in the FACS system, and 4 studies (Soussignan et al., 1999; Bezerra Alves et al., 2013; Kodra et al., 2013; Zacche Sa et al., 2015) that reported coding/defining the AUs in a manner that diverged from FACS. AUs were not identified in 4 studies in which automatic emotion-coding software was being developed/piloted to measure emotions in response to consumer products (Brown et al., 2014; Espinosa-Aranda et al., 2018; Gurbuz and Toga, 2018; Tussyadiah and Park, 2018). As suggested in the FACS Investigators' Guide (Ekman et al., 2002), it is possible to map AUs onto the basic emotion categories using a finite number of rules (Table 3).…”
Section: Relationship Between Action Units and Emotion Expression
mentioning confidence: 99%
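
The "finite number of rules" idea can be made concrete with a small lookup. The sketch below is illustrative only: the AU combinations are commonly cited prototype patterns for the six basic emotions, not the exact rule table (the excerpt's Table 3) from the FACS Investigators' Guide, and the matching threshold is an arbitrary assumption.

```python
# Illustrative prototype AU combinations for the six basic emotions.
# These are commonly cited patterns; the actual rules in Ekman et al.'s
# FACS Investigators' Guide are more nuanced.
EMOTION_RULES = {
    "happiness": {6, 12},          # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},       # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},    # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20}, # brow raisers/lowerer + upper lid raiser + lip stretcher
    "anger":     {4, 5, 7, 23},    # brow lowerer + lid raiser/tightener + lip tightener
    "disgust":   {9, 15, 16},      # nose wrinkler + lip corner/lower lip depressors
}

def classify_emotion(active_aus: set) -> str:
    """Return the emotion whose prototype AUs best overlap the coded AUs."""
    best, best_score = "neutral", 0.0
    for emotion, prototype in EMOTION_RULES.items():
        score = len(active_aus & prototype) / len(prototype)
        if score > best_score:
            best, best_score = emotion, score
    # Require most of a prototype to be present before committing (assumed cutoff).
    return best if best_score >= 0.66 else "neutral"

print(classify_emotion({6, 12}))        # happiness
print(classify_emotion({1, 2, 5, 26}))  # surprise
```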
“…A similar approach has been implemented by Espinosa-Aranda et al. in [14]. A convolutional neural network (CNN), a deep learning technique, is used for a real-life facial-emotion application based on the Eyes of Things (EoT) device, a computer vision platform that analyzes images locally and controls the surrounding environment accordingly.…”
Section: Literature Review
mentioning confidence: 99%
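
To ground the excerpt, a facial-emotion CNN of the kind described is typically a small stack of convolution and pooling layers over a cropped face image. The sketch below is a generic Keras example under assumed 48x48 grayscale inputs and seven emotion classes; it is not the network from [14], whose architecture and EoT deployment details are given in the paper itself.

```python
import tensorflow as tf

# A minimal facial-emotion CNN in the spirit of the excerpt. The input
# size (48x48 grayscale) and seven emotion classes are assumptions for
# this sketch; the actual network in [14] is defined by its authors.
def build_fer_cnn(num_classes: int = 7) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(48, 48, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),  # keeps the head small for embedded use
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fer_cnn()
model.summary()
```

The global-average-pooling head instead of large dense layers is a common choice when a model must fit the memory and compute budget of an embedded platform like EoT.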