2018 IEEE Third Ecuador Technical Chapters Meeting (ETCM)
DOI: 10.1109/etcm.2018.8580319

Actuation Confirmation and Negation via Facial-Identity and -Expression Recognition

Abstract: This paper presents the implementation of a facial-identity and -expression recognition mechanism that confirms or negates physical and/or computational actuations in an intelligent built-environment. Said mechanism is built via Google Brain's TensorFlow (as regards facial-identity recognition) and Google Cloud Platform's Cloud Vision API (as regards facial-gesture recognition); and it is integrated into the ongoing development of an intelligent built-environment framework, viz., Design-to-Robotic-Production &-…
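The paper itself does not reproduce source code. As a rough illustration of the expression-recognition half of the mechanism, the sketch below uses the google-cloud-vision Python client's face_detection call, whose per-face annotations include likelihoods for joy, sorrow, and anger. The mapping of those likelihoods to a "confirm"/"negate" verdict is a hypothetical stand-in for the paper's actual decision logic, and expression_verdict is an assumed name, not the authors'.

```python
# Minimal sketch (not the authors' code): derive a confirm/negate verdict
# for an actuation from facial-expression likelihoods returned by Google
# Cloud Vision. Assumes the google-cloud-vision Python client and
# application-default credentials; the confirm/negate rule is hypothetical.
from google.cloud import vision

LIKELY = {vision.Likelihood.LIKELY, vision.Likelihood.VERY_LIKELY}

def expression_verdict(image_path: str) -> str:
    """Return 'confirm', 'negate', or 'unknown' for the first detected face."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    faces = client.face_detection(image=image).face_annotations
    if not faces:
        return "unknown"
    face = faces[0]
    if face.joy_likelihood in LIKELY:      # smile -> confirm the actuation
        return "confirm"
    if face.anger_likelihood in LIKELY or face.sorrow_likelihood in LIKELY:
        return "negate"                    # frown -> negate the actuation
    return "unknown"

if __name__ == "__main__":
    print(expression_verdict("frame.jpg"))
```

In such a scheme the identity check (the TensorFlow half described in the abstract) would run first, so that only a recognized user's expression can confirm or negate an actuation.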

Cited by 7 publications (7 citation statements).
References 11 publications (14 reference statements).
“…Furthermore, said mechanism is one of several cloud-based mechanisms subsumable by the system that enhance HCI. For example, the authors have previously developed Machine Learning (ML)-based Human Activity Recognition [21] and Object and Facial-Identity and -Expression Recognition mechanisms [22], two mechanisms that may be implemented into the present system in order to increase context-awareness and interaction pertinence.…”
Section: Results (mentioning)
confidence: 99%
“…The mechanical consists of the physical parts that instantiate the reconfiguration modes particular to each user-type. Each of the computational mechanisms inherits and/or builds upon previous developments via Application Programming Interfaces (APIs) by the authors (with respect to facial- [7], object- [8], and voice-recognition [9]). That is to say:…”
Section: Methodology and Implementation (mentioning)
confidence: 99%
“…Said nodes continuously send and receive information with microcontrollers (MCUs) embedded in the interior space that serve a variety of other service features inherited from previous work. Some of these MCUs, which are strategically installed in specific locations within the interior space, are equipped with a Raspberry Pi Camera v2 capable of engaging in computer vision functions such as facial-recognition [175] as well as gesture-recognition. A user in the interior of a built-environment covered by the service range of a given set of nodes (again, depending on where the nodes are strategically installed considering the building's geolocation) engages the light-tracking and -redirecting system by effecting an initializing hand-gesture (viz., Figure 5.19, Gesture 1).…”
Section: Pertinence (mentioning)
confidence: 99%
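The citation statement above describes camera-equipped MCU nodes that engage a light-tracking system once an initializing hand-gesture is recognized. A minimal sketch of such a node's capture-and-notify loop follows, assuming OpenCV for frame capture and a UDP datagram as the node-to-MCU notification; the MCU address, message format, and detect_gesture stub are all hypothetical, since the cited work does not publish its protocol here.

```python
# Minimal sketch (not the cited system's code): a camera-equipped node that
# watches for an initializing hand-gesture and notifies a light-tracking MCU.
# The MCU endpoint, the UDP message, and detect_gesture() are hypothetical.
import socket

import cv2

MCU_ADDR = ("192.168.1.10", 5005)  # hypothetical in-building MCU endpoint

def detect_gesture(frame) -> bool:
    """Placeholder for recognizing the initializing gesture ('Gesture 1').

    A real implementation might run a hand-landmark or CNN model here.
    """
    return False

def main() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cap = cv2.VideoCapture(0)  # Raspberry Pi Camera v2 exposed via V4L2
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                continue  # dropped frame; keep polling
            if detect_gesture(frame):
                # Tell the light-tracking/-redirecting system to engage.
                sock.sendto(b"engage", MCU_ADDR)
    finally:
        cap.release()
        sock.close()

if __name__ == "__main__":
    main()
```

The one-way datagram stands in for the continuous node-MCU exchange the statement describes; a production node would also listen for acknowledgements and state updates from the MCUs.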