2016 13th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI)
DOI: 10.1109/urai.2016.7734025
Evaluation of a Korean Lip-sync system for an android robot

Cited by 5 publications (2 citation statements); references 9 publications.
“…A recent speech-to-video application for virtual characters by [30] implements the MFCC method of audio signal analysis and derives compelling results. The application has a 0.35s response time to generate lip-synchronisation patterns for the virtual characters, which is faster than the 0.5s of the model in [7] and the 0.4s-3.14s robotic mouth response time in [28]. Thus, the MFCC approach is significant in developing a robotic mouth that can respond to incoming live audio transmissions due to its high speed of computation and low noise interference.…”
Section: The Uncanny Valley
confidence: 94%
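The MFCC-based analysis referenced above can be illustrated with a minimal sketch. Assuming a Python environment with librosa (a library choice not stated in the cited works), frame-level MFCC vectors could be extracted from incoming audio and mapped to a mouth-opening command; the mapping function, thresholds, and input file below are purely hypothetical.

```python
# Minimal sketch of MFCC-based audio analysis for lip-sync, assuming librosa.
# The mouth-opening mapping below is hypothetical, not from the cited papers.
import librosa
import numpy as np

def mfcc_frames(path, sr=16000, n_mfcc=13):
    """Load audio and return per-frame MFCC vectors (shape: frames x n_mfcc)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T  # one row per analysis frame

def mouth_opening(frame_mfcc):
    """Hypothetical mapping: use the first coefficient (roughly the energy
    envelope) to drive a normalized mouth-opening value in [0, 1]."""
    c0 = frame_mfcc[0]
    return float(np.clip((c0 + 300.0) / 600.0, 0.0, 1.0))

if __name__ == "__main__":
    frames = mfcc_frames("speech.wav")  # hypothetical input file
    openings = [mouth_opening(f) for f in frames]
    print(f"{len(openings)} frames, mean opening {np.mean(openings):.2f}")
```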
“…This mouth configuration is significant: as discussed previously, immobile or muted robotic lip actuation has a high potential to be interpreted as aggressive or unemotional and to generate a negative perceptual stimulus. In support, [28] developed a robotic mouth system to examine common lip-syncing factors affecting humanoid robots. However, the study claims that the robot developed for this research can perform complex mouth shapes for replicating human vowel and consonant lip patterns.…”
Section: The Uncanny Valley
confidence: 99%
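As a purely illustrative sketch of the kind of vowel lip patterns discussed in [28], the table below pairs a few Korean vowels with hypothetical jaw-opening and lip-width targets; none of the values or servo ranges come from the cited study.

```python
# Hypothetical viseme table for a robotic mouth: vowel -> (jaw_open, lip_width),
# both normalized to [0, 1]. Values are illustrative only, not from the cited work.
KOREAN_VOWEL_VISEMES = {
    "a": (0.9, 0.5),   # ㅏ: wide-open jaw, neutral lip width
    "i": (0.2, 0.9),   # ㅣ: nearly closed jaw, spread lips
    "u": (0.3, 0.1),   # ㅜ: small opening, rounded (narrow) lips
    "o": (0.5, 0.2),   # ㅗ: medium opening, rounded lips
    "eu": (0.2, 0.6),  # ㅡ: closed jaw, slightly spread lips
}

def servo_targets(vowel, jaw_range=(0, 90), lip_range=(0, 45)):
    """Convert a normalized viseme into hypothetical servo angles (degrees)."""
    jaw, lip = KOREAN_VOWEL_VISEMES.get(vowel, (0.0, 0.5))
    jaw_deg = jaw_range[0] + jaw * (jaw_range[1] - jaw_range[0])
    lip_deg = lip_range[0] + lip * (lip_range[1] - lip_range[0])
    return jaw_deg, lip_deg
```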