The Internet of Things (IoT) has become one of the most widely researched paradigms, having received much attention from the research community in the last few years. IoT is the paradigm that creates an internet-connected world, where everyday objects capture data from our environment and adapt it to our needs. However, implementing IoT is a challenging task, and its deployment scenarios require both existing technologies and the emergence of new ones, such as Edge Computing (EC). EC allows for more secure and efficient data processing in real time, achieving better performance and results. Energy efficiency is one of the most interesting IoT scenarios. In this scenario, sensors, actuators, and smart devices interact to generate a large volume of data associated with energy consumption. This work proposes the use of an Edge-IoT platform and a Social Computing framework to build a system aimed at smart energy efficiency in a public building scenario. The system has been evaluated in a public building, and the results make evident the notable benefits that come from applying Edge Computing both to energy efficiency scenarios and to the framework itself. Those benefits include reduced data transfer from the IoT-Edge to the Cloud and reduced Cloud computing and network resource costs.
A spoken language system combines speech recognition, natural language processing, and human interface technology. It functions by recognizing the person's words, interpreting the sequence of words to obtain a meaning in terms of the application, and providing an appropriate response back to the user. Potential applications of spoken language systems range from simple tasks, such as retrieving information from an existing database (traffic reports, airline schedules), to interactive problem solving tasks involving complex planning and reasoning (travel planning, traffic routing), to support for multilingual interactions. We examine eight key areas in which basic research is needed to produce spoken language systems: 1) robust speech recognition; 2) automatic training and adaptation; 3) spontaneous speech; 4) dialogue models; 5) natural language response generation; 6) speech synthesis and speech generation; 7) multilingual systems; and 8) interactive multimodal systems. In each area, we identify key research challenges, the infrastructure needed to support research, and the expected benefits. We conclude by reviewing the need for multidisciplinary research, for development of shared corpora and related resources, for computational support, and for rapid communication among researchers. The successful development of this technology will increase accessibility of computers to a wide range of users, will facilitate multinational communication and trade, and will create new research specialties and jobs in this rapidly expanding area.
Abstract-This paper presents an integral system capable of generating animations with realistic dynamics, including the individualized nuances, of three-dimensional (3-D) human faces driven by speech acoustics. The system is capable of capturing short phenomena in the orofacial dynamics of a given speaker by tracking the 3-D location of various MPEG-4 facial points through stereovision. A perceptual transformation of the speech spectral envelope and prosodic cues are combined into an acoustic feature vector to predict 3-D orofacial dynamics by means of a nearest-neighbor algorithm. The Karhunen-Loève transformation is used to identify the principal components of orofacial motion, decoupling perceptually natural components from experimental noise. We also present a highly optimized MPEG-4 compliant player capable of generating audio-synchronized animations at 60 frames/s. The player is based on a pseudo-muscle model augmented with a nonpenetrable ellipsoidal structure to approximate the skull and the jaw. This structure adds a sense of volume that provides more realistic dynamics than existing simplified pseudo-muscle-based approaches, yet it is simple enough to work at the desired frame rate. Experimental results on an audiovisual database of compact TIMIT sentences are presented to illustrate the performance of the complete system. Index Terms-face image analysis and synthesis, lip synchronization, 3-D audio/video processing.
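The Karhunen-Loève transformation mentioned in the abstract is equivalent to principal component analysis of the tracked motion data: leading eigenvectors of the covariance matrix capture perceptually natural motion, while trailing ones absorb experimental noise. A minimal NumPy sketch of this decomposition follows; the frame count, point count, and random data are illustrative assumptions, not values from the paper's database.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical motion data: 500 frames of 30 tracked 3-D facial points,
# flattened to 90 coordinates per frame (illustrative dimensions only).
X = rng.standard_normal((500, 90))

# Center the data and eigendecompose the covariance matrix:
# this yields the Karhunen-Loeve (PCA) basis of the motion.
Xc = X - X.mean(axis=0)
cov = (Xc.T @ Xc) / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)

# eigh returns ascending eigenvalues; reorder by explained variance.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep the leading components (natural motion), discard the rest (noise),
# and project each frame onto the retained basis.
k = 10
components = eigvecs[:, :k]
coefficients = Xc @ components   # per-frame motion coefficients
```

Reconstructing frames from only the first `k` coefficients (`coefficients @ components.T + X.mean(axis=0)`) gives the denoised motion the abstract alludes to.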