Cloud infrastructures allow resources to be shared among many applications and can usually meet the requirements of most of them. However, efficient use of these resources requires automatic orchestration. Automatic orchestration tools are commonly based on observability of the infrastructure alone, but in some cases that is not enough. Certain classes of applications have specific requirements that are difficult to meet, such as low latency, high bandwidth, and high computational power. To meet these requirements properly, orchestration must be based on multilevel observability, that is, on data collected at both the application and infrastructure levels. In this work, we therefore developed a platform that shows how multilevel observability can be implemented and used to improve automatic orchestration in cloud environments. As a case study, a computer vision and robotics application with very demanding requirements was used in two experiments to illustrate the issues addressed in this paper. The results confirm that cloud orchestration benefits greatly from multilevel observability, which allows specific application requirements to be met while improving the allocation of infrastructure resources.
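As a minimal sketch of what a multilevel-observability decision could look like (not the platform from the paper; the metric names and thresholds below are hypothetical), the snippet combines an infrastructure-level metric with an application-level metric to drive a scaling decision:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    cpu_util: float        # infrastructure level: host CPU utilization (0..1)
    app_latency_ms: float  # application level: latency reported by the app itself

def scale_decision(sample: Sample, cpu_limit: float = 0.8,
                   latency_limit_ms: float = 50.0) -> str:
    """Decide an orchestration action from both observability levels.

    Infrastructure metrics alone would miss the case where CPU is low
    but the application still violates its latency requirement.
    """
    if sample.app_latency_ms > latency_limit_ms:
        return "scale-out"   # application requirement violated
    if sample.cpu_util > cpu_limit:
        return "scale-out"   # infrastructure saturated
    if sample.cpu_util < 0.2 and sample.app_latency_ms < latency_limit_ms / 2:
        return "scale-in"    # resources over-provisioned
    return "keep"

# Healthy infrastructure, yet the latency violation still triggers scaling:
print(scale_decision(Sample(cpu_util=0.3, app_latency_ms=120.0)))  # -> scale-out
```

The point of the sketch is the first branch: an infrastructure-only monitor would report the system as healthy, while the application-level metric reveals the requirement violation.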
This paper describes an intelligent space whose objective is to localize and control robots or robotic wheelchairs that assist people. The intelligent space has 11 cameras distributed across two laboratories and a corridor. The cameras are fixed in the environment, and images are captured synchronously. The system was programmed as a client/server architecture over TCP/IP, and a communication protocol was defined: the client coordinates the activities inside the intelligent space, and the servers provide the information it needs. Since the cameras are used for localization, they must be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot moves a calibration pattern through the fields of view of the cameras, and the captured images and the robot's odometry are then used for calibration. As a result, the proposed algorithm solves multi-camera calibration and robot localization simultaneously. The intelligent space and the calibration method were evaluated in different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
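The abstract does not specify the wire format of the protocol; as one hypothetical illustration of the client/server pattern it describes (newline-delimited JSON over TCP, with an invented GET_POSE message type), the roles could be sketched as follows:

```python
import json
import socket
import threading

# Server role: answers the coordinating client with camera data.
srv = socket.create_server(("127.0.0.1", 0))  # port 0: let the OS pick one
port = srv.getsockname()[1]

def handle_one_request() -> None:
    conn, _ = srv.accept()
    with conn, conn.makefile("rw") as stream:
        request = json.loads(stream.readline())
        if request["type"] == "GET_POSE":     # hypothetical message type
            reply = {"camera": request["camera"], "pose": [1.2, 0.4, 0.0]}
            stream.write(json.dumps(reply) + "\n")
            stream.flush()

threading.Thread(target=handle_one_request, daemon=True).start()

# Client role: coordinates the intelligent space by querying a camera server.
with socket.create_connection(("127.0.0.1", port)) as sock, \
        sock.makefile("rw") as stream:
    stream.write(json.dumps({"type": "GET_POSE", "camera": 3}) + "\n")
    stream.flush()
    print(json.loads(stream.readline()))      # -> {'camera': 3, 'pose': [...]}
```

In a deployment like the one described, each of the 11 cameras would be served this way, with the single coordinating client fusing the replies for localization.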
The research field of Intelligent Spaces has received increasing attention in the last decade. As an instance of the ubiquitous computing paradigm, the general idea is to extract information from the environment and use it to interact with and provide services to the actors present in it. Sensory analysis is essential in this area, and humans are usually the principal actors involved. In this sense, we propose a human detector for an Intelligent Space based on a multi-camera network. Our human detector is implemented under the same paradigm as our Intelligent Space. As a contribution of the present work, the human detector is designed as a service that is scalable, reliable, and parallelizable. The service is also designed to be flexible and as loosely structured as possible, serving different Intelligent Space applications and services as well as their requirements. Since multi-camera systems can be found in many everyday environments, one is used here to overcome some of the difficulties traditionally faced by existing human detection approaches. To validate our approach, we implement three different applications that serve as proofs of concept for real day-to-day tasks. Two of these applications involve human-robot interaction. With respect to time and detection performance requirements, our human detection service has proved suitable for interacting with the other services of our Intelligent Space in order to successfully complete the tasks of each application.
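The abstract leaves the service architecture open; a minimal sketch of one way a parallelizable multi-camera detection service could fan frames out to workers (the detector itself is a placeholder, and all names are hypothetical) might look like this:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_humans(camera_id: int, frame: bytes) -> dict:
    """Placeholder detector: a real service would run a person detector
    (e.g., HOG or a CNN) on the frame; here we only mock the result."""
    return {"camera": camera_id, "detections": []}

# Fan frames from all cameras out to a worker pool, so the service
# scales with the number of cameras and available cores.
frames = {cam: b"<jpeg bytes>" for cam in range(11)}  # e.g., 11 fixed cameras
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda item: detect_humans(*item), frames.items()))

for result in results:
    print(result)
```

This per-camera fan-out is what makes the service scalable: cameras can be added without changing the detection code, only the pool size.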
Real-time and mission-critical applications for Industry 4.0 demand fast and reliable communication. Knowing the devices' locations is therefore essential, but GPS is of little use indoors, and electromagnetic impairments and interference demand new approaches to ensure reliability. The challenges include real-time feedback with low end-to-end (E2E) latency; high data density due to the large number of IoT devices per area; and smaller communication cells, which increase handover frequency and complexity. To tackle these issues, we introduce a programmable intelligent space (PIS) to deploy attocells, enable E2E programmability, and provide a precise computer vision localization system together with networking programmability based on software-defined networking. To validate our approach, we conducted experiments controlling a mobile robot along a trajectory. We demonstrate the need for a higher camera frame rate to achieve tighter precision, evaluating trade-offs among localization accuracy, bandwidth, and latency. Results show that PIS wireless attocell handover achieves seamless mobile communication, delivering packets within the deadline window with performance similar to a no-handover baseline.
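The handover mechanism itself is realized through SDN in the PIS, but the core decision it implies (serve the robot from the attocell covering its vision-estimated position, with hysteresis to avoid flapping between cells) can be sketched as follows; the cell layout and threshold below are hypothetical, and the actual switch would be pushed as SDN flow updates:

```python
import math

# Hypothetical attocell centers in the room, in meters.
ATTOCELLS = {"cell-A": (0.0, 0.0), "cell-B": (3.0, 0.0), "cell-C": (3.0, 3.0)}
HYSTERESIS_M = 0.5  # only hand over when another cell is clearly closer

def nearest_cell(pos):
    return min(ATTOCELLS, key=lambda c: math.dist(pos, ATTOCELLS[c]))

def handover_target(current: str, pos) -> str:
    """Return the cell that should serve the robot at `pos`.

    In the PIS, `pos` would come from the camera-based localization
    system; the hysteresis margin prevents rapid back-and-forth
    handovers near cell boundaries.
    """
    candidate = nearest_cell(pos)
    if candidate == current:
        return current
    gain = math.dist(pos, ATTOCELLS[current]) - math.dist(pos, ATTOCELLS[candidate])
    return candidate if gain > HYSTERESIS_M else current

# Robot moving along a trajectory: at (1.6, 0.0) cell-B is already nearer,
# but the gain is within the hysteresis margin, so no handover yet.
cell = "cell-A"
for pos in [(0.5, 0.0), (1.6, 0.0), (2.2, 0.0), (2.9, 0.0)]:
    cell = handover_target(cell, pos)
    print(pos, "->", cell)
```

The hysteresis margin is what lets the handover appear seamless: the robot switches cells once, decisively, instead of oscillating while it sits near a boundary.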