The aim of this work is to prove that it is possible to develop a system able to detect gestures based only on ultrasonic signals and Edge devices. A set of 7 gestures plus idle has been defined, and these can be combined to increase the number of recognized gestures. To recognize them, ultrasound transceivers are used to detect the two-dimensional gestures. The Edge device approach implies that all data are processed in the device at the network edge rather than depending on external devices or services such as Cloud Computing. The system presented in this paper has been proven able to measure Time of Flight (ToF) signals that can be used to recognize multiple gestures through the integration of two transceivers, with an accuracy between 84.18% and 98.4%. Thanks to the optimization of the correlation-based preprocessing technique used to extract the ToF from the echo signals, and to a firmware design that enables the parallelization of concurrent processes, the system can be implemented as an Edge device. INDEX TERMS Edge computing, Gesture recognition, Human System Interaction (HSI), Ultrasound.
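The abstract does not detail the correlation technique used to extract the ToF, but the general approach is matched filtering: cross-correlating the received echo against the emitted burst and locating the peak. A minimal sketch of that idea follows; the sample rate, burst shape, delay, and speed of sound are illustrative assumptions, not values from the paper.

```python
import numpy as np

def estimate_tof(echo, burst, fs):
    """Estimate Time of Flight by cross-correlating the received
    echo with the emitted ultrasonic burst (matched filtering)."""
    corr = np.correlate(echo, burst, mode="full")
    # Peak of the correlation marks the best alignment; convert to lag.
    lag = np.argmax(np.abs(corr)) - (len(burst) - 1)
    return lag / fs  # delay in seconds

# Illustrative setup: 40 kHz burst sampled at 1 MHz (assumed values)
fs = 1_000_000
t = np.arange(0, 200e-6, 1 / fs)
burst = np.sin(2 * np.pi * 40_000 * t)

# Simulated echo: attenuated burst delayed by 2 ms plus noise
delay_samples = 2000
echo = np.zeros(10_000)
echo[delay_samples:delay_samples + len(burst)] += 0.3 * burst
echo += 0.02 * np.random.randn(len(echo))

tof = estimate_tof(echo, burst, fs)
distance = tof * 343 / 2  # round trip at ~343 m/s speed of sound
print(f"ToF = {tof * 1e3:.2f} ms, distance = {distance:.2f} m")
```

On an Edge device, the correlation step above is typically the computational bottleneck, which is presumably why the paper highlights its optimization.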
The term Edge Intelligence, also known as Edge AI, has emerged in recent years to refer to the confluence of Machine Learning, or broadly speaking Artificial Intelligence, with Edge Computing. In this manuscript, we revise the concepts underlying Edge Intelligence, such as Cloud, Edge and Fog Computing, and the motivation to use Edge Intelligence, and we compare current approaches and analyze application scenarios. To provide a complete review of this technology, previous frameworks and platforms for Edge Computing are discussed in order to give a general view of the basis for Edge AI. Similarly, the emerging techniques to deploy Deep Learning (DL) models at the network edge, as well as specialized platforms and frameworks to do so, are reviewed. These devices, techniques and frameworks are analyzed against criteria relevant at the network edge, such as latency, energy consumption and model accuracy, to determine the current state of the art as well as the limitations of the proposed technologies. This makes it possible to understand the current possibilities for efficiently deploying state-of-the-art DL models at the network edge based on technologies such as AI accelerators and Tensor Processing Units, and techniques that include Federated Learning and Gossip Training. Finally, the challenges of Edge AI are discussed, as well as the future directions that can be extracted from the evolution of Edge Computing and Internet of Things (IoT) approaches.
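Of the edge training techniques the review covers, Federated Learning is perhaps the easiest to illustrate in isolation. The sketch below shows the core Federated Averaging aggregation step, framework-free; the client count, dataset sizes, and weight shapes are purely illustrative, and the local training loop each client would run is omitted.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: combine locally trained model weights into
    a global model, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    global_weights = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += (n / total) * w
    return global_weights

# Illustrative round: 3 edge clients, each holding one weight matrix
# and one bias vector after local training (random stand-ins here)
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
sizes = [100, 250, 50]  # assumed local dataset sizes
global_model = federated_average(clients, sizes)
```

The appeal for Edge AI is that only weights cross the network: raw data never leaves the device, and the server-side cost is a weighted sum.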
There have been significant advances in target detection in the autonomous vehicle context. To develop more robust systems that can overcome weather hazards as well as sensor problems, the sensor fusion approach is taking the lead in this context. Laser Imaging Detection and Ranging (LiDAR) and camera sensors are two of the most used sensors for this task, since they can accurately provide important features such as a target's depth and shape. However, most current state-of-the-art target detection algorithms for autonomous cars do not take into consideration the hardware limitations of the vehicle, such as its reduced computing power compared with Cloud servers and the need for low latency. In this work, we propose Edge Computing Tensor Processing Unit (TPU) devices as hardware support due to their computing capabilities for machine learning algorithms as well as their reduced power consumption. We developed an accurate and small target detection model for these devices. Our proposed Multi-Level Sensor Fusion model has been optimized for the network edge, specifically for the Google Coral TPU. As a result, high accuracy is obtained on the challenging KITTI dataset while reducing the memory consumption and latency of the system.
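Deployment on the Google Coral follows the standard TensorFlow Lite flow: the model is quantized, compiled for the Edge TPU, and invoked through the Edge TPU delegate. A minimal inference sketch is shown below; the model filename is hypothetical, and the fusion-specific preprocessing of LiDAR and camera inputs is not shown since the abstract does not describe it.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load a quantized, Edge-TPU-compiled model (filename is hypothetical)
interpreter = tflite.Interpreter(
    model_path="fusion_model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy uint8 tensor matching the model's declared input shape;
# a real pipeline would place the fused sensor data here.
frame = np.zeros(input_details[0]["shape"], dtype=np.uint8)
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
detections = interpreter.get_tensor(output_details[0]["index"])
```

The delegate offloads supported operations to the TPU, which is what makes the reduced latency and power figures achievable on such small hardware.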
Robust functionality of autonomous driving vehicles relies on their ability to detect obstacles and various scenarios on the road. This can only be achieved by applying robust, fast and efficient AI-based signal processing to radar data. In this work we present an empirical investigation of whether one can apply artificial neural networks (ANNs) directly to frequency modulated continuous wave (FMCW) radar raw data. We show that preprocessing is not necessary if one has enough raw data. In our experiment we used 153,648 frames collected with a 60 GHz FMCW radar. We systematically compare the options of preprocessing the data with a variational autoencoder, applying traditional preprocessing, or omitting preprocessing and applying the ANN directly to the raw data. We show that the last option results in 28% faster signal processing and the highest accuracy. This is a promising result, since it enables edge computing and direct signal processing at the sensor level.
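The "traditional preprocessing" baseline in FMCW radar work usually means FFT-based range-Doppler processing of each frame, which the raw-data approach skips entirely. A minimal sketch of that baseline follows; the frame dimensions and Hann windowing are common-practice assumptions, not the paper's exact pipeline.

```python
import numpy as np

def range_doppler_map(frame):
    """Classic FMCW preprocessing: 2D FFT over samples (fast time ->
    range) and chirps (slow time -> Doppler) of one radar frame."""
    # frame shape: (num_chirps, samples_per_chirp) of raw ADC data
    win = np.hanning(frame.shape[1])
    range_fft = np.fft.fft(frame * win, axis=1)         # range bins
    doppler_fft = np.fft.fft(range_fft, axis=0)         # velocity bins
    doppler_fft = np.fft.fftshift(doppler_fft, axes=0)  # center zero Doppler
    return 20 * np.log10(np.abs(doppler_fft) + 1e-12)   # magnitude in dB

# Illustrative frame: 128 chirps x 256 samples of stand-in ADC data
frame = np.random.randn(128, 256)
rd_map = range_doppler_map(frame)  # the input a raw-data ANN would forgo
```

The paper's finding is that a network fed the raw (num_chirps, samples) frame directly can learn equivalent features itself, saving the 28% of processing time this transform would otherwise cost.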