Indoor localization and navigation have great application potential, especially in large indoor spaces where people tend to get lost. The indoor localization problem is fundamental to an indoor navigation system. Existing research and commercial efforts have leveraged wireless-based approaches to locate users in indoor environments. However, the predominant wireless-based approaches, such as WiFi and Bluetooth, remain unsatisfactory: they either do not support commodity devices or are vulnerable to environmental changes, which makes them hard to deploy and maintain. In this paper, we present Vivid, a mobile-device-friendly indoor localization and navigation system that uses visual cues as the cornerstone of localization. By leveraging the computation power at the extreme edge of the Internet, Vivid largely overcomes the difficulties posed by resource-intensive image-processing tasks. We propose a grid-based algorithm that transforms the feature map into a grid, over which a path between two positions can easily be found. We also leverage deep learning techniques to assist in automatic map maintenance, adapting to visual changes and making the system more robust. With edge computing, user privacy is preserved: visual data is mainly processed locally, and detected dynamic objects are removed immediately without being saved to databases. The evaluation results show that: i) our system easily outperforms existing solutions on COTS devices in localization accuracy, yielding decimeter-level error; ii) our choice of system architecture is scalable and optimal among the available ones; iii) the automatic map maintenance mechanism effectively improves the localization robustness of the system.

INDEX TERMS Last-mile delivery, IoT-based indoor localization and navigation, edge computing for IoT sensors.
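The abstract's grid-based pathfinding step can be sketched, purely illustratively (the paper's actual algorithm and data structures are not given here), as a breadth-first search over an occupancy grid, which yields a shortest path in cell hops between two positions:

```python
from collections import deque

def grid_path(grid, start, goal):
    """Shortest path on an occupancy grid via breadth-first search.

    grid:  2-D list where 0 marks a free cell and 1 an obstacle.
    start, goal: (row, col) tuples.
    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set doubling as predecessor map
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # reconstruct path by walking predecessors
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

BFS suffices for uniform cell costs; a weighted variant (e.g. A*) would be the natural extension if grid cells carried traversal costs.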
Real-time capture of vehicle motion is the foundation of connected vehicles (CV) and safe driving. This study develops a novel vehicle motion detection system (VMDS) that detects lane changes, turning, acceleration, and deceleration in real time using mobile sensors, namely the global positioning system (GPS) and inertial sensors. To capture a large amount of real-time vehicle state data from multiple sensors, we develop a dynamic time warping (DTW) based algorithm combined with principal component analysis (PCA). The algorithm is trained and evaluated on both urban roads and highways using an Android platform. Its aim is to alert adjacent drivers, especially distracted drivers, to potential crash risks. Our evaluation results, based on driving traces covering over 4000 miles, show that VMDS detects lane changes and turning with an average precision over 76%, and speed, acceleration, and braking with an average precision over 91% on the given test datasets 1 and 4. Finally, alerting tests are conducted with a vehicle simulator, estimating the effect of alerting the vehicle behind or ahead about the surrounding vehicles' motion. Nearly two seconds are gained for drivers to perform a safe maneuver. As expected, with the help of VMDS, distracted driving decreases and driving safety improves.
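The dynamic-time-warping core of such a matcher can be sketched as follows; this is the textbook DTW recurrence, not the paper's specific variant, and it compares a sensor trace against a motion template (e.g. a lane-change acceleration profile) while tolerating differences in speed of execution:

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences.

    cost[i][j] holds the minimal alignment cost between a[:i] and b[:j];
    each step pays the absolute difference of the matched samples and
    extends the cheaper of the three admissible predecessor alignments.
    """
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a sample of a
                                 cost[i][j - 1],      # skip a sample of b
                                 cost[i - 1][j - 1])  # match both
    return cost[n][m]
```

In a classifier of this kind, a window of (PCA-reduced) sensor readings would be scored against each maneuver template, and the template with the smallest DTW distance wins; time-stretched repetitions of the same maneuver score near zero.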