Neural Networks (NN) are a family of models for a broad range of emerging machine learning and pattern recognition applications. NN techniques are conventionally executed on general-purpose processors (such as CPU and GPGPU), which are usually not energy-efficient since they invest excessive hardware resources to flexibly support various workloads. Consequently, application-specific hardware accelerators for neural networks have been proposed recently to improve energy efficiency. However, such accelerators were designed for a small set of NN techniques sharing similar computational patterns, and they adopt complex and informative instructions (control signals) directly corresponding to high-level functional blocks of an NN (such as layers), or even an NN as a whole. Although straightforward and easy to implement for a limited set of similar NN techniques, the lack of agility in the instruction set prevents such accelerator designs from supporting a variety of different NN techniques with sufficient flexibility and efficiency. In this paper, we propose a novel domain-specific Instruction Set Architecture (ISA) for NN accelerators, called Cambricon, which is a load-store architecture that integrates scalar, vector, matrix, logical, data transfer, and control instructions, based on a comprehensive analysis of existing NN techniques. Our evaluation over a total of ten representative yet distinct NN techniques demonstrates that Cambricon exhibits strong descriptive capacity over a broad range of NN techniques and provides higher code density than general-purpose ISAs such as x86, MIPS, and GPGPU. Compared to the latest state-of-the-art NN accelerator design DaDianNao [5] (which can only accommodate 3 types of NN techniques), our Cambricon-based accelerator prototype implemented in TSMC 65nm technology incurs only negligible latency/power/area overheads, with a versatile coverage of 10 different NN benchmarks.
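The contrast between a monolithic layer-level instruction and a decomposition into fine-grained scalar/vector/matrix instructions can be illustrated with a toy sketch. The mnemonics below (mmv, vav, vsig) and the lowering are illustrative inventions for exposition, not Cambricon's actual encoding:

```python
import numpy as np

# Toy illustration (not Cambricon's actual instruction set): a fully
# connected layer lowered to three fine-grained "instructions" instead
# of one monolithic "FC-layer" instruction.

def mmv(W, x):
    """Matrix-multiply-vector instruction."""
    return W @ x

def vav(a, b):
    """Vector-add-vector instruction."""
    return a + b

def vsig(v):
    """Element-wise vector sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-v))

def fc_layer(W, b, x):
    """One MLP layer expressed as a short fine-grained instruction sequence."""
    return vsig(vav(mmv(W, x), b))

W = np.eye(2)
b = np.zeros(2)
x = np.zeros(2)
y = fc_layer(W, b, x)   # sigmoid(0) = 0.5 in each lane
```

Composing a few such primitives is what lets one ISA cover many NN techniques, where a layer-level instruction would cover only the layers it was designed for.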
Position-estimation systems for indoor localization play an important role in everyday life. The global positioning system (GPS) is a popular positioning system that is mainly effective in outdoor environments; in indoor scenarios, GPS signal reception is weak, so achieving good position-estimation accuracy is a challenge. To overcome this challenge, it is necessary to utilize other position-estimation systems for indoor localization. However, existing indoor localization systems, especially those based on inertial measurement unit (IMU) sensor data, still face challenges such as accumulated sensor errors and external magnetic field effects. This paper proposes a position-estimation algorithm that uses the combined features of the accelerometer, magnetometer, and gyroscope data from an IMU sensor for position estimation. We first estimate the pitch and roll values based on a fusion of accelerometer and gyroscope sensor values. The estimated pitch values are used for step detection, and the step lengths are estimated from the pitching amplitude. The heading of the pedestrian is estimated by fusing magnetometer and gyroscope sensor values. Finally, the position is estimated based on the step length and heading information. The proposed pitch-based step detection algorithm achieves a 2.5% error compared with acceleration-based step detection approaches. The heading estimation proposed in this paper achieves a mean heading error of 4.72° compared with azimuth- and magnetometer-based approaches. The experimental results show that the proposed position-estimation algorithm achieves a high position accuracy that significantly outperforms that of the conventional estimation methods used for validation in this paper. INDEX TERMS Indoor positioning system (IPS), pedestrian dead reckoning (PDR), heading estimation, indoor navigation, Android-based smartphone, quaternion, Kalman filter, sensor fusion.
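The final step of the pipeline above, advancing the position from a detected step length and heading, can be sketched minimally. This assumes step lengths and headings arrive from the pitch-based detector and the heading filter; the compass convention (heading clockwise from north) is a standard choice, not taken from the paper:

```python
import math

# Minimal pedestrian dead-reckoning (PDR) position update: one detected
# step, given its estimated length and heading.

def pdr_update(position, step_length, heading_rad):
    """Advance a 2-D position by one step.

    heading_rad is measured clockwise from north (the +y axis),
    the usual compass convention.
    """
    x, y = position
    return (x + step_length * math.sin(heading_rad),
            y + step_length * math.cos(heading_rad))

# Example: two 0.7 m steps heading due east (90 degrees).
pos = (0.0, 0.0)
for _ in range(2):
    pos = pdr_update(pos, 0.7, math.radians(90.0))
# pos is now approximately (1.4, 0.0)
```

Because each update compounds on the last, errors in step length and heading accumulate over time, which is exactly the drift problem the paper's sensor-fusion filters aim to contain.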
Wearable inertial measurement unit (IMU) sensors are powerful enablers for the acquisition of motion data. Specifically, in human activity recognition (HAR), IMU sensor data collected from human motion are combined to formulate datasets that can be used for learning human activities. However, successful learning of human activities from motion data involves the design and use of proper feature representations of IMU sensor data and suitable classifiers. Furthermore, the scarcity of labelled data impedes understanding of the performance capabilities of data-driven learning models. To tackle these challenges, this article makes two primary contributions: first, a spectrogram-based feature extraction approach that operates on raw IMU sensor data; second, an ensemble of feature-space data augmentations to address the data scarcity problem. Performance tests were conducted on a deep long short-term memory (LSTM) neural network architecture to explore the influence of the feature representations and the augmentations on activity recognition accuracy. The proposed feature extraction approach combined with the data augmentation ensemble produces state-of-the-art accuracy results in HAR. A performance evaluation of each augmentation approach is performed to show its influence on classification accuracy. Finally, in addition to our own dataset, the proposed data augmentation technique is evaluated against the University of California, Irvine (UCI) public online HAR dataset and yields state-of-the-art accuracy results at various learning rates.
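The two contributions can be sketched on a single IMU channel. The window length, hop size, and noise level below are illustrative choices, not the paper's parameters, and the Gaussian jitter stands in for just one member of the augmentation ensemble:

```python
import numpy as np

# Sketch: spectrogram features from a raw IMU channel, plus one
# feature-space augmentation (additive Gaussian noise).

def spectrogram(signal, win=64, hop=32):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

def jitter(features, sigma=0.05, rng=None):
    """Augment in feature space by adding Gaussian noise to the spectrogram."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return features + rng.normal(0.0, sigma, features.shape)

fs = 50.0                                # assumed IMU sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
accel = np.sin(2 * np.pi * 2.0 * t)      # synthetic 2 Hz "walking" signal
feats = spectrogram(accel)               # shape: (time frames, frequency bins)
augmented = jitter(feats)                # one extra training example per pass
```

Augmenting the spectrograms rather than the raw signal multiplies the labelled training set without re-collecting data, which is the point of the ensemble.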
Sensor fusion frameworks for indoor localization are developed with the specific goal of reducing positioning errors. Although many conventional localization frameworks without fusion have been improved to reduce positioning error, sensor fusion frameworks generally provide a further improvement in positioning accuracy. In this paper, we propose a sensor fusion framework for indoor localization using smartphone inertial measurement unit (IMU) sensor data and Wi-Fi received signal strength indication (RSSI) measurements. The proposed framework uses location fingerprinting and trilateration for Wi-Fi positioning, and a pedestrian dead reckoning (PDR) algorithm for position estimation in indoor scenarios. The proposed framework achieves a maximum localization error of 1.17 m for rectangular pedestrian motion and 0.44 m for linear motion.
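A minimal way to see why fusing two position sources helps is an inverse-variance weighted average, a simple stand-in for the paper's fusion framework; the variance values below are illustrative assumptions, not measured figures:

```python
# Sketch: fuse a Wi-Fi position fix with a PDR estimate by weighting
# each 2-D estimate with the inverse of its assumed error variance.

def fuse(wifi_pos, pdr_pos, wifi_var=4.0, pdr_var=1.0):
    """Inverse-variance weighted average of two (x, y) estimates."""
    w_wifi = 1.0 / wifi_var
    w_pdr = 1.0 / pdr_var
    total = w_wifi + w_pdr
    return tuple((w_wifi * a + w_pdr * b) / total
                 for a, b in zip(wifi_pos, pdr_pos))

# The fused fix is pulled toward the lower-variance PDR estimate.
fused = fuse((2.0, 2.0), (1.0, 1.0))
```

The combined estimator's variance is lower than either input's, which is the statistical reason fusion frameworks outperform single-source localization.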
Positioning using Wi-Fi received signal strength indication (RSSI) signals is an effective method for identifying user positions in indoor scenarios; in an autonomous system, Wi-Fi RSSI signals can readily be used for vehicle tracking in underground parking. In Wi-Fi RSSI-based positioning, the system measures the signal strength received from the access points (APs) and uses it to identify the user's indoor position. Existing Wi-Fi RSSI-based positioning systems estimate user positions from the raw RSSI signals obtained from the APs. These raw signals fluctuate easily and are subject to interference from indoor channel conditions, which reduces the localization performance of such systems. To enhance performance and reduce positioning error, we propose a hybrid deep learning model (HDLM) based indoor positioning system. The proposed HDLM-based system uses RSSI heat maps instead of raw RSSI signals from the APs, which results in better localization performance for Wi-Fi RSSI-based positioning. Compared to existing Wi-Fi RSSI-based positioning technologies such as fingerprinting, trilateration, and Wi-Fi fusion approaches, the proposed approach achieves considerably better positioning results for indoor localization. The experimental results show that the combination of a convolutional neural network and a long short-term memory network (CNN-LSTM) used in the proposed HDLM outperforms other deep learning models and gives a smaller localization error than conventional Wi-Fi RSSI-based localization approaches. The result analysis also indicates that the proposed system can be easily implemented in autonomous applications.
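One plausible way to turn raw AP readings into a heat-map image, the kind of input representation described above, is inverse-distance weighting over a spatial grid. The grid size and the weighting scheme here are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Sketch: spread each AP's RSSI reading over a 2-D grid with
# inverse-distance weights, producing an image-like heat map that a
# CNN-LSTM can consume instead of a raw RSSI vector.

def rssi_heatmap(ap_positions, rssi_dbm, grid=(8, 8), extent=10.0):
    """Weighted-average RSSI map over an extent x extent metre area."""
    h, w = grid
    ys, xs = np.meshgrid(np.linspace(0, extent, h),
                         np.linspace(0, extent, w), indexing="ij")
    heat = np.zeros(grid)
    weight = np.zeros(grid)
    for (ax, ay), r in zip(ap_positions, rssi_dbm):
        d = np.hypot(xs - ax, ys - ay) + 1e-6   # avoid divide-by-zero at the AP
        heat += r / d
        weight += 1.0 / d
    return heat / weight

aps = [(0.0, 0.0), (10.0, 10.0)]
hm = rssi_heatmap(aps, [-40.0, -70.0])
# cells near the strong AP are "hotter" (less negative dBm)
```

Because neighbouring grid cells share information, a momentary fluctuation in one AP's reading perturbs the map less than it perturbs a raw RSSI vector, which is one intuition for the representation's robustness.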
Localization using ultra-wideband (UWB) signals gives accurate position results for indoor localization: the penetrating characteristics of UWB pulses reduce multipath effects and identify the user position with high accuracy. In UWB-based localization, accuracy depends on the distance estimation between the anchor nodes (ANs) and the UWB tag, derived from the time of arrival (TOA) of UWB pulses. TOA errors in the UWB system reduce the accuracy of the distance estimates from the ANs to the UWB tag and add localization error to the system. The position accuracy of a UWB system also depends on the line-of-sight (LOS) conditions between the UWB anchors and the tag, and on the computational complexity of the localization algorithms used in the system. To overcome these challenges for indoor localization, we propose a deep learning approach for UWB localization. The proposed model uses a long short-term memory (LSTM) network to predict the user position: it receives the distance values from the TOA-distance model of the UWB system and predicts the current user position. The performance of the proposed LSTM model-based UWB localization system is analyzed in terms of learning rate, optimizer, loss function, batch size, number of hidden nodes, and timesteps; we also compare the mean localization accuracy of the system with that of different deep learning models and conventional UWB localization approaches. The simulation results show that the proposed UWB localization approach achieves a 7 cm mean localization error, outperforming conventional UWB localization approaches.
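The conventional front end described above, converting TOA measurements into anchor distances and then a position, can be sketched with a standard linearized least-squares trilateration. This baseline solver stands in for the conventional approaches the LSTM is compared against; it is not the paper's model:

```python
import numpy as np

# Sketch: TOA -> distance -> 2-D position via linearized least squares.

C = 299_792_458.0                       # speed of light, m/s

def toa_to_distance(toa_s):
    """Convert time-of-arrival measurements (seconds) to metres."""
    return C * np.asarray(toa_s)

def trilaterate(anchors, dists):
    """Linearize d_i^2 = |p - a_i|^2 against the first anchor and solve."""
    a = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    A = 2.0 * (a[1:] - a[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
toas = [np.hypot(*(true_pos - np.array(a))) / C for a in anchors]
est = trilaterate(anchors, toa_to_distance(toas))   # recovers ~(3.0, 4.0)
```

Any TOA error propagates into the squared-distance terms of this solve, which is why noisy or NLOS measurements degrade such baselines and motivate learning the mapping from distances to position instead.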
Advances in deep learning (DL) model design have pushed the boundaries of the areas in which deep learning can be applied. Fields with an immense availability of complex big data have been major beneficiaries of these advances; one such field is human activity recognition (HAR). HAR is a popular area of research in a connected world because internet-of-things (IoT) devices and smartphones are becoming more prevalent. A major goal of recent research has been to improve predictive accuracy for devices with limited computational resources. In this paper, we propose iSPLInception, a DL model motivated by Google's Inception-ResNet architecture, that not only achieves high predictive accuracy but also uses fewer device resources. We evaluate the proposed model's performance on four public HAR datasets from the University of California, Irvine (UCI) machine learning repository, comparing it with existing DL architectures that have recently been proposed to solve the HAR problem. The proposed model outperforms these approaches on accuracy, cross-entropy loss, and F1 score on all four datasets: the UCI HAR using smartphones dataset, the Opportunity activity recognition dataset, the Daphnet freezing of gait dataset, and the PAMAP2 physical activity monitoring dataset. The experiments and result analysis indicate that the proposed iSPLInception model achieves remarkable performance for HAR applications.
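The structural idea behind an Inception-ResNet-style block, parallel branches with different receptive fields merged and added back to the input through a shortcut, can be sketched in plain NumPy. The averaging kernels and merge rule below are illustrative stand-ins, not the trained iSPLInception layers:

```python
import numpy as np

# Structural sketch of an inception-style residual block on a 1-D
# sensor sequence: parallel branches with different receptive fields,
# merged, then added to the input via a shortcut connection.

def branch(x, k):
    """One branch: a width-k averaging filter ('same' length output)."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def inception_res_block(x):
    branches = [branch(x, k) for k in (1, 3, 5)]  # parallel receptive fields
    merged = np.mean(branches, axis=0)            # stand-in for concat + 1x1 conv
    return x + merged                             # residual shortcut

x = np.ones(8)                                    # toy constant input sequence
y = inception_res_block(x)
```

The shortcut lets the block learn only a residual correction, which eases optimization in deep stacks, while the parallel branches capture motion patterns at several time scales with few parameters, consistent with the resource-efficiency goal above.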