Empirical propagation models are vital tools for planning and deploying wireless communication networks, as they depend less on terrain data and are faster to execute than deterministic approaches. In this paper, NS-3 is used to simulate radio propagation for a Long Range Wide Area Network (LoRaWAN) at 868 MHz in an urban environment using the Okumura-Hata, COST-231 Hata, and COST-231 Walfisch-Ikegami (COST-WI) models. The predicted received signal strength values are compared with real-world measurements taken in the city of Glasgow to analyse the validity and accuracy of the empirical models when used for radio-coverage planning in LoRaWAN networks. The comparison between models and measurements shows that Okumura-Hata under-estimated the received signal strength in the Glasgow city scenario while COST-WI over-estimated it. The Okumura-Hata model produced the most accurate predictions, whereas COST-WI was the least accurate. The magnitude of the mean absolute error indicates how large a prediction error can be expected from each model. This study gives insight into the effectiveness and accuracy of empirical propagation models for evaluating Internet of Things (IoT) connectivity over LoRaWAN networks in a non-line-of-sight (NLOS) urban environment.
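As a brief illustration of the kind of model compared above, the standard Okumura-Hata urban median path-loss formula (valid roughly for 150–1500 MHz, so it covers 868 MHz) can be sketched as follows; the antenna heights used in the usage note are illustrative assumptions, not parameters from the paper:

```python
import math

def okumura_hata_urban(f_mhz, d_km, h_base_m=30.0, h_mobile_m=1.5):
    """Okumura-Hata median path loss (dB) for a small/medium urban city.

    Valid roughly for f = 150-1500 MHz, base height 30-200 m,
    mobile height 1-10 m, distance 1-20 km.
    """
    # Mobile-antenna correction factor a(hm) for a small/medium city
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m)
            - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))
```

The received signal strength is then the transmit power plus antenna gains minus this loss; at 868 MHz with a 30 m base station, the model predicts roughly 126 dB of loss at 1 km, growing logarithmically with distance.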
Among the many technologies competing for the Internet of Things (IoT), one of the most promising and fastest-growing is the Low-Power Wide-Area Network (LPWAN). Coverage of LoRa, one of the main IoT LPWAN technologies, has previously been studied for outdoor environments. However, this article focuses on end-to-end propagation in an outdoor-indoor scenario. This article investigates how the reported and documented outdoor metrics translate to an indoor environment. Furthermore, to facilitate network planning and coverage prediction, a novel hybrid propagation estimation method has been developed and examined. This hybrid model comprises an artificial neural network (ANN) and an optimized Multi-Wall Model (MWM). Subsequently, real-world measurements were collected and compared against different propagation models. For benchmarking, log-distance and COST231 models were used due to their simplicity. It was observed and concluded that: (a) the propagation of the LoRa Wide-Area Network (LoRaWAN) is limited to a much shorter range in the investigated environment compared with outdoor reports; (b) log-distance and COST231 models do not yield an accurate estimate of propagation characteristics for outdoor-indoor scenarios; (c) this lack of accuracy can be addressed by adjusting the COST231 model to account for the outdoor propagation; (d) a feedforward neural network combined with a COST231 model improves the accuracy of the predictions. This work demonstrates practical results and provides insight into LoRaWAN propagation in similar scenarios, which could facilitate network planning for outdoor-indoor environments.
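For context, the log-distance benchmark mentioned above has a simple closed form: path loss grows linearly with the logarithm of distance, scaled by an environment-dependent exponent. A minimal sketch follows; the reference loss and exponent are illustrative assumptions, not values fitted in the article:

```python
import math

def log_distance_loss(d_m, pl0_db=31.5, d0_m=1.0, n=2.9):
    """Log-distance path loss: PL(d) = PL(d0) + 10*n*log10(d/d0).

    pl0_db: loss at reference distance d0 (here ~free-space at 1 m, 868 MHz);
    n: path-loss exponent (illustrative indoor/urban value, not from the paper).
    """
    return pl0_db + 10.0 * n * math.log10(d_m / d0_m)
```

The exponent n is the single knob the model offers, which is exactly why it struggles with outdoor-indoor scenarios: wall penetration adds losses that no single distance exponent can capture, motivating the multi-wall and hybrid ANN corrections described above.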
The Microsoft Kinect RGB-D sensor has been proven to be a reliable tool for gait analysis and rehabilitation purposes. Although it is accurate for detecting upper-body movements, even the second iteration of the Kinect sensor lacks accuracy when it comes to the lower extremities. While detecting the foot-off and foot contact phases of a gait cycle is an important part of gait performance analysis, the Kinect's intrinsic inaccuracies make it an unreliable tool for detecting them accurately. We propose a new Kinect-based technique for detecting foot-off and foot contact phases in a gait cycle that relies solely on a subject's knee joint relative angle. The system was tested on 11 healthy subjects walking along pre-defined pathways in 12 walking sessions, with the Kinect v2 camera placed at heights ranging from 0.65 to 1.57 m and at angles ranging from 0 to 45 degrees to the ground. The algorithm's accuracy was also compared to another footstep detection method based on the height of the subject's ankle joints above the ground. The results showed 86.52% accuracy on average in detecting foot-off and foot contact events for both feet.
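The knee joint relative angle that the method relies on can be computed from the three 3-D joint positions the Kinect skeleton provides. The sketch below is an illustrative reconstruction of that geometric step only (the event-detection logic from the paper is not reproduced); the function name is an assumption:

```python
import math

def knee_angle_deg(hip, knee, ankle):
    """Relative knee angle (degrees) from three 3-D joint positions.

    Computed as the angle between the knee->hip and knee->ankle vectors:
    ~180 degrees for a straight leg, smaller as the knee flexes.
    """
    v1 = [h - k for h, k in zip(hip, knee)]
    v2 = [a - k for a, k in zip(ankle, knee)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against floating-point drift outside acos's domain
    cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_a))
```

Because this angle depends only on the relative positions of three joints, it is less sensitive to the camera's height and tilt than an absolute ankle-to-ground height, which is consistent with the comparison reported above.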
The creation of unwrapped stitched images of pipework internal surfaces is increasingly used to augment routine visual inspection. A significant challenge in creating these stitched images is the need to estimate the pose and position of the camera for each frame, which is often alleviated through the use of a mechanical centralizer to ensure the camera is held in the center of the pipe. This article proposes a novel method for image centralization and pose estimation, which is particularly beneficial in circumstances where mechanical centralization is impractical. The approach involves post-inspection centralization of the captured video: first estimating the probe's position relative to the pipe using an integrated laser ring projector combined with the camera sensor, and then using this position to unwrap the image so that it produces an undistorted view of the pipe interior (equivalent to unwrapping a centralized view). These unwrapped images are then stacked to produce a stitched image of the pipe interior. In this paper, pose estimation was successfully demonstrated to have a 90% confidence interval of ±0.5 mm and ±0.5° in translation and rotation changes, respectively. This pose estimation is then used to create stitched images for both a visual test card image mounted inside a pipe and an aluminum pipe sample with artificial defects, in both cases demonstrating near-equivalent results to those obtained using traditional mechanical centralization.
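To make the unwrapping step concrete, the sketch below maps an internal pipe view from polar coordinates (around an already-estimated center) to a flat radius-by-angle strip. This is a minimal nearest-neighbour illustration under assumed parameters, not the paper's implementation, which additionally corrects for the laser-ring-derived pose:

```python
import numpy as np

def unwrap_pipe_image(img, cx, cy, r_in, r_out, n_theta=360):
    """Unwrap an internal pipe view into a flat strip.

    Rows correspond to radius (r_in..r_out-1 pixels from the estimated
    center (cx, cy)); columns correspond to angle around the bore.
    Uses nearest-neighbour sampling for simplicity.
    """
    radii = np.arange(r_in, r_out)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]
```

Stacking one such strip per frame along the pipe axis yields the stitched mosaic described above; accurate per-frame center and pose estimates are what keep adjacent strips aligned.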