LoRaWAN is an emerging Low-Power Wide Area Network (LPWAN) technology, which is gaining momentum thanks to its flexibility and ease of deployment. Unlike other LPWAN solutions, LoRaWAN permits the configuration of several network parameters that affect different network performance indices, such as energy efficiency, fairness, and capacity, in principle making it possible to adapt the network behavior to the specific requirements of the application scenario. Unfortunately, the complex and sometimes elusive interactions among the different network components make it rather difficult to predict the actual effect of a given parameter setting, so that this flexibility can become a stumbling block if not thoroughly understood. In this paper we shed light on these complex interactions by observing and explaining the effect of different parameter settings in some illustrative scenarios. The simulation-based analysis reveals various trade-offs and highlights some inefficiencies in the design of the LoRaWAN standard. Furthermore, we show how significant performance gains can be obtained by wisely setting the system parameters, possibly in combination with some novel network management policies.
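The core trade-off behind these configurable parameters can be illustrated with the LoRa time-on-air formula from the Semtech transceiver datasheets: each spreading-factor step roughly doubles the packet airtime, and with it the energy per packet and the channel occupancy. The following Python sketch (not taken from the paper; the payload size, bandwidth, and coding rate are example assumptions) computes the airtime of a 20-byte payload at different spreading factors.

```python
import math

def lora_time_on_air(payload_bytes, sf, bw_hz=125_000, cr=1,
                     preamble_len=8, explicit_header=True, crc=True):
    """Packet time on air in seconds (Semtech SX127x airtime formula)."""
    t_sym = (2 ** sf) / bw_hz                    # symbol duration
    de = 1 if t_sym > 0.016 else 0               # low data rate optimization (SF11/12 at 125 kHz)
    ih = 0 if explicit_header else 1
    payload_symbols = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_len + 4.25 + payload_symbols) * t_sym

# Airtime roughly doubles per spreading-factor step, directly affecting
# energy consumption, duty-cycle budget, and network capacity.
for sf in range(7, 13):
    print(f"SF{sf}: {lora_time_on_air(20, sf) * 1e3:.1f} ms")
```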
LoRaWAN networks are growing in popularity and adoption, mainly thanks to inexpensive end devices, affordable gateways, and the possibility to choose between a private deployment and a global network provider. While the main use case for these networks is sensor data collection, the standard also defines confirmed messages, which require downlink transmissions. In this paper we show that incautious use of this feature can cause a sharp decrease in network performance, especially in large-scale deployments. Additionally, we present some insights on how certain design choices for downlink communication in LoRaWAN impair the use of confirmed traffic, and propose some solutions to mitigate the issue.
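To see why confirmed traffic can be problematic, note that every confirmed uplink requires a downlink acknowledgement from a half-duplex, duty-cycle-limited gateway. The back-of-the-envelope Python sketch below (not from the paper; the ACK airtimes and EU868 duty-cycle figures are assumptions) bounds the number of acknowledgements a single gateway can send per hour in the two receive windows.

```python
# Assumed airtimes of an empty LoRaWAN downlink (ACK) frame, rough values:
ACK_AIRTIME_RX1_S = 0.041   # RX1 at the uplink data rate, e.g. SF7 / 125 kHz
ACK_AIRTIME_RX2_S = 0.99    # RX2 at the default EU868 setting, SF12 / 125 kHz

# Assumed EU868 duty-cycle budgets for the gateway's downlink sub-bands:
BUDGET_RX1_S = 3600 * 0.01  # 1% sub-band  -> 36 s of transmit time per hour
BUDGET_RX2_S = 3600 * 0.10  # 10% sub-band -> 360 s of transmit time per hour

print(f"Max ACKs/hour in RX1: {BUDGET_RX1_S / ACK_AIRTIME_RX1_S:.0f}")
print(f"Max ACKs/hour in RX2: {BUDGET_RX2_S / ACK_AIRTIME_RX2_S:.0f}")
# Each confirmed uplink consumes one of these ACK opportunities, and while the
# half-duplex gateway transmits it cannot receive, so heavy confirmed traffic
# also erodes the uplink capacity of the cell.
```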
The collision resolution mechanism in the Random Access Channel (RACH) procedure of the Long-Term Evolution (LTE) standard is known to be a serious bottleneck under machine-type traffic. Its main drawbacks are that Base Stations (eNBs) typically cannot infer the number of collided User Equipments (UEs), and that collided UEs learn of the collision only implicitly, through the lack of feedback in a later stage of the RACH procedure. The collided UEs then restart the procedure, thereby increasing the RACH load and making the system even more prone to collisions. In this paper, we leverage machine learning techniques to design a system that outperforms state-of-the-art schemes in preamble detection for the LTE RACH procedure. Most importantly, our scheme can also estimate the collision multiplicity, i.e., how many devices chose the same preamble. This information can be used by the eNB to resolve collisions, increase the supported system load, and reduce transmission latency. The presented approach is also applicable to newer 3GPP standards that target massive IoT, such as LTE-M and NB-IoT.
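The load escalation described above can be reproduced with a toy Monte Carlo model of RACH preamble selection. The sketch below (an illustration only, not the paper's machine-learning scheme; the arrival rate and retry behavior are example assumptions) lets each contending UE pick one of the 64 preambles at random and feeds every collided UE back into the next round, showing how the backlog grows once collisions start to dominate.

```python
import random
from collections import Counter

N_PREAMBLES = 64  # standard number of RACH preambles per cell

def rach_round(n_ues, rng):
    """Each UE picks a preamble uniformly at random; return per-preamble counts."""
    return Counter(rng.randrange(N_PREAMBLES) for _ in range(n_ues))

def simulate(new_arrivals=30, rounds=20, seed=0):
    rng = random.Random(seed)
    backlog = 0                       # collided UEs that must retry
    history = []
    for _ in range(rounds):
        contenders = new_arrivals + backlog
        counts = rach_round(contenders, rng)
        # Without multiplicity estimation the eNB only resolves preambles chosen
        # by exactly one UE; every other UE silently fails and re-enters
        # contention in the next round, further inflating the RACH load.
        successes = sum(1 for c in counts.values() if c == 1)
        backlog = contenders - successes
        history.append((contenders, successes, backlog))
    return history

for rnd, (contenders, successes, backlog) in enumerate(simulate(), start=1):
    print(f"round {rnd:2d}: contenders={contenders:3d} "
          f"successes={successes:2d} backlog={backlog:3d}")
```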