LoRaWAN has become popular as an IoT enabler. Its low cost, ease of installation, and tunable radio parameters make it a suitable candidate for smart-city deployments. In northern Sweden, in the smart region of Skellefteå, we have deployed a LoRaWAN network to enable IoT applications that assist the lives of citizens. As Skellefteå has a subarctic climate, we investigate how the extreme weather changes over the course of a year affect a real LoRaWAN deployment in terms of signal-to-noise ratio (SNR), received signal strength indicator (RSSI), and the spreading factors (SFs) chosen when adaptive data rate (ADR) is enabled. Additionally, we evaluate two propagation models, Okumura-Hata and the Irregular Terrain Model (ITM), and verify whether either fits the measurements obtained from our real-life network. Our results on weather impact show that cold weather improves the SNR, while warm weather leads the sensors to select lower SFs, minimizing time-on-air. Regarding the tested propagation models, Okumura-Hata fits our data best, while ITM tends to overestimate the RSSI values.
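As context for the propagation-model comparison, the following is a minimal sketch of the Okumura-Hata median path loss and a crude RSSI estimate derived from it. The formula is the standard small/medium-city variant; the transmit power, antenna heights, and distance in the example are illustrative values, not the parameters of the Skellefteå deployment.

```python
import math

def okumura_hata_path_loss(f_mhz, h_b, h_m, d_km):
    """Median urban path loss (dB) from the Okumura-Hata model, using the
    small/medium-city mobile-antenna correction a(h_m).

    Nominal validity: f 150-1500 MHz, h_b 30-200 m, h_m 1-10 m, d 1-10 km.
    """
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_b) - a_hm
            + (44.9 - 6.55 * math.log10(h_b)) * math.log10(d_km))

def rssi_estimate(tx_dbm, f_mhz, h_b, h_m, d_km):
    """Crude RSSI estimate: transmit power minus predicted path loss
    (antenna gains and cable losses omitted for brevity)."""
    return tx_dbm - okumura_hata_path_loss(f_mhz, h_b, h_m, d_km)

# Illustrative values only: 14 dBm EU868 LoRaWAN uplink, 30 m gateway mast,
# 1.5 m sensor height, 2 km link -> roughly -123 dBm.
print(f"Predicted RSSI: {rssi_estimate(14.0, 868.0, 30.0, 1.5, 2.0):.1f} dBm")
```

Gateway masts below Hata's nominal 30 m minimum, common in LoRaWAN installations, are one reason such empirical models can deviate from field measurements.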
Recent literature has demonstrated promising results for Long-Term Evolution (LTE) deployments over unlicensed bands coexisting with Wi-Fi networks via the duty-cycle (DC) approach. However, coexistence performance is known to depend strongly on traffic patterns and on the LTE duty-cycle ON-OFF ratio. Most DC solutions rely on a static configuration of coexistence parameters, so real-life performance in dynamically varying scenarios may suffer. Reinforcement learning techniques can adjust DC parameters toward efficient coexistence, and we propose a Q-learning Carrier-Sensing Adaptive Transmission (CSAT) mechanism that adapts the LTE duty-cycle ON-OFF time ratio to the transmitted data rate, aiming to maximize the aggregated Wi-Fi and LTE-Unlicensed (LTE-U) throughput. The problem is formulated as a Markov decision process, and the Q-learning solution for finding the best LTE-U ON-OFF time ratio is based on the Bellman equation. We evaluate the performance of the proposed solution for different traffic load scenarios using the ns-3 simulator. Results demonstrate the benefits of the proposed method's adaptability to changing conditions in terms of aggregated Wi-Fi/LTE throughput, as well as its ability to achieve fair coexistence.
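To make the mechanism concrete, here is a minimal tabular Q-learning sketch for selecting the ON-OFF time ratio. The action grid, the two-state traffic model, and the toy_reward function are hypothetical placeholders; in the actual evaluation the reward (aggregated Wi-Fi + LTE-U throughput) would come from the ns-3 simulation rather than a closed-form expression.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
ACTIONS = [0.2, 0.4, 0.6, 0.8]          # hypothetical grid of LTE-U ON-time ratios

q = defaultdict(float)                  # tabular Q(state, action), zero-initialized

def choose_ratio(state):
    """Epsilon-greedy choice of the next duty-cycle ON-OFF time ratio."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def q_update(state, action, reward, next_state):
    """One-step Q-learning update based on the Bellman optimality equation:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = reward + GAMMA * max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (target - q[(state, action)])

def toy_reward(offered_load, ratio):
    """Stand-in for the measured aggregated Wi-Fi + LTE-U throughput; in the
    evaluation this figure would be produced by the ns-3 simulation."""
    return 1.0 - abs(ratio - offered_load)   # peaks when the ON-ratio matches the load

LOADS = {"low_load": 0.25, "high_load": 0.75}   # discretized traffic states (hypothetical)
state = "low_load"
for _ in range(5000):
    action = choose_ratio(state)
    reward = toy_reward(LOADS[state], action)
    next_state = random.choice(list(LOADS))     # traffic pattern shifts over time
    q_update(state, action, reward, next_state)
    state = next_state

# Learned policy: the ON-ratio the agent prefers under each traffic state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in LOADS})
```

The epsilon-greedy policy keeps probing alternative ratios, which is what lets such a mechanism track the dynamically varying traffic patterns that static DC configurations miss.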
Cellular broadband Internet of Things (IoT) applications are expected to keep growing year by year, generating demand for high-throughput services. Since some of these applications are deployed over licensed mobile networks, such as Long Term Evolution (LTE), a familiar problem arises: the scarcity of licensed spectrum to cope with the increasing demand for data rates. To tackle this problem, the LTE-U Forum proposed LTE-Unlicensed (LTE-U), which operates in the 5 GHz unlicensed spectrum. However, Wi-Fi is already the established technology in this portion of the spectrum, and any new technology entering the unlicensed band needs mechanisms that promote fair coexistence with legacy ones. In this work, we extend the literature by analyzing a multi-cell LTE-U/Wi-Fi coexistence scenario with a high-interference profile and data rates targeting a cellular broadband IoT deployment. We then propose a centralized, coordinated reinforcement learning framework to improve the aggregate LTE-U/Wi-Fi data rate. The added value of the proposed solution is assessed with the ns-3 simulator, showing improvements not only in the overall system data rate but also in the average user data rate, even under the high interference of a multi-cell environment.
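A rough sketch of what "centralized, coordinated" can mean in practice: a single controller learns over joint duty-cycle assignments for all cells, so cross-cell interference enters one decision rather than being handled independently per cell. The two-cell model, ratio grid, and reward below are illustrative stand-ins for the ns-3 multi-cell scenario, not the paper's exact framework.

```python
import itertools
import random
from collections import defaultdict

RATIOS = [0.3, 0.5, 0.7]            # hypothetical per-cell ON-time ratios
N_CELLS = 2                         # kept tiny so the joint table stays readable
JOINT_ACTIONS = list(itertools.product(RATIOS, repeat=N_CELLS))
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
LOAD = {"low": 0.3, "high": 0.7}    # discretized per-cell traffic levels

q = defaultdict(float)              # Q over (joint_state, joint_action)

def coordinator_pick(joint_state):
    """Central controller: epsilon-greedy over JOINT duty-cycle assignments,
    so inter-cell interference is handled in one coordinated decision."""
    if random.random() < EPSILON:
        return random.choice(JOINT_ACTIONS)
    return max(JOINT_ACTIONS, key=lambda a: q[(joint_state, a)])

def aggregate_reward(joint_state, joint_action):
    """Toy stand-in for the system-wide data rate measured in ns-3: each
    cell's share peaks when its ON-ratio tracks its load, minus a penalty
    for overlapping aggressive ON periods (the high-interference effect)."""
    per_cell = sum(1.0 - abs(r - LOAD[s]) for r, s in zip(joint_action, joint_state))
    return per_cell - 0.5 * min(joint_action)

state = ("low", "high")
for _ in range(20000):
    action = coordinator_pick(state)
    reward = aggregate_reward(state, action)
    next_state = tuple(random.choice(["low", "high"]) for _ in range(N_CELLS))
    best_next = max(q[(next_state, a)] for a in JOINT_ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
    state = next_state

# Greedy joint assignment learned for a few representative load combinations.
for s in [("low", "low"), ("low", "high"), ("high", "high")]:
    print(s, "->", max(JOINT_ACTIONS, key=lambda a: q[(s, a)]))
```

Learning over joint actions scales exponentially with the number of cells, which is why coordinated frameworks often factorize the action space or share observations instead; the sketch keeps the full joint table for clarity.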