The Internet of Things (IoT) is large-scale by nature, as manifested by the massive number of connected devices as well as their vast spatial footprint. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering first-mile access for the data tsunami the IoT is expected to generate. However, cellular networks may face scalability problems in providing uplink connectivity to massive numbers of connected things. To characterize the scalability of the cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between IoT devices. The model is then utilized to characterize the extent to which cellular networks can accommodate IoT traffic, and to assess and compare three transmission strategies that incorporate combinations of transmission persistency, backoff, and power ramping. The analysis and results clearly illustrate the scalability problem that the IoT imposes on cellular networks and offer insights into the scenarios in which each transmission strategy is effective.
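The three transmission strategies named above (persistency, backoff, and power ramping) can be illustrated with a toy slotted-access simulation. This is a minimal sketch, not the paper's stochastic-geometry analysis: the success probability below is an assumed function of the number of concurrent transmitters, and all parameter values are illustrative.

```python
import math
import random

def simulate(strategy, n_devices=100, arrival_p=0.02, slots=2000, seed=0):
    """Toy slotted uplink with per-device queues.

    Assumed success model: a transmission succeeds with probability
    exp(-0.05 * interferers) scaled by a power factor; this stands in
    for the paper's SINR-based analysis.
    """
    rng = random.Random(seed)
    queue = [0] * n_devices      # backlogged packets per device
    backoff = [0] * n_devices    # slots left before a retry is allowed
    power = [1] * n_devices      # power-ramping level (capped at 4)
    delivered = 0
    for _ in range(slots):
        for i in range(n_devices):           # Bernoulli packet arrivals
            if rng.random() < arrival_p:
                queue[i] += 1
        tx = [i for i in range(n_devices) if queue[i] > 0 and backoff[i] == 0]
        for i in range(n_devices):
            if backoff[i] > 0:
                backoff[i] -= 1
        for i in tx:
            interferers = len(tx) - 1
            p_succ = math.exp(-0.05 * interferers) * min(1.0, 0.5 * power[i])
            if rng.random() < p_succ:
                queue[i] -= 1
                delivered += 1
                power[i] = 1                 # reset ramping on success
            elif strategy == "backoff":
                backoff[i] = rng.randint(1, 8)
            elif strategy == "ramping":
                power[i] = min(power[i] + 1, 4)
            # "persistent": retry immediately at base power
    return delivered / (slots * n_devices * arrival_p)  # delivery ratio
```

Running `simulate` for each strategy at increasing device counts reproduces the qualitative trade-off: persistent retransmission congests first, while backoff trades delay for stability.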
Airborne base stations (BSs) carried by drones have great potential to enhance the coverage and capacity of 6G cellular networks. However, one of the main challenges facing the deployment of airborne BSs is the limited energy available on the drone, which limits flight time; in fact, most currently used unmanned aerial vehicles (UAVs) can operate for at most one hour. This degrades the performance of a UAV-enabled cellular network, because the UAV must frequently return to a ground station to recharge, leaving its coverage area temporarily out of service. In this article, we propose a UAV-enabled cellular network setup based on tethered UAVs (tUAVs). In the proposed setup, the tUAV is connected to a ground station (GS) through a tether that supplies it with both energy and data, enabling flights that can last for days. We describe the components of the proposed system in detail. Furthermore, we list the main advantages of a tUAV-enabled cellular network over one using typical untethered UAVs (uUAVs). Next, we discuss potential applications and use cases for tUAVs. We also provide Monte Carlo simulations comparing the performance of tUAVs and uUAVs in terms of coverage probability. For instance, for a uUAV that is available 70% of the time (and unavailable for the remaining 30% while charging or swapping its battery), the simulation results show that a tUAV with a 120 m tether can provide up to a 30% increase in coverage probability compared to a uUAV. Finally, we discuss the challenges, design considerations, and future research directions for realizing the proposed setup.
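The availability-weighted coverage comparison described above can be sketched with a short Monte Carlo routine. This is a simplified stand-in for the article's simulations: it uses free-space path loss only (the article also models LoS/NLoS air-to-ground channels), and the heights, SNR threshold, and power values are assumptions.

```python
import math
import random

def coverage_prob(height, availability=1.0, n=20000, cell_radius=500.0,
                  tx_p_dbm=30.0, noise_dbm=-90.0, thresh_db=30.0,
                  fc_ghz=2.0, seed=1):
    """Monte Carlo SNR coverage for one hovering UAV base station.

    Users are dropped uniformly in a disk around the UAV's ground
    projection; a user is covered when its SNR exceeds thresh_db.
    Free-space path loss only (a simplifying assumption).
    """
    rng = random.Random(seed)
    covered = 0
    for _ in range(n):
        r = cell_radius * math.sqrt(rng.random())     # uniform in disk
        d = math.hypot(r, height)                     # 3-D distance
        fspl_db = (20 * math.log10(d)
                   + 20 * math.log10(fc_ghz * 1e9) - 147.55)
        snr_db = tx_p_dbm - fspl_db - noise_dbm
        if snr_db >= thresh_db:
            covered += 1
    # a UAV that is away charging covers nobody during that time
    return availability * covered / n

# tUAV: always available, hover height limited by the tether
# uUAV: free altitude, but available only 70% of the time
p_tuav = coverage_prob(height=100.0, availability=1.0)
p_uuav = coverage_prob(height=150.0, availability=0.7)
```

Even this crude model shows the availability penalty dominating: the always-on tUAV outperforms the uUAV despite its constrained hovering altitude.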
One of the main challenges slowing the deployment of airborne base stations (BSs) using unmanned aerial vehicles (UAVs) is the limited on-board energy and flight time. One potential solution to this problem is to supply the UAV with power through a tether connecting it to the ground. In this paper, we study the optimal placement of tethered UAVs (TUAVs) to minimize the average path-loss between the TUAV and a receiver located on the ground. Given that the tether has a maximum length and that the launching point of the TUAV (the starting point of the tether) is placed on a rooftop, the TUAV is only allowed to hover within a specific hovering region. Besides the maximum tether length, this hovering region also depends on the heights of the buildings surrounding the rooftop, which require the inclination angle of the tether not to fall below a given minimum value in order to avoid tangling and ensure safety. We first formulate the optimization problem for this setup and provide some useful insights into its solution. Next, we derive upper and lower bounds for the optimal values of the tether length and inclination angle. We also propose a suboptimal closed-form solution for the tether length and inclination angle based on maximizing the line-of-sight probability. Finally, we derive the probability distribution of the minimum inclination angle of the tether. We show that its mean value varies with the environment, from 10° in suburban environments to 31° in high-rise urban environments. Our numerical results show that the derived upper and lower bounds on the optimal tether length and inclination angle lead to tight suboptimal values of the average path-loss that are only 0-3 dB above the minimum value.
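The feasible hovering region described above (tether length up to a maximum, inclination angle above a minimum) can be explored numerically. The sketch below is a grid search, not the paper's closed-form analysis, and it uses a common probabilistic LoS air-to-ground path-loss model with illustrative parameter values; the user position, rooftop height, and excess-loss constants are all assumptions.

```python
import math

def path_loss_db(tuav_xz, user_x, a=9.61, b=0.16,
                 eta_los=1.0, eta_nlos=20.0, fc_ghz=2.0):
    """Probabilistic-LoS air-to-ground path loss (parameters assumed)."""
    dx = user_x - tuav_xz[0]
    h = tuav_xz[1]
    d = math.hypot(dx, h)
    elev_deg = math.degrees(math.atan2(h, abs(dx)))
    p_los = 1.0 / (1.0 + a * math.exp(-b * (elev_deg - a)))
    fspl = 20 * math.log10(d) + 20 * math.log10(fc_ghz * 1e9) - 147.55
    return fspl + p_los * eta_los + (1 - p_los) * eta_nlos

def best_tether_config(t_max=120.0, theta_min_deg=20.0, user_x=200.0,
                       rooftop_h=30.0, steps=60):
    """Grid search over the feasible (tether length, inclination angle)
    region for the pair minimizing path loss to a ground user."""
    best = (float("inf"), None, None)
    for i in range(1, steps + 1):
        t = t_max * i / steps
        for j in range(steps + 1):
            theta = math.radians(theta_min_deg
                                 + (90.0 - theta_min_deg) * j / steps)
            x = t * math.cos(theta)          # horizontal offset from rooftop
            z = rooftop_h + t * math.sin(theta)
            pl = path_loss_db((x, z), user_x)
            if pl < best[0]:
                best = (pl, t, math.degrees(theta))
    return best  # (path loss in dB, tether length in m, inclination in deg)
```

The search makes the underlying trade-off visible: a steeper tether raises the LoS probability but lengthens the link distance, which is exactly the tension the paper's bounds capture analytically.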
The Internet of Things (IoT) is the paradigm in which anything can be connected. There are two main approaches to handling the surge in uplink (UL) traffic the IoT is expected to generate, namely scheduled UL (SC-UL) and random-access UL (RA-UL) transmissions. SC-UL is perceived as a viable tool for controlling Quality-of-Service (QoS) levels, while entailing some overhead in the scheduling request prior to any UL transmission. On the other hand, RA-UL is a simple single-phase transmission strategy. While this obviously eliminates scheduling overhead, very little is known about how scalable RA-UL is. At this critical junction, there is a dire need to analyze the scalability of these two paradigms. To that end, this paper develops a spatiotemporal mathematical framework to analyze and assess the performance of SC-UL and RA-UL. The developed framework jointly utilizes stochastic geometry and queueing theory. Based on this framework, we show that the answer to the "scheduling vs. random access paradox" actually depends on the operational scenario. In particular, the RA-UL scheme offers low access delays but suffers from limited scalability, i.e., it cannot support a large number of IoT devices. SC-UL transmission, on the other hand, is better suited for higher device intensities and traffic rates.
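The SC-UL vs. RA-UL trade-off sketched above can be made concrete with a toy slotted model: SC-UL pays a fixed scheduling-request overhead but receives contention-free grants, while RA-UL transmits immediately but loses any slot in which two or more devices collide. This is a deliberate simplification of the paper's SINR-based framework; the overhead, persistence probability, and arrival rate below are assumptions.

```python
import random

def mean_access_delay(scheme, n_devices, arrival_p=0.01, slots=3000,
                      sr_overhead=4, ra_persist=0.5, seed=2):
    """Mean packet access delay (in slots) under a toy SC/RA model.

    SC: one contention-free grant per slot after a fixed SR overhead.
    RA: each backlogged device transmits with probability ra_persist;
        a slot succeeds only when exactly one device transmits.
    """
    rng = random.Random(seed)
    queues = [[] for _ in range(n_devices)]   # arrival slot of queued packets
    grant_fifo = []                           # devices awaiting SC-UL grants
    delays = []
    for t in range(slots):
        for i in range(n_devices):            # Bernoulli arrivals
            if rng.random() < arrival_p:
                queues[i].append(t)
                if scheme == "SC":
                    grant_fifo.append(i)
        if scheme == "SC":
            if grant_fifo:                    # serve one grant per slot
                i = grant_fifo.pop(0)
                delays.append(t - queues[i].pop(0) + sr_overhead)
        else:                                 # RA: contend every slot
            tx = [i for i in range(n_devices)
                  if queues[i] and rng.random() < ra_persist]
            if len(tx) == 1:                  # success iff no collision
                delays.append(t - queues[tx[0]].pop(0))
    return sum(delays) / len(delays) if delays else float("inf")
```

At low device counts RA-UL wins because it skips the request phase; sweeping `n_devices` upward drives RA-UL into collisions while SC-UL degrades gracefully, mirroring the paper's conclusion.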
The year 2020 is witnessing a global health and economic crisis due to the COVID-19 pandemic. Countries across the world are using digital technologies to fight this global crisis. These digital technologies strongly rely, in one way or another, on the availability of wireless communication systems. This paper aims to outline the role of wireless communications in the COVID-19 pandemic from multiple perspectives. First, we show how wireless communication technologies are helping to combat this pandemic by monitoring the spread of the virus, enabling healthcare automation, and enabling virtual education and conferencing. We emphasize the importance of digital inclusiveness in the pandemic and possible solutions to connect the unconnected. Next, we discuss the challenges posed by the use of wireless technologies, including concerns about privacy, security, and misinformation. We then highlight the importance of wireless technologies to the survival of the global economy, such as the automation of industries and supply chains, e-commerce, and support for occupations at risk. Finally, we argue that the rapid development of wireless technologies during the pandemic is likely to be useful in the post-pandemic era.
Abstract. We give sufficient conditions that allow the study of the exponential stability of systems closely related to linear thermoelasticity systems via a decoupling technique. Our approach is based on the multiplier technique, and our result generalizes (from the exponential-stability point of view) the earlier one obtained by Henry et al.
In one of their several manifestations, future cellular networks will be required to accommodate a massive number of devices, several orders of magnitude more than today's networks support. At the same time, future cellular networks will have to fulfill stringent latency constraints. To that end, one problem posed as a potential showstopper is extreme congestion when requesting uplink scheduling over the physical random access channel (PRACH). Indeed, such congestion drags scheduling-delay problems along with it. In this paper, the use of self-organized device-to-device (D2D) clustering is advocated for mitigating PRACH congestion. To this end, the paper proposes two D2D clustering schemes, namely Random-Based Clustering (RBC) and Channel-Gain-Based Clustering (CGBC). Accordingly, this paper sheds light on random access within the proposed D2D clustering schemes and presents a case study based on a stochastic geometry framework. For the sake of objective evaluation, D2D clustering is benchmarked against the conventional scheduling-request procedure. Accordingly, the paper offers insights into the scenarios that minimize the scheduling delay for each clustering scheme. Finally, the paper discusses the implementation algorithm and some potential implementation issues and remedies.
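The difference between the two proposed clustering rules can be illustrated with a small sketch: RBC attaches each device to a cluster head chosen uniformly at random, while CGBC attaches it to the head with the largest channel gain. The gain model below is pure distance-based path loss, an assumption standing in for the paper's fading channels, and all deployment parameters are illustrative.

```python
import math
import random

def make_points(n, radius, rng):
    """Drop n points uniformly at random in a disk of the given radius."""
    pts = []
    for _ in range(n):
        r = radius * math.sqrt(rng.random())
        a = rng.uniform(0, 2 * math.pi)
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts

def cluster(scheme, n_devices=200, n_heads=10, radius=500.0,
            alpha=4.0, seed=3):
    """Mean device-to-head channel gain under RBC or CGBC association.

    RBC:  head picked uniformly at random.
    CGBC: head with the largest gain, i.e. the shortest distance here.
    Gains are d^-alpha with distances floored at 1 m (assumptions).
    """
    rng = random.Random(seed)
    devices = make_points(n_devices, radius, rng)
    heads = make_points(n_heads, radius, rng)
    gains = []
    for (x, y) in devices:
        if scheme == "RBC":
            hx, hy = heads[rng.randrange(n_heads)]
            d = math.hypot(x - hx, y - hy)
        else:  # CGBC
            d = min(math.hypot(x - hx, y - hy) for hx, hy in heads)
        gains.append(max(d, 1.0) ** (-alpha))
    return sum(gains) / len(gains)
```

By construction CGBC can never do worse than RBC on this metric; the interesting question the paper addresses is how that per-link advantage translates into scheduling delay once intra-cluster random access is accounted for.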