Connected and autonomous vehicles will play a pivotal role in future Intelligent Transportation Systems (ITSs) and smart cities in general. High-speed, low-latency wireless communication links will allow municipalities to warn vehicles against safety hazards, as well as support cloud-driving solutions that drastically reduce traffic jams and air pollution. To achieve these goals, vehicles need to be equipped with a wide range of sensors generating and exchanging high-rate data streams. Recently, millimeter wave (mmWave) techniques have been introduced as a means of fulfilling such high data rate requirements. In this paper, we model a highway communication network and characterize its fundamental link budget metrics. In particular, we consider a network where vehicles are served by mmWave Base Stations (BSs) deployed alongside the road. To evaluate our highway network, we develop a new theoretical model that accounts for a typical scenario where heavy vehicles (such as buses and lorries) in slow lanes obstruct the Line-of-Sight (LOS) paths of vehicles in fast lanes and hence act as blockages. Using tools from stochastic geometry, we derive approximations for the Signal-to-Interference-plus-Noise Ratio (SINR) outage probability, as well as for the probability that a user achieves a target communication rate (rate coverage probability). Our analysis provides new design insights for mmWave highway communication networks. In the considered highway scenarios, we show that reducing the horizontal beamwidth from 90° to 30° yields only a minimal reduction in the SINR outage probability (namely, at most 4·10⁻²). Also, unlike in two-dimensional mmWave cellular networks, for small BS densities (namely, one BS every 500 m) it is still possible to achieve an SINR outage probability smaller than 0.2.
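A minimal Monte Carlo sketch of this kind of evaluation is given below: base stations drawn as a one-dimensional Poisson point process along the road, independent LOS blockage by heavy vehicles, and a crude two-level antenna pattern. All numerical values (blockage probability, path-loss exponents, gains, powers) are illustrative assumptions, not the paper's parameters; only the one-BS-per-500-m density comes from the abstract.

```python
# Monte Carlo sketch of the SINR outage probability for a 1-D highway
# mmWave network. Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

LAMBDA_BS = 1 / 500.0    # BS density: one BS every 500 m (from the abstract)
ROAD_LEN = 20_000.0      # simulated road segment [m]
P_BLOCK = 0.4            # assumed prob. a heavy vehicle blocks the LOS path
ALPHA_LOS, ALPHA_NLOS = 2.1, 3.5   # assumed path-loss exponents
G_MAIN, G_SIDE = 10.0, 0.1         # assumed main-/side-lobe antenna gains
P_TX, NOISE = 1.0, 1e-10           # transmit power and noise (linear units)
SINR_TH = 1.0            # 0 dB SINR threshold
N_TRIALS = 10_000

def one_trial() -> bool:
    """Return True if the typical user (at x = 0) is in SINR outage."""
    n_bs = rng.poisson(LAMBDA_BS * ROAD_LEN)
    if n_bs == 0:
        return True                       # no BS on the segment: outage
    x = rng.uniform(-ROAD_LEN / 2, ROAD_LEN / 2, n_bs)
    los = rng.random(n_bs) > P_BLOCK      # independent blockage per link
    alpha = np.where(los, ALPHA_LOS, ALPHA_NLOS)
    power = P_TX * np.abs(x) ** (-alpha)
    serving = np.argmax(power)            # strongest-BS association
    # Crudely give every interferer the side-lobe gain.
    interference = G_SIDE * (power.sum() - power[serving])
    sinr = G_MAIN * power[serving] / (NOISE + interference)
    return sinr < SINR_TH

outage = np.mean([one_trial() for _ in range(N_TRIALS)])
print(f"Estimated SINR outage probability: {outage:.3f}")
```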
The explosive growth of content-on-the-move, such as video streaming to mobile devices, has propelled research on multimedia broadcast and multicast schemes. Multi-rate transmission strategies have been proposed as a means of delivering layered services to users experiencing different downlink channel conditions. In this paper, we consider Point-to-Multipoint layered service delivery across a generic cellular system and improve it by applying different random linear network coding approaches. We derive packet error probability expressions and use them as performance metrics in the formulation of resource allocation frameworks. The aim of these frameworks is both to optimize the transmission scheme and to minimize the number of broadcast packets on each downlink channel, while offering service guarantees to a predetermined fraction of users. As a case study, our proposed frameworks are then adapted to the LTE-A standard and the eMBMS technology. We focus on the delivery of a video service based on the H.264/SVC standard and demonstrate the advantages of layered network coding over multi-rate transmission. Furthermore, we establish that the choice of both the network coding technique and the resource allocation method plays a critical role in the network footprint and the quality of each received video layer.
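As an illustration of the kind of packet error probability expression involved, the sketch below combines the classical probability that N random GF(q) combinations of k source packets have full rank with an i.i.d. erasure channel; the channel model and all parameter values are our assumptions, not necessarily those of the paper.

```python
# Sketch of a standard RLNC packet error probability over an i.i.d.
# erasure channel, of the kind usable as a resource-allocation metric.
from math import comb

def p_decode(N: int, k: int, q: int) -> float:
    """Prob. that N >= k random linear combinations over GF(q) have rank k."""
    if N < k:
        return 0.0
    p = 1.0
    for i in range(k):
        p *= 1.0 - q ** -(N - i)
    return p

def p_error(n: int, k: int, q: int, eps: float) -> float:
    """Prob. a user fails to decode a k-packet layer after n transmissions,
    each erased independently with probability eps."""
    p_ok = sum(comb(n, N) * (1 - eps) ** N * eps ** (n - N) * p_decode(N, k, q)
               for N in range(k, n + 1))
    return 1.0 - p_ok

# Example (values are ours): 16-packet layer, GF(256), 10% erasures, 20 packets.
print(f"{p_error(20, 16, 256, 0.1):.4e}")
```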
Point-to-multipoint communications are expected to play a pivotal role in next-generation networks. This paper refers to a cellular system transmitting layered multicast services to a multicast group of users. Reliability of communications is ensured via different Random Linear Network Coding (RLNC) techniques. We deal with a fundamental problem: the computational complexity of the RLNC decoder. The higher the number of decoding operations, the greater the user's computational overhead and, consequently, the faster the battery of a mobile device drains. By referring to several sparse RLNC techniques, and without any assumption on the implementation of the RLNC decoder in use, we provide an efficient way to characterize the performance of users targeted by ultra-reliable layered multicast services. The proposed model makes it possible to efficiently derive the average number of coded packet transmissions needed to recover one or more service layers. We design a convex resource allocation framework that minimizes the complexity of the RLNC decoder by jointly optimizing the transmission parameters and the sparsity of the code. The designed optimization framework also ensures service guarantees to predetermined fractions of users. The performance of the proposed optimization framework is then investigated in an LTE-A eMBMS network multicasting H.264/SVC video services.
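A minimal sketch of a sparse RLNC encoder over GF(2) is given below to make the complexity trade-off concrete: the sparser the coding vectors, the fewer XOR operations the decoder performs. The Bernoulli sparsity model and all names and values are ours, not the paper's.

```python
# Minimal sparse RLNC encoder over GF(2): each source packet enters a
# coded packet with probability (1 - sparsity). Lower density means fewer
# XORs at the decoder, i.e. lower decoding complexity.
import numpy as np

rng = np.random.default_rng(1)

def encode_sparse(source: np.ndarray, sparsity: float):
    """source: (k, L) array of k packets of L bytes. Returns (coeffs, payload)."""
    k = source.shape[0]
    coeffs = (rng.random(k) > sparsity).astype(np.uint8)   # Bernoulli support
    if not coeffs.any():                                   # avoid all-zero vector
        coeffs[rng.integers(k)] = 1
    payload = np.bitwise_xor.reduce(source[coeffs == 1], axis=0)
    return coeffs, payload

# Example with assumed sizes: 8 source packets of 4 bytes, 70% sparsity.
k, L = 8, 4
source = rng.integers(0, 256, size=(k, L), dtype=np.uint8)
coeffs, pkt = encode_sparse(source, sparsity=0.7)
print("coding vector:", coeffs, "payload:", pkt)
```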
Ultra-reliable Point-to-Multipoint (PtM) communications are expected to become pivotal in networks offering future dependable services for smart cities. In this regard, sparse Random Linear Network Coding (RLNC) techniques have been widely employed to provide an efficient way to improve the reliability of broadcast and multicast data streams. This paper addresses the pressing concern of providing a tight approximation to the probability of a user recovering a data stream protected by this kind of coding technique. In particular, by exploiting the Stein-Chen method, we provide a novel and general performance framework applicable to any combination of system and service parameters, such as the finite field size, the length of the data stream and the level of sparsity. The deviation of the proposed approximation from Monte Carlo simulations is negligible, improving significantly on state-of-the-art performance bounds.
Index Terms: Sparse random network coding, broadcast communications, multicast communications, Stein-Chen method.
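For reference, the quantity being approximated can also be estimated by brute force. The sketch below estimates, via Monte Carlo simulation and Gaussian elimination over GF(2), the probability that N sparse random coding vectors span all k source packets; the i.i.d. Bernoulli sparsity model and the parameter values are illustrative assumptions made here.

```python
# Monte Carlo baseline for the recovery probability of a sparse RLNC
# stream over GF(2): probability that an n-by-k Bernoulli matrix has rank k.
import numpy as np

rng = np.random.default_rng(2)

def gf2_rank(m: np.ndarray) -> int:
    """Rank of a binary matrix via Gaussian elimination over GF(2)."""
    m = m.copy()
    rank = 0
    for col in range(m.shape[1]):
        pivot = np.nonzero(m[rank:, col])[0]
        if pivot.size == 0:
            continue
        r = rank + pivot[0]
        m[[rank, r]] = m[[r, rank]]       # move the pivot row up
        mask = m[:, col] == 1
        mask[rank] = False
        m[mask] ^= m[rank]                # clear the rest of the column
        rank += 1
        if rank == m.shape[0]:
            break
    return rank

def full_rank_prob(k: int, n: int, density: float, trials: int = 10_000) -> float:
    ok = 0
    for _ in range(trials):
        m = (rng.random((n, k)) < density).astype(np.uint8)
        ok += gf2_rank(m) == k
    return ok / trials

# Example (assumed values): k = 16 source packets, 20 received, 20% density.
print(f"{full_rank_prob(16, 20, 0.2):.4f}")
```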
Google Earth Engine (GEE) is a versatile cloud platform in which pixel-based (PB) and object-oriented (OO) Land Use–Land Cover (LULC) classification approaches can be implemented, thanks to the availability of many state-of-the-art functions, including various Machine Learning (ML) algorithms. OO approaches, including both object segmentation and object textural analysis, are still not common in the GEE environment, probably owing to the difficulty of concatenating the proper functions and of tuning the various parameters to overcome the GEE computational limits. In this context, this work aims at developing and testing an OO classification approach combining the Simple Non-Iterative Clustering (SNIC) algorithm to identify spatial clusters, the Gray-Level Co-occurrence Matrix (GLCM) to calculate cluster textural indices, and two ML algorithms (Random Forest (RF) or Support Vector Machine (SVM)) to perform the final classification. A Principal Components Analysis (PCA) is applied to the seven main GLCM indices to condense the textural information used for the OO classification into a single band. The proposed approach is implemented in a user-friendly, freely available GEE code that performs the OO classification, allows various parameters to be tuned (e.g., the input bands, the classification algorithm, the segmentation scale), and compares the result with a PB approach. The accuracy of the OO and PB classifications can be assessed both visually and through two confusion matrices that can be used to calculate the relevant statistics (producer's, user's, and overall accuracy (OA)). The proposed methodology was broadly tested in a 154 km² study area, located in the Lake Trasimeno area (central Italy), using Landsat 8 (L8), Sentinel 2 (S2), and PlanetScope (PS) data. The area was selected considering its complex LULC mosaic, mainly composed of artificial surfaces, annual and permanent crops, small lakes, and wooded areas. In the study area, the various tests produced interesting results on the different datasets (OA: PB RF (L8 = 72.7%, S2 = 82%, PS = 74.2%), PB SVM (L8 = 79.1%, S2 = 80.2%, PS = 74.8%), OO RF (L8 = 64%, S2 = 89.3%, PS = 77.9%), OO SVM (L8 = 70.4%, S2 = 86.9%, PS = 73.9%)). The broad application of the code demonstrated the very good reliability of the whole process, even though the OO classification sometimes proved too demanding on higher-resolution data, given the available GEE computational resources.
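A hedged sketch of such a pipeline using the GEE Python API is shown below: SNIC segmentation, GLCM texture on a cluster-mean band, and a Random Forest classifier. Asset IDs, band choices, and parameter values are placeholders and do not reproduce the published code (which, among other things, applies a PCA to the seven GLCM indices, omitted here for brevity).

```python
# Sketch of an OO classification in GEE: SNIC segmentation + GLCM texture
# + Random Forest. Assumes an authenticated earthengine-api environment.
import ee
ee.Initialize()

s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
        .filterDate('2021-06-01', '2021-09-01')
        .filterBounds(ee.Geometry.Point(12.1, 43.1))   # Lake Trasimeno area
        .median()
        .select(['B2', 'B3', 'B4', 'B8']))

# 1) SNIC segmentation on a regular seed grid.
seeds = ee.Algorithms.Image.Segmentation.seedGrid(36)
snic = ee.Algorithms.Image.Segmentation.SNIC(
    image=s2, size=32, compactness=1, connectivity=8, seeds=seeds)

# 2) GLCM texture needs an integer band; use the NIR cluster mean.
nir_int = snic.select('B8_mean').multiply(1000).toInt32()
glcm = nir_int.glcmTexture(size=4)                 # e.g. B8_mean_contrast
stack = snic.addBands(glcm.select('B8_mean_contrast'))

# 3) Train and apply a Random Forest (the training asset is hypothetical).
training_points = ee.FeatureCollection('users/example/trasimeno_training')
samples = stack.sampleRegions(collection=training_points,
                              properties=['class'], scale=10)
rf = ee.Classifier.smileRandomForest(100).train(
    features=samples, classProperty='class',
    inputProperties=stack.bandNames())
classified = stack.classify(rf)
```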
Characterization of the delay profile of systems employing random linear network coding is important for the reliable provision of broadcast services. Previous studies focused on network coding over large finite fields, or developed Markov chains to model the delay distribution, but did not consider the effect of transmission deadlines on the delay. In this work, we consider generations of source packets that are encoded and transmitted over the erasure broadcast channel. The transmission of packets associated with a generation is deadline-constrained, that is, the transmitter drops a generation and proceeds to the next one when a predetermined deadline expires. Closed-form expressions for the average number of required packet transmissions per generation are obtained in terms of the generation size, the field size, the erasure probability and the deadline choice. An upper bound on the average decoding delay, which is tighter than previous bounds found in the literature, is also derived. Analysis shows that the proposed framework can be used to fine-tune the system parameters and to ensure that neither insufficient nor excessive numbers of packets are sent over the broadcast channel.
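To illustrate the kind of quantity derived, the sketch below computes the average number of transmissions per generation when the sender stops either at a deadline of n_max packets or as soon as the generation becomes decodable. It assumes a single receiver with ideal feedback and i.i.d. erasures, which is a simplification of the paper's broadcast model; the parameter values are ours.

```python
# Average transmissions per generation under a deadline, assuming a single
# receiver with ideal feedback over an i.i.d. erasure channel.
from math import comb

def prod_full_rank(N: int, k: int, q: int) -> float:
    """Prob. that N >= k random GF(q) combinations have rank k."""
    p = 1.0
    for i in range(k):
        p *= 1.0 - q ** -(N - i)
    return p

def p_decoded_by(n: int, k: int, q: int, eps: float) -> float:
    """Prob. the generation is decodable after n transmitted packets."""
    return sum(comb(n, N) * (1 - eps) ** N * eps ** (n - N)
               * prod_full_rank(N, k, q) for N in range(k, n + 1))

def avg_transmissions(k: int, q: int, eps: float, n_max: int) -> float:
    # E[X] = sum_{n=0}^{n_max - 1} P(X > n), with X capped at the deadline.
    return sum(1.0 - p_decoded_by(n, k, q, eps) for n in range(n_max))

# Example (assumed values): 16-packet generation, GF(2), 10% erasures.
print(f"{avg_transmissions(k=16, q=2, eps=0.1, n_max=30):.2f}")
```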
We consider a lossy multicast network in which reliability is provided by means of Random Linear Network Coding (RLNC). Our goal is to characterise the performance of such a network in terms of the probability that a source message is delivered to all destination nodes. Previous studies considered coding over large finite fields, small numbers of destination nodes or specific, often impractical, channel conditions. In contrast, we address the general problem, considering an arbitrary field size and number of destination nodes, as well as a realistic channel. We propose a lower bound on the probability of successful delivery that is more accurate than the approximation commonly used in the literature. In addition, we present a novel performance analysis of the systematic version of RLNC. The accuracy of the proposed performance framework is verified via extensive Monte Carlo simulations, where the impact of the network and code parameters is investigated. Specifically, we show that the mean square error of the bound for a ten-user network can be as low as 9·10⁻⁵ for non-systematic RLNC.
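The sketch below illustrates a textbook analysis of systematic RLNC of the kind the paper develops: the k source packets are sent uncoded and followed by random GF(q) combinations, and a user decodes once the received coded packets, restricted to its missing source packets, reach full rank. The i.i.d. channel model and all numerical values are our assumptions.

```python
# Delivery probability for systematic RLNC over i.i.d. erasure channels:
# k uncoded packets followed by n - k random GF(q) combinations.
from math import comb

def p_rank(m: int, r: int, q: int) -> float:
    """Prob. that m uniform GF(q) vectors of length r have rank r."""
    if m < r:
        return 0.0
    p = 1.0
    for i in range(r):
        p *= 1.0 - q ** -(m - i)
    return p

def p_systematic(n: int, k: int, q: int, eps: float) -> float:
    total = 0.0
    for s in range(k + 1):                      # systematic packets received
        ps = comb(k, s) * (1 - eps) ** s * eps ** (k - s)
        for m in range(n - k + 1):              # coded packets received
            pm = comb(n - k, m) * (1 - eps) ** m * eps ** (n - k - m)
            total += ps * pm * p_rank(m, k - s, q)
    return total

def p_all_users(n: int, k: int, q: int, eps: float, users: int) -> float:
    return p_systematic(n, k, q, eps) ** users  # i.i.d. user channels

# Ten-user network as in the abstract; the remaining values are ours.
print(f"{p_all_users(n=24, k=16, q=2, eps=0.1, users=10):.4f}")
```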
Intelligent Transportation Systems (ITSs) require ultra-low end-to-end delays and multi-gigabit-per-second data transmission. Millimetre Wave (mmWave) communications can fulfil these requirements. However, the increased mobility of Connected and Autonomous Vehicles (CAVs) requires frequent beamforming, thus introducing increased overhead. In this paper, a new beamforming algorithm is proposed that achieves overhead-free beamforming training. Leveraging the CAVs' sensory data, broadcast in Dedicated Short Range Communications (DSRC) beacons, the position and motion of a CAV can be estimated and the beam steered accordingly. To minimise the position errors, an analysis of the distinct error components is presented. The network performance is further enhanced by adapting the antenna beamwidth to the position error. Our algorithm outperforms the legacy IEEE 802.11ad approach, proving it to be a viable solution for future ITS applications and services.
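A toy sketch of the beamwidth-adaptation idea follows: the beam is widened just enough to keep the tracked CAV inside the main lobe despite the position error. The Gaussian error model, the 3-sigma margin, and all constants are our assumptions, not the paper's.

```python
# Choose a beamwidth wide enough to cover the CAV's position uncertainty,
# so explicit beam training can be skipped.
import math

def required_beamwidth_deg(distance_m: float, sigma_pos_m: float,
                           min_bw_deg: float = 3.0,
                           k_sigma: float = 3.0) -> float:
    """Beamwidth keeping the CAV inside the main lobe with ~99.7%
    probability, for a Gaussian cross-track position error."""
    half_bw = math.atan2(k_sigma * sigma_pos_m, distance_m)   # radians
    return max(min_bw_deg, 2 * math.degrees(half_bw))

# Example (assumed values): CAV 50 m from the roadside BS, 0.5 m error std.
print(f"{required_beamwidth_deg(50.0, 0.5):.1f} deg")
```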