Considerable effort and time go into the planning and building of cellular wireless networks so that the best coverage and quality of service are delivered with minimal infrastructure. Path loss models are generally used to predict wireless network coverage. Detailed knowledge of the path loss model suited to the target geographical area is therefore needed to determine the coverage quality of any wireless network design. However, to the best of our knowledge, and despite the importance of path loss models for coverage prediction, no comprehensive survey of this field exists. The purpose of this paper is therefore to survey the existing techniques and mechanisms in this domain. Briefly, the contributions of this paper are: (1) a comprehensive and up-to-date survey of the various network coverage prediction techniques, indicating the frequency ranges for which the models were developed; (2) a presentation of the terrains suited to each model and the mobile generation for which it is best suited; and (3) a comparative analysis to aid the planning and implementation of cellular networks.

INDEX TERMS Path loss model, prediction, wireless, propagation scenarios, mobile generations, signal.

I. INTRODUCTION

The remarkable development of mobile communication systems over the last few years has posed severe challenges to the planning of mobile wireless networks. This journey began with the first-generation (1G) network in 1979 and has progressed to the currently explored fifth-generation (5G) network. Each new generation is built upon the needs of the present one, driving research and development toward better technology that accommodates the required capacity and availability for the end user. With the exponential increase in the use of mobile-connected devices and the constant expansion of mobile communication networks, effective provision of network coverage is imperative for the delivery of quality of service (QoS) [1]. Radio propagation can be defined as the behaviour of radio waves as signals are transmitted from one point to another [2]. Phenomena such as absorption, reflection, scattering and refraction, among others, affect the radio wave [3]. Mobile network coverage prediction is therefore a vital and essential task in the planning and deployment of cellular technology.
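To illustrate how a path loss model feeds coverage prediction, the sketch below uses the generic log-distance model rather than any specific model surveyed here; the reference distance, path loss exponent, shadowing margin, transmit power and receiver sensitivity are placeholder assumptions chosen only for the example.

```python
import math

def log_distance_path_loss(d_m, f_mhz, d0_m=1.0, n=3.0, shadow_margin_db=8.0):
    """Log-distance path loss in dB at distance d_m (metres).

    f_mhz            carrier frequency in MHz
    d0_m             reference distance (assumed 1 m)
    n                path loss exponent (assumed 3.0, urban macro cell)
    shadow_margin_db fade margin for log-normal shadowing (assumed 8 dB)
    """
    # Free-space path loss at the reference distance d0 (Friis equation, in dB).
    fspl_d0 = 20 * math.log10(d0_m) + 20 * math.log10(f_mhz) - 27.55
    return fspl_d0 + 10 * n * math.log10(d_m / d0_m) + shadow_margin_db

def cell_radius_m(tx_power_dbm, rx_sensitivity_dbm, f_mhz, **kwargs):
    """Largest distance at which the received level stays above sensitivity."""
    max_loss = tx_power_dbm - rx_sensitivity_dbm
    d = 1.0
    while log_distance_path_loss(d, f_mhz, **kwargs) < max_loss:
        d *= 1.05  # coarse geometric search is enough for an illustration
    return d

if __name__ == "__main__":
    # Example: 43 dBm macro site, -100 dBm receiver sensitivity, 1800 MHz carrier.
    print(f"Estimated cell radius: {cell_radius_m(43, -100, 1800):.0f} m")
```

In a real planning exercise the exponent and margin would come from the terrain-specific model the survey discusses; the structure of the calculation (link budget versus predicted loss) stays the same.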
The demands of mobile users and future business contexts are anticipated to be met by the fifth-generation (5G) mobile communication system. It is expected to deliver a fully mobile and connected society and to support the demanding services of various use cases (UCs). It is intended to meet these requirements by providing data rates of tens of Gbps, higher mobility ranges, lower latencies, and massive connection densities per square kilometre. A comprehensive and up-to-date survey of the different developed and proposed use cases is presented in this paper. The first part of the paper presents an overview of the new 5G architecture, introducing features such as the new radio interface (New Radio), the 5G Core Network, the minimum requirements, the Radio Access Network, 5G spectrum requirements, and other fundamentals of the network. Secondly, it provides a detailed review of the use cases developed and proposed for 5G communications by the standards development organizations (SDOs) and other key players in mobile communication. Thirdly, we propose spectrum bands for the deployment of the various use cases based on the low-, mid-, and high-band spectrum, classify the use cases with respect to their relevance and family, and identify the IMT-2020 test environments and the usage scenarios derived by the 3GPP. Fourthly, the channel capacity and the bandwidth of the spectrum are studied, simulated, and compared to validate the spectrum proposed in this paper for each UC family. Hence, this paper serves as a guideline for understanding the usage scenarios for future 5G deployment in various environments. This would allow system developers to design and implement 5G channel characterization models specific to the usage scenarios to meet the system requirements.
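Channel capacity comparisons of the kind described above are commonly grounded in the Shannon bound C = B log2(1 + SNR). The sketch below shows how low-, mid- and high-band allocations can be compared on that basis; the channel bandwidths and the 20 dB SNR are illustrative assumptions, not figures from the paper.

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon channel capacity C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative channel bandwidths per spectrum band (assumed, not from the paper).
bands = {
    "low-band (700 MHz)": 20e6,    # 20 MHz channel
    "mid-band (3.5 GHz)": 100e6,   # 100 MHz channel
    "high-band (28 GHz)": 400e6,   # 400 MHz channel
}

for name, bw in bands.items():
    capacity = shannon_capacity_bps(bw, snr_db=20)  # assumed 20 dB SNR
    print(f"{name}: {capacity / 1e9:.2f} Gbps")
```

The comparison makes the basic trade-off visible: the wider channels available in the mid- and high-band spectrum dominate achievable capacity, which is why UC families with extreme data-rate requirements are typically mapped to those bands.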
Energy consumption has become a bottleneck in wireless sensor networks. The tiny sensor nodes in these networks have limited memory, small battery capacity, and limited processing capability and bandwidth. Data compression has been used to reduce energy consumption and improve network lifetime, as it reduces data size before the data is forwarded from the sensing node to the sink node. In this paper, a survey and comparison of currently available data compression techniques in wireless sensor networks are conducted. Suitable sets of criteria are defined to classify existing data compression algorithms. From the reviewed techniques, an adaptive lossless data compression algorithm (ALDC) is analysed through MATLAB coding and simulation. The analysis aims to discover strategies that can further reduce the amount of data before it is transmitted. It was discovered that encoding residue samples, rather than raw data samples, reduced the bitstream from 112 bits to a range of 30 to 36 bits depending on the sample block size. The average length of the data samples passed to the encoder was reduced from the original 14 bits per symbol to 1.125 bits per symbol, corresponding to a code redundancy of 0.875, and resulted in an energy saving of 67.8% to 73.2%. This work further proposes a data compression algorithm that encodes the residue samples with fewer bits than the ALDC algorithm, reducing the bitstream to 26 bits. The average code length equals the entropy of the data samples, demonstrating zero redundancy and an improved energy saving of 76.8% compared to ALDC. The proposed algorithm therefore shows improved energy efficiency through data compression.
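To illustrate the residue-encoding idea in isolation (a minimal sketch, not the ALDC or the proposed algorithm; the sample values and the small variable-length code table are assumptions), the following shows how encoding differences between consecutive samples, instead of the raw 14-bit samples, shortens the bitstream when sensor readings vary slowly.

```python
def residues(samples):
    """Replace raw samples with first-order differences (residues).

    Slowly varying sensor readings yield small residues, which a
    variable-length code can represent with far fewer bits than the
    raw 14-bit samples.
    """
    prev = samples[0]
    out = [prev]            # first sample kept raw so the sink can reconstruct
    for s in samples[1:]:
        out.append(s - prev)
        prev = s
    return out

# Toy variable-length code for small residues (assumed, for illustration only).
CODE = {0: "0", 1: "10", -1: "110", 2: "1110", -2: "11110"}

def encode(samples, raw_bits=14):
    res = residues(samples)
    bits = format(res[0], f"0{raw_bits}b")                   # raw first sample
    bits += "".join(CODE.get(r, "111110" + format(r & 0x3FFF, f"0{raw_bits}b"))
                    for r in res[1:])                         # escape for large residues
    return bits

if __name__ == "__main__":
    readings = [512, 512, 513, 513, 512, 514, 514, 513]  # slowly varying sensor data
    encoded = encode(readings)
    print(f"raw: {len(readings) * 14} bits, residue-coded: {len(encoded)} bits")
```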
LoRa is a communication scheme within the low power wide area network (LPWAN) family of technologies, operating in the ISM bands. It has been extensively documented and used in research and industry due to its long coverage ranges of up to 20 km or more with a transmit power of less than 14 dBm. Moreover, some applications report theoretical battery lives of up to 10 years for field-deployed modules using the scheme in WSN applications. Additionally, the scheme is very resilient to losses from noise, as well as to bursts of interference, through its forward error correction (FEC). Our objective is to systematically review the empirical evidence on the use cases of LoRa in rural landscapes, the metrics used, and the relevant validation schemes. The research is evaluated based on (i) the mathematical functions of the scheme (bandwidth use, spreading factor, symbol rate, chip rate, and nominal bit rate), (ii) use cases, (iii) test beds and metrics of evaluation, and (iv) validation methods. A systematic literature review of published, refereed primary studies on LoRa applications was conducted using articles from 2010-2019. We identified 21 relevant primary studies, which reported a range of different assessments of LoRa; 10 of the 21 reported novel use cases. As an actionable conclusion, the authors find that more field testing is needed, as no articles could be found on performance or deployment in Botswana or South Africa despite the existence of LoRa networks in both countries. Researchers in the region can therefore investigate propagation model performance, the energy efficiency of the scheme, the MAC layer, and the channel access challenges for the region.
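The quantities listed in (i) are related through the standard LoRa modulation relations: the chip rate equals the bandwidth, the symbol rate is the bandwidth divided by 2^SF, and the nominal bit rate scales both by the spreading factor and the code rate. The sketch below evaluates them for an assumed EU868-style configuration; the bandwidth, spreading factor and coding rate chosen here are illustrative, not values taken from the reviewed studies.

```python
def lora_rates(bandwidth_hz, spreading_factor, coding_rate_index):
    """Nominal LoRa data rates from the standard modulation relations.

    bandwidth_hz       channel bandwidth (e.g. 125 kHz)
    spreading_factor   SF, 7..12 (chips per symbol = 2**SF)
    coding_rate_index  1..4, giving a code rate of 4 / (4 + index)
    """
    chip_rate = bandwidth_hz                               # chips per second
    symbol_rate = bandwidth_hz / (2 ** spreading_factor)   # symbols per second
    code_rate = 4 / (4 + coding_rate_index)
    bit_rate = spreading_factor * symbol_rate * code_rate  # bits per second
    return chip_rate, symbol_rate, bit_rate

if __name__ == "__main__":
    # Assumed configuration: 125 kHz bandwidth, SF12, coding rate 4/5.
    rc, rs, rb = lora_rates(125_000, 12, 1)
    print(f"chip rate {rc:.0f} cps, symbol rate {rs:.1f} sym/s, bit rate {rb:.0f} bps")
```

The example makes the range/rate trade-off explicit: at SF12 and 125 kHz the nominal bit rate drops to roughly 293 bps, which is the cost of the long ranges and robustness noted above.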
The growing interest in renewable energy and the falling prices of solar panels place solar electricity in a favourable position for adoption. However, high-rate adoption of intermittent renewable energy introduces challenges and can create power instability between the available generation and the load demand. Accurate solar photovoltaic (PV) power forecasting is therefore an essential tool for maintaining system reliability and maximizing renewable energy integration. This paper presents a comprehensive and comparative review of existing Machine Learning (ML) based approaches used in PV power forecasting, focusing on short-term horizons. We provide an overview of the factors affecting solar PV power forecasting and of the existing PV power forecasting methods in the literature, with a specific focus on ML-based models. To further enhance the comparison and provide more insight into advances in the area, we simulate the performance of different ML methods used in solar PV power forecasting and conclude with a discussion of the results.
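As a minimal illustration of the short-term, ML-based forecasting setup discussed above (the synthetic data, lag length and choice of a random-forest regressor are assumptions, not the models compared in the paper), the sketch below builds lagged-power and irradiance features and fits a regressor to predict the next hour's PV output.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic hourly data: irradiance-driven PV power with noise (illustrative only).
hours = np.arange(24 * 60)
irradiance = np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)
power = 5.0 * irradiance + rng.normal(0, 0.2, hours.size)  # kW

# Features: previous 3 hours of power plus the current hour's irradiance forecast.
lags = 3
n = len(power) - lags
X = np.column_stack([power[i:i + n] for i in range(lags)] + [irradiance[lags:]])
y = power[lags:]

split = int(0.8 * len(y))                    # simple chronological train/test split
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
print(f"test MAE: {mean_absolute_error(y[split:], pred):.3f} kW")
```

In practice the feature set would also include weather variables such as temperature and cloud cover, and the chronological split would be replaced by rolling-origin evaluation, but the lag-feature structure shown here is common to most short-term ML forecasters.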