Abstract-In this paper we present system-level simulation results of a self-optimizing load balancing algorithm in a long-term evolution (LTE) mobile communication system. Building on previous work [1], [2], we evaluate the network performance of this algorithm, which takes the load of a cell as input and controls the handover parameters. We compare the results for different simulation setups: a basic, regular network layout; a non-regular grid with different cell sizes; and a realistic scenario based on measurements with a realistic traffic setup.
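The load-driven control described in this abstract can be pictured as a simple threshold rule: when a cell is overloaded, its handover offset toward neighbors is nudged so that edge users are handed over earlier, shedding load. The sketch below is a hypothetical illustration under assumed thresholds and step sizes, not the paper's actual controller; the function name and all parameters are invented for clarity.

```python
def adjust_handover_offset(load, offset_db, high=0.8, low=0.5,
                           step_db=1.0, max_offset_db=6.0):
    """Hypothetical load-balancing step (illustrative, not the paper's algorithm).

    load          -- current cell load as a fraction in [0, 1]
    offset_db     -- current cell-individual handover offset toward a neighbor (dB)
    Returns the updated offset, clipped to +/- max_offset_db.
    """
    if load > high:
        # Overloaded: increase the offset so edge users hand over sooner.
        offset_db = min(offset_db + step_db, max_offset_db)
    elif load < low:
        # Underloaded: relax the offset back toward its default.
        offset_db = max(offset_db - step_db, -max_offset_db)
    # Between the thresholds the offset is left unchanged.
    return offset_db
```

In a system-level loop this rule would be evaluated per neighbor relation at each measurement interval, with the thresholds themselves being candidates for self-optimization.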
Abstract-This paper presents a self-optimizing algorithm that tunes the handover (HO) parameters of an LTE (Long-Term Evolution) base station in order to improve overall network performance and diminish negative effects (call dropping, HO failures). The proposed algorithm picks the best hysteresis and time-to-trigger combination for the current network status. We examined the effects of this self-optimizing algorithm in a realistic scenario, and the results show an improvement over static parameter settings.
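The hysteresis and time-to-trigger pair tuned by such algorithms govern the standard LTE A3-style handover condition: a handover is triggered once a neighbor cell's RSRP exceeds the serving cell's RSRP by the hysteresis continuously for the time-to-trigger duration. A minimal sketch of that triggering condition, assuming discrete RSRP samples (the function name and sampling model are illustrative assumptions, not from the abstract):

```python
def a3_handover_triggered(serving_rsrp, neighbor_rsrp, hysteresis_db, ttt_samples):
    """Illustrative A3-style trigger check over sampled RSRP traces (dBm).

    Returns True if the neighbor exceeds the serving cell by hysteresis_db
    for at least ttt_samples consecutive measurement samples
    (ttt_samples stands in for the time-to-trigger timer).
    """
    consecutive = 0
    for s, n in zip(serving_rsrp, neighbor_rsrp):
        if n > s + hysteresis_db:
            consecutive += 1
            if consecutive >= ttt_samples:
                return True
        else:
            # Condition broken: the time-to-trigger timer resets.
            consecutive = 0
    return False
```

A larger hysteresis or longer time-to-trigger suppresses ping-pong handovers but delays necessary ones, which is exactly the trade-off a self-optimizing parameter selection navigates.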
The emergence of 5G enables a broad set of diversified and heterogeneous services with complex and potentially conflicting demands. For networks to be able to satisfy those needs, a flexible, adaptable, and programmable architecture based on network slicing is being proposed. Moreover, a softwarization and cloudification of communications networks is required, where network functions (NFs) are being transformed from programs running on dedicated hardware platforms to programs running over a shared pool of computational and communication resources. This architectural framework allows the introduction of resource elasticity as a key means to make efficient use of the computational resources of 5G systems, but adds challenges related to resource sharing and efficiency. In this paper, we propose Artificial Intelligence (AI) as a built-in architectural feature that allows the exploitation of the resource elasticity of a 5G network. Building on the work of the recently formed Experiential Network Intelligence (ENI) industry specification group of the European Telecommunications Standards Institute (ETSI) to embed an AI engine in the network, we describe a novel taxonomy for learning mechanisms that target exploiting the elasticity of the network, as well as three different resource-elastic use cases leveraging AI. This work describes the basis of a use case recently approved at ETSI ENI.
Abstract-In this paper we present simulation results of a self-optimizing network in a long-term evolution (LTE) mobile communication system that uses two optimization algorithms at the same time: load balancing (LB) and handover parameter optimization (HPO). Based on previous work [1], [2], [5], we extend the optimization to a combined use case. We present the interactions of the two SON algorithms and show an example of a coordination system. The coordination system for self-optimization observes system performance and controls the SON algorithms. As both SON algorithms act on the handover decision itself, not only interactions but also conflicts in the observation and control of the system are to be expected, and are indeed observed. The coordination system presented here is not an optimal solution covering all aspects, but rather a working solution that performs on par with the individual algorithms or, in the best case, combines their strengths and achieves even better performance, although the gain is localized in time and area.
This article introduces an enhanced version of a previously developed self-optimizing algorithm that controls the handover (HO) parameters of a long-term evolution (LTE) base station in order to diminish and prevent the negative effects that HO can introduce (radio link failures, HO failures, and ping-pong HOs), and thus improve overall network performance. The default algorithm selects the best hysteresis and time-to-trigger combination based on the current network status. The enhancement proposed here aims to maximize the gain provided by the algorithm by improving its convergence time. The effects of this enhancement have been studied in a rural scenario and compared to the original algorithm; the results show a clear improvement, with faster convergence and better network performance.