Unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) is a prominent concept in which a UAV equipped with an MEC server is deployed to serve a number of Internet of Things (IoT) terminal devices (TDs) over a finite period. In this paper, each TD has a latency-critical computation task to complete in each time slot. Three computation strategies are available to each TD. First, a TD can perform local computing by itself. Second, it can partially offload task bits to the UAV for computing. Third, it can offload task bits to an access point (AP) via UAV relaying. We propose a new optimization problem formulation that minimizes the total energy consumption, including communication-related energy, computation-related energy, and the UAV's flight energy, by jointly optimizing bit allocation, time-slot scheduling, power allocation, and the UAV trajectory. Since the formulated problem is nonconvex and its optimal solution is difficult to find, we decompose it into two parts and obtain a near-optimal solution within a dozen iterations. Finally, numerical results validate the proposed algorithm, which proves efficient and superior to the benchmark cases.
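The abstract does not spell out the two-part solution procedure; the following is a minimal sketch of a generic two-block alternating minimization loop consistent with the description above. Here `solve_resources`, `solve_trajectory`, and `energy` are assumed placeholder callables for the two subproblem solvers and the total-energy objective, not the authors' actual routines.

```python
from typing import Any, Callable, Tuple

def alternating_minimization(
    solve_resources: Callable[[Any], Any],   # subproblem 1 solver (assumed)
    solve_trajectory: Callable[[Any], Any],  # subproblem 2 solver (assumed)
    energy: Callable[[Any, Any], float],     # total-energy objective (assumed)
    resources: Any,
    trajectory: Any,
    max_iters: int = 20,
    tol: float = 1e-4,
) -> Tuple[Any, Any]:
    """Generic two-block coordinate descent matching the two-part decomposition."""
    prev = float("inf")
    for _ in range(max_iters):
        # Part 1: fix the trajectory; optimize bit allocation,
        # time-slot scheduling, and power allocation.
        resources = solve_resources(trajectory)
        # Part 2: fix the resources; optimize the UAV flight trajectory.
        trajectory = solve_trajectory(resources)
        cur = energy(resources, trajectory)
        if prev - cur < tol:  # diminishing improvement: stop near a stationary point
            break
        prev = cur
    return resources, trajectory
```

Provided each subproblem is solved optimally, the objective is non-increasing across iterations, which is consistent with the dozen-or-so iterations to near-optimality reported above.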
As heterogeneous networks (HetNets) emerge as one of the most promising developments toward realizing the target specifications of Long Term Evolution (LTE) and LTE-Advanced (LTE-A) networks, radio resource management (RRM) research for such networks has recently been intensively pursued. Recent research concentrates mainly on interference mitigation; other RRM aspects, such as radio resource utilization, fairness, complexity, and QoS, have received far less attention. In this paper, we provide an overview of the key challenges arising in HetNets and highlight their importance. Subsequently, we present a comprehensive survey of the RRM schemes studied in recent years for LTE/LTE-A HetNets, with a particular focus on schemes for femtocells and relay nodes. Furthermore, we classify these RRM schemes according to their underlying approaches, and we qualitatively analyze and compare them. We also identify a number of potential research directions for future RRM development. Finally, we discuss the gaps in current RRM research and the importance of multi-objective RRM studies.
Information-Centric Networking (ICN) is emerging as a promising approach to overcoming the shortcomings of current IP-address-based networking. ICN models name the content itself to escape address-space scarcity, access content via name-based routing, cache content at intermediate nodes for reliable and efficient data delivery, and self-certify contents to ensure better security. These benefits of fast, efficient data delivery and improved reliability make ICN a highly promising networking model for Internet of Things (IoT)-like environments. The IoT aims to connect anyone and/or anything at any time, in any place, by any path. Over the last decade, the IoT has attracted both industry and the research community, yet it remains an emerging research field still in its infancy. This paper therefore presents the potential of ICN for the IoT through a state-of-the-art literature survey. We briefly discuss the feasibility of ICN features and models (and architectures) in the context of the IoT. Subsequently, we present a comprehensive survey of ICN-based caching, naming, security, and mobility approaches for the IoT, with an appropriate classification. Furthermore, we present operating systems (OSs) and simulation tools for ICN-IoT. Finally, we outline important research challenges and issues faced by ICN for the IoT.
Multitenant cellular network slicing has recently been gaining considerable interest. However, it is not well explored under the heterogeneous cloud radio access network (H-CRAN) architecture. This paper proposes a dynamic network slicing scheme for multitenant H-CRANs that takes into account tenants' priorities, baseband resources, fronthaul and backhaul capacities, quality of service (QoS), and interference. The framework consists of an upper level, which manages admission control, user association, and baseband resource allocation, and a lower level, which performs radio resource allocation among users. Simulation results show that the proposed scheme achieves higher network throughput, fairness, and QoS performance than several baseline schemes.
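As an illustration only, the two-level structure described above can be sketched as nested allocation steps; the admission and sharing rules below are simplistic placeholders, not the paper's algorithms.

```python
# Illustrative two-level slicing skeleton (placeholder rules, not the paper's).

def upper_level(tenants, capacity):
    """Admit tenants in priority order until the resource budget is exhausted."""
    admitted, used = [], 0.0
    for t in sorted(tenants, key=lambda t: t["priority"], reverse=True):
        if used + t["demand"] <= capacity:  # simplistic admission-control test
            admitted.append(t)
            used += t["demand"]
    return admitted

def lower_level(users, slice_budget):
    """Share a slice's radio resources equally among its users (placeholder)."""
    share = slice_budget / max(len(users), 1)
    return {u: share for u in users}

tenants = [
    {"name": "A", "priority": 2, "demand": 40.0, "users": ["u1", "u2"]},
    {"name": "B", "priority": 1, "demand": 70.0, "users": ["u3"]},
]
for t in upper_level(tenants, capacity=100.0):
    print(t["name"], lower_level(t["users"], t["demand"]))
```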
Routing Protocol for Low-Power and Lossy Networks (RPL) topology attacks can significantly degrade network performance by disrupting the optimal protocol structure. To detect such threats, we propose an RPL specification, obtained by a semi-automatic profiling technique that constructs a high-level abstract of operations from network simulation traces, to serve as a reference for verifying node behaviors. This specification, covering all legitimate protocol states and transitions with their corresponding statistics, is implemented as a set of rules in intrusion detection agents deployed as cluster heads distributed across the network. To save resources, cluster members report information about themselves and their neighbors to the cluster head instead of the head overhearing all communications. As a result, information about each cluster member is reported by several neighbors, which allows the cluster head to cross-check it. To eliminate the synchronization issue created by report-transmission delays, we record the sequence number carried in RPL DODAG Information Object (DIO) and DODAG Information Solicitation (DIS) messages, so that the cluster head cross-checks only information from sources reporting the same sequence number. Simulation results show that the proposed intrusion detection system (IDS) achieves a high accuracy rate in detecting RPL topology attacks while introducing only insignificant overhead (about 6.3%), which enables scalability in large-scale networks.
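To make the sequence-aligned cross-check concrete, here is a minimal sketch in which the cluster head groups neighbor reports about the same node by sequence number and flags disagreements. The report fields are illustrative assumptions, not the paper's message format.

```python
from collections import defaultdict

def cross_check(reports):
    """Flag (node, seq) pairs whose neighbor reports disagree.

    Each report is an assumed dict:
    {"about": node_id, "seq": dio_dis_seq, "rank": advertised_rank, "reporter": id}
    """
    observed = defaultdict(set)
    for rep in reports:
        # Only observations carrying the same sequence number are comparable,
        # which avoids false alarms caused by report-transmission delays.
        observed[(rep["about"], rep["seq"])].add(rep["rank"])
    # Conflicting ranks for one sequence number suggest a topology attack
    # (e.g., a node advertising different ranks to different neighbors).
    return [key for key, ranks in observed.items() if len(ranks) > 1]

reports = [
    {"about": "n5", "seq": 12, "rank": 256, "reporter": "n2"},
    {"about": "n5", "seq": 12, "rank": 768, "reporter": "n3"},  # inconsistent
    {"about": "n7", "seq": 12, "rank": 512, "reporter": "n2"},
]
print(cross_check(reports))  # -> [('n5', 12)]
```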
With the tremendous growth of digital data and the associated technologies, industries are becoming rapidly digitized, and these technologies offer great opportunities to identify and resolve different problems. In particular, the telecommunication industry faces the serious problem of customer churn: customers abandoning their established relationship with the business/network in the near future. This problem affects not only the rapid growth of the business but also its revenues. Many customer churn prediction (CCP) models have been introduced, but they do not yield the desired performance because many factors that contribute to customer churn remain unexplored. In this paper, we focus on determining the effectiveness of the factors, namely the lower and upper distances between samples, that the proposed model considers for CCP. Further, we demonstrate a novel solution for the telecommunication sector that exposes the hidden factors considered in predicting customer churn. Finally, we investigate the effects of both types of samples: those at a lower distance and those at an upper distance (in terms of relevance) from the majority samples in a publicly available dataset. We find that lower-distance test set (LDT) samples yield better performance than upper-distance test set (UDT) samples, with accuracy, F-measure, precision, and recall increasing as the uncertain sample size increases, whereas the classification performance on upper-distance samples remains almost unchanged as the test set grows.
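One plausible reading of the LDT/UDT split is to rank test samples by their distance to the majority-class samples and partition at a threshold; the sketch below uses the median as an assumed cut-off and synthetic data, so it illustrates the idea rather than reproducing the paper's procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def split_ldt_udt(X_test, X_majority):
    """Return indices of lower-distance (LDT) and upper-distance (UDT) samples."""
    nn = NearestNeighbors(n_neighbors=1).fit(X_majority)
    dist, _ = nn.kneighbors(X_test)   # distance to the nearest majority sample
    dist = dist.ravel()
    cut = np.median(dist)             # assumed threshold between "lower"/"upper"
    return np.where(dist <= cut)[0], np.where(dist > cut)[0]

rng = np.random.default_rng(0)
X_majority = rng.normal(0.0, 1.0, (200, 5))  # synthetic non-churn (majority) class
X_test = rng.normal(0.5, 1.0, (50, 5))       # synthetic uncertain test samples
ldt, udt = split_ldt_udt(X_test, X_majority)
print(f"LDT: {len(ldt)} samples, UDT: {len(udt)} samples")
```

A churn classifier could then be evaluated separately on the LDT and UDT partitions to reproduce the accuracy/F-measure comparison described above.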
In this work, an integrated antenna system with a defected ground structure (DGS) is presented for fourth-generation (4G) and millimeter-wave (mm-wave) fifth-generation (5G) wireless applications and handheld devices. The proposed design, with overall dimensions of 110 mm × 75 mm, is modeled on a 0.508 mm thick Rogers RT/Duroid 5880 substrate. The radiating structure consists of antenna arrays excited by a T-shaped 1 × 2 power divider/combiner. Dual 4G bands centered at 3.8 GHz and 5.5 GHz are attained, whereas a 10-dB impedance bandwidth of 24.4–29.3 GHz is achieved for the 5G antenna array. In addition, a peak gain of 5.41 dBi is demonstrated across the operating bandwidth of the 4G antenna array; for the 5G mm-wave configuration, the attained peak gain is 10.29 dBi. Moreover, significant isolation is obtained between the two antenna modules, ensuring efficient dual-frequency-band operation with a single integrated solution. To validate the concept, an antenna prototype was fabricated and far-field measurements were taken; simulated and measured results agree well. The beam-steering capability of the mm-wave 5G antenna array is also investigated using CST MWS. The demonstrated structure offers various advantages, including compactness, wide bandwidth, high gain, and a planar configuration. The attained radiation characteristics thus prove the suitability of the proposed design for current and future wireless handheld devices.
Index Terms: Antenna array, integrated solution, 4G, mm-wave 5G, handheld devices.
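As a quick sanity check on the quoted figures (a derived number, not one taken from the paper), the 24.4–29.3 GHz band corresponds to a fractional bandwidth of roughly

```latex
f_c = \frac{24.4 + 29.3}{2} = 26.85\ \text{GHz}, \qquad
\text{FBW} = \frac{29.3 - 24.4}{26.85} \times 100\% \approx 18.2\%,
```

which supports the wide-bandwidth claim for the mm-wave 5G array.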