Network-on-chip (NoC) is emerging as a better alternative for integrating a large number of cores on a single system-on-chip (SoC). The dependence on multi-core systems to meet the high-performance constraints of complex embedded applications is on the rise, which motivates the development of efficient mapping approaches for such complex applications. The significance of efficient application mapping approaches has increased as embedded applications have become more complex and performance-oriented. This paper presents a detailed comparative analysis and categorization of application mapping approaches alongside current trends in NoC design implementation. These approaches aim to improve the performance of the whole system by optimizing communication cost, energy, power consumption, and latency. In addition to the categorization of the discussed approaches, a comparison of the communication cost, power, energy, and latency of the NoC system is carried out on real applications such as VOPD and MPEG-4. Moreover, the best technique in each category is identified based on the evaluation of the performance results.

INDEX TERMS Network-on-Chip, application mapping, NoC design, VOPD, System-on-Chip.

I. INTRODUCTION

System-on-Chip (SoC) is an archetype for the design and implementation of on-chip circuits that can support multiple systems on a single chip. The ever-rising number of processing cores on a single chip has made the efficiency of on-chip designs one of the major aspects in evaluating the average performance of embedded SoCs. To fulfill these performance needs and to provide flexibility in the designs, the field of Network-on-Chip (NoC) has emerged, which separates communication from computation. Many surveys [1]-[7] have been published in general, and textbooks are also available on the topic [8]-[10]. There is still a need to solve more advanced research problems in NoC. Application mapping on NoC architectures has a prominent place among all of these research problems, as can be judged from the published papers and current trends. This survey assumes a textbook-level acquaintance with NoC terminology. The aim is to provide a comprehensive overview of all aspects of application mapping (task graph generation, scheduling, optimization techniques, simulation setup, and performance evaluation metrics). The organization of this survey is as follows: first, the need for and an overview of application mapping in NoC are presented in Section I. A detailed review and classification of application mapping techniques in NoCs are discussed in Section II. The current and latest trends in application mapping are summarized in Section III. In Section IV, the performance comparison and the points that differentiate these techniques are highlighted. Finally, Section V concludes this paper.

A. MOTIVATION AND SCOPE

Mostly, synthetic traffic patterns are used to imitate the functionalities of rea...
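To make the communication-cost objective that these mapping approaches optimize concrete, the following is a minimal sketch of the metric commonly used in the mapping literature: the sum, over all task-graph edges, of traffic volume times hop distance between the mapped tiles. The task graph, traffic volumes, and 2x2 mesh below are illustrative assumptions, not taken from any surveyed paper.

```python
# Minimal sketch: communication cost of a task-to-tile mapping on a 2D-mesh NoC.
# Cost model (common in the mapping literature): sum over task-graph edges of
# traffic volume times Manhattan hop distance (XY routing) between mapped tiles.
from itertools import permutations

# Task-graph edges: (source task, destination task, traffic volume) -- illustrative.
EDGES = [("t0", "t1", 70), ("t1", "t2", 362), ("t2", "t3", 362), ("t3", "t0", 49)]
TASKS = ["t0", "t1", "t2", "t3"]
MESH_W = 2  # 2x2 mesh; tile i sits at (i % MESH_W, i // MESH_W)

def hops(a: int, b: int) -> int:
    """Manhattan distance between tiles a and b under XY routing."""
    return abs(a % MESH_W - b % MESH_W) + abs(a // MESH_W - b // MESH_W)

def comm_cost(mapping: dict) -> int:
    """Total communication cost of a task -> tile mapping."""
    return sum(vol * hops(mapping[s], mapping[d]) for s, d, vol in EDGES)

# Exhaustive search is only feasible for tiny graphs; the surveyed approaches
# use heuristics (genetic algorithms, branch-and-bound, etc.) at realistic sizes.
best = min(permutations(range(len(TASKS))), key=lambda p: comm_cost(dict(zip(TASKS, p))))
print("best mapping:", dict(zip(TASKS, best)), "cost:", comm_cost(dict(zip(TASKS, best))))
```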
The next generation of Internet of Things (IoT) networks is expected to handle a massive scale of sensor deployments with radically heterogeneous traffic applications, which leads to congested networks and calls for new mechanisms to improve network efficiency. Existing protocols are based on simple heuristic mechanisms, and the probability of collision remains one of the significant challenges in future IoT networks. The medium access control layer of IEEE 802.15.4 uses a distributed coordination function to determine the efficiency of accessing wireless channels in IoT networks. Similarly, the network layer uses a ranking mechanism to route packets. The objective of this study was to intelligently utilize the cooperation of multiple communication layers in an IoT network. Recently, Q-learning (QL), a machine learning algorithm, has emerged as a way to solve learning problems on energy- and computation-constrained sensor devices. Therefore, we present a QL-based intelligent collision-probability inference algorithm that optimizes the performance of sensor nodes by utilizing channel collision probability and network-layer ranking states with the help of an accumulated reward function. The simulation results showed that the proposed scheme achieved a higher packet reception ratio, produced significantly lower control overhead, and consumed less energy compared to current state-of-the-art mechanisms.
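The following is a minimal tabular Q-learning sketch for a channel-access decision, in the spirit of the QL-based approach described above. The state encoding (a collision-probability bucket paired with a routing-rank level), the action set, and the reward shaping are simplified assumptions for illustration, not the paper's exact formulation.

```python
# Tabular Q-learning sketch for a transmit/defer decision at the MAC layer.
import random
from collections import defaultdict

ACTIONS = ["transmit", "defer"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy action selection over the learned Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example step. State = (collision-probability bucket 0-9, rank level) -- an
# assumed encoding of the channel and network-layer information named above.
s = (3, 1)
a = choose_action(s)
# Assumed reward: positive for a successful transmission, small penalty for
# deferring (latency/energy cost of waiting); a collision would yield -1.
r = 1.0 if a == "transmit" else -0.1
update(s, a, r, (2, 1))
```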
Edge infrastructure offers the services required by Industry 4.0 through edge servers (ESs) with different computation capabilities, which run social applications' workloads based on a leased-price method. The usage of Social Internet of Things (SIoT) applications increases day by day, which makes social platforms very popular and at the same time requires an effective computation system to achieve high service reliability. In this regard, offloading computationally demanding social service requests (SRs) within a time slot based on a directed acyclic graph (DAG) is an NP-complete problem. Most state-of-the-art methods concentrate on the energy preservation of networks but neglect the resource-sharing cost and the dynamic subservice execution time (SET) during computation and resource sharing. This article proposes a two-step deep reinforcement learning (DRL)-based service offloading (DSO) approach to diminish edge-server costs through a DRL-influenced resource and SET analysis (RSA) model. At the first level, the service and edge-server cost is considered during service offloading. At the second level, the R-retaliation method evaluates resource factors to optimize resource sharing and SET fluctuations. The simulation results show that the proposed DSO approach achieves low execution costs by streamlining the dynamic service completion and transmission time, server cost, and deadline violation rate compared to state-of-the-art approaches.
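To illustrate the kind of per-request cost such an offloading agent would minimize, here is a sketch combining leased execution cost, transmission time, and a deadline-violation penalty. The weights, server parameters, and greedy server selection are assumptions for illustration; the article learns the offloading policy with DRL rather than computing it greedily as done here.

```python
# Illustrative offloading cost for one DAG subservice on a leased edge server.
from dataclasses import dataclass

@dataclass
class EdgeServer:
    name: str
    cycles_per_sec: float   # computation capability
    price_per_sec: float    # leased-price rate
    bandwidth: float        # bytes/sec to this server

def offload_cost(srv: EdgeServer, cycles: float, data: float, deadline: float,
                 w_cost: float = 1.0, w_penalty: float = 10.0) -> float:
    """Weighted sum of lease cost and deadline-violation penalty (assumed form)."""
    exec_time = cycles / srv.cycles_per_sec
    tx_time = data / srv.bandwidth
    lease_cost = exec_time * srv.price_per_sec
    violation = max(0.0, exec_time + tx_time - deadline)  # seconds past deadline
    return w_cost * lease_cost + w_penalty * violation

servers = [EdgeServer("es1", 2e9, 0.02, 5e6), EdgeServer("es2", 4e9, 0.05, 2e6)]
# Pick the cheapest server for a subservice of 1e9 cycles, 1 MB input, 1 s deadline.
best = min(servers, key=lambda s: offload_cost(s, 1e9, 1e6, 1.0))
print(best.name)
```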
Future-generation Internet of Things (IoT) communication infrastructure is expected to pave the path for innovative applications such as smart cities, smart grids, smart industries, and smart healthcare. To support these diverse applications, the communication protocols are required to be adaptive and intelligent. At the network layer, an efficient and lightweight algorithm known as the trickle timer is designed to perform route updates, and it utilizes control messages to share updated route information between IoT nodes. However, the trickle timer tends to generate a higher control overhead ratio and achieves lower reliability. Therefore, this article proposes a Reinforcement Learning (RL)-based Intelligent Adaptive Trickle-Timer Algorithm (RIATA). The proposed algorithm performs a three-fold optimization of the trickle-timer algorithm. Firstly, RIATA assigns a higher control-message transmission probability to nodes that have received an inconsistent control message in past intervals. Secondly, RIATA utilizes RL to learn the optimal policy for transmitting or suppressing a control message in the current network environment. Lastly, RIATA selects an adaptive redundancy-constant value to avoid unnecessary transmissions of control messages. Simulation results show that RIATA outperforms other state-of-the-art mechanisms, reducing the control overhead ratio by an average of 21%, decreasing the average total power consumption by 10%, and increasing the packet delivery ratio by 4% on average.

INDEX TERMS Internet of Things (IoT), trickle-timer, reinforcement learning, RPL.
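For context, here is a sketch of one Trickle interval (per RFC 6206) in which the transmit/suppress decision is delegated to a learned policy instead of the fixed "fewer than k consistent messages heard" rule. The stand-in policy and its state encoding are assumptions for illustration; RIATA additionally adapts the redundancy constant k itself.

```python
# One Trickle interval with a pluggable transmit/suppress policy.
import random

I_MIN, I_MAX = 1.0, 64.0  # Trickle interval bounds in seconds (RFC 6206 style)

def trickle_step(interval, consistent_heard, policy):
    """Pick a random time t in [I/2, I), decide whether to transmit at t,
    then double the interval (capped at I_MAX). Returns (t, transmit, next_I)."""
    t = random.uniform(interval / 2, interval)
    # Classic Trickle transmits iff consistent_heard < k (fixed redundancy
    # constant); here a learned policy makes the decision from local state.
    transmit = policy((consistent_heard, interval / I_MAX))
    return t, transmit, min(interval * 2, I_MAX)

# Stand-in for the learned policy: suppress once enough consistent control
# messages were heard this interval (mimics a fixed k = 3).
policy = lambda state: state[0] < 3
t, transmit, next_interval = trickle_step(I_MIN, consistent_heard=1, policy=policy)
print(f"transmit at t={t:.2f}s:", transmit, "next interval:", next_interval)
```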
Fog computing brings computational services near the network edge to meet the latency constraints of cyber-physical system (CPS) applications. Edge devices have limited computational capacity and energy availability, which hampers end-user performance. We designed a novel performance measurement index to gauge a device's resource capacity. This study addresses the issues of the offloading mechanism, where the end user (EU) offloads a part of its workload to a nearby edge server (ES). Sometimes, the ES further offloads the workload to another ES or a cloud server to achieve reliable performance because of its limited resources (such as storage and computation). The manuscript aims to reduce the service offloading rate by selecting a suitable device or server, so as to accomplish a low average latency and service completion time and meet the deadline constraints of subdivided services. In this regard, an adaptive online status-predictive model is significant for prognosticating the resource requirements of arriving services and making sound offloading decisions. Consequently, a reinforcement learning-based flexible x-scheduling (RFXS) approach is developed to resolve the service offloading issues, where x = service/resource, producing low latency and high network performance. The theoretical bound and computational complexity of our approach are derived by formulating the system efficiency. A quadratic restraint mechanism is employed to formulate the service optimization problem according to a set of measurements, as well as the behavioural association rate and adulation factor. Our system maintained an average service offloading rate of 0.89%, with 39 ms of delay, over complex scenarios (using three servers with a 50% service arrival rate). The simulation outcomes confirm that the proposed scheme attained low offloading uncertainty and is suitable for simulating heterogeneous CPS frameworks.
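The following is a sketch of a device performance-measurement index of the kind described above: a weighted score over normalized resource availability, used to rank candidate offload targets. The chosen factors and weights are assumptions for illustration; the manuscript derives its own index and drives the selection with an RL-based scheduler rather than this one-shot ranking.

```python
# Rank candidate offload targets by a weighted resource-availability index.
def performance_index(cpu_free: float, energy_free: float, queue_fill: float,
                      weights=(0.5, 0.3, 0.2)) -> float:
    """All inputs normalized to [0, 1]; a higher index = better offload target.
    Weights are illustrative, not the paper's calibrated values."""
    w_cpu, w_energy, w_queue = weights
    return w_cpu * cpu_free + w_energy * energy_free + w_queue * (1 - queue_fill)

# Score two edge servers and a cloud stand-in; offload to the highest index.
# Deadline feasibility would be checked separately before committing.
candidates = {
    "es1": performance_index(0.6, 0.8, 0.4),
    "es2": performance_index(0.9, 0.5, 0.7),
    "cloud": performance_index(1.0, 1.0, 0.2),
}
target = max(candidates, key=candidates.get)
print(target, round(candidates[target], 3))
```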