Mobile devices and sensors have recently come into widespread use, and new wireless and networking technologies, such as wireless sensor networks, device-to-device (D2D) communication, and vehicular ad hoc networks, are emerging rapidly. These networks are expected to deliver considerably higher data rates, coverage, and numbers of connected devices while significantly reducing latency and energy consumption. Because users' devices and sensors operate under tight energy constraints, wireless network resource allocation becomes much more challenging, calling for more advanced techniques that trade off energy consumption against network performance. In this paper, we propose to use reinforcement learning, an efficient simulation-based optimization framework, to tackle this problem and maximize user experience. Our main contribution is a novel non-cooperative, real-time approach based on deep reinforcement learning for the energy-efficient power allocation problem in D2D communication that still satisfies quality-of-service constraints.
INDEX TERMS Energy-efficient wireless communication, power allocation, D2D communication, multi-agent reinforcement learning, deep reinforcement learning.
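The abstract does not give the agent's actual reward function; a minimal sketch of one common formulation is shown below, in which each link earns its Shannon rate minus its transmit power, with a fixed penalty when the quality-of-service (minimum-rate) constraint is violated. All names and values (`qos_reward`, `r_min`, the penalty of 10) are illustrative assumptions, not the paper's formulation.

```python
import math

def qos_reward(p_tx, gain, interference, noise, r_min,
               bandwidth=1.0, penalty=10.0):
    """Hypothetical reward for an energy-aware power-allocation agent.

    The link's Shannon rate is traded against the power spent; missing
    the minimum-rate QoS target r_min incurs an extra fixed penalty.
    """
    sinr = p_tx * gain / (interference + noise)
    rate = bandwidth * math.log2(1.0 + sinr)
    if rate >= r_min:
        return rate - p_tx               # throughput vs. energy trade-off
    return rate - p_tx - penalty         # QoS violated: penalize the agent
```

With unit power, unit gain, no interference, and unit noise, the rate is exactly 1.0, so the agent is rewarded 0.0 when `r_min = 0.5` is met and penalized to -10.0 when `r_min = 2.0` is missed.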
An adaptive, highly scalable, and robust web caching system is needed to effectively handle the exponential growth and extremely dynamic environment of the World Wide Web. Our work presented last year sketched out the basic design of such a system. This sequel paper reports our progress over the past year. To assist caches in making web query forwarding decisions, we sketch out the basic design of a URL routing framework. To assist fast searching within each cache group, we let neighbor caches share content information. Equipped with the URL routing table and neighbor cache contents, a cache in the revised design can now search the local group and forward all missed queries quickly and efficiently, thus eliminating both the waiting delay and the overhead associated with multicast queries. The paper also presents a proposal for incremental deployment that provides a smooth transition from the currently deployed cache infrastructure to the new design.
With the rapid growth of mobile applications and cloud computing, mobile cloud computing has attracted great interest from both academia and industry. However, mobile cloud applications face security issues such as data integrity, user confidentiality, and service availability. A preventive approach to such problems is to detect and isolate cyber threats before they can seriously impact the mobile cloud computing system. In this paper, we propose a novel framework that leverages a deep learning approach to detect cyberattacks in a mobile cloud environment. Through experimental results, we show that our proposed framework not only recognizes diverse cyberattacks but also achieves high accuracy (up to 97.11%) in detecting them. Furthermore, we present comparisons with current machine learning-based approaches to demonstrate the effectiveness of our proposed solution.
Device-to-device (D2D) communication is an emerging technology in the evolution of 5G networks that enables vehicle-to-vehicle (V2V) communications. It is a core technique for the next generation of many platforms and applications, e.g., real-time high-quality video streaming, virtual reality games, and smart city operation. However, the rapid proliferation of user devices and sensors creates the need for more efficient resource allocation algorithms that enhance network performance while still guaranteeing quality of service. Deep reinforcement learning is currently rising as a powerful tool for giving each node in the network a real-time self-organizing ability. In this paper, we present two novel approaches based on the deep deterministic policy gradient algorithm, namely "distributed deep deterministic policy gradient" and "sharing deep deterministic policy gradient", for the multi-agent power allocation problem in D2D-based V2V communications. Numerical results show that our proposed models outperform other deep reinforcement learning approaches in terms of the network's energy efficiency and flexibility.
INDEX TERMS Non-cooperative D2D communication, D2D-based V2V communications, power allocation, multi-agent deep reinforcement learning, deep deterministic policy gradient (DDPG).
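The abstract evaluates models by the network's energy efficiency but does not define the metric; a common per-link choice, sketched below under that assumption, is bits delivered per joule spent, i.e., Shannon throughput divided by transmit power. The function name and parameters are illustrative, not taken from the paper.

```python
import math

def energy_efficiency(p_tx, gain, interference, noise, bandwidth=1.0):
    """Hypothetical per-link energy-efficiency metric for a D2D/V2V link:
    Shannon throughput (bits/s for bandwidth in Hz) per watt transmitted,
    i.e., bits per joule."""
    sinr = p_tx * gain / (interference + noise)
    rate = bandwidth * math.log2(1.0 + sinr)
    return rate / p_tx
```

Note that raising `p_tx` increases the rate only logarithmically while the denominator grows linearly, which is why an agent optimizing this metric does not simply transmit at maximum power.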
To build autonomous robots capable of planning and controlling tasks in human environments, we need a description of trajectories that allows the robot to reason about its motions. In this paper, we propose to use series of cubic polynomial curves to define trajectories with bounded jerk, acceleration, and velocity. This solution is well suited to planning safe and acceptable robot motions in the vicinity of humans. It is also a simple way to approximate any trajectory and to synchronize different robots, or different elements of a robot. These curves have a simple representation, can be computed quickly, and, when used in a fitting algorithm, can be used to build a controller.
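A cubic segment p(t) = a0 + a1·t + a2·t² + a3·t³ is determined by position and velocity at its two endpoints, and its jerk is the constant 6·a3, which makes the jerk bound trivial to check per segment. The sketch below illustrates this property; the function names are ours, not the paper's, and the paper's actual fitting algorithm is not reproduced here.

```python
def cubic_segment(p0, v0, p1, v1, T):
    """Coefficients (a0, a1, a2, a3) of the cubic on [0, T] matching
    position/velocity boundary conditions (p0, v0) and (p1, v1)."""
    a0, a1 = p0, v0
    # Solve p(T) = p1 and p'(T) = v1 for the remaining coefficients.
    a2 = (3.0 * (p1 - p0) - (2.0 * v0 + v1) * T) / T**2
    a3 = (2.0 * (p0 - p1) + (v0 + v1) * T) / T**3
    return a0, a1, a2, a3

def eval_cubic(coeffs, t):
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * t + a2 * t**2 + a3 * t**3

def max_abs_jerk(coeffs):
    """Jerk of a cubic is the constant third derivative, 6*a3."""
    return abs(6.0 * coeffs[3])
```

For a rest-to-rest move from 0 to 1 over one second, this yields p(t) = 3t² - 2t³, whose jerk magnitude is 12 everywhere on the segment; a planner can therefore enforce a jerk bound simply by choosing segment durations T large enough.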