NAO, the first robot created by SoftBank Robotics, is famous around the world as a programming tool and has become a standard in education and research. To address the large error and poor stability of the NAO humanoid manipulator during trajectory tracking, a novel framework based on a fuzzy-controller reinforcement-learning trajectory-planning strategy is proposed. First, a Takagi–Sugeno fuzzy model based on the dynamic equation of the NAO right arm is established. Second, the design of a state feedback controller based on the parallel distributed compensation strategy, and the solution of its gains, are studied. Finally, the ideal motion trajectory is planned by a reinforcement learning algorithm so that the end-effector of the manipulator can track the desired trajectory and achieve valid obstacle avoidance. Simulations and experiments show that the end-effector under this scheme has good controllability and stability and meets the accuracy requirements of trajectory tracking, which verifies the effectiveness of the proposed framework.
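The abstract does not detail the reinforcement learning planner; as an illustration only, a tabular Q-learning planner on a discretized 2-D workspace can be sketched as follows. The grid size, reward values and obstacle layout are assumptions for the sketch, not taken from the paper:

```python
import random

def plan_trajectory(grid_size=6, obstacles=frozenset({(2, 2), (2, 3), (3, 2)}),
                    goal=(5, 5), episodes=5000, lr=0.5, gamma=0.9,
                    epsilon=0.2, seed=0):
    """Tabular Q-learning on a discretized 2-D workspace: learn a
    collision-free path from (0, 0) to `goal` around `obstacles`."""
    rng = random.Random(seed)
    actions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    q = {}  # (state, action) -> estimated return

    def step(state, action):
        nxt = (state[0] + action[0], state[1] + action[1])
        if not (0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size):
            return state, -1.0    # bumped the workspace boundary
        if nxt in obstacles:
            return state, -10.0   # collision penalty: stay put
        return nxt, (10.0 if nxt == goal else -0.1)

    for _ in range(episodes):
        s = (0, 0)
        for _ in range(4 * grid_size * grid_size):  # per-episode step cap
            if s == goal:
                break
            a = (rng.choice(actions) if rng.random() < epsilon
                 else max(actions, key=lambda act: q.get((s, act), 0.0)))
            nxt, r = step(s, a)
            best_next = max(q.get((nxt, b), 0.0) for b in actions)
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + lr * (r + gamma * best_next - old)
            s = nxt

    # Greedy rollout of the learned policy.
    path, s = [(0, 0)], (0, 0)
    while s != goal and len(path) < grid_size * grid_size:
        a = max(actions, key=lambda act: q.get((s, act), 0.0))
        s, _ = step(s, a)
        path.append(s)
    return path
```

After training, the greedy rollout yields a waypoint sequence that a lower-level controller (in the paper's setting, the fuzzy state-feedback controller) would then track.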
With significant increases in mobile device traffic slated for the foreseeable future, numerous technologies must be embraced to satisfy such demand. Notably, one of the more intriguing approaches has been blending on-device caching and device-to-device (D2D) communications. While past research has pointed to potentially significant gains (30%+) via redundancy elimination (RE), some skepticism has emerged as to whether such gains are truly harnessable in practice. The premise of this paper is to explore whether significant potential for redundancy elimination exists and whether the rise of video and encryption might blunt such efforts. Critically, we find that, absent significantly synchronized interests among mobile users, the actual redundancy falls well short of the promising values from the literature. We investigate the roots of this shortfall by exploring RE savings with regard to cache hit characteristics and the extent to which client and domain diversity contribute to the realized redundancy savings.
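As background, the chunk-level redundancy that such RE studies measure can be sketched as follows. Fixed-size chunking and SHA-256 fingerprints are illustrative choices for the sketch; practical RE systems often use content-defined chunking instead:

```python
import hashlib

def redundancy_savings(streams, chunk_size=64):
    """Fixed-size chunk-level redundancy elimination: return the fraction
    of bytes that could be elided because an identical chunk was already
    seen (i.e., would hit a shared cache)."""
    seen = set()
    total = saved = 0
    for data in streams:
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            fingerprint = hashlib.sha256(chunk).digest()
            total += len(chunk)
            if fingerprint in seen:
                saved += len(chunk)   # duplicate chunk: cache hit
            else:
                seen.add(fingerprint)
    return saved / total if total else 0.0
```

Encryption defeats this kind of matching because ciphertexts of identical plaintext chunks differ, which is one reason measured savings can fall short of the literature's estimates.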
Purpose
This paper aims to propose a novel active SLAM framework that enables a robot to avoid obstacles and complete autonomous navigation in indoor environments.
Design/methodology/approach
The improved fuzzy optimized Q-learning (FOQL) algorithm is used to solve the robot's obstacle avoidance problem in the environment. To reduce the motion deviation of the robot, a fractional-order controller is designed. Localization of the robot is based on the FastSLAM algorithm.
Findings
Simulation results for obstacle avoidance using the traditional Q-learning algorithm, the optimized Q-learning algorithm and the FOQL algorithm are compared. The simulations show that the improved FOQL algorithm learns faster than the other two algorithms. To verify the simulation results, the FOQL algorithm is implemented on a NAO robot, and the experimental results demonstrate that the improved fuzzy optimized Q-learning obstacle avoidance algorithm is feasible and effective.
Originality/value
The originality of this work lies in the improved fuzzy optimized Q-learning (FOQL) obstacle avoidance algorithm, combined with a fractional-order controller that reduces the motion deviation of the robot. The approach is validated beyond simulation: the FOQL algorithm is implemented on a physical NAO robot, and the experimental results demonstrate that it is feasible and effective.