Reinforcement Learning (RL) is an emerging field of Artificial Intelligence (AI) aimed at designing agents that take actions in a physical environment. RL has many vital applications, including robotics and autonomous vehicles. The key characteristic of RL is its ability to learn from experience without requiring direct programming or supervision. To learn, an agent interacts with an environment by acting and observing the resulting states and rewards. In most practical applications, the environment is implemented as a virtual system due to cost, time, and safety concerns. In parallel, Multibody System Dynamics (MSD) is a framework for efficiently and systematically developing virtual systems of arbitrary complexity. MSD is commonly used to create virtual models of robots, vehicles, machinery, and humans. The features of RL and MSD make them perfect companions in building sophisticated, automated, and autonomous mechatronic systems. This research demonstrates the use of RL in controlling multibody systems. While AI methods are used to solve some of the most challenging tasks in engineering, their proper understanding and implementation are demanding. Therefore, we introduce and detail three commonly used RL algorithms to control an inverted N-pendulum on a cart. Single-, double-, and triple-pendulum configurations are investigated, showing the capability of RL methods to handle increasingly complex dynamical systems. We show 2D state-space zones where the agent succeeds or fails at stabilization. Despite passing randomized tests during training, "blind spots" may occur where the agent's policy fails. The results confirm that RL is a versatile, although complex, control engineering approach.
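The agent–environment interaction loop described above can be sketched in a few lines. This is a minimal illustrative example on a toy one-dimensional task, not the cart-pendulum system or any of the three algorithms from the paper; the environment, policy, and all names are assumptions for illustration.

```python
class ToyEnv:
    """Toy environment: the agent moves along a line; reaching position 0
    ends the episode with a positive reward, every other step costs -0.1."""
    def __init__(self, start=3):
        self.state = start

    def step(self, action):          # action is -1 (left) or +1 (right)
        self.state += action
        reward = 1.0 if self.state == 0 else -0.1
        done = self.state == 0
        return self.state, reward, done


def run_episode(policy, env, max_steps=50):
    """The core RL loop: act, observe the resulting state and reward."""
    total = 0.0
    state = env.state
    for _ in range(max_steps):
        action = policy(state)
        state, reward, done = env.step(action)
        total += reward
        if done:
            break
    return total


# A trivial hand-written policy: always move toward position 0.
greedy = lambda s: -1 if s > 0 else 1
print(run_episode(greedy, ToyEnv()))
```

A learning agent would replace the hand-written policy with one updated from the observed rewards; the surrounding loop stays the same.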
Mobile robots and autonomous guided vehicles have become an indispensable part of modern industrial environments and are used for a wide range of handling operations. To fully exploit the potential of mobile platforms, omnidirectional platforms are a good choice. A prominent variant widely used in industry is the Mecanum wheel, which allows arbitrary movement in any direction in the plane. In most applications, only the kinematics is considered; dynamic models that take the geometry of the rollers into account are still missing. In this paper, two models for Mecanum wheels with different degrees of detail are derived. The detailed model considers the rollers as single bodies undergoing contact and friction with the rolling plane. As the wheel consists of multiple rollers, a complex contact situation with temporal overlapping and additional vibrations occurs. The simplified model reproduces the overall kinematics of the rollers with orthotropic friction and only one rigid body for the wheel, thereby being computationally more efficient. Both models are well suited to reproduce essential dynamic effects of a mobile robotic platform, which cannot be described by the conventional kinematics model. We implement both models and compare them with experimental results, demonstrating the good performance of both.
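The conventional kinematics model that the abstract contrasts with the dynamic models can be sketched as follows. This is a textbook forward-kinematics map for a four-Mecanum-wheel platform with 45° rollers in the standard "X" configuration; the wheel ordering, sign convention, and parameter names are assumptions, not taken from the paper's models.

```python
def mecanum_body_velocity(w, r, lx, ly):
    """Map wheel angular speeds to the platform's body-frame twist.

    w  : (w1, w2, w3, w4) wheel speeds [rad/s], ordered
         front-left, front-right, rear-left, rear-right
    r  : wheel radius [m]
    lx : half the wheelbase (front-to-rear distance / 2) [m]
    ly : half the track width (left-to-right distance / 2) [m]
    Returns (vx, vy, omega): forward and lateral velocity [m/s]
    and yaw rate [rad/s] in the body frame.
    """
    w1, w2, w3, w4 = w
    vx = r / 4.0 * ( w1 + w2 + w3 + w4)
    vy = r / 4.0 * (-w1 + w2 + w3 - w4)
    omega = r / (4.0 * (lx + ly)) * (-w1 + w2 - w3 + w4)
    return vx, vy, omega


# All wheels spinning equally fast drive the platform straight ahead.
print(mecanum_body_velocity((10, 10, 10, 10), r=0.05, lx=0.2, ly=0.15))
```

This purely kinematic map assumes ideal rolling; the roller contact transitions, slip, and vibrations that the paper's dynamic models capture are exactly what it leaves out.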