Use of this article is permitted under the terms of the Creative Commons Attribution 3.0 licence.
Introduction

Robots are no longer associated solely with robotic arms performing pre-programmed tasks in factories. We can observe a shift in focus from industrial robots towards mobile platforms. Such devices have found a broad spectrum of applications across industry, services, and science. Thanks to their versatility, it is common to find mobile robots exploring unreachable or even hostile environments. These machines are able to withstand harsh conditions, survive extreme temperatures and pressures, and even endure high radiation, making them ideal substitutes for people in even the most unfriendly places.

Although advanced artificial intelligence algorithms may relieve the operator of some duties [1], it is undeniable that the human ability to interpret and analyse various phenomena is indispensable. That is why our emissaries, depending on the complexity of the mission, sometimes need to be remotely controlled. To do so, the operator needs the means and tools to directly control the on-board equipment from a distance.
Current trends in robotic arm control approaches

Numerous international researchers are focused on simplifying robotic arm control. They try to address the complexity of robot programming and of direct control using traditional means, i.e. a teach pendant, joysticks, or even command-line prompts.

During my research, I came across various solutions to the control problem. Researchers have used different methods of motion capture and its conversion into control signals for manipulators. Two leading approaches can be distinguished. Non-mechanical methods are mostly based on image interpretation, computer vision, and various 3D sensors. Mechanical set-ups use mechanisms that translate joint movements into control signals via rotation or displacement sensors embedded in their structure.

One of the aforementioned approaches uses a 3D sensor. Many researchers favour the Microsoft Kinect sensor. Originally built for the Xbox game console, the device is capable of tracking limb movement in real time. This yields easy-to-interpret data that can be used to control robotic arms [2,3]. Another widely used sensor is the Leap Motion. This device uses infrared light to detect objects directly above it and capture their movement [4]. Although such solutions allow natural and intuitive control over robotic arms, they lack precision.

Another solution to the control problem is computer vision. A set of cameras is placed around the operator's arm; they track the hand's movements, and from this information a computer calculates coordinates in a reference space. These data are used to control robotic arms [5,6]. This approach can aid other methods [7] or be used on its own [8]. However, vision-based control requires high processing power to obtain the data needed to drive the manipulator. It is also prone to disturbances such as inadequate lighting, which results in inaccurate position estimation.

Dissertation ...
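A common thread in the sensor-based approaches described above is converting a tracked hand position into joint commands for the manipulator. As a minimal sketch of that conversion step, assuming a planar two-link arm with illustrative link lengths (the function name, link lengths, and coordinates below are hypothetical, not taken from the cited works), the mapping can be expressed as closed-form inverse kinematics:

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Map a tracked hand position (x, y), in metres, to shoulder and
    elbow angles (radians) of a planar two-link arm.

    l1 and l2 are illustrative link lengths; a real system would use
    the actual manipulator's geometry.
    """
    r2 = x * x + y * y
    # Law of cosines gives the cosine of the elbow angle.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    # Clamp for numerical safety; targets slightly out of reach snap
    # to the workspace boundary.
    c2 = max(-1.0, min(1.0, c2))
    elbow = math.acos(c2)  # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Example: a hand tracked at full horizontal reach (0.55 m = l1 + l2).
print(two_link_ik(0.55, 0.0))  # prints (0.0, 0.0): fully extended arm
```

In a complete teleoperation pipeline, a sensor such as the Kinect or Leap Motion would supply the (x, y) stream in real time, and the resulting angles would be sent to the arm's servo controllers.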