“…The base of the robot is made from a 3 mm acrylic sheet and 3D-printed parts using polylactic acid (PLA), thermoplastic polyurethane elastomer (TPU), and carbon fiber. The principal differences between Mantis v1 [36] and Mantis v2 are the types of sensors, the materials used to build them, and their locomotion mechanisms.…”
Section: Overview of the Mantis v2 Robot
“…The distance sensor helps to estimate the height to which the module is lifted, and the limit switch cuts off the lifting function when the module reaches its maximum position. The robot performs the cleaning by means of a microfiber towel placed on the bottom of the pad in contact with the glass (e.g., [36]). The details are not explained in this paper.…”
Section: Overview of the Mantis v2 Robot
“…Maintaining the orientation of the robot during the transition is important because in some window frames the width exceeds 12 cm, which is the transition limit. The robot is equipped with a sensory system to identify the frame and avoid hitting it [36]. If the robot moves in a non-planar way during the transition, it risks hitting the frame, losing suction, and falling as a consequence.…”
Section: Orientation Estimation
“…For this application, the robot is currently tele-operated during navigation. However, a system was developed that starts the automatic transition by detecting the metallic frame of the window, e.g., in [36], by using an inductive sensor installed at the base of the robot. During the transition phase, the blower of the pad is turned off to detach it from the window surface.…”
Section: Dynamic Tests
“…The proposed glass-façade-cleaning robot (Mantis v2) [36] is designed to climb vertical surfaces; it uses an active suction mechanism and is used for window cleaning and glass inspection [37]. It is vital to maintain the orientation of the Mantis robot during navigation over window frames, during panel transitions, and during lifting and displacement of its modules.…”
Glass-façade-cleaning robots are an emerging class of service robots. These robots are designed to operate on vertical surfaces, where tracking position and orientation becomes more challenging. In this article, we present a glass-façade-cleaning robot, Mantis v2, which, unlike most robots on the market, can shift from one window panel to another. Owing to the complexity of panel shifting, we propose and evaluate different methods for estimating its orientation using different kinds of sensors working together on the Robot Operating System (ROS). For this application, we used an onboard Inertial Measurement Unit (IMU), wheel encoders, a beacon-based system, Time-of-Flight (ToF) range sensors, and an external vision sensor (camera) for angular position estimation of the Mantis v2 robot. The external camera monitors the robot’s operation and tracks the coordinates of two colored markers attached along the longitudinal axis of the robot to estimate its orientation angle. ToF lidar sensors are attached to both sides of the robot to detect the window frame; each sensor measures the distance to the frame, and the difference between the beam readings yields the robot’s orientation angle. Differential-drive wheel encoder data are used to estimate the robot’s heading angle on the 2D façade surface. An integrated heading angle estimate is also produced by simple fusion techniques, i.e., a complementary filter (CF) and a 1D Kalman filter (KF), applied to the IMU sensor’s raw data. The heading angle information provided by the different sensory systems is then evaluated in static and dynamic tests against an off-the-shelf attitude and heading reference system (AHRS). We observe that the ToF sensors work effectively from 0 to 30 degrees, the beacon system exhibits a delay of up to five seconds, and the odometry error grows with navigation distance due to slippage and/or sliding on the glass.
Among all tested orientation sensors and methods, the vision-based scheme proved the most accurate for this application, with an orientation angle error of less than 0.8 degrees. The experimental results demonstrate the efficacy of the proposed techniques for orientation tracking, which has not previously been applied in this specific class of cleaning robots.
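The two-beam ToF geometry and the IMU fusion described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the sensor baseline, noise parameters, and all function names are assumptions for illustration only.

```python
import math

TOF_BASELINE_M = 0.20  # assumed lateral separation of the two ToF sensors


def tof_orientation_deg(d_left, d_right, baseline=TOF_BASELINE_M):
    """Orientation angle from the difference of two ToF beam readings
    taken against a straight window frame (simple triangle geometry)."""
    return math.degrees(math.atan2(d_left - d_right, baseline))


def complementary_filter(angle_prev, gyro_rate, angle_meas, dt, alpha=0.98):
    """Blend the integrated gyro rate (high-frequency, drift-prone) with
    an absolute angle measurement (low-frequency, noisy)."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * angle_meas


class Kalman1D:
    """Scalar Kalman filter: predict the heading with the gyro rate,
    correct it with an absolute angle measurement (e.g., from the ToF pair)."""

    def __init__(self, q=0.01, r=0.5):
        self.x = 0.0   # angle estimate (degrees)
        self.p = 1.0   # estimate variance
        self.q = q     # process (gyro) noise variance
        self.r = r     # measurement noise variance

    def update(self, gyro_rate, angle_meas, dt):
        self.x += gyro_rate * dt        # predict
        self.p += self.q
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (angle_meas - self.x)
        self.p *= (1.0 - k)
        return self.x
```

With equal beam readings the robot is parallel to the frame (0 degrees); a 0.20 m difference over a 0.20 m baseline corresponds to 45 degrees. The filter parameters `alpha`, `q`, and `r` would need tuning against the AHRS reference in practice.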
Summary
Complete coverage planning (CCP) is the task of covering the entire area of a map according to the job description of an autonomous mobile robot. The most widely used method for CCP in the literature is the grid-based coverage method. Its weakness is that partially occupied cells are processed as completely filled, which reduces coverage performance. Whether a clustering method built around the characteristics of the environment could solve this problem was posed as the research question. To this end, the K-means++ algorithm, a widely used clustering and segmentation technique, was adopted. An offline K-means++ complete coverage planning (Km++CCP) method is proposed, in which the navigable area on the map of the indoor environment where a mobile robot will navigate is clustered using the K-means++ algorithm and the resulting centroids are used as waypoints. To test the proposed method, 2 simulations and 36 real-world experiments were conducted. The indoor coverage ratio of Km++CCP was higher than that of the grid-based method in all experiments.
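The centroids-as-waypoints idea can be sketched in a few lines: seed cluster centers with K-means++ weighting, run a few Lloyd iterations over the navigable cells, and return the centroids as waypoints. This is a self-contained illustration under assumed inputs (a list of free-cell coordinates), not the paper's implementation.

```python
import random


def kmeanspp_waypoints(free_cells, k, iters=10, seed=0):
    """Cluster navigable grid cells into k groups using K-means++ seeding
    followed by Lloyd iterations; the centroids serve as coverage waypoints."""
    rng = random.Random(seed)

    # --- K-means++ seeding: each new center is drawn with probability
    # proportional to its squared distance from the nearest existing center.
    centers = [rng.choice(free_cells)]
    while len(centers) < k:
        d2 = [min((c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2 for c in centers)
              for p in free_cells]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(free_cells, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break

    # --- Lloyd iterations: assign cells to the nearest center,
    # then move each center to the mean of its assigned cells.
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in free_cells:
            i = min(range(k),
                    key=lambda j: (centers[j][0] - p[0]) ** 2
                                  + (centers[j][1] - p[1]) ** 2)
            buckets[i].append(p)
        centers = [
            (sum(x for x, _ in b) / len(b), sum(y for _, y in b) / len(b))
            if b else centers[i]
            for i, b in enumerate(buckets)
        ]
    return centers
```

For two well-separated groups of cells, the returned waypoints settle at the group means; a real planner would then order the waypoints into a traversal route, which this sketch does not cover.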