Abstract: This paper presents the use of deep Reinforcement Learning (RL) for autonomous navigation of an Unmanned Ground Vehicle (UGV) with an onboard three-dimensional (3D) Light Detection and Ranging (LiDAR) sensor in off-road environments. For training, both the robotic simulator Gazebo and the Curriculum Learning paradigm are applied. Furthermore, an Actor–Critic Neural Network (NN) scheme is chosen with a suitable state and a custom reward function. To employ the 3D LiDAR data as part of the input state of the NNs…
“…The computer of Andabata employs an inertial measurement unit (IMU), with inclinometers, gyroscopes, and a compass, and a global navigation satellite system (GNSS) receiver with a horizontal resolution of 1 m included in its onboard smartphone for outdoor localization [36]. The main exteroceptive sensor for navigation is a custom 3D LiDAR sensor with 360° field of view built by rotating a 2D LiDAR [37].…”
Section: Outdoor Navigation (mentioning)
confidence: 99%
“…Although waypoints for the UGV are calculated in the detected paths, reactivity is still necessary to avoid steep slopes and unexpected obstacles that are not visible on satellite images. Local navigation between distant waypoints has been implemented on Andabata with a previously developed actor-critic scheme, which was trained using reinforcement and curriculum learning [36].…”
Section: Outdoor Navigation (mentioning)
confidence: 99%
“…Basically, acquired 3D point clouds are employed to emulate a 2D traversability scanner, which produces 32 virtual levelled ranges up to 10 m around the vehicle (see Figure 19). These data, together with the heading error of the vehicle with respect to the current waypoint (p t ), are employed by the actor neural network to directly produce steering speed commands while moving at a constant longitudinal speed [36]. When the distance to the current waypoint (d t ) is less than 1 m, the next objective from the list is chosen.…”
Section: Outdoor Navigation (mentioning)
confidence: 99%
“…Figure 19. Representation of a virtual 2D traversability scan for Andabata [36]. A nearby obstacle is shown in grey.…”
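The reactive control loop described in the quoted snippet above (32 virtual levelled ranges plus the heading error p_t fed to an actor network that outputs steering commands, with a switch to the next waypoint when d_t < 1 m) can be sketched as follows. This is a minimal illustration only: the actor is a stand-in callable, and every function and variable name here is an assumption for the sketch, not the authors' implementation.

```python
import math

import numpy as np

WAYPOINT_RADIUS = 1.0  # m; switch threshold stated in the quoted text


def heading_error(pose_xy, yaw, waypoint_xy):
    """Angle between the vehicle heading and the direction to the waypoint (p_t)."""
    dx = waypoint_xy[0] - pose_xy[0]
    dy = waypoint_xy[1] - pose_xy[1]
    err = math.atan2(dy, dx) - yaw
    return math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]


def step(pose_xy, yaw, ranges, waypoints, actor):
    """One reactive step: returns a steering command and the remaining waypoint list.

    `ranges` are the 32 virtual levelled ranges; `actor` maps the state vector
    to a steering speed while the longitudinal speed stays constant.
    """
    wp = waypoints[0]
    d_t = math.hypot(wp[0] - pose_xy[0], wp[1] - pose_xy[1])
    if d_t < WAYPOINT_RADIUS and len(waypoints) > 1:
        waypoints = waypoints[1:]  # objective reached: take the next from the list
        wp = waypoints[0]
    p_t = heading_error(pose_xy, yaw, wp)
    state = np.concatenate([np.asarray(ranges, dtype=float), [p_t]])  # 32 ranges + p_t
    return actor(state), waypoints
```

With a waypoint straight ahead the heading error is zero, so a proportional stand-in actor returns a zero steering command, and a waypoint closer than 1 m is dropped in favour of the next one.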
Moving on paths or trails present in natural environments makes autonomous navigation of unmanned ground vehicles (UGVs) simpler and safer. In this sense, aerial photographs provide abundant information about wide areas that can be employed to detect paths for UGV usage. This paper proposes the extraction of paths from a geo-referenced satellite image centered at the current UGV position. Its pixels are individually classified as being part of a path or not using a convolutional neural network (CNN) that has been trained on synthetic data. Then, successive distant waypoints inside the detected paths are generated to achieve a given goal. This processing has been successfully tested on the Andabata mobile robot, which follows the list of waypoints in a reactive way based on a three-dimensional (3D) light detection and ranging (LiDAR) sensor.
“…Less common, but more relevant from that perspective, especially as computation capabilities increase, is the possibility of having deformable environmental objects and terrain [14]. Conducted analyses have shown that there are several simulators which offer extensive use case possibilities [15, 16, 17, 18]. A solution commonly used in research is MuJoCo [19].…”
The introduction of Unmanned Ground Vehicles (UGVs) into the field of rescue operations is an ongoing process. New tools, such as UGV platforms and dedicated manipulators, provide new opportunities but also come with a steep learning curve. The best way to familiarize operators with new solutions is hands-on courses, but their deployment is limited, mostly due to high costs and limited equipment numbers. An alternative is to use simulators, which, from the software side, resemble video games. With the recent expansion of the video game engine industry, such software has become easier to produce and maintain. This paper tries to answer the question of whether it is possible to develop a highly accurate simulator of a rescue and IED manipulator using a commercially available game engine solution. Firstly, the paper describes the different types of robot simulators currently available. Next, it provides an in-depth description of a plug-in simulator concept. Afterward, an example of a hydrostatic manipulator arm and its virtual representation is described alongside validation and evaluation methodologies. Additionally, the paper provides a set of metrics for an example rescue scenario. Finally, the paper describes research conducted in order to validate the representation accuracy of the developed simulator.