Abstract: Light detection and ranging (LiDAR) sensors help autonomous vehicles detect the surrounding environment and measure the exact distance to an object’s position. Conventional LiDAR sensors consume a fixed amount of power because they detect objects by transmitting laser pulses at regular intervals determined by a horizontal angular resolution (HAR). However, because LiDAR sensors that continuously consume power in this inefficient way have a serious impact on autonomous and electric vehicles running on battery power, powe…
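The abstract's premise can be made concrete with a small sketch. The functions, channel count and region-of-interest idea below are illustrative assumptions, not taken from the paper: they only show how the HAR fixes the number of laser firings per sweep, and how restricting firing to a sector would cut the firing energy proportionally.

```python
def pulses_per_rotation(har_deg: float, channels: int = 16) -> int:
    """Laser firings per full 360-degree sweep for a given
    horizontal angular resolution (HAR), for a multi-channel sensor."""
    return int(round(360.0 / har_deg)) * channels

def roi_power_fraction(roi_deg: float) -> float:
    """Fraction of per-sweep firing energy spent if the sensor only
    fires inside a region of interest spanning roi_deg degrees."""
    return roi_deg / 360.0

# A hypothetical 16-channel sensor at 0.2-degree HAR fires
# 1800 * 16 = 28,800 pulses per sweep; firing only inside a
# 90-degree region of interest would use a quarter of that energy.
print(pulses_per_rotation(0.2))   # 28800
print(roi_power_fraction(90.0))   # 0.25
```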
“…The third uses hybrid data, either combining data from two different sensors or using a composite sensor such as the Microsoft Kinect that provides both images and point clouds. Over the last decade 3D point clouds have been widely used in computer vision and mobile robotics applications, opening the door to important but challenging tasks such as 3D object recognition [1,2,3,4,5,6] and semantic segmentation [7,8,9], which are core steps for scene understanding.…”
This paper proposes a 3D object recognition method for non-coloured point clouds using point features. The method is intended for application scenarios such as Inspection, Maintenance and Repair (IMR) of industrial sub-sea structures composed of pipes and connecting objects (such as valves, elbows and R-Tee connectors). The recognition algorithm uses a database of partial views of the objects, stored as point clouds, which is available a priori. The recognition pipeline has five stages: (1) plane segmentation, (2) pipe detection, (3) semantic object segmentation and detection, (4) feature-based object recognition and (5) Bayesian estimation. To apply the Bayesian estimation, an object tracking method based on a new Interdistance Joint Compatibility Branch and Bound (IJCBB) algorithm is proposed. The paper studies the recognition performance depending on: (1) the point feature descriptor used, (2) the use (or not) of Bayesian estimation and (3) the inclusion of semantic information about the objects’ connections. The methods are tested using an experimental dataset containing laser scans and Autonomous Underwater Vehicle (AUV) navigation data. The best results are obtained using the Clustered Viewpoint Feature Histogram (CVFH) descriptor, achieving recognition rates of 51.2%, 68.6% and 90%, respectively, clearly showing the advantages of using Bayesian estimation (18% increase) and of including semantic information (21% further increase).
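The Bayesian-estimation stage can be sketched as a recursive Bayes update that fuses per-frame recognition likelihoods for a tracked object, which is why multi-view tracking raises the recognition rate over single-view matching. This is a minimal illustration only: the class names and likelihood values are invented, and the paper's actual IJCBB tracking and CVFH matching are not reproduced here.

```python
import numpy as np

def bayes_update(prior: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """One recursive Bayes step: fuse the per-frame class likelihood
    for a tracked object with the belief accumulated so far."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Three hypothetical object classes and a uniform initial belief.
classes = ["valve", "elbow", "R-Tee"]
belief = np.full(3, 1.0 / 3.0)

# Invented single-view likelihoods (e.g. from descriptor matching),
# noisy but consistently favouring the first class.
frames = [np.array([0.5, 0.30, 0.20]),
          np.array([0.6, 0.25, 0.15]),
          np.array([0.4, 0.35, 0.25])]
for lk in frames:
    belief = bayes_update(belief, lk)

print(classes[int(belief.argmax())])  # "valve"
```

Each update multiplies the running belief by the new frame's likelihood and renormalises, so evidence accumulates across the views collected while the object is tracked.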
“…Compared to camera and radar data, a LiDAR pointcloud is dense, geo-referenced, and a more accurate form of a 3D representation. As an automotive sensor, LiDAR is still being improved [11,12], both hardware and software wise. Nevertheless, LiDAR's dense nature presents a challenge while processing it.…”
Explainable Artificial Intelligence (XAI) methods reveal the internal representations of data hidden within a neural network’s trained weights. That information, presented in a human-readable form, can be remarkably useful during model development and validation. Among others, gradient-based methods such as Grad-CAM are broadly used in the image processing domain. On the other hand, the autonomous vehicle sensor suite includes auxiliary devices such as radars and LiDARs, to which existing XAI methods do not apply directly. In this article, we present our approach to adapting Grad-CAM visualization to the LiDAR point cloud object detection architectures used in automotive perception systems. We address the data and network architecture compatibility problems and answer the question of whether Grad-CAM methods can be used effectively with LiDAR sensor data. We showcase successful results of our method and the benefits that a Grad-CAM XAI application brings to a LiDAR sensor in the autonomous driving domain.
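The core Grad-CAM computation the abstract refers to can be sketched framework-agnostically: channel weights are the global-average-pooled gradients of the target class score with respect to a convolutional layer's feature maps, and the heatmap is the ReLU of their weighted sum. The sketch below assumes the activations and gradients have already been extracted (e.g. from a bird's-eye-view LiDAR detector, where H×W would be the BEV grid); it is not the article's specific adaptation.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Gradient-weighted Class Activation Map.

    activations: (C, H, W) feature maps of the chosen layer
    gradients:   (C, H, W) d(target score)/d(activation)
    Returns an (H, W) heatmap normalised to [0, 1].
    """
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))             # shape (C,)
    # Weighted sum of feature maps over the channel axis.
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    # ReLU: keep only features with a positive influence on the score.
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

For a camera model the heatmap is overlaid on the image; for a LiDAR detector it would instead be projected back onto the point cloud or BEV grid, which is the compatibility problem the article tackles.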
“…For airport services, the range of facilities extends from conventional surface movement radar (SMR) systems to Intelligent Cameras-based systems (Dimitropoulos et al., 2005) and associated Video Analytics techniques. New types of satellite and inertial navigation systems, as well as LiDAR devices, will be installed on board mobile objects (Lee et al., 2020). An important aspect of hardware development is also progressing in the field of the Unmanned Ground Vehicle (UGV) that in the foreseeable future will replace the Human Driven Vehicle (HDV), which performs the functions of GV at airports.…”
The paper discusses the prospects for developing and implementing centralized ground traffic control systems at airports. An automatic control system can only work if accurate data are available on the location of mobile objects, which include both the vehicles involved in aircraft maintenance and the aircraft themselves. To develop and test software for any specific centralized control system, an emulation mode should be used, in which a simulation model of the airport transport network works in conjunction with the real control software. In this case, one of the main functions of the simulation model is the generation of data streams that accurately reflect the movement of objects in the transport network of a specific airport. The paper describes a universal simulation program that allows precisely specified scenarios of transport-network processes to be simulated, including those that require decision-making at the level of the centralized control system. The movement of objects in the model is accompanied by the recording of their coordinates in a Digital Twin. In this way, real streams of measurement data from the various systems that determine the position of moving objects are modeled and stored.
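The simulation idea described above can be sketched minimally: move an object along waypoints at a fixed speed and log a timestamped coordinate stream, which stands in for the Digital Twin record the control software would consume. The function name, record format and numbers are illustrative assumptions, not the paper's actual program.

```python
def simulate(waypoints, speed, dt):
    """Move one object along straight segments at constant speed,
    logging a timestamped coordinate stream (a stand-in for the
    Digital Twin record of the object's position)."""
    log = []
    t = 0.0
    x, y = waypoints[0]
    for tx, ty in waypoints[1:]:
        while (x, y) != (tx, ty):
            dx, dy = tx - x, ty - y
            dist = (dx * dx + dy * dy) ** 0.5
            step = min(speed * dt, dist)  # do not overshoot the waypoint
            x += dx / dist * step
            y += dy / dist * step
            t += dt
            log.append({"t": round(t, 2), "x": round(x, 3), "y": round(y, 3)})
    return log

# A vehicle driving 100 m of straight taxiway at 10 m/s, sampled at 1 Hz.
track = simulate([(0.0, 0.0), (100.0, 0.0)], speed=10.0, dt=1.0)
print(len(track))  # 10 records, one per second
```

In a full emulation this stream would be fed to the real control software in place of live sensor measurements, which is exactly the role the paper assigns to the simulation model.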
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.