In autonomous driving and robotics, point clouds are the raw data produced by most mainstream 3D sensors and offer excellent real-time performance, which has made point cloud neural networks a popular research direction in recent years. So far, however, there has been little discussion of the explainability of deep neural networks for point clouds. In this paper, we propose new explainability approaches for point cloud deep neural networks, based on local surrogate models, that show which components contribute most to the classification. Moreover, we propose a quantitative validation method for point cloud explainability approaches: it drops the most positively or negatively contributing features and monitors how the classification scores of specific categories change, which strengthens the persuasive power of the explanations. To enable an intuitive explanation of misclassified instances, we display features with confounding contributions. Our new explainability approach provides a fairly accurate, more intuitive, and widely applicable explanation for point cloud classification tasks. Our code is available at https://github.com/Explain3D/Explainable3D
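The point-dropping validation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation (which is in the linked repository): the model, the attribution scores, and all function names here are hypothetical, assuming only that each point has a scalar contribution score and that the model maps a point cloud to per-class scores.

```python
# Hedged sketch of drop-and-monitor validation for point cloud attributions.
# All names (drop_and_score, toy_model) are illustrative, not from the paper.
import numpy as np

def drop_and_score(points, attributions, model, target_class, k, most_positive=True):
    """Remove the k points with the highest (or lowest) attribution score
    and return the model's score for target_class on the remaining points."""
    order = np.argsort(attributions)              # ascending by attribution
    drop_idx = order[-k:] if most_positive else order[:k]
    keep = np.setdiff1d(np.arange(len(points)), drop_idx)
    return model(points[keep])[target_class]

# Toy stand-in for a classifier: the score of class 0 grows with the
# fraction of points lying near the origin.
def toy_model(pts):
    near = np.sum(np.linalg.norm(pts, axis=1) < 1.0)
    return np.array([near / len(pts), 1.0 - near / len(pts)])

rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3))
# Fake attributions: near-origin points "contribute" most to class 0.
attr = -np.linalg.norm(cloud, axis=1)

full = toy_model(cloud)[0]
dropped = drop_and_score(cloud, attr, toy_model, target_class=0, k=20)
print(f"class-0 score: full={full:.3f}, after dropping top-20={dropped:.3f}")
```

If the attributions are faithful, removing the most positively contributing points should depress the target-class score, while removing the most negative ones should raise it; comparing these score trajectories across explanation methods gives the quantitative comparison the abstract refers to.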