Recent 3D understanding research has focused on extracting features directly from point clouds [22,24], which requires effective description of point-cloud shape patterns. Inspired by SIFT [15], an outstanding 2D shape descriptor, we design a module called PointSIFT that encodes information from different orientations and adapts to the scale of shapes. Specifically, an orientation-encoding unit is designed to describe eight crucial orientations, and multi-scale representation is achieved by stacking several orientation-encoding units. The PointSIFT module can be integrated into various PointNet-based architectures to improve their representation ability. Extensive experiments show that our PointSIFT-based framework outperforms state-of-the-art methods on standard benchmark datasets. The code and trained models will be published with this paper.
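The orientation-encoding idea described above can be sketched roughly as follows: for each point, partition its neighbors into the eight spatial octants around it and gather the feature of the nearest neighbor in each octant. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation (the original uses learned convolutions over the gathered octant features, and the function name `orientation_encode` and its fallback behavior are hypothetical):

```python
import numpy as np

def orientation_encode(points, features):
    """Hypothetical sketch of a PointSIFT-style orientation-encoding step:
    for each point, take the nearest neighbor in each of the 8 octants
    around it and concatenate their features (n, 8 * d).
    points: (n, 3) float coordinates; features: (n, d) per-point features.
    """
    n, d = features.shape
    encoded = np.zeros((n, 8, d))
    for i in range(n):
        offsets = points - points[i]              # (n, 3) offsets to all points
        # octant index 0..7 from the sign pattern of (dx, dy, dz)
        octant = ((offsets[:, 0] >= 0).astype(int) * 4
                  + (offsets[:, 1] >= 0).astype(int) * 2
                  + (offsets[:, 2] >= 0).astype(int))
        dist = np.linalg.norm(offsets, axis=1)
        dist[i] = np.inf                          # exclude the point itself
        for o in range(8):
            mask = octant == o
            if mask.any():
                j = np.where(mask)[0][np.argmin(dist[mask])]
                # if the octant holds only the point itself, dist is inf
                # and j == i, so we fall back to the point's own feature
                encoded[i, o] = features[j]
            else:
                encoded[i, o] = features[i]       # empty octant: own feature
    return encoded.reshape(n, -1)
```

Stacking several such units enlarges the receptive field, which is how the module achieves multi-scale representation; in the actual network each unit would be followed by learned weights rather than raw concatenation.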
We present the scientific outcomes of the 2019 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society. The contest included challenges with large-scale datasets for semantic 3-D reconstruction from satellite images and for semantic 3-D point cloud classification from airborne LiDAR. 3-D reconstruction results are discussed separately in Part A. In this Part B, we report the results of the two best-performing approaches for 3-D point cloud classification. Both are deep learning methods that improve upon the PointSIFT model with mechanisms to combine multiscale features and with task-specific postprocessing to refine model outputs.