Abstract. Urban areas are complex scenes containing objects made of diverse materials. This variety poses a challenge to single-modality classification schemes. In this paper, we propose a Swin-Transformer-based network for feature fusion and classification of RGB top-view point-cloud renderings and SAR images. In this network, the heterogeneous features are first learned separately by an asymmetric encoder, then concatenated along the channel dimension and fed into a fusing encoder. Finally, the fused features are decoded by a UperNet to generate the semantic labels. As data, we use the high-resolution 3D point cloud provided by the Hessigheim benchmark, complemented by TerraSAR-X images. The overall precision and the mean intersection over union (mIoU) reach 87.25% and 73.56%, respectively, outperforming the single-data Swin Transformer by 4.08% and 1.91%.
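The fusion scheme described above (branch-wise encoding, channel-wise concatenation, shared fusing stage) can be sketched as follows. This is a minimal illustration only: the function names, feature dimensions, and downsampling factor are assumptions standing in for the paper's Swin Transformer branches, not the authors' implementation.

```python
import numpy as np

def encode_rgb(x):
    # Hypothetical RGB-branch encoder: stands in for the Swin Transformer
    # branch that embeds the 3-channel top-view image into 96 feature channels
    # at 1/4 spatial resolution (assumed values).
    h, w, _ = x.shape
    return np.random.default_rng(0).standard_normal((h // 4, w // 4, 96))

def encode_sar(x):
    # Hypothetical SAR-branch encoder producing features at the same
    # resolution, so the two branches can be concatenated.
    h, w, _ = x.shape
    return np.random.default_rng(1).standard_normal((h // 4, w // 4, 96))

def fuse(f_rgb, f_sar):
    # Channel-wise concatenation of the heterogeneous features; the result
    # would be fed into the fusing encoder described in the abstract.
    return np.concatenate([f_rgb, f_sar], axis=-1)

rgb = np.zeros((224, 224, 3))  # top-view RGB rendering of the point cloud
sar = np.zeros((224, 224, 1))  # co-registered TerraSAR-X image

fused = fuse(encode_rgb(rgb), encode_sar(sar))
print(fused.shape)  # (56, 56, 192): doubled channel count after fusion
```

The key design point this sketch captures is that both modalities must be brought to a common spatial grid before concatenation; the channel dimension then doubles and the joint representation is refined downstream.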