During the last few years, sophisticated deep learning methods have been developed in computer vision to accomplish semantic segmentation of 3D point cloud data. Many researchers have also extended the applicability of these methods, such as PointNet and PointNet++, beyond the semantic segmentation of indoor scenes to large-scale outdoor scenes observed with airborne laser scanning systems equipped with light detection and ranging (LiDAR) technology. However, most extant studies have investigated only geometric information (x, y, and z, or longitude, latitude, and height) and have omitted the rich radiometric information. We therefore aim to extend deep learning-based models from geometric data to radiometric data acquired with airborne full-waveform LiDAR, without converting the waveforms into 2D images or 3D voxels. We simultaneously train two modules: a local module for local feature extraction and a global module that acquires a wide receptive field over the waveform. Furthermore, our proposed model is based on waveform-aware convolutional techniques. We evaluate the effectiveness of the proposed method on benchmark large-scale outdoor scene data. By integrating the outputs of the local and global modules, our model achieves a higher mean recall (0.92) than previous methods and higher F1 scores for all six classes than the competing 3D deep learning method. Our proposed network, consisting of local and global modules, thus successfully resolves the semantic segmentation task for full-waveform LiDAR data without requiring expert knowledge.

CCS CONCEPTS • Computing methodologies → Scene understanding.
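The local/global fusion described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' implementation: the function names (`local_features`, `global_feature`, `fuse`), the use of max-pooling for the global receptive field, and concatenation as the integration step are all illustrative stand-ins for the paper's actual waveform-aware convolutional modules.

```python
import numpy as np

def local_features(waveforms):
    # Stand-in for the local module: per-return descriptors.
    # (A real waveform-aware convolution would go here.)
    return waveforms

def global_feature(waveforms):
    # Stand-in for the global module: max-pool across all returns
    # to obtain a scene-wide receptive field, then broadcast the
    # pooled descriptor back to every return.
    pooled = waveforms.max(axis=0, keepdims=True)
    return np.repeat(pooled, waveforms.shape[0], axis=0)

def fuse(waveforms):
    # Integrate the two module outputs by per-point concatenation,
    # mirroring the abstract's "integrating the two outputs" step.
    return np.concatenate(
        [local_features(waveforms), global_feature(waveforms)], axis=1
    )

# 100 LiDAR returns, each with 32 waveform samples.
points = np.random.rand(100, 32)
fused = fuse(points)
print(fused.shape)  # (100, 64): 32 local + 32 global channels per return
```

A per-point classifier head applied to `fused` would then yield the semantic labels; concatenation is chosen here only because it is the simplest way to let downstream layers weigh local against global evidence.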