Point clouds are among the most widely used data sources for spatial modeling, and artificial intelligence approaches have become an important tool for understanding and extracting semantic information from them. However, the explainability of machine learning (ML) approaches for 3D data has not been sufficiently investigated, and existing studies are generally limited to object classification tasks. This is a pioneering study that addresses the classification of photogrammetric point clouds from the perspective of explainable artificial intelligence. The explainability of black-box ML models was investigated in the context of photogrammetric point cloud classification. Each point in the point cloud is described by geometric and spectral features, and the effect of selecting the most important of these features on the classification performance of ML models such as Random Forest (RF), XGBoost, and LightGBM was examined. The explainability of the ML models was analyzed with SHapley Additive exPlanations (SHAP), an explainable artificial intelligence approach, and SHAP-based feature selection was compared with the filter-based Information Gain (IG) and ReliefF methods. Using the features selected with SHAP analysis, LightGBM achieved overall accuracies (OA) of 85.50% on the Ankeny dataset, 91.70% on the Building dataset, and 83.28% on the Cadastre dataset; XGBoost achieved 85.22%, 91.21%, and 82.47%, and RF achieved 83.70%, 89.08%, and 79.36% on the same datasets, respectively.
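The SHAP attributions behind the feature-selection step rest on Shapley values from cooperative game theory: each feature's importance is its average marginal contribution to the model output over all feature orderings. As a minimal, self-contained sketch (not the study's actual pipeline, which would use an efficient tree-specific SHAP estimator on trained LightGBM/XGBoost/RF models), the exact Shapley computation for a toy scoring function looks like this; the function `f`, the weights `w`, and the baseline input are illustrative assumptions:

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values: average the marginal contribution of each
    feature over all orderings, with 'absent' features held at baseline."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)          # start from the baseline input
        prev = f(z)
        for i in order:
            z[i] = x[i]             # reveal feature i
            cur = f(z)
            phi[i] += cur - prev    # marginal contribution of feature i
            prev = cur
    return [p / len(perms) for p in phi]

# Toy linear model standing in for a trained classifier's score.
w = [2.0, -1.0, 0.5]
f = lambda z: sum(wi * zi for wi, zi in zip(w, z))

x, base = [1.0, 2.0, 4.0], [0.0, 0.0, 0.0]
phi = shapley_values(f, x, base)

# Efficiency property: attributions sum exactly to f(x) - f(baseline).
assert abs(sum(phi) - (f(x) - f(base))) < 1e-9
```

In practice, exact enumeration is exponential in the number of features; tree-based SHAP implementations compute equivalent attributions efficiently for ensembles such as LightGBM, and features are then ranked (e.g., by mean absolute SHAP value over the training points) to select the most important ones.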