This paper presents a comparative analysis of two prominent Explainable Artificial Intelligence (XAI) techniques, SHAP (SHapley Additive exPlanations) and InterpretML Partial Dependence, applied to a Telecom Churn dataset. The objective is to assess and contrast how each technique enhances the transparency and interpretability of machine learning models for telecom churn prediction. The study emphasizes the role of XAI in building trust in, and comprehension of, predictive models. The methodology covers dataset preprocessing and model training, followed by two separate analyses, one with SHAP and one with InterpretML Partial Dependence, that evaluate how effectively each method elucidates model decisions and uncovers feature importance. The results, strengths, and limitations of both techniques are discussed, yielding insights into their interpretability and robustness. This comparison contributes to the broader understanding of XAI methods and underscores the importance of selecting a technique suited to the context and goals of transparent churn-prediction modeling.
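To make the two compared techniques concrete, the sketch below trains a classifier on a synthetic stand-in for a churn table (not the paper's actual Telecom Churn dataset) and computes (a) a partial-dependence curve via scikit-learn, the same quantity InterpretML's Partial Dependence explainer reports, and (b) exact brute-force Shapley values for one instance, the attribution that the SHAP library approximates efficiently. All dataset and model choices here are illustrative assumptions, not the paper's configuration.

```python
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence

# Synthetic stand-in for a telecom churn table: 4 numeric features.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# --- Partial dependence for feature 0 (what InterpretML/PDP plots) ---
pd_result = partial_dependence(model, X, features=[0], kind="average")
pd_curve = pd_result["average"][0]  # mean predicted P(churn) along the grid

# --- Exact Shapley values for one instance (feasible only for small M) ---
def shapley_values(model, x, background):
    """Brute-force Shapley attribution of P(churn) for instance x.

    v(S) = mean prediction over the background set with the features in S
    replaced by x's values (a marginal-expectation coalition value, the
    same game SHAP's KernelExplainer approximates by sampling).
    """
    M = x.shape[0]

    def v(S):
        Xb = background.copy()
        if S:
            Xb[:, list(S)] = x[list(S)]
        return model.predict_proba(Xb)[:, 1].mean()

    phi = np.zeros(M)
    for j in range(M):
        others = [k for k in range(M) if k != j]
        for size in range(M):
            for S in combinations(others, size):
                w = factorial(len(S)) * factorial(M - len(S) - 1) / factorial(M)
                phi[j] += w * (v(tuple(S) + (j,)) - v(S))
    return phi

background = X[:100]
phi = shapley_values(model, X[0], background)

# Efficiency property: attributions sum to f(x) minus the base rate.
f_x = model.predict_proba(X[:1])[:, 1][0]
base = model.predict_proba(background)[:, 1].mean()
```

The contrast the paper analyzes is visible even here: the PDP curve describes the model's average behavior over one feature's range, while the Shapley vector decomposes a single prediction into per-feature contributions.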