Generative AI-driven automated essay scoring (AES) is expected to revolutionize personalized education by offering customized feedback to students. However, the reliability of these systems is currently undermined by inherent limitations such as the tendency toward “hallucination,” in which the AI generates factually incorrect or irrelevant information. To mitigate these issues and bolster the trustworthiness of AES, this chapter argues that implementing explainable AI (XAI) is crucial. Suitable XAI algorithms could make a GenAI system's decision-making process transparent, allowing educators and students to understand and trust the feedback provided and thereby supporting the effective integration of AI in education. Finally, the chapter outlines several recommendations for building a responsible GenAI-driven AES system.
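To make the idea of a transparent scoring process concrete, the sketch below illustrates one common XAI pattern: feature attribution over an interpretable surrogate model. Everything here is an illustrative assumption rather than the chapter's method — the features (essay length, lexical diversity, connective use) and the linear weights are invented for demonstration; real AES systems would use far richer models, with attribution methods such as SHAP or LIME applied on top. The point is only that per-feature contributions give educators and students a human-readable account of why a score was assigned.

```python
# Minimal sketch of feature attribution for an essay score.
# NOTE: all feature names and weights are illustrative assumptions,
# not taken from any real AES system. A linear surrogate model is
# used so that each per-feature contribution (weight * value) is
# itself a transparent, human-readable explanation of the score.

def essay_features(text):
    """Extract simple, human-readable features from an essay."""
    words = text.split()
    n = max(len(words), 1)
    connectives = {"however", "therefore", "moreover", "furthermore"}
    return {
        # normalized word count, capped at a 300-word target
        "length": min(len(words) / 300.0, 1.0),
        # fraction of distinct word forms
        "lexical_diversity": len({w.lower() for w in words}) / n,
        # rate of discourse connectives (scaled for readability)
        "connectives": sum(w.lower().strip(",.;") in connectives
                           for w in words) / n * 10,
    }

# Hypothetical surrogate weights (assumed for illustration only).
WEIGHTS = {"length": 2.0, "lexical_diversity": 3.0, "connectives": 1.5}

def score_with_explanation(text):
    """Return a score plus per-feature contributions (the 'explanation')."""
    feats = essay_features(text)
    contributions = {k: WEIGHTS[k] * v for k, v in feats.items()}
    return sum(contributions.values()), contributions

if __name__ == "__main__":
    score, why = score_with_explanation(
        "The essay argues clearly. However, more evidence is needed. "
        "Therefore, the conclusion remains tentative."
    )
    # Present contributions largest-first, as a scorer's rationale.
    for feature, contrib in sorted(why.items(), key=lambda kv: -kv[1]):
        print(f"{feature}: {contrib:+.2f}")
    print(f"total score: {score:.2f}")
```

Because the explanation is just the additive decomposition of the score, it is faithful by construction — a property that post-hoc explanations of opaque generative models can only approximate, which is precisely why the choice of XAI algorithm matters for AES.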