Knowledge tracing (KT) is a fundamental personalized tutoring technique for learners in online learning systems. Recent KT methods employ flexible deep neural network-based models that excel at this task. However, the adequacy of KT is still challenged by the sparseness of learners' exercise data. To alleviate this sparseness problem, most existing KT studies are performed at the skill level rather than the question level, as questions are often numerous and associated with far fewer skills. At the skill level, however, KT neglects the distinctive information of the questions themselves and their relations. In this case, the models may imprecisely infer learners' knowledge states and can fail to capture the long-term dependencies in exercising sequences. In the knowledge domain, skills are naturally linked as a graph (with edges being the prerequisite relations between pedagogical concepts). We refer to such a graph as a knowledge structure (KS). Incorporating a KS into the KT procedure can potentially resolve both the sparseness and the information loss, but this avenue has been underexplored because obtaining the complete KS of a domain is challenging and labor-intensive. In this paper, we propose a novel KS-enhanced graph representation learning model for KT with an attention mechanism (KSGKT). We first explore eight methods that automatically infer the domain KS from