Concept mapping is a well-known pedagogical tool that helps students organize, represent, and develop an understanding of a topic. Grading concept maps is typically manual, time-consuming, and tedious, especially for large classes. Existing research mostly focuses on topological scoring based on the structural features of concept maps. Unfortunately, such scoring does not achieve accuracy comparable to well-defined rubrics for manually assessing the quality of a concept map's content. This paper presents Kastor, a new method that automates the Waterloo Rubric for scoring concept maps by quantifying the rubric's quality assessment parameters. The evaluation is performed on a publicly available dataset of 39 concept maps from two cybersecurity courses: digital forensics and supervisory control and data acquisition (SCADA) system security. The results show that Kastor achieves accuracies of around 84% and 95% (at the accurate and close-to-accurate levels) for SCADA and forensics concept maps, respectively. Furthermore, compared with a topological scoring method, Kastor improves scoring accuracy by around 32% and 79% on SCADA and forensics concept maps, respectively.