Graph-based scene models have been receiving increasing attention as flexible and descriptive representations for visual robot self-localization. In a typical self-localization application, the objects, object features, and object relationships in an environment map are described by the nodes, node features, and edges of a scene graph, respectively, which is then matched against a query scene graph by a graph matching engine. However, the computation, storage, and communication overhead of this approach grows with the number and feature dimensionality of the graph nodes, and can become significant in large-scale applications. In this study, we observe that graph convolutional neural networks (GCNs) have the potential to serve as an efficient engine for training and prediction in graph matching. However, translating a given visual feature into a graph feature that yields good self-localization performance is non-trivial. To address this issue, we introduce a new knowledge transfer (KT) framework in which an arbitrary self-localization model acts as a teacher to train a student GCN-based self-localization system. Because the teacher's compact output signals serve as the training data, our KT framework keeps storage and communication lightweight. Results on the RobotCar dataset show that the proposed method outperforms existing baseline methods as well as the teacher self-localization system.
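To make the teacher-student setup concrete, the following is a minimal sketch of the core training idea: a small GCN embeds a scene graph (node features plus adjacency) into a compact descriptor, and the student is trained by regressing that descriptor onto the teacher model's compact output signal. All names, layer sizes, the two-layer architecture, and the MSE objective are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions -- chosen for illustration, not taken from the paper.
NODE_DIM, HIDDEN_DIM, EMBED_DIM = 128, 64, 16

class SceneGraphGCN(nn.Module):
    """Minimal two-layer GCN that embeds a scene graph (node features x,
    adjacency adj) into one compact descriptor for self-localization."""
    def __init__(self):
        super().__init__()
        self.w1 = nn.Linear(NODE_DIM, HIDDEN_DIM)
        self.w2 = nn.Linear(HIDDEN_DIM, EMBED_DIM)

    def forward(self, x, adj):
        # Each layer aggregates neighbor features: H' = ReLU(A H W).
        h = torch.relu(self.w1(adj @ x))
        h = self.w2(adj @ h)
        # Mean-pool node embeddings into a single graph-level descriptor.
        return h.mean(dim=0)

def kt_loss(student_embed, teacher_signal):
    """Knowledge-transfer loss (assumed here to be MSE): the student's
    graph descriptor is regressed onto the teacher's output signal."""
    return nn.functional.mse_loss(student_embed, teacher_signal)

# Toy usage: a scene graph with 10 object nodes, random node features,
# and a self-loop-only adjacency; the teacher signal is a placeholder.
x = torch.randn(10, NODE_DIM)
adj = torch.eye(10)
teacher_signal = torch.randn(EMBED_DIM)

model = SceneGraphGCN()
loss = kt_loss(model(x, adj), teacher_signal)
loss.backward()
```

Note that only the teacher's low-dimensional output signals are stored and transmitted as supervision, which is what keeps the framework's storage and communication footprint small relative to shipping raw visual features.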