Because of the limited availability of measured synthetic aperture radar (SAR) training data, automatic target recognition (ATR) researchers have turned to synthetic SAR imagery. Unfortunately, training neural network classifiers on this synthetic data does not yield robust models. Assuming access to a small amount of measured SAR data, we evaluate two natural transfer-learning approaches to this problem and show that neither leads to a successful solution. Motivated by the successes of contrastive, representation, and metric learning, we propose a novel graph-based pretraining approach that transfers knowledge from synthetic samples to real-world scenarios. We show that this approach is applicable to three different neural network architectures, obtaining improvements over the baseline approach of 19.21%, 28.70%, and 8.27%, respectively. We also demonstrate that our method is robust to the choice of hyperparameters.