Music emotion recognition (MER) aims to identify the affective content of a piece of music, which is important for applications such as automatic soundtrack generation and music recommendation. MER is commonly formulated as a supervised learning problem. In practice, however, little labeled data exists for genres other than Pop music. Moreover, musical emotion is genre-specific, so labeled Pop data cannot simply be reused for other genres. In this paper, we aim to solve the genre-specific MER problem by exploiting two kinds of auxiliary data: unlabeled songs and social tags. Using these two kinds of data effectively is non-trivial; for example, tags are noisy and therefore cannot be treated as fully trustworthy. To build an accurate model with the help of unlabeled songs and noisy tags, we present SMART, which stands for Semi-Supervised Music Affect Recognition with social Tagging, combining a graph-based semi-supervised learning algorithm with a novel tag refinement method. Experiments on the Million Song Dataset show that our proposed approach, trained with only 10 labeled songs, is as accurate as Support Vector Regression trained with 750 labeled songs.
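To make the graph-based semi-supervised component concrete, the sketch below shows a minimal label-spreading regressor over a k-nearest-neighbor song graph, with noisy tag-derived scores used as a down-weighted soft prior on the unlabeled songs. This is the standard iteration of Zhou et al. (2004) adapted to regression, not the SMART algorithm itself; the function name propagate_scores, the tag_weight parameter, and the treatment of tag priors are illustrative assumptions, and the paper's tag refinement method is not reproduced here.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def propagate_scores(X, y, labeled_mask, tag_prior=None, tag_weight=0.1,
                     n_neighbors=10, alpha=0.99, n_iter=100):
    """Graph-based semi-supervised regression via label spreading.

    X: (n, d) audio features; y: (n,) emotion scores, valid where labeled_mask
    is True; tag_prior: optional (n,) noisy scores derived from social tags.
    """
    # Symmetrized k-NN distance graph over audio features, then a
    # Gaussian affinity with bandwidth set to the median edge length.
    W = kneighbors_graph(X, n_neighbors, mode='distance', include_self=False)
    W = W.maximum(W.T).toarray()
    sigma = np.median(W[W > 0])
    W = np.where(W > 0, np.exp(-(W / sigma) ** 2), 0.0)

    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    S = W / np.sqrt(np.outer(d, d))

    # Initial scores: ground truth on labeled songs; a down-weighted
    # tag prior on unlabeled songs (tags are noisy, so tag_weight < 1).
    f = np.zeros(len(y), dtype=float)
    f[labeled_mask] = y[labeled_mask]
    if tag_prior is not None:
        f[~labeled_mask] = tag_weight * tag_prior[~labeled_mask]
    f0 = f.copy()

    # Iterate f <- alpha * S f + (1 - alpha) * f0; alpha trades off
    # smoothness over the graph against fidelity to the initial scores.
    for _ in range(n_iter):
        f = alpha * (S @ f) + (1 - alpha) * f0
    return f
```

With only a handful of labeled songs, the graph term lets emotion scores diffuse along acoustic similarity, which is what allows a method of this kind to compete with a fully supervised regressor trained on far more labels.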