Music auto-tagging refers to automatically assigning semantic labels (tags) such as genre, mood, and instrument to music so as to facilitate text-based music retrieval. Although significant progress has been made in recent years, relatively little research has focused on semantic labels that vary over time within a track. Existing approaches and datasets usually assume that different fragments of a track share the same tag labels, disregarding tags that are time-varying (e.g., mood) or local in time (e.g., an instrument solo). In this paper, we present a new dataset dedicated to time-varying music auto-tagging. The dataset, called CAL500exp, is an enriched version of the well-known CAL500 dataset used for conventional track-level tagging. Given the tag set of CAL500, eleven subjects with strong music backgrounds were recruited to annotate the time-varying tag labels. A new user interface for annotation was developed to reduce the subjects' annotation effort while increasing the quality of the labels. Moreover, we present an empirical evaluation that demonstrates the performance improvement CAL500exp brings about for time-varying music auto-tagging. By providing more accurate and consistent descriptions of music content at a finer granularity, CAL500exp may open new opportunities to understand and to model the temporal context of musical semantics.