We propose a method for measuring a text's engagement with a focal concept using distributional representations of word meaning. More specifically, this measure relies on Word Mover's Distance, which uses word embeddings to determine the dissimilarity between two documents. In our approach, which we call Concept Mover's Distance, a document's engagement is measured as the minimum cumulative distance its words must travel to arrive at the position of a "pseudo-document" consisting only of words denoting the focal concept. This approach captures the prototypical structure of concepts; is fairly robust to the pruning of sparse terms and to variation in text lengths within a corpus; and, when used with pre-trained embeddings, can be applied even when the terms denoting a concept are absent from the corpus, as well as to bag-of-words datasets. We close by outlining some limitations of the proposed method as well as opportunities for future research.
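The core computation described above can be sketched as an optimal-transport problem: document word mass is shipped onto the concept pseudo-document's word mass at minimum cost, where cost is embedding distance. The sketch below is a minimal illustration, not the authors' implementation; the toy two-dimensional embeddings and uniform word weights are assumptions for demonstration, and a real application would use pre-trained embeddings and normalized bag-of-words counts.

```python
import numpy as np
from scipy.optimize import linprog

def concept_movers_distance(doc_vecs, doc_weights, concept_vecs, concept_weights):
    """Minimum total distance the document's word mass must travel to
    reach the concept pseudo-document's word mass (an Earth Mover's
    Distance, solved here as a small linear program)."""
    n, m = len(doc_vecs), len(concept_vecs)
    # Pairwise Euclidean distances between document and concept embeddings.
    cost = np.linalg.norm(doc_vecs[:, None, :] - concept_vecs[None, :, :], axis=2)
    # Flow variables T[i, j] >= 0, flattened row-major into length n*m.
    A_eq, b_eq = [], []
    for i in range(n):  # each document word ships exactly its weight
        row = np.zeros(n * m)
        row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row)
        b_eq.append(doc_weights[i])
    for j in range(m):  # each concept word receives exactly its weight
        row = np.zeros(n * m)
        row[j::m] = 1.0
        A_eq.append(row)
        b_eq.append(concept_weights[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None))
    return res.fun

# Toy example (hypothetical embeddings): two document words equidistant
# from a single-word concept pseudo-document.
doc_vecs = np.array([[0.0, 0.0], [2.0, 0.0]])
concept_vecs = np.array([[1.0, 0.0]])
cmd = concept_movers_distance(doc_vecs, np.array([0.5, 0.5]),
                              concept_vecs, np.array([1.0]))
```

With a single-word pseudo-document, the transport problem collapses to the weight-averaged distance of the document's words from the concept word, which is why a concept can still be measured when its denoting term never appears in the document itself.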