Recent comparative studies have demonstrated the usefulness of dependency-based contexts (DEPS) for learning distributed word representations for similarity tasks. In English, DEPS tend to perform better than the more common, less informed bag-of-words contexts (BOW). In this paper, we present the first cross-linguistic comparison of different context types, covering three languages. DEPS are extracted from "universal parses" without any language-specific optimization. Our results suggest that universal DEPS (UDEPS) are useful for detecting functional similarity (e.g., verb similarity, solving syntactic analogies) across languages, but their advantage over BOW is not as prominent as previously reported for English. We also show that simple "post-parsing" filtering of useful UDEPS contexts leads to consistent improvements across languages.