This paper presents the results of a large-scale evaluation study of window-based Distributional Semantic Models on a wide variety of tasks. Our study combines a broad coverage of model parameters with a model selection methodology that is robust to overfitting and able to capture parameter interactions. We show that our strategy allows us to identify parameter configurations that achieve good performance across different datasets and tasks.
This paper presents a large-scale evaluation study of dependency-based distributional semantic models. We evaluate dependency-filtered and dependency-structured DSMs in a number of standard semantic similarity tasks, systematically exploring their parameter space in order to give them a "fair shot" against window-based models. Our results show that properly tuned window-based DSMs still outperform the dependency-based models in most tasks. There appears to be little need for the language-dependent resources and computational cost associated with syntactic analysis.
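The contrast being evaluated can be illustrated with a minimal sketch (the sentence and dependency triples below are toy examples, not the paper's data; in practice the triples would come from a syntactic parser): a window-based model counts any nearby word as context, while a dependency-filtered model counts only words syntactically linked to the target.

```python
from collections import Counter

def window_contexts(tokens, target, window=2):
    """Window-based contexts: every token within +/- `window` of the target."""
    out = Counter()
    for i, w in enumerate(tokens):
        if w == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            out.update(tokens[j] for j in range(lo, hi) if j != i)
    return out

def dependency_contexts(triples, target):
    """Dependency-filtered contexts: only words in a syntactic relation
    with the target. Each triple is (head, relation, dependent)."""
    return Counter(dep if head == target else head
                   for head, rel, dep in triples if target in (head, dep))
```

For "the old dog chased the cat", the window model counts both determiners near "dog", whereas the dependency filter keeps only the words parsed as its modifiers or governor.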
This paper presents a large-scale evaluation of bag-of-words distributional models on two datasets from priming experiments involving syntagmatic and paradigmatic relations. We interpret the variation in performance achieved by different settings of the model parameters as an indication of which aspects of distributional patterns characterize these types of relations. Contrary to what has been argued in the literature (Rapp, 2002; Sahlgren, 2006), namely that bag-of-words models based on second-order statistics mainly capture paradigmatic relations and that syntagmatic relations need to be gathered from first-order models, we show that second-order models perform well on both paradigmatic and syntagmatic relations if their parameters are properly tuned. In particular, our results show that the size of the context window and dimensionality reduction play a key role in differentiating DSM performance on paradigmatic vs. syntagmatic relations.
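The two parameters singled out above can be made concrete with a small sketch (toy corpus; function and parameter names are illustrative, not the paper's implementation): `window` sets the size of the context window around each token, and `k` sets how many dimensions are kept after truncated SVD.

```python
import numpy as np

def cooccurrence_matrix(tokens, vocab, window=2):
    """Count co-occurrences of vocabulary words within +/- `window` tokens."""
    idx = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(tokens):
        if w not in idx:
            continue
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i and tokens[j] in idx:
                M[idx[w], idx[tokens[j]]] += 1
    return M

def reduce_dims(M, k):
    """Truncated SVD: keep the k largest singular directions."""
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] * s[:k]

def cosine(u, v):
    """Cosine similarity between two row vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```

Widening `window` pulls in more syntagmatically related neighbours, while lowering `k` merges distributionally similar dimensions, which is one intuition for why these two knobs differentiate performance on the two relation types.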
One of the central problems in the semantics of derived words is polysemy (see, for example, the recent contributions by Lieber 2016 and Plag et al. 2018). In this paper, we tackle the problem of disambiguating newly derived words in context by applying Distributional Semantics (Firth 1957) to deverbal -ment nominalizations (e.g. bedragglement, emplacement). We collected a dataset containing contexts of low-frequency deverbal -ment nominalizations (55 types, 406 tokens, see Appendix B) extracted from large corpora such as the Corpus of Contemporary American English. We chose low-frequency derivatives because high-frequency formations are often lexicalized and thus tend not to exhibit the kind of polysemous readings we are interested in. Furthermore, disambiguating low-frequency words is an especially difficult task because there is little to no prior knowledge about these words from which their semantic properties can be extrapolated. The data was manually annotated according to eventive vs. non-eventive interpretations, allowing also an ambiguous label in those cases where the context did not disambiguate. Our question then was to what extent, and under which conditions, context-derived representations such as those of Distributional Semantics can be successfully employed in the disambiguation of low-frequency derivatives. Our results show that, first, our models are able to distinguish between eventive and non-eventive readings with some success. Second, very small context windows are sufficient to find the intended interpretation in the majority of cases. Third, ambiguous instances tend to be classified as events. Fourth, the performance of the classifier differed across subcategories of nouns, with non-eventive derivatives being harder to classify correctly. We present indirect evidence that this is due to the semantic similarity of abstract non-eventive nouns to eventive nouns.
Overall, this paper demonstrates that distributional semantic models can be fruitfully employed for the disambiguation of low-frequency words in spite of the scarcity of available contextual information.
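As an illustration of the general approach (the paper's concrete models are not reproduced here; all vectors and labels below are toy values), disambiguation from a very small context window can be sketched as averaging the distributional vectors of the few words around the target token and assigning the label of the nearest sense centroid:

```python
import numpy as np

def context_vector(tokens, target_idx, embeddings, window=2):
    """Average the vectors of known words within +/- `window` of the target.

    `embeddings` maps word -> np.ndarray; the target itself is excluded.
    """
    lo = max(0, target_idx - window)
    hi = min(len(tokens), target_idx + window + 1)
    vecs = [embeddings[tokens[j]] for j in range(lo, hi)
            if j != target_idx and tokens[j] in embeddings]
    dim = len(next(iter(embeddings.values())))
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def classify(vec, centroids):
    """Assign the label whose centroid has the highest cosine similarity."""
    def cos(u, v):
        return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return max(centroids, key=lambda label: cos(vec, centroids[label]))
```

A token of "emplacement" would then be labelled eventive or non-eventive purely from the handful of words in its immediate context, which mirrors the finding that very small windows already suffice in most cases.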
This paper describes the MARDY corpus annotation environment developed for a collaboration between political science and computational linguistics. The tool realizes the complete workflow necessary for annotating a large newspaper text collection with rich information about claims (demands) raised by politicians and other actors, including claim and actor spans, relations, and polarities. In addition to the annotation GUI, the tool supports the identification of relevant documents, text pre-processing, user management, integration of external knowledge bases, annotation comparison and merging, statistical analysis, and the incorporation of machine learning models as "pseudo-annotators".