This paper proposes a compositional model-theoretic account of the way the interpretation of indicative conditionals is determined and constrained by the temporal and modal expressions in their constituents.
Legislative speech records from the 101st to 108th Congresses of the US Senate are analysed to study political ideologies. A widely-used text classification algorithm, Support Vector Machines (SVM), allows the extraction of terms that are most indicative of conservative and liberal positions in legislative speeches and the prediction of senators' ideological positions, with a 92 per cent level of accuracy. Feature analysis identifies the terms associated with conservative and liberal ideologies. The results demonstrate that cultural references appear more important than economic references in distinguishing conservative from liberal congressional speeches, calling into question the common economic interpretation of ideological differences in the US Congress.
In this article, we discuss the design of party classifiers for Congressional speech data. We then examine these party classifiers' person-dependency and time-dependency. We found that party classifiers trained on 2005 House speeches can be generalized to the Senate speeches of the same year, but not vice versa. The classifiers trained on 2005 House speeches performed better on Senate speeches from recent years than on older ones, which indicates the classifiers' time-dependency. This dependency may be caused by changes in the issue agenda or the ideological composition of Congress.
Research in historical semantics relies on the examination, selection, and interpretation of texts from corpora. Changes in meaning are tracked through the collection and careful inspection of examples that span decades and centuries. This process is inextricably tied to the researcher's expertise and familiarity with the corpus. Consequently, the results tend to be difficult to quantify and put on an objective footing, and "big-picture" changes in the vocabulary other than the specific ones under investigation may be hard to keep track of. In this paper we present a method that uses Latent Semantic Analysis (Landauer, Foltz & Laham, 1998) to automatically track and identify semantic changes across a corpus. This method can take the entire corpus into account when tracing changes in the use of words and phrases, thus potentially allowing researchers to observe the larger context in which these changes occurred, while at the same time considerably reducing the amount of work required. Moreover, because this measure relies on readily observable co-occurrence data, it affords the study of semantic change a measure of objectivity that was previously difficult to attain. In this paper we describe our method and demonstrate its potential by applying it to several well-known examples of semantic change in the history of the English language.
The rise of causality and the attendant graph-theoretic modeling tools in the study of counterfactual reasoning has had resounding effects in many areas of cognitive science, but it has thus far not permeated the mainstream in linguistic theory to a comparable degree. In this study I show that a version of the predominant framework for the formal semantic analysis of conditionals, Kratzer-style premise semantics, allows for a straightforward implementation of the crucial ideas and insights of Pearl-style causal networks. I spell out the details of such an implementation, focusing especially on the notions of intervention on a network and backtracking interpretations of counterfactuals.
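The key Pearl-style notion at stake, intervention on a network, can be illustrated with a toy structural causal model. Intervening with do(X = x) severs X from its parent mechanisms, whereas a backtracking interpretation would instead revise the parents; the variables and equations below are invented for illustration only.

```python
# Toy structural causal model: rain -> wet, sprinkler -> wet, wet -> slippery.
# An intervention do(wet=True) replaces wet's structural equation with the
# constant True while leaving every other mechanism intact.
def model(rain, sprinkler, interventions=None):
    interventions = interventions or {}
    wet = interventions.get("wet", rain or sprinkler)
    slippery = interventions.get("slippery", wet)
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet, "slippery": slippery}

# Actual world: no rain, sprinkler off, hence not wet and not slippery.
actual = model(False, False)

# Counterfactual via intervention: "had the pavement been wet, it would
# have been slippery" -- do(wet=True) makes slippery True without touching
# rain or sprinkler.
counterfactual = model(False, False, {"wet": True})
```

A backtracking reading would instead ask how rain or the sprinkler would have to differ for wet to hold; distinguishing these two modes within premise semantics is the implementation task the abstract describes.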