Abstract: Montague grammar is a theory of semantics and of its relation to syntax, developed by the logician Richard Montague and subsequently extended by linguists, philosophers, and logicians. Montague grammar had its roots in logic and the philosophy of language; it became influential in linguistics and subsequently evolved into contemporary formal semantics. Enduring features of the theory have been a truth-conditional theory of meaning, a model-theoretic conception of semantics, and the methodological centrality of compositionality.
“…For automated commonsense reasoning, negation in natural language has to be treated in a formal manner. Traditional approaches tackle this problem by using Montague grammars together with Kripke semantics for modal logics [23]. Here, negation is discussed in the context of performative verbs, e.g.…”
Section: Commonsense Reasoning and Negation
Negation is an operation, both in formal logic and in natural language, by which a proposition is replaced by one stating the opposite, as by the addition of "not" or another negation cue. Treating negation adequately is required for cognitive reasoning, which comprises commonsense reasoning and text comprehension. One task of cognitive reasoning is answering questions posed as sentences in natural language. There are tools based on discourse representation theory that convert sentences automatically into a formal logical representation. However, since the knowledge in logical databases is in practice always incomplete, forward reasoning by automated reasoning systems alone does not suffice to derive answers to questions: instead of complete proofs, often only partial positive knowledge can be derived. Negated expressions do not help in this context, because only negative knowledge can be derived from them. Therefore, we aim at reducing syntactic negation, more precisely the negated event or property, to its inverse. This lays the basis for cognitive reasoning that employs both logic and machine learning for general question answering. In this paper, we describe an effective procedure to determine the negated event or property in order to replace it with its inverse, together with our overall system for cognitive reasoning. We demonstrate the procedure with examples and evaluate it on several benchmarks.
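The inversion step described in the abstract can be pictured with a toy sketch. The Python fragment below is our own illustration, not the paper's implementation: it assumes a small hand-built antonym table (a stand-in for whatever lexical resource the real system uses) and a plain token list, and simply rewrites a "not X" pattern to the inverse of X when one is known.

    # Toy illustration of reducing syntactic negation to an inverse
    # predicate (our sketch, not the paper's system). The antonym
    # table is a hypothetical stand-in for a real lexical resource.
    ANTONYMS = {
        "open": "closed",
        "win": "lose",
        "remember": "forget",
    }

    def invert_negation(tokens):
        """Replace each 'not X' with the inverse of X, if one is known."""
        out = []
        i = 0
        while i < len(tokens):
            if (tokens[i] == "not" and i + 1 < len(tokens)
                    and tokens[i + 1] in ANTONYMS):
                out.append(ANTONYMS[tokens[i + 1]])
                i += 2  # consume the negation cue and the negated word
            else:
                out.append(tokens[i])
                i += 1
        return out

    print(invert_negation("the door is not open".split()))
    # ['the', 'door', 'is', 'closed']

The point of the rewrite is that "the door is closed" is positive knowledge a forward reasoner can use, whereas "not open" only yields negative knowledge.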
“…It is not only grammatical structure that imposes consecutive restrictions on sample-space of words as the sentence progresses, the need for intelligibility has the same effect. Without (at least partial) hierarchical structures in the formation of sentences, their interpretation would become very hard [46]. However, nested structures in sentences will generally not be strictly realised.…”
Section: Fig 1: Rank Ordered Distribution Of Word Frequencies For…
The formation of sentences is a highly structured and history-dependent process. The probability of using a specific word in a sentence strongly depends on the 'history' of word usage earlier in that sentence. We study a simple history-dependent model of text generation which assumes that the sample-space of word usage reduces, on average, as a sentence progresses. We first show that the model explains the approximate Zipf law found in word frequencies as a direct consequence of sample-space reduction. We then empirically quantify the amount of sample-space reduction in the sentences of 10 famous English books, by analysis of the corresponding word-transition tables that capture which words can follow any given word in a text. We find a highly nested structure in these transition tables and show that this 'nestedness' is tightly related to the power-law exponents of the observed word frequency distributions. With the proposed model, it is possible to understand that the nestedness of a text can be the origin of the actual scaling exponent, and that deviations from the exact Zipf law can be understood as variations of the degree of nestedness on a book-by-book basis. On a theoretical level, we are able to show that in the case of weak nesting, Zipf's law breaks down in a fast transition. Unlike previous attempts to understand Zipf's law in language, the sample-space reducing model is not based on assumptions of multiplicative, preferential, or self-organized critical mechanisms behind language formation, but simply uses the empirically quantifiable parameter 'nestedness' to understand the statistics of word frequencies.
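The sample-space reducing process itself is simple to simulate. The sketch below is our minimal illustration of the mechanism (function and parameter names are ours, not the paper's): from state i the process jumps uniformly to one of the states 1..i-1, and restarts at N once state 1 is reached; the visit frequency of state i then approximates Zipf's law, p(i) proportional to 1/i.

    import random
    from collections import Counter

    def ssr_visits(n_states, n_cascades, seed=42):
        """Count state visits of a sample-space reducing process:
        each cascade starts at n_states and jumps uniformly to a
        strictly smaller state until state 1 is reached."""
        rng = random.Random(seed)
        visits = Counter()
        for _ in range(n_cascades):
            state = n_states
            while state > 1:
                visits[state] += 1
                state = rng.randint(1, state - 1)  # sample space shrinks
            visits[1] += 1
        return visits

    visits = ssr_visits(10_000, 20_000)
    total = sum(visits.values())
    for state in (1, 10, 100, 1000):
        print(state, visits[state] / total)  # roughly proportional to 1/state

Counting visits across many cascades reproduces the 1/rank scaling without any multiplicative, preferential, or self-organized critical mechanism, which is the paper's central point.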
“…The existence of grammatical and contextual constraints allow us-at the receiving part of a communication-to complete sentences in advance, and to anticipate words that will appear later. This (at least partially) ordered hierarchical structure guides sentence formation and allows a receiver to robustly decode messages [20].…”
Sentence formation is a highly structured, history-dependent, and sample-space reducing (SSR) process. While the first word in a sentence can be chosen from the entire vocabulary, the freedom of choosing subsequent words typically gets more and more constrained by grammar and context as the sentence progresses. This sample-space reducing property offers a natural explanation of Zipf's law in word frequencies; however, it fails to capture the structure of the word-to-word transition probability matrices of English text. Here we adopt the view that grammatical constraints (such as subject-predicate-object) locally re-order the words in sentences that are sampled with an SSR word generation process. We demonstrate that superimposing grammatical structure, as a local word re-ordering (permutation) process, on a sample-space reducing process is sufficient to explain both word frequencies and word-to-word transition probabilities. We compare the quality of the grammatically ordered SSR model in reproducing several test statistics of real texts with that of other text generation models, such as the Bernoulli model, the Simon model, and the monkey typewriting model.
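How a local re-ordering can be layered on top of SSR sampling can be sketched as follows. This is our own illustrative stand-in: the paper's permutation operator encodes specific grammatical constraints, whereas here a simple within-window shuffle plays that role.

    import random

    def ssr_sentence(n_states, rng):
        """Sample one sentence with the SSR process: word identities
        are state indices, drawn from a strictly shrinking sample space."""
        state = rng.randint(1, n_states)
        sentence = [state]
        while state > 1:
            state = rng.randint(1, state - 1)
            sentence.append(state)
        return sentence

    def locally_reorder(sentence, window, rng):
        """Permute words inside fixed-size windows; a crude stand-in
        for grammar-driven local re-ordering of SSR-sampled words."""
        out = []
        for i in range(0, len(sentence), window):
            block = sentence[i:i + window]
            rng.shuffle(block)
            out.extend(block)
        return out

    rng = random.Random(0)
    raw = ssr_sentence(1000, rng)
    print(raw)                           # strictly decreasing word indices
    print(locally_reorder(raw, 3, rng))  # same words, locally re-ordered

Because the permutation only re-orders words, the unigram (Zipf) statistics of the SSR process are preserved, while the word-to-word transition matrix changes; that separation is exactly the degree of freedom the grammatically ordered SSR model exploits.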