THEORIES IN AI FALL INTO TWO broad categories: mechanism theories and content theories. Ontologies are content theories about the sorts of objects, properties of objects, and relations between objects that are possible in a specified domain of knowledge. They provide potential terms for describing our knowledge about the domain.

In this article, we survey the recent development of the field of ontologies in AI. We point to the somewhat different roles ontologies play in information systems, natural-language understanding, and knowledge-based systems (KBS). Most research on ontologies focuses on what one might characterize as domain factual knowledge, because knowledge of that type is particularly useful in natural-language understanding. There is another class of ontologies that are important in KBS: ontologies that help in sharing knowledge about reasoning strategies or problem-solving methods. In a follow-up article, we will focus on method ontologies.

Ontology as vocabulary

In philosophy, ontology is the study of the kinds of things that exist. It is often said that ontologies "carve the world at its joints." In AI, the term ontology has largely come to mean one of two related things.

First of all, an ontology is a representation vocabulary, often specialized to some domain or subject matter. More precisely, it is not the vocabulary as such that qualifies as an ontology, but the conceptualizations that the terms in the vocabulary are intended to capture. Thus, translating the terms in an ontology from one language to another, for example from English to French, does not change the ontology conceptually. In engineering design, you might discuss the ontology of an electronic-devices domain, which might include vocabulary that describes conceptual elements (transistors, operational amplifiers, and voltages) and the relations between these elements (operational amplifiers are a type-of electronic device, and transistors are a component-of operational amplifiers). Identifying such vocabulary, and the underlying conceptualizations, generally requires careful analysis of the kinds of objects and relations that can exist in the domain.

In its second sense, the term ontology is sometimes used to refer to a body of knowledge describing some domain, typically a commonsense knowledge domain, using a representation vocabulary. For example, CYC1 often refers to its knowledge representation of some area of knowledge as its ontology. In other words, the representation vocabulary provides a set of terms with which to describe the facts in some domain, while the body of knowledge using that vocabulary is a collection of facts about a domain. However, this distinction is not as clear as it might first appear. In the electronic-device example, that a transistor is a component-of an operational amplifier, or that the latter is a type-of electronic device, is just as much a fact about the domain as the facts expressed using that vocabulary.
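To make the two relations concrete, here is a minimal sketch, not from the article and using hypothetical Python names of my own, of how the electronic-devices vocabulary above could be recorded as explicit type-of and component-of relations. The point is only that the ontology captures the conceptualization (which kinds of things exist and how they can relate), not any particular body of facts about individual circuits.

```python
# Minimal illustrative sketch (not from the article): an ontology as a set of
# concepts plus explicit type-of and component-of relations between them.
from collections import defaultdict

class Ontology:
    def __init__(self):
        self.concepts = set()
        self.type_of = defaultdict(set)       # concept -> more general concepts
        self.component_of = defaultdict(set)  # concept -> wholes it is a part of

    def add_type_of(self, concept, parent):
        self.concepts.update({concept, parent})
        self.type_of[concept].add(parent)

    def add_component_of(self, part, whole):
        self.concepts.update({part, whole})
        self.component_of[part].add(whole)

    def is_a(self, concept, ancestor):
        """True if `concept` is (transitively) a type-of `ancestor`."""
        if ancestor in self.type_of[concept]:
            return True
        return any(self.is_a(p, ancestor) for p in self.type_of[concept])

# The example relations mentioned in the text:
devices = Ontology()
devices.add_type_of("operational amplifier", "electronic device")
devices.add_component_of("transistor", "operational amplifier")

print(devices.is_a("operational amplifier", "electronic device"))  # True
```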
Abstract. We explore the meanings of the terms 'structure', 'behaviour', and, especially, 'function' in
The importance of network security has grown tremendously, and a variety of devices have been introduced to improve the security of a network. Network intrusion detection systems (NIDS) are among the most widely deployed such systems; popular NIDS use a collection of signatures of known security threats and viruses, which are used to scan each packet's payload. Today, signatures are often specified as regular expressions; thus the core of a NIDS comprises a regular-expression parser, and such parsers are traditionally implemented as finite automata. Deterministic finite automata (DFAs) are fast and are therefore desirable at high network link rates. However, DFAs for the signatures used in current security devices require prohibitive amounts of memory, which limits their practical use. In this paper, we argue that traditional DFA-based NIDS have three main limitations: first, they fail to exploit the fact that normal data streams rarely match any virus signature; second, DFAs are extremely inefficient at following multiple partially matching signatures and explode in size; and third, finite automata are incapable of efficiently keeping track of counts. We propose mechanisms to address each of these drawbacks and demonstrate that our solutions can implement a NIDS much more securely and economically, while at the same time substantially improving packet throughput.
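As a point of reference for the table-driven DFA scanning the abstract describes (and not one of the paper's proposed mechanisms), the sketch below builds the classic automaton for a single literal signature and scans a payload with a constant-time table lookup per character. Real NIDS signatures are regular expressions, and compiling thousands of them into one combined DFA is what makes the memory footprint explode.

```python
# Baseline sketch: table-driven DFA matching for ONE literal signature.

def build_string_dfa(pattern: str):
    """DFA whose state is the length of the longest pattern prefix matched so far;
    transitions are precomputed for every character of the pattern's alphabet."""
    m = len(pattern)
    alphabet = set(pattern)
    dfa = [dict() for _ in range(m + 1)]
    for state in range(m + 1):
        matched = pattern[:state]
        for ch in alphabet:
            s = matched + ch
            k = min(len(s), m)
            # longest prefix of the pattern that is a suffix of what we have seen
            while k > 0 and pattern[:k] != s[len(s) - k:]:
                k -= 1
            dfa[state][ch] = k
    return dfa

def scan(payload: str, pattern: str):
    """Return start offsets of every occurrence; O(1) work per input character."""
    dfa, m = build_string_dfa(pattern), len(pattern)
    state, hits = 0, []
    for i, ch in enumerate(payload):
        state = dfa[state].get(ch, 0)  # characters outside the alphabet reset to state 0
        if state == m:
            hits.append(i - m + 1)
    return hits

print(scan("xxEVILyyEVILzz", "EVIL"))  # [2, 8]
```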
In recent years there has been increasing interest in describing complicated information processing systems in terms of the knowledge they have, rather than by the details of their implementation. This requires a means of modeling the knowledge in a system. Several different approaches to knowledge modeling have been developed by researchers working in Artificial Intelligence (AI). Most of these approaches share the view that knowledge must be modeled with respect to a goal or task. In this article, we outline our modeling approach in terms of the notion of a task structure, which recursively links a task to alternative methods and to their subtasks. Our emphasis is on the notion of modeling domain knowledge using tasks and methods as mediating concepts. We begin by tracing the development of a number of different knowledge-modeling approaches. These approaches share many features, but their differences make it difficult to compare systems that have been modeled using different approaches. We present these approaches and describe their similarities and differences. We then give a detailed description, based on the task structure, of our knowledge-modeling approach and illustrate it with task structures for diagnosis and design. Finally, we show how the task structure can be used to compare and unify the other approaches.
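To fix intuitions about the recursive shape of a task structure, here is a toy sketch of my own, loosely inspired by the diagnostic example the article refers to; the class and task names are illustrative, not the article's formalism. A task lists alternative methods, and each method sets up further subtasks.

```python
# Illustrative sketch (not from the article) of the task -> methods -> subtasks recursion.
from dataclasses import dataclass, field

@dataclass
class Method:
    name: str
    subtasks: list["Task"] = field(default_factory=list)

@dataclass
class Task:
    name: str
    methods: list[Method] = field(default_factory=list)

def print_task_structure(task: Task, indent: int = 0):
    pad = "  " * indent
    print(f"{pad}Task: {task.name}")
    for m in task.methods:
        print(f"{pad}  Method: {m.name}")
        for sub in m.subtasks:
            print_task_structure(sub, indent + 2)

# A toy fragment of a diagnostic task structure (names are hypothetical):
diagnosis = Task("diagnose", [
    Method("hierarchical classification", [
        Task("establish hypothesis"),
        Task("refine hypothesis"),
    ]),
    Method("abductive assembly", [
        Task("generate candidate explanations"),
        Task("select best explanation"),
    ]),
])
print_task_structure(diagnosis)
```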
Fuzzy logic methods have been used successfully in many real-world applications, but the foundations of fuzzy logic remain under attack. Taken together, these two facts constitute a paradox. A second paradox is that almost all of the successful fuzzy logic applications are embedded controllers, while most of the theoretical papers on fuzzy methods deal with knowledge representation and reasoning. I hope here to resolve these paradoxes by identifying which aspects of fuzzy logic render it useful in practice, and which aspects are inessential. My conclusions are based on a mathematical result, on a survey of literature on the use of fuzzy logic in heuristic control and in expert systems, and on practical experience developing expert systems.

An apparent paradox

As is natural in a research area as active as fuzzy logic, theoreticians have investigated many formal systems, and a variety of systems have been used in applications. Nevertheless, the basic intuitions have remained relatively constant. At its simplest, fuzzy logic is a generalization of standard propositional logic from two truth values, false and true, to degrees of truth between 0 and 1. Formally, let A denote an assertion. In fuzzy logic, A is assigned a numerical value t(A), called the degree of truth of A, such that 0 ≤ t(A) ≤ 1. For a sentence composed from simple assertions and the logical connectives "and" (∧), "or" (∨), and "not" (¬), degree of truth is defined as follows:

Definition 1: Let A and B be arbitrary assertions. Then
t(A ∧ B) = min{t(A), t(B)}
t(A ∨ B) = max{t(A), t(B)}
t(¬A) = 1 − t(A)
t(A) = t(B) if A and B are logically equivalent.

Theorem 1: For any two assertions A and B, either t(B) = t(A) or t(B) = 1 − t(A).

A direct proof of Theorem 1 appears in the sidebar, but it can also be proved using similar results couched in more abstract terms, such as the following:

Proposition: Let P be a finite Boolean algebra of propositions and let τ be a truth-assignment function P → [0, 1], supposedly truth-functional via continuous connectives. Then for all p ∈ P, τ(p) ∈ {0, 1}.

The link between Theorem 1 and this proposition is that ¬(A ∧ ¬B) = B ∨ (¬A ∧ ¬B) is a valid equivalence of Boolean algebra. Theorem 1 is stronger in that it relies on only one particular equivalence, while the proposition is stronger because it applies to any connectives that are truth-functional and continuous (as defined in its authors' paper). The equivalence used in Theorem 1 is rather complicated, but it is plausible intuitively, and it is natural to apply it in reasoning about a set of fuzzy rules, since ¬(A ∧ ¬B) and B ∨ (¬A ∧ ¬B) are both reexpressions of the classical implication A → B. It was chosen for this reason, but the same result can also be proved using many other ostensibly reasonable logical equivalences. It is important to be clear on what exactly Theorem 1 says, and what it does not say. On the one hand, the theorem applies to any more general formal system that includes the four postulates listed in Definition 1. Any extension of fuzzy logic to accommodate first-order sentences, for example, collapses to two truth values…
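To make Definition 1 concrete, here is a small numerical check of my own (an illustration, not the paper's sidebar proof). Classically, ¬(A ∧ ¬B) and B ∨ (¬A ∧ ¬B) are both rewritings of A → B, so postulate 4 requires them to receive the same degree of truth; with the min/max/1−x connectives, the two sides can nevertheless differ, which is the tension Theorem 1 builds on.

```python
# Illustration (mine, not from the paper): evaluate the two classically
# equivalent sentences used in Theorem 1 under Definition 1's connectives.

def t_not(a): return 1.0 - a
def t_and(a, b): return min(a, b)
def t_or(a, b): return max(a, b)

def lhs(ta, tb):  # degree of truth of ¬(A ∧ ¬B)
    return t_not(t_and(ta, t_not(tb)))

def rhs(ta, tb):  # degree of truth of B ∨ (¬A ∧ ¬B)
    return t_or(tb, t_and(t_not(ta), t_not(tb)))

for ta, tb in [(1.0, 0.0), (0.2, 0.9), (0.3, 0.4), (0.1, 0.3)]:
    print(f"t(A)={ta:.1f}  t(B)={tb:.1f}  lhs={lhs(ta, tb):.2f}  rhs={rhs(ta, tb):.2f}")

# For (0.3, 0.4) the sides disagree (0.70 vs 0.60) even though the sentences are
# classically equivalent; postulate 4 rules such assignments out, which is the
# pressure toward a two-valued collapse that Theorem 1 formalizes.
```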
This paper is an informal description of some recent insights about what a device function is, how it arises in response to needs, and how function arises from the structure of a device and the functions of its components. These results formalize and clarify a set of contending intuitions about function that researchers have had. The paper relates the approaches, results, and goals of this stream of research, called functional representation (FR), with the functional modeling (FM) stream in engineering. Despite the occurrence of the term function in the two streams, often the results and techniques in the two streams appear not to have much to do with each other. I argue that, in fact, the two streams are performing research that is mutually complementary. FR research provides the basic layer for device ontology in a formal framework that helps to clarify the meanings of terms such as function and structure, and also to support representation of device knowledge for automated reasoning. FM research provides another layer in device ontology, by attempting to identify behavior primitives that are applicable to subsets of devices, with the hope that functions can be described in those domains with an economy of terms. This can lead to useful catalogs of functions and devices in specific areas of engineering. With increased attention to formalization, the work in FM can provide domain-specific terms for FR research in knowledge representation and automated reasoning.
For many Internet services, reducing latency improves the user experience and increases revenue for the service provider. While in principle latencies could nearly match the speed of light, we find that infrastructural inefficiencies and protocol overheads cause today's Internet to be much slower than this bound: typically by more than one, and often by more than two, orders of magnitude. Bridging this large gap would not only add value to today's Internet applications, but could also open the door to exciting new applications. Thus, we propose a grand challenge for the networking research community: a speed-of-light Internet. To inform this research agenda, we investigate the causes of latency inflation in the Internet across the network stack. We also discuss a few broad avenues for latency improvement.