In this paper we deal with machine learning methods and algorithms applied in learning simple concepts by their refining or explication. The method of refining a simple concept of an object O consists in discovering a molecular concept that defines the same object as, or a very similar object to, the object O. Typically, such a molecular concept is a professional definition of the object, for instance a biological definition according to taxonomy, or a legal definition of roles, acts, etc. Our background theory is Transparent Intensional Logic (TIL). In TIL, concepts are explicated as abstract procedures encoded by natural language terms. These procedures are defined as six kinds of TIL constructions. First, we briefly introduce the method of learning with a supervisor that is applied in our case. Then we describe the algorithm 'Framework' together with the heuristic methods it applies. The heuristic is based on a plausible supply of positive and negative (near-miss) examples by which the learner's hypotheses are refined and adjusted. Given a positive example, the learner refines the hypothesis learnt so far, while a near-miss example triggers specialization. Our heuristic methods deal with the way refinement is applied, which also covers its special cases, generalization and specialization.
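The supervised learning loop described above can be illustrated with a minimal sketch in the style of near-miss concept learning. This is an illustrative toy, not the authors' actual 'Framework' algorithm: a hypothesis is modelled as a pair of required and forbidden feature sets, positive examples generalize it, and wrongly covered near-misses specialize it.

```python
class ConceptLearner:
    """Toy near-miss concept learner (illustrative sketch only).

    A hypothesis is a set of required features plus a set of
    forbidden features; all names below are assumptions.
    """

    def __init__(self):
        self.required = None              # features every instance must have
        self.forbidden = set()            # features no instance may have
        self._positive_features = set()   # union of features seen in positives

    def positive(self, features):
        # Positive example: generalize by dropping unmet requirements.
        features = set(features)
        self._positive_features |= features
        if self.required is None:
            self.required = set(features)
        else:
            self.required &= features

    def near_miss(self, features):
        # Near-miss: if the hypothesis wrongly covers it, specialize by
        # forbidding one feature that no positive example has exhibited.
        features = set(features)
        if self.covers(features):
            extra = features - self._positive_features
            if extra:
                self.forbidden.add(min(extra))  # deterministic pick

    def covers(self, features):
        features = set(features)
        return self.required <= features and not (self.forbidden & features)


# Usage: learning a toy 'arch' concept from two positives and a near-miss.
learner = ConceptLearner()
learner.positive({"posts", "lintel", "lintel_on_posts", "red"})
learner.positive({"posts", "lintel", "lintel_on_posts", "blue"})
learner.near_miss({"posts", "lintel", "lintel_on_posts", "posts_touching"})
```

After these three examples the colour requirement has been generalized away, while the near-miss has specialized the hypothesis to forbid `posts_touching`.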
In this paper, I deal with property modifiers defined as functions that associate a given root property P with a modified property [M P]. Property modifiers typically divide into four kinds, namely intersective, subsective, privative and modal. Here I do not deal with modal modifiers like alleged, which appear to be well-nigh logically lawless, because, for instance, an alleged assassin may or may not be an assassin. The goal of this paper is to logically define the three remaining kinds of modifiers together with the rule of left subsectivity, which I introduce as the rule of pseudo-detachment: it replaces the modifier M in the premise with the property M* in the conclusion. I prove that the rule of pseudo-detachment is valid for all kinds of modifiers. Furthermore, it is defined in a way that avoids paradoxes such as a small elephant being smaller than a large mouse.
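The three kinds of modifiers and the pseudo-detachment rule can be given a rough extensional sketch. This is a hedged illustration, not the paper's TIL formalism: properties are modelled as predicates on individuals, modifiers as functions from properties to properties, and the sample individuals and average sizes are invented assumptions.

```python
def intersective(m_star):
    # [M P](x) iff M*(x) and P(x)  (e.g. 'wooden table')
    return lambda p: (lambda x: m_star(x) and p(x))

def subsective(relative_test):
    # [M P] entails P, but M is judged relative to the class P
    # (e.g. 'small elephant': an elephant, small *for an elephant*)
    return lambda p: (lambda x: p(x) and relative_test(p, x))

def privative(m_mark):
    # [M P] entails not-P (e.g. 'forged banknote' is not a banknote)
    return lambda p: (lambda x: m_mark(x) and not p(x))

def pseudo_detach(modifier, property_domain):
    # The property M*: being an [M Q]-thing for some property Q.
    # This licenses inferring 'x is an M*' from 'x is an [M P]'.
    return lambda x: any(modifier(q)(x) for q in property_domain)


# Illustrative domain (assumed data, not from the paper):
is_elephant = lambda x: x["species"] == "elephant"
is_mouse = lambda x: x["species"] == "mouse"
AVG_SIZE = {is_elephant: 3000.0, is_mouse: 0.02}  # kg, rough guesses

small = subsective(lambda p, x: x["size"] < AVG_SIZE[p])
large = subsective(lambda p, x: x["size"] > AVG_SIZE[p])

dumbo = {"species": "elephant", "size": 1000.0}
mighty = {"species": "mouse", "size": 0.05}
```

Here `small(is_elephant)(dumbo)` and `large(is_mouse)(mighty)` both hold even though dumbo is far bigger than mighty: because 'small' is relativized to its root property rather than detached as an absolute predicate, no small-elephant/large-mouse paradox arises, while pseudo-detachment still lets us conclude that dumbo is a small-something.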
This paper deals with two issues. First, it identifies structured propositions with logical procedures. Second, it considers various rigorous definitions of the granularity of procedures, hence also of structured propositions, and comes out in favour of one of them. As for the first point, structured propositions are explicated as algorithmically structured procedures. I show that these procedures are structured wholes that are assigned to expressions as their meanings, and their constituents are subprocedures occurring in executed mode (as opposed to displayed mode). Moreover, procedures are not mere aggregates of their parts; rather, procedural constituents mutually interact. As for the second point, there is no universal criterion of the structural isomorphism of meanings, hence of cohyperintensionality, hence of synonymy for every kind of language. The positive result I present is an ordered set of rigorously defined criteria of fine-grained individuation in terms of the structure of procedures. Hence procedural semantics provides a solution to the problem of the granularity of cohyperintensionality.