We study the effect of polynomial interpretation termination proofs of deterministic (resp.
non-deterministic) algorithms defined by confluent (resp. non-confluent) rewrite systems over
data structures which include strings, lists and trees, and we classify them according to the
interpretations of the constructors. This leads to the definition of six function classes which
turn out to be exactly the deterministic (resp. non-deterministic) polynomial time, linear
exponential time and linear doubly exponential time computable functions when the class is
based on confluent (resp. non-confluent) rewrite systems. We also obtain a characterisation of
the linear space computable functions. Finally, we demonstrate that functions with exponential
interpretation termination proofs are super-elementary.
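As an illustration of the method (not an example taken from the paper), consider the standard rewrite system for addition over unary numerals, with rules add(0, y) → y and add(s(x), y) → s(add(x, y)). A polynomial interpretation assigns monotone polynomials over the positive integers to the constructors and to add; termination follows if every rule instance strictly decreases. The sketch below checks this decrease on sampled values for one such interpretation:

```python
# Illustrative sketch (not from the paper): a polynomial interpretation
# proving termination of the rewrite system
#   add(0, y)    -> y
#   add(s(x), y) -> s(add(x, y))
# Chosen interpretation (monotone polynomials over positive integers):
#   [0] = 1,  [s](x) = x + 1,  [add](x, y) = 2*x + y

def zero():      return 1
def s(x):        return x + 1
def add_(x, y):  return 2 * x + y

def rule1_decreases(y):
    # [add(0, y)] > [y]
    return add_(zero(), y) > y

def rule2_decreases(x, y):
    # [add(s(x), y)] > [s(add(x, y))]
    return add_(s(x), y) > s(add_(x, y))

# Sample the strict decrease over a range of argument values.
assert all(rule1_decreases(y) for y in range(1, 50))
assert all(rule2_decreases(x, y) for x in range(1, 50) for y in range(1, 50))
print("both rules strictly decrease under the interpretation")
```

Since [add](x, y) = 2x + y is a polynomial and the constructors 0 and s receive additive interpretations, this proof places unary addition in the polynomial-time class of the classification above.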
This paper presents, in a reasoned way, our work on resource analysis by quasi-interpretations. The controlled resources are typically the runtime, the runspace, or the size of a result in a program execution. Quasi-interpretations make it possible to analyze system complexity. A quasi-interpretation is a numerical assignment which provides an upper bound on the computed functions and which is compatible with the program's operational semantics. The quasi-interpretation method offers several advantages: (i) it provides hints for optimizing an execution, (ii) it gives resource certificates, and (iii) finding quasi-interpretations is decidable for a broad class which is relevant for feasible computations. By combining the quasi-interpretation method with termination tools (here, term orderings), we obtain several characterizations of complexity classes, starting from Ptime and Pspace.
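To make the compatibility condition concrete, here is a small illustration (not taken from the paper): a quasi-interpretation for list append, with rules app(nil, y) → y and app(cons(a, x), y) → cons(a, app(x, y)). Unlike a termination proof, the inequality is non-strict, and the assignment bounds the size of any computed result:

```python
# Illustrative sketch (not from the paper): a quasi-interpretation for append.
#   app(nil, y)        -> y
#   app(cons(a, x), y) -> cons(a, app(x, y))
# Assignment: [nil] = 0,  [cons](a, x) = a + x + 1,  [app](x, y) = x + y

def nil():       return 0
def cons(a, x):  return a + x + 1
def app(x, y):   return x + y

# Compatibility: for each rule, [lhs] >= [rhs] (non-strict inequality).
assert all(app(nil(), y) >= y for y in range(10))
assert all(app(cons(a, x), y) >= cons(a, app(x, y))
           for a in range(5) for x in range(5) for y in range(5))

# The assignment is a resource certificate: the size of append's result is
# bounded by the interpretation applied to the argument sizes.
size_of_result = 3 + 4          # appending lists of sizes 3 and 4
assert size_of_result <= app(3, 4)
print("quasi-interpretation is compatible and bounds the output size")
```

Because [app] is additive, this certificate witnesses that append cannot blow up the size of its arguments, which is the kind of upper bound the method extracts automatically.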
In the context of lexicalized grammars, we propose general methods for lexical disambiguation based on polarization and abstraction of grammatical formalisms. Polarization makes their resource sensitivity explicit, and abstraction aims at keeping only the essential mechanism of neutralization between polarities. Parsing with the simplified grammar in the abstract formalism can then be used efficiently to filter lexical selections.
We propose two characterizations of complexity classes by means of programming languages. The first concerns Logspace while the second leads to Ptime. The latter characterization shows that adding a choice command to a Ptime language (the language WHILE of Jones [1]) does not necessarily provide NPtime computations. The result is close to that of Cook [2], who used "auxiliary push-down automata". Logspace is obtained through a decidable mechanism of tiering, based on an analysis of deforestation due to Wadler [3]. We also obtain a characterization of NLogspace.

Definition 2. Given a program p, its execution induces a partial function ⟦p⟧ : T(Cns)^n → T(Cns) which maps d_1, d_2, ..., d_n to σ(Y) if the program terminates, where σ is the last store of the computation; otherwise it is undefined.

Definition 3. A program is called cons-free if it does not use an expression of the form c(E_1, ..., E_k). We write WHILE-cons-free for the set of cons-free programs.

Theorem 1 (Jones [1]). The set of decision problems computed by cons-free programs is exactly Logspace.
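The intuition behind Theorem 1 can be sketched with a toy example (mine, not from the paper): a cons-free program only destructs its input and never builds a new cell, so every variable can be represented by a pointer into the input, which needs only logarithmic space. In the spirit of Jones's WHILE language:

```python
# Illustrative sketch (not from the paper): a cons-free style program.
# It only inspects its input via the destructors hd/tl and never uses a
# constructor, so each variable conceptually holds a pointer into the
# input list -- a position representable in logarithmic space.

def member(d, xs):
    # xs is the input list; we only walk it, never extend it.
    i = 0
    while i < len(xs):        # while X do ...
        if xs[i] == d:        # hd X = d ?
            return True
        i += 1                # X := tl X  (a pointer move)
    return False

assert member(3, [1, 2, 3])
assert not member(9, [1, 2, 3])
print("cons-free membership test behaves as expected")
```

A program using an expression of the form c(E_1, ..., E_k) could instead build data structures of polynomial size, which is exactly what the cons-free restriction rules out.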
Most malware detectors are based on syntactic signatures that identify known malicious programs. Up to now this architecture has been efficient enough to overcome most malware attacks. Nevertheless, the complexity of malicious code keeps increasing, and as a result the time required to reverse-engineer malicious programs and to forge new signatures grows ever longer. This study proposes an efficient construction of a morphological malware detector, that is, a detector which combines syntactic and semantic analysis. It aims at easing the task of malware analysts by providing some abstraction over the signature representation, which is based on control flow graphs. We build an efficient signature-matching engine based on tree automata techniques. Moreover, we describe a generic graph rewriting engine in order to deal with classic mutation techniques. Finally, we provide a preliminary evaluation of the detection strategy by carrying out experiments on a malware collection.
We study computer virology from an abstract point of view. Viruses and worms are self-replicating programs, whose definitions are based on Kleene's second recursion theorem. We introduce a notion of delayed recursion that we apply to both Kleene's second recursion theorem and Smullyan's double recursion theorem. This leads us to define four classes of viruses, two of them being polymorphic. We then work on a simple imperative programming language in order to show how those theoretical constructions can be implemented. In particular, we propose a general virus builder and distribution engines. Topics covered: computability-theoretic aspects of programs; computer virology.
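The self-replication kernel behind these definitions can be demonstrated with a classic quine-style fixed point (a generic illustration, not the paper's construction): Kleene's second recursion theorem guarantees a program that can manipulate its own text, and the sketch below builds such a text and checks that executing it reproduces itself exactly.

```python
# Illustrative sketch (not from the paper): the self-reference kernel used
# to model self-replicating programs, as a printf-style quine.

template = 'template = %r\nprogram_text = template %% template'
program_text = template % template

# Sanity check: the generated text is a two-line program of the same shape.
assert program_text.startswith("template = ")
assert program_text.endswith("program_text = template % template")

# Fixed point: executing the generated program reproduces its own text,
# which is the formal core of viral self-replication.
ns = {}
exec(program_text, ns)
assert ns["program_text"] == program_text
print("the program reproduces its own source text")
```

A virus builder in the paper's sense composes such a self-referential kernel with an arbitrary payload; the recursion theorem guarantees the fixed point exists for any payload.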